* added transcode configs for NVENC, QSV and VAAPI (option selection sketched below)
* updated dev docker compose
* added software fallback
* working vaapi
* minor fixes and added tests
* updated api
* compile libvips
* move hwaccel settings to `hwaccel.yml`
* changed default dockerfile, moved `readdir` call
* removed unused import
* minor cleanup
* fix for arm build
* added documentation, minor fixes
* added intel driver
* updated docs
* styling
* uppercase codec and api names
* formatting
* added tests
* updated docs
* removed semicolons
* added link to `hwaccel.yml`
* added newlines
* added `hwaccel` section to docker-compose.prod.yml
* ensure mesa drivers are installed
* switch to mimalloc for sharp
* moved build version and sha256 to json
* let libmfx set the render device
* possible fix for VP9 on QSV
* updated tests
* formatting
* review suggestions
* semicolon
* moved `LD_PRELOAD` to start script
* switched to Jellyfin's ffmpeg package
* fixed dockerfile
* use CQP instead of ICQ for QSV VP9
* updated dockerfile
* added sha256sum for other platforms
* fixtures
* refactor: add/remove album assets
* chore: open api
* feat: remove owned assets from album
* refactor: move to bulk id req/res dto
* chore: open api
* chore: merge main
* dev: mobile work
* fix: assets added from the web not syncing with mobile
* remove print statement
---------
Co-authored-by: Alex Tran <Alex.Tran@conductix.com>
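The commits above add per-API transcode configs for NVENC, QSV and VAAPI plus a software fallback. As a rough illustration of what that selection can look like (not the actual implementation; the enum, function name and flag choices below are assumptions):

```typescript
// A minimal sketch only: pick an ffmpeg video encoder per acceleration API and
// fall back to software when no device is available. The enum, function name
// and preset values are illustrative assumptions, not the project's actual code.
enum TranscodeHWAccel {
  NVENC = 'nvenc',
  QSV = 'qsv',
  VAAPI = 'vaapi',
  DISABLED = 'disabled',
}

function getVideoEncoderArgs(accel: TranscodeHWAccel, deviceAvailable: boolean): string[] {
  if (!deviceAvailable || accel === TranscodeHWAccel.DISABLED) {
    return ['-c:v', 'libx264', '-preset', 'ultrafast']; // software fallback
  }
  switch (accel) {
    case TranscodeHWAccel.NVENC:
      return ['-c:v', 'h264_nvenc'];
    case TranscodeHWAccel.QSV:
      return ['-c:v', 'h264_qsv'];
    case TranscodeHWAccel.VAAPI:
      // real VAAPI usage also needs a render device (-vaapi_device) and an
      // hwupload filter chain; both are omitted from this sketch
      return ['-c:v', 'h264_vaapi'];
    default:
      return ['-c:v', 'libx264', '-preset', 'ultrafast'];
  }
}
```

In this picture, `hwaccel.yml` and the `hwaccel` section of docker-compose.prod.yml would only be responsible for mapping the relevant devices into the container (e.g. `/dev/dri` for VAAPI/QSV, the NVIDIA runtime for NVENC).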
* fix: hide faces
* remove unused variable
* fix: work even if one fails
* better style for hidden people
* add hide face in the menu dropdown
* add buttons to toggle visibility for all faces (bulk update sketched below)
* add server test
* close modal with escape key
* fix: explore page
* improve show & hide faces modal
* keep name on people card
* simplify layout
* sticky app bar in show-hide page
* fix format
---------
Co-authored-by: Alex Tran <alex.tran1502@gmail.com>
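The hide-faces work above updates many people at once and must "work even if one fails". A hedged sketch of that pattern with `Promise.allSettled`; the `updatePerson` signature and `PersonUpdate` shape are assumed stand-ins for the real API call, not the project's actual types:

```typescript
// Hypothetical sketch: toggle visibility for many people without letting one
// failed request abort the rest. `updatePerson` and `PersonUpdate` are assumed names.
interface PersonUpdate {
  id: string;
  isHidden: boolean;
}

async function setPeopleVisibility(
  updates: PersonUpdate[],
  updatePerson: (id: string, dto: { isHidden: boolean }) => Promise<void>,
): Promise<{ succeeded: number; failed: number }> {
  const results = await Promise.allSettled(
    updates.map(({ id, isHidden }) => updatePerson(id, { isHidden })),
  );
  const failed = results.filter((r) => r.status === 'rejected').length;
  return { succeeded: results.length - failed, failed };
}
```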
* feat(server): Google Pixel motion photos
Add support for motion photos taken on Pixel phones. They have the EXIF
property 'MotionPhoto' set to 1, and an embedded MP4 file appended to
the JPEG file.
The implementation works like this:
- on metadata extraction, if a live photo is detected, examine the
metadata to determine where in the file the embedded MP4 is.
- extract this MP4 and write it next to the JPEG.
- link it using the existing mechanism for live photos.
There is a "MotionPhotoPresentationTimestampUs" EXIF property, which we
don't do anything with yet - I imagine it refers to the point in the
video at which the photo was taken, but it probably warrants more
investigation (see the extraction sketch below).
* fix format
* fix test
---------
Co-authored-by: Alex Tran <alex.tran1502@gmail.com>
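A rough sketch of the extraction step described in the motion-photo commit above. How the embedded video's length is read from the EXIF/XMP metadata is left abstract here; `extractMotionPhotoVideo` and its parameters are illustrative, not the actual implementation:

```typescript
// Hypothetical sketch: extract the embedded MP4 from a Pixel motion photo and
// write it next to the JPEG, so it can be linked like a regular live photo.
// Obtaining `embeddedVideoLength` from the EXIF/XMP metadata is abstracted away.
import { promises as fs } from 'node:fs';
import path from 'node:path';

async function extractMotionPhotoVideo(jpegPath: string, embeddedVideoLength: number): Promise<string> {
  const buffer = await fs.readFile(jpegPath);
  // the MP4 is appended to the end of the JPEG container
  const video = buffer.subarray(buffer.length - embeddedVideoLength);
  const videoPath = path.join(
    path.dirname(jpegPath),
    `${path.basename(jpegPath, path.extname(jpegPath))}.mp4`,
  );
  await fs.writeFile(videoPath, video);
  return videoPath; // caller links this via the existing live-photo mechanism
}
```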
* add profile-image-cropper component
* add dom-to-image library (usage sketched below)
* add store to update user profile picture when set
* dom-to-image
* remove console.logs, add svelte binding
* fix format, unused vars
* change caching of profile image
* set hash after profile image change
* remove unnecessary store
* remove unnecessary changes
* set types/dom-to-image as devDependency
* remove unnecessary type declarations
use handleError
* remove error notification
which is already handled by handleError
* Revert "set types/dom-to-image as devDependency"
This reverts commit ca8b3ed1bb.
* add types to dev dependencies
* use on:close directly instead of on:close={() => ...}
* add newline
* sort imports
* bind photo-viewer imgElement directly, not working
* remove console.log, fix binding
* make imgElement optional
* fix element as optional prop
* fix type
* check for transparency
* small changes
* fix img.decode
* add bg, remove publicSharedKey
* fix omit publicSharedKey
---------
Co-authored-by: Alex Tran <alex.tran1502@gmail.com>
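The cropper commits above render the selected region with dom-to-image and then upload it. A minimal sketch of that hand-off using dom-to-image's `toBlob`; `uploadProfileImage` is an assumed callback, not a real endpoint wrapper:

```typescript
// Hypothetical sketch: turn the cropper's DOM node into a Blob with dom-to-image
// and hand it to an upload callback. `uploadProfileImage` is an assumed name.
import domtoimage from 'dom-to-image';

async function saveCroppedProfileImage(
  cropNode: HTMLElement,
  uploadProfileImage: (file: File) => Promise<void>,
): Promise<void> {
  const blob = await domtoimage.toBlob(cropNode);
  const file = new File([blob], 'profile-image.png', { type: 'image/png' });
  await uploadProfileImage(file);
}
```

After a successful upload, the cache busting from the "set hash after profile image change" commit would amount to appending a changed query parameter to the profile-image URL.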
* fix: enable transcoding of audioless videos
* fix: enable transcoding of audioless videos & typing
* fix: enable transcoding of audioless videos & formatting
* fix: do not always transcode if there is no audio stream (sketched below)
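The last four commits handle videos without an audio stream: such files must still be transcodable, but the mere absence of audio should not by itself force a transcode. A hedged sketch of that decision; the `VideoInfo` shape and target codecs are illustrative assumptions:

```typescript
// Hypothetical sketch: decide whether a video needs transcoding when it may have
// no audio stream at all. Stream shapes and target codecs are illustrative.
interface VideoInfo {
  videoCodec: string;
  audioCodec?: string; // undefined when the file has no audio stream
}

function needsTranscode(info: VideoInfo, targetVideo = 'h264', targetAudio = 'aac'): boolean {
  const videoOk = info.videoCodec === targetVideo;
  // an absent audio stream is acceptable as-is; only a mismatched codec forces a transcode
  const audioOk = info.audioCodec === undefined || info.audioCodec === targetAudio;
  return !(videoOk && audioOk);
}
```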