Commit ad915ccd64 (parent 1ea55d642e) in https://github.com/immich-app/immich.git
@@ -33,7 +33,7 @@ You do not need to redo any transcoding jobs after enabling hardware acceleration
 #### NVENC

 - You must have the official NVIDIA driver installed on the server.
-- On Linux (except for WSL2), you also need to have [NVIDIA Container Runtime][nvcr] installed.
+- On Linux (except for WSL2), you also need to have [NVIDIA Container Toolkit][nvct] installed.

 #### QSV
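The NVENC requirement changed above now points to the NVIDIA Container Toolkit install guide linked as [nvct]. As a rough sketch only, assuming a Debian/Ubuntu host with Docker Engine installed and NVIDIA's apt repository already added per that guide, installation and Docker runtime registration look like this:

```bash
# Sketch only: assumes NVIDIA's apt repository has already been added
# as described in the linked install guide.
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit

# Register the NVIDIA runtime with Docker and restart the daemon.
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```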
@@ -122,7 +122,7 @@ Once this is done, you can continue to step 3 of "Basic Setup".
 - While you can use VAAPI with NVIDIA and Intel devices, prefer the more specific APIs since they're more optimized for their respective devices

 [hw-file]: https://github.com/immich-app/immich/releases/latest/download/hwaccel.transcoding.yml
-[nvcr]: https://github.com/NVIDIA/nvidia-container-runtime/
+[nvct]: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
 [jellyfin-lp]: https://jellyfin.org/docs/general/administration/hardware-acceleration/intel/#configure-and-verify-lp-mode-on-linux
 [jellyfin-kernel-bug]: https://jellyfin.org/docs/general/administration/hardware-acceleration/intel/#known-issues-and-limitations
 [libmali-rockchip]: https://github.com/tsukumijima/libmali-rockchip/releases
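For context, the [hw-file] reference in this hunk is the compose overlay that the "Basic Setup" steps mentioned in the hunk header build on. One way to fetch it (a sketch; adjust the target directory to wherever your docker-compose.yml lives) is:

```bash
# Download the hardware transcoding overlay referenced as [hw-file] above,
# placing it next to the existing docker-compose.yml.
wget https://github.com/immich-app/immich/releases/latest/download/hwaccel.transcoding.yml
```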
@@ -38,7 +38,7 @@ You do not need to redo any machine learning jobs after enabling hardware acceleration
 - The GPU must have compute capability 5.2 or greater.
 - The server must have the official NVIDIA driver installed.
 - The installed driver must be >= 535 (it must support CUDA 12.2).
-- On Linux (except for WSL2), you also need to have [NVIDIA Container Runtime][nvcr] installed.
+- On Linux (except for WSL2), you also need to have [NVIDIA Container Toolkit][nvct] installed.

 #### OpenVINO
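A quick way to check the GPU and driver requirements listed above from the host (a sketch; the `compute_cap` query field is only available on reasonably recent drivers) is:

```bash
# Print GPU name, driver version, and compute capability; per the requirements
# above, the driver should be >= 535 and the compute capability >= 5.2.
nvidia-smi --query-gpu=name,driver_version,compute_cap --format=csv
```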
@@ -99,7 +99,7 @@ You can confirm the device is being recognized and used by checking its utilization
 :::

 [hw-file]: https://github.com/immich-app/immich/releases/latest/download/hwaccel.ml.yml
-[nvcr]: https://github.com/NVIDIA/nvidia-container-runtime/
+[nvct]: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html

 ## Tips
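The hunk header above mentions confirming that the device is recognized and used by checking its utilization; a simple way to watch that from the host while machine learning jobs run (a sketch, assuming `nvidia-smi` is available on the host) is:

```bash
# Refresh GPU utilization every second; sustained non-zero usage while
# machine learning jobs run confirms the container is actually using the GPU.
watch -n 1 nvidia-smi
```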