We need to construct the output format list separately from the input
format list, because we need to adhere to two extra requirements:
1. Big-endian output formats are always unsupported (runtime error)
2. Combining 'vulkan' with an explicit out_format that is not supported
by the vulkan frame allocation code is illegal and will crash (abort)
As a free side benefit, this rewrite fixes a possible memory leak in the
`fail` path that was present in the old code.
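A minimal sketch of the big-endian check (the helper function is
hypothetical; av_pix_fmt_desc_get() and AV_PIX_FMT_FLAG_BE are real
libavutil API):

#include <libavutil/pixdesc.h>

/* Return 1 if fmt may be advertised as an output format, 0 otherwise. */
static int output_format_supported(enum AVPixelFormat fmt)
{
    const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(fmt);
    /* requirement 1: big-endian output formats are always unsupported */
    if (!desc || (desc->flags & AV_PIX_FMT_FLAG_BE))
        return 0;
    return 1;
}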
Signed-off-by: Niklas Haas <git@haasn.dev>
It only works on Linux
$ ffmpeg -loglevel verbose -init_hw_device qsv=intel -f lavfi -i \
yuvtestsrc -vf "format=uyvy422,vpp_qsv=format=nv12" -f null -
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
The SDK supports UYVY from version 1.17, and VPP may support UYVY
input on Linux [1]
$ ffmpeg -loglevel verbose -init_hw_device qsv=intel -f lavfi -i \
yuvtestsrc -vf \
"format=uyvy422,hwupload=extra_hw_frames=32,vpp_qsv=format=nv12" \
-f null -
[1] https://github.com/Intel-Media-SDK/MediaSDK/blob/master/doc/samples/readme-vpp_linux.md
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
Since many distributions still ship libjxl 0.7.0, we'd prefer to keep
compiling against it, but we don't want to lose the features that
require libjxl 0.8.0 or greater. For this reason I've added
preprocessor #ifdef guards around the features that aren't necessarily
present in libjxl 0.7.0.
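For instance, guards of the following shape, where the version macros
come from libjxl's <jxl/version.h> and the guarded body is a
placeholder:

#include <jxl/version.h>

#if JPEGXL_NUMERIC_VERSION >= JPEGXL_COMPUTE_NUMERIC_VERSION(0, 8, 0)
    /* code using API only present in libjxl >= 0.8.0 */
#endif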
Signed-off-by: Leo Izen <leo.izen@gmail.com>
Splits the currently handled subtitle at random access point
packets that can be configured to follow a specific output stream.
Currently only subtitle streams which are directly mapped into the
same output in which the heartbeat stream resides are affected.
This way a subtitle that is known to be shown at this time can be
split and passed to the muxer before its full duration is known. This
is also a drawback, as it essentially outputs multiple subtitles from
a single input subtitle that continues over multiple random access
points. Thus this feature should not be utilized in cases where
subtitle output latency does not matter.
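A hypothetical invocation, assuming the feature is exposed as a
per-stream output option named fix_sub_duration_heartbeat and the
first video stream acts as the heartbeat source:

$ ffmpeg -fix_sub_duration -i input.mkv -map 0 \
-fix_sub_duration_heartbeat:v:0 -c:v libx264 -c:s srt out.mkv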
Co-authored-by: Andrzej Nadachowski <andrzej.nadachowski@24i.com>
Co-authored-by: Bernard Boulay <bernard.boulay@24i.com>
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
This way we can call process_subtitles without causing the decoded
frame counter to get bumped.
Additionally, this now takes into account all of the decoded
subtitle frames without fix_sub_duration latency/buffering, and
without filtering out decoded reset/end subtitles that contain no
rendered rectangles, which matches the original intent in
4754345027.
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
This enables us to later call this function when generating
additional subtitles for splitting purposes.
Co-authored-by: Andrzej Nadachowski <andrzej.nadachowski@24i.com>
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
QSVDeintContext and VPPContext have the same base context, and all
features of deinterlace_qsv are implemented in the vpp_qsv filter, so
deinterlace_qsv can be treated as a special case of the vpp_qsv
filter: we may use VPPContext with a different option array, preinit
callback and supported pixel formats to implement deinterlace_qsv,
then remove QSVDeintContext.
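Conceptually, after this change the two (illustrative) command lines
below should behave the same:

$ ffmpeg -init_hw_device qsv -hwaccel qsv -i input.mp4 \
-vf deinterlace_qsv -f null -
$ ffmpeg -init_hw_device qsv -hwaccel qsv -i input.mp4 \
-vf vpp_qsv=deinterlace=advanced -f null -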
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
As was done for the scale_qsv filter, use QSVVPPContext as the base
context to manage the MFX session for the deinterlace_qsv filter.
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
This is used to control whether the output is at frame rate or field
rate when deinterlacing is expected and a frame rate is not specified.
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
The decoder is quite slow with the maximum number of taps.
Fixes: Timeout
Fixes: 54063/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_BONK_fuzzer-5087362407596032
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
From x86inc:
> On AMD cpus <=K10, an ordinary ret is slow if it immediately follows either
> a branch or a branch target. So switch to a 2-byte form of ret in that case.
> We can automatically detect "follows a branch", but not a branch target.
> (SSSE3 is a sufficient condition to know that your cpu doesn't have this problem.)
x86inc can automatically determine whether to use REP_RET rather
than RET in most of these cases, so the impact is minimal.
Additionally, a few REP_RETs were used unnecessarily, despite the
return being nowhere near a branch.
The only CPUs affected were AMD K10s, made between 2007 and 2011, 16
years ago and 12 years ago, respectively.
In the future, everyone involved with x86inc should consider dropping
REP_RETs altogether.
libjxl only accepts 16-bit buffers via its API, but it can accept
9-bit to 15-bit input through a 16-bit buffer, provided a flag is set
declaring the buffer to have the respective significant bit depth.
Likewise, on decode it can only provide pixel data as a 16-bit buffer
(if the depth is higher than 8), but it does provide metadata tagging
the actual bit depth.
This commit causes libjxlenc.c and libjxldec.c to respect this metadata
and tag/read it accordingly from AVCodecContext->bits_per_raw_sample.
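On the decode side this amounts to something like the following
sketch, where dec and avctx come from the surrounding decoder
context (JxlDecoderGetBasicInfo() and JxlBasicInfo.bits_per_sample
are real libjxl API):

/* inside the decoder, once basic info is available: */
JxlBasicInfo info;
if (JxlDecoderGetBasicInfo(dec, &info) == JXL_DEC_SUCCESS)
    /* tag the real significant depth, even though the buffer is 16-bit */
    avctx->bits_per_raw_sample = info.bits_per_sample;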
Signed-off-by: Leo Izen <leo.izen@gmail.com>
mfenc sets FF_CODEC_CAP_INIT_CLEANUP, so calling mf_close() on
failure inside mf_init() results in a double-free.
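Schematically, the shape of the fix (the helper name is illustrative):

static int mf_init(AVCodecContext *avctx)
{
    int ret = mf_init_encoder(avctx);
    /* No mf_close() on failure: with FF_CODEC_CAP_INIT_CLEANUP the
     * generic layer already calls the close callback when init fails,
     * so cleaning up here would free everything a second time. */
    return ret;
}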
Signed-off-by: Cameron Gutman <aicommander@gmail.com>
Signed-off-by: Martin Storsjö <martin@martin.st>
Only warn if the advanced_editlist option is enabled (it is enabled
by default though) so we don't print one warning for each track, and
demote the warning to AV_LOG_VERBOSE; this message does get
generated whenever a fragmented MP4 file is parsed, regardless of
whether the file actually uses multiple edits or not.
Later, when parsing the mov structures, the demuxer does warn if the
file did contain multiple edits, which would require the
advanced_editlist option to be enabled for correct decoding.
Adjust the warning message for the case when the file seemed like it
actually would have needed handling of advanced edit lists, to
reflect the fact that it doesn't help to try to set the option, as it
has been automatically disabled.
Signed-off-by: Martin Storsjö <martin@martin.st>
The construct of using offsetof on a (potentially anonymous) struct
defined within the offsetof expression, while supported by all
current compilers, has been declared explicitly undefined by the
C standards committee [1].
Clang recently got a change to identify this as an issue [2];
initially it was treated as a hard error, but it was soon after
softened into a warning under the -Wgnu-offsetof-extensions option
(not enabled automatically as part of -Wall though).
Nevertheless - in this particular case, it's trivial to fix the
code not to rely on the construct that the standards committee has
explicitly called out as undefined.
[1] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2350.htm
[2] https://reviews.llvm.org/D133574
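For illustration, the undefined construct and a conforming equivalent
(a generic example, not the exact code being fixed):

#include <stddef.h>

/* Explicitly undefined per N2350: defining a struct inside offsetof(). */
/* size_t off = offsetof(struct { char c; int x; }, x); */

/* Well-defined: declare the type first, then take the offset. */
struct tmp { char c; int x; };
size_t off = offsetof(struct tmp, x);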
Signed-off-by: Martin Storsjö <martin@martin.st>
If the dictionary provided on input contains multiple entries for an
option (relevant for flags modifying the previous value with '+' or
'-') and the option is not found in the target object, only the last
entry would be returned to the caller.
Pass AV_DICT_MULTIKEY to av_dict_set() to make sure all such entries are
returned.
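Schematically, when copying entries between dictionaries (a sketch
using the real av_dict_get()/av_dict_set() API; the function itself
is hypothetical):

#include <libavutil/dict.h>

/* Copy every entry of src into *dst, preserving duplicate keys. */
static int copy_all_entries(AVDictionary **dst, const AVDictionary *src)
{
    const AVDictionaryEntry *e = NULL;
    int ret;
    while ((e = av_dict_get(src, "", e, AV_DICT_IGNORE_SUFFIX)))
        if ((ret = av_dict_set(dst, e->key, e->value, AV_DICT_MULTIKEY)) < 0)
            return ret;
    return 0;
}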
QSVScaleContext and VPPContext have the same base context, and all
features of scale_qsv are implemented in the vpp_qsv filter, so
scale_qsv can be treated as a special case of the vpp_qsv filter: we
may use VPPContext with a different option array, preinit callback
and supported pixel formats to implement scale_qsv, then remove
QSVScaleContext.
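As with deinterlace_qsv above, the two (illustrative) command lines
below should then behave the same:

$ ffmpeg -init_hw_device qsv -hwaccel qsv -i input.mp4 \
-vf scale_qsv=w=1280:h=720 -f null -
$ ffmpeg -init_hw_device qsv -hwaccel qsv -i input.mp4 \
-vf vpp_qsv=w=1280:h=720 -f null -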
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
Use QSVVPPContext as the base context of QSVScaleContext, hence we
may reuse the functions defined for QSVVPPContext to manage the MFX
session for the scale_qsv filter.
In addition, system memory has been taken into account in
QSVVPPContext, so we may add support for non-QSV pixel formats in the
future.
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
Hyper Encode uses Intel integrated and discrete graphics on one system
to accelerate encoding of a single video stream.
Depending on the selected parameters and codecs, the performance gain on AlderLake iGPU + ARC Gfx is up to 1.6x.
More information: https://www.intel.co.uk/content/www/uk/en/architecture-and-technology/adaptix/deep-link.html
Developer guide: https://github.com/oneapi-src/oneVPL-intel-gpu/blob/main/doc/HyperEncode_FeatureDeveloperGuide.md
Hyper Encode is supported only on Windows and requires D3D11 and oneVPL.
To enable Hyper Encode, the following needs to be specified:
-Hyper Encode mode (-dual_gfx on or -dual_gfx adaptive)
-Encoder: h264_qsv or hevc_qsv
-BRC: VBR, CQP or ICQ
-Lowpower mode (-low_power 1)
-Closed GOP for AVC or strict GOP for HEVC (-idr_interval 0 is used by default)
Depending on the encoding parameters, the following parameters may need
to be adjusted:
-g recommended >= 30 for better performance
-async_depth recommended >= 30 for better performance
-extra_hw_frames recommended equal to the async_depth value
-bf recommended = 0 for better performance
In cases with fast encoding (-preset veryfast) there may be no
performance gain, due to the fact that decoding is slower than encoding.
Command line example:
$ ffmpeg.exe -init_hw_device qsv:hw,child_device_type=d3d11va,child_device=0 \
-v verbose -y -hwaccel qsv -extra_hw_frames 60 -async_depth 60 \
-c:v h264_qsv -i bbb_sunflower_2160p_60fps_normal.mp4 -async_depth 60 \
-c:v h264_qsv -preset medium -g 60 -low_power 1 -bf 0 -dual_gfx on \
output.h264
Signed-off-by: galinart <artem.galin@intel.com>