Using parameters from AVCodecContext to reset the qsv codec fits
MFXVideoENCODE_Reset()'s usage better, while per-frame metadata is
better suited to the mfxEncodeCtrl argument passed to
MFXVideoENCODE_EncodeFrameAsync(). Change the code to use the values
from AVCodecContext.
Because q->param is passed as both the "in" and "out" parameter when
calling MFXVideoENCODE_Query(), the values in q->param may be changed. New
variables are added to store the old configuration, so that real
parameter changes can be detected.
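A minimal sketch of that detection logic follows; it is an illustration
only, not the patched qsvenc code. The struct layout and the shadow
fields (old_global_quality, old_qmax) are assumptions, and the header
path for the Intel Media SDK/oneVPL may differ on a given system.

    #include <mfxvideo.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/error.h>

    /* Illustrative state: shadow copies of the last applied settings plus
     * the session/param pair; not the actual QSVEncContext layout. */
    typedef struct ResetState {
        mfxSession    session;
        mfxVideoParam param;
        int old_global_quality;
        int old_qmax;
    } ResetState;

    static int maybe_reset(AVCodecContext *avctx, ResetState *q)
    {
        int changed = 0;

        if (q->old_global_quality != avctx->global_quality) {
            q->old_global_quality = avctx->global_quality;
            changed = 1;
        }
        if (q->old_qmax != avctx->qmax) {
            q->old_qmax = avctx->qmax;
            changed = 1;
        }
        if (!changed)
            return 0;

        /* A full implementation would also fold the new avctx values into
         * q->param before querying. param is both "in" and "out" here, so
         * it no longer reflects the previous configuration afterwards --
         * hence the comparison against the shadow copies above. */
        MFXVideoENCODE_Query(q->session, &q->param, &q->param);
        if (MFXVideoENCODE_Reset(q->session, &q->param) != MFX_ERR_NONE)
            return AVERROR_EXTERNAL;
        return 0;
    }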
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
Splitting a single log line across several av_log() calls is not thread
safe. Merge these strings into one av_log() call.
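As an illustration of the pattern (a hedged sketch, not the patched
code), an AVBPrint buffer can assemble the whole line before a single
av_log() call:

    #include <libavutil/bprint.h>
    #include <libavutil/log.h>

    static void log_values(void *ctx, const int *values, int n)
    {
        AVBPrint buf;

        av_bprint_init(&buf, 0, AV_BPRINT_SIZE_AUTOMATIC);
        for (int i = 0; i < n; i++)
            av_bprintf(&buf, " %d", values[i]);
        /* The whole line goes out in one av_log() call, so output from
         * other threads cannot be interleaved inside it. */
        av_log(ctx, AV_LOG_VERBOSE, "values:%s\n", buf.str);
        av_bprint_finalize(&buf, NULL);
    }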
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
The only duration field currently present in AVFrame is pkt_duration,
which is semantically restricted to those frames that are output by
decoders.
Add a new field that stores the frame's duration without regard for how
that frame was produced. Deprecate pkt_duration.
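A small usage sketch of the new field, assuming it keeps pkt_duration's
conventions (same time base as pts, 0 when unknown); the helper name is
illustrative:

    #include <libavutil/frame.h>
    #include <libavutil/mathematics.h>
    #include <libavutil/rational.h>

    /* Publish a duration on frames produced without a decoder, e.g. for a
     * constant-frame-rate source: one frame lasts 1/frame_rate, expressed
     * in the stream time base. */
    static void set_cfr_duration(AVFrame *frame, AVRational frame_rate,
                                 AVRational time_base)
    {
        frame->duration = av_rescale_q(1, av_inv_q(frame_rate), time_base);
    }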
It need not be writable; in fact, it is often not writable even if
the packet sent to the decoder was writable, because the generic code
calls av_packet_ref() on it. It is never writable if a user
drains the decoder after every packet, because in this case the decode
callback is called from avcodec_send_packet().
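A hedged sketch of the consequence for decoder code: copy the payload
before any in-place modification instead of assuming the incoming packet
buffer is writable (the helper name is illustrative):

    #include <errno.h>
    #include <string.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/error.h>
    #include <libavutil/mem.h>

    /* Copy the packet payload (plus the usual padding) into a private,
     * writable buffer before editing it in place. */
    static int copy_for_inplace_edit(const AVPacket *avpkt, uint8_t **out)
    {
        uint8_t *buf = av_malloc(avpkt->size + AV_INPUT_BUFFER_PADDING_SIZE);
        if (!buf)
            return AVERROR(ENOMEM);
        memcpy(buf, avpkt->data, avpkt->size);
        memset(buf + avpkt->size, 0, AV_INPUT_BUFFER_PADDING_SIZE);
        *out = buf;
        return 0;
    }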
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
wrapped_avframe_decode() uses an AVFrame as dst in av_frame_move_ref()
after having called ff_decode_frame_props() to attach side-data
to this very frame. This leaks all the side-data and metadata
that ff_decode_frame_props() has attached.
This happens in various fate-filter-metadata tests since
6ca43a9675.
These particular leaks (which only affect metadata)
could be fixed by not adding metadata side-data to AVPackets
in libavdevice if it is also available from the AVFrames.
Yet this would break users that extract the metadata from
AVPackets.
The changes to FATE happen because of the way av_dict_set()
works when it overwrites an already existing entry:
it moves the dict's last entry into the slot of the entry being
overwritten and appends the new entry at the end. The end result is
that the first entry of the dict is the second-to-last entry of
the original dict, the last entry of the dict is the last
entry of the old dict, and the first count - 2 entries
of the original dict are at positions 1..count - 2 in their
original order.
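For illustration, a standalone sketch of that av_dict_set() behaviour
(keys and values are arbitrary examples): overwriting an existing key
moves the dict's last entry into the freed slot and appends the new
entry at the end.

    #include <stdio.h>
    #include <libavutil/dict.h>

    int main(void)
    {
        AVDictionary *d = NULL;
        const AVDictionaryEntry *e = NULL;

        av_dict_set(&d, "a", "1", 0);
        av_dict_set(&d, "b", "2", 0);
        av_dict_set(&d, "c", "3", 0);
        /* Overwriting "a" moves the last entry ("c") into its slot and
         * appends the new "a", so iteration now yields c, b, a. */
        av_dict_set(&d, "a", "new", 0);

        while ((e = av_dict_get(d, "", e, AV_DICT_IGNORE_SUFFIX)))
            printf("%s=%s\n", e->key, e->value);
        av_dict_free(&d);
        return 0;
    }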
Reviewed-by: Timo Rothenpieler <timo@rothenpieler.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
and remove FF_CODEC_CAP_INIT_THREADSAFE
All our native codecs are already init-threadsafe
(only wrappers for external libraries and hwaccels
are typically not marked as init-threadsafe yet),
so it is only natural for this to also be the default state.
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This is in preparation of switching the default init-thread-safety
to a codec being init-thread-safe.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Streams with all-zero sample_delta in 'stts' have all-zero dts.
They have a higher chance of being chosen by mov_find_next_sample(),
which leads to seeking again and again.
For example, GoPro created a 'GoPro SOS' stream:
Stream #0:4[0x5](eng): Data: none (fdsc / 0x63736466), 13 kb/s (default)
Metadata:
creation_time : 2022-06-21T08:49:19.000000Z
handler_name : GoPro SOS
With 'ffprobe -show_frames http://example.com/gopro.mp4', ffprobe
blocks until all samples in the 'GoPro SOS' stream have been consumed.
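A simplified sketch of the selection heuristic that causes this (not the
actual mov.c code): the stream whose next sample has the smallest
timestamp always wins, so an all-zero-dts stream is drained before
anything else.

    #include <stdint.h>
    #include <libavutil/avutil.h>
    #include <libavutil/mathematics.h>
    #include <libavutil/rational.h>

    /* Pick the stream whose next sample has the smallest timestamp; a
     * stream whose dts is always 0 wins every comparison until it runs
     * out of samples. */
    static int pick_stream(const int64_t *next_dts, const AVRational *tb,
                           int nb_streams)
    {
        int best = -1;
        for (int i = 0; i < nb_streams; i++) {
            if (next_dts[i] == AV_NOPTS_VALUE)
                continue;
            if (best < 0 ||
                av_compare_ts(next_dts[i], tb[i],
                              next_dts[best], tb[best]) < 0)
                best = i;
        }
        return best;
    }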
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
This avoids an extra copy of potentially quite big video frames.
Instead of copying the entire frame's data into a rawvideo packet, it
packs the frame into a wrapped avframe packet and passes it through
as-is.
Unfortunately, wrapped avframes are set up to be video frames, so the
audio frames continue to be copied.
Additionally, this enables passing through video frames that previously
were impossible to process, like hardware frames or other special
formats that could not be packed into a rawvideo packet.
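Roughly the packing pattern involved (a sketch following the usual
wrapped-avframe convention that the packet payload is the AVFrame struct
itself; the helper names are illustrative):

    #include <errno.h>
    #include <libavcodec/packet.h>
    #include <libavutil/buffer.h>
    #include <libavutil/error.h>
    #include <libavutil/frame.h>

    static void wrapped_frame_free(void *opaque, uint8_t *data)
    {
        AVFrame *frame = (AVFrame *)data;
        av_frame_free(&frame);
    }

    /* Hand ownership of the frame to the packet: the packet's buffer keeps
     * the AVFrame alive and frees it when the packet is unreferenced. */
    static int pack_wrapped_frame(AVPacket *pkt, AVFrame *frame)
    {
        pkt->buf = av_buffer_create((uint8_t *)frame, sizeof(*frame),
                                    wrapped_frame_free, NULL, 0);
        if (!pkt->buf) {
            av_frame_free(&frame);
            return AVERROR(ENOMEM);
        }
        pkt->data   = (uint8_t *)frame;
        pkt->size   = sizeof(*frame);
        pkt->flags |= AV_PKT_FLAG_TRUSTED;
        return 0;
    }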
According to the API docs, avdevice_list_devices(), avdevice_list_input_sources()
and avdevice_list_output_sinks() should return the number of autodetected
devices on success. This is redundant with AVDeviceInfoList->nb_devices, so it
was not noticed earlier that none of the underlying device list functions work
like that.
Let's fix it in generic code to bring it in line with the API docs.
Fixes ticket #9820.
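A usage sketch relying on the documented contract, i.e. that the return
value now matches AVDeviceInfoList->nb_devices on success:

    #include <stdio.h>
    #include <libavdevice/avdevice.h>

    static int print_sources(const AVInputFormat *fmt)
    {
        AVDeviceInfoList *list = NULL;
        int n = avdevice_list_input_sources(fmt, NULL, NULL, &list);
        if (n < 0)
            return n;                     /* error, list not allocated */
        for (int i = 0; i < list->nb_devices; i++)
            printf("%s: %s\n", list->devices[i]->device_name,
                   list->devices[i]->device_description);
        avdevice_free_list_devices(&list);
        return n;                         /* equals nb_devices after the fix */
    }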
Signed-off-by: Marton Balint <cus@passwd.hu>
This patch prevents the libjxl encoder wrapper from failing to
encode images when the input video has untagged primaries. It will
instead assume BT.709/sRGB primaries and print a warning.
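A hedged sketch of the fallback (the helper name is illustrative, not
the actual libjxl wrapper code):

    #include <libavcodec/avcodec.h>
    #include <libavutil/log.h>
    #include <libavutil/pixfmt.h>

    static enum AVColorPrimaries effective_primaries(AVCodecContext *avctx)
    {
        if (avctx->color_primaries == AVCOL_PRI_UNSPECIFIED) {
            av_log(avctx, AV_LOG_WARNING,
                   "Input primaries untagged, assuming BT.709/sRGB\n");
            return AVCOL_PRI_BT709;
        }
        return avctx->color_primaries;
    }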
Signed-off-by: Leo Izen <leo.izen@gmail.com>
The max height is currently documented as 16; the max difference per
pixel is 255, and a .8h element can easily contain 16*255, so keep
accumulating in two .8h vectors and just do the final accumulation at
the end. This should work for heights up to 256.
This requires a minor register renumbering in ff_pix_abs16_xy2_neon.
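A scalar analogue of that accumulation strategy (the real routine
additionally does the xy2 half-pel averaging): per-column sums stay
below 65536 as long as h <= 256 (256 * 255 = 65280), so 16-bit
accumulators are safe until the final widening reduction.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    static unsigned sad16(const uint8_t *pix1, const uint8_t *pix2,
                          ptrdiff_t stride, int h)
    {
        uint16_t acc[16] = { 0 };   /* stands in for the two .8h vectors */
        unsigned sum = 0;

        for (int y = 0; y < h; y++) {
            for (int x = 0; x < 16; x++)
                acc[x] += abs(pix1[x] - pix2[x]);
            pix1 += stride;
            pix2 += stride;
        }
        for (int x = 0; x < 16; x++)   /* final accumulation */
            sum += acc[x];
        return sum;
    }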
Before:            Cortex A53     A72     A73   Graviton 3
pix_abs_0_0_neon:        97.7    47.0    37.5         22.7
pix_abs_0_1_neon:       154.0    59.0    52.0         25.0
pix_abs_0_3_neon:       179.7    96.7    87.5         41.2
After:
pix_abs_0_0_neon:        96.0    39.2    31.2         22.0
pix_abs_0_1_neon:       150.7    59.7    46.2         23.7
pix_abs_0_3_neon:       175.7    83.7    81.7         38.2
Signed-off-by: Martin Storsjö <martin@martin.st>
Using absolute-difference-accumulate does use twice the number of
absolute-difference instructions, but avoids the need for the
uaddl and add instructions, reducing the total number of instructions
by 3.
These can be interleaved with the rest of the calculation, to avoid
tight dependencies at the end. Unfortunately, this is marginally
slower on Cortex A53, but faster on A72 and A73.
Before:            Cortex A53     A72     A73   Graviton 3
pix_abs_0_3_neon:       175.7   109.2    92.0         41.2
After:
pix_abs_0_3_neon:       179.7    96.7    87.5         41.2
Signed-off-by: Martin Storsjö <martin@martin.st>
Fixes: out of array access
Fixes: 48799/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_LAGARITH_fuzzer-4764457825337344
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This is a per-file input option that adjusts an input's timestamps
with reference to another input, so that emitted packet timestamps
account for the difference between the start times of the two inputs.
A typical use case is syncing two or more live inputs, such as capture
devices. Both the target and reference input source timestamps should be
based on the same clock source.
If either input lacks starting timestamps, then no sync adjustment is made.
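A possible invocation, assuming the option added here is the per-file
'-isync <input index>' (the exact name and syntax are an assumption;
check the ffmpeg documentation):
ffmpeg -i main_capture -isync 0 -i other_capture -map 0 -map 1 out.mkv
Here the second input's timestamps are adjusted relative to the start
time of input 0.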
Provide a neon implementation for the pix_abs16_x2 function.
Performance figures for the implementation are below.
- pix_abs_0_1_c: 283.5
- pix_abs_0_1_neon: 39.0
Benchmarks and tests were run with the checkasm tool on AWS Graviton 3.
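For context, roughly what this function computes (a scalar sketch of
pix_abs16_x2: SAD of a 16-wide block against a horizontally half-pel
averaged reference; not the exact me_cmp.c source):

    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    static int pix_abs16_x2_ref(const uint8_t *pix1, const uint8_t *pix2,
                                ptrdiff_t stride, int h)
    {
        int s = 0;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < 16; x++)
                /* Average horizontally adjacent reference pixels with
                 * rounding, then accumulate the absolute difference. */
                s += abs(pix1[x] - ((pix2[x] + pix2[x + 1] + 1) >> 1));
            pix1 += stride;
            pix2 += stride;
        }
        return s;
    }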
Signed-off-by: Hubert Mazur <hum@semihalf.com>
Signed-off-by: Martin Storsjö <martin@martin.st>