This commit adds new code paths for vscale when filterSize is 2, 4, or
8. By using specialized code, unrolled to match the filterSize, we
can improve performance.
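For illustration, a rough C model of the vertical-scale inner loop
being specialized, with filterSize fixed at 4 (names are made up, and
the real dither handling is reduced to a plain rounding term):

#include <stdint.h>

static uint8_t clip_uint8(int v)
{
    return v < 0 ? 0 : v > 255 ? 255 : v;
}

static void vscale_filter4(const int16_t *filter, const int16_t **src,
                           uint8_t *dst, int dstW)
{
    for (int i = 0; i < dstW; i++) {
        int val = 1 << 18; /* simplified rounding term */
        /* filterSize is known to be 4, so this loop can be fully
         * unrolled instead of iterating over a variable filterSize */
        for (int j = 0; j < 4; j++)
            val += src[j][i] * filter[j];
        dst[i] = clip_uint8(val >> 19);
    }
}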
On AWS c7g (Graviton 3, Neoverse V1) instances:
                                  before   after
yuv2yuvX_2_0_512_accurate_neon:    558.8   268.9
yuv2yuvX_4_0_512_accurate_neon:    637.5   434.9
yuv2yuvX_8_0_512_accurate_neon:   1144.8   806.2
yuv2yuvX_16_0_512_accurate_neon:  2080.5  1853.7
Signed-off-by: Jonathan Swinney <jswinney@amazon.com>
Signed-off-by: Martin Storsjö <martin@martin.st>
Use scalar-times-vector multiply-accumulate instructions instead of
vector-times-vector ones, removing the need for the slightly slower
replicating load instructions.
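An intrinsics-level sketch of the difference (illustrative only; the
actual change is in the hand-written NEON assembly):

#include <arm_neon.h>

/* vector * vector: the coefficient must first be splatted with a
 * replicating load (ld1r) */
static int32x4_t mla_splat(int32x4_t acc, int16x4_t px, const int16_t *coeff)
{
    return vmlal_s16(acc, px, vld1_dup_s16(coeff));
}

/* vector * scalar: multiply by one lane of an already-loaded
 * coefficient vector; no replicating load needed */
static int32x4_t mla_lane(int32x4_t acc, int16x4_t px, int16x4_t coeffs)
{
    return vmlal_lane_s16(acc, px, coeffs, 0);
}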
On AWS c7g (Graviton 3, Neoverse V1) instances:
                                  before   after
yuv2yuvX_8_0_512_accurate_neon:   1144.8   987.4
yuv2yuvX_16_0_512_accurate_neon:  2080.5  1869.4
Signed-off-by: Jonathan Swinney <jswinney@amazon.com>
Signed-off-by: Martin Storsjö <martin@martin.st>
Change the reference to exactly match the C reference in swscale,
instead of exactly matching the x86 SIMD implementations (which
differ slightly). Test with and without SWS_ACCURATE_RND - if this
flag isn't set, the output must match the C reference exactly,
otherwise it is allowed to be off by 2.
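As a sketch, the comparison rule amounts to the following (names are
illustrative, not the actual checkasm code):

#include <stdint.h>
#include <stdlib.h>

static int outputs_match(const uint8_t *ref, const uint8_t *out, int n,
                         int accurate_rnd)
{
    /* bit-exact unless SWS_ACCURATE_RND is set, then off-by-2 allowed */
    int tolerance = accurate_rnd ? 2 : 0;
    for (int i = 0; i < n; i++)
        if (abs(ref[i] - out[i]) > tolerance)
            return 0;
    return 1;
}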
Mark a couple of x86 functions as unavailable when SWS_ACCURATE_RND
is set - apparently this discrepancy hasn't been noticed in other
exact tests before.
Add a test for yuv2plane1.
Signed-off-by: Jonathan Swinney <jswinney@amazon.com>
Signed-off-by: Martin Storsjö <martin@martin.st>
Use it instead of AVStream.codecpar in the main thread. While
AVStream.codecpar is documented to only be updated when the stream is
added or in avformat_find_stream_info(), it is actually updated during
demuxing. Accessing it from a different thread therefore constitutes a
race.
Ideally, some mechanism should eventually be provided for signalling
parameter updates to the user. Then the demuxing thread could pick up
the changes and propagate them to the decoder.
This specialization handles the case where filterSize is 4 mod 8, e.g.
12, 20, etc. AArch64 was previously using the C function for this case.
This implementation speeds up that case significantly.
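The C reference, reorganized to show the shape the NEON version
unrolls, full 8-wide chunks plus a 4-wide tail (names are
illustrative):

#include <stdint.h>

static void hscale_8_to_15_4mod8(int16_t *dst, int dstW,
                                 const uint8_t *src, const int16_t *filter,
                                 const int32_t *filterPos,
                                 int filterSize /* 12, 20, ... */)
{
    for (int i = 0; i < dstW; i++) {
        const uint8_t *s = src + filterPos[i];
        int val = 0, j = 0;
        for (; j + 8 <= filterSize; j += 8) /* full 8-wide chunks */
            for (int k = 0; k < 8; k++)
                val += s[j + k] * filter[i * filterSize + j + k];
        for (int k = 0; k < 4; k++)         /* remaining 4-wide tail */
            val += s[j + k] * filter[i * filterSize + j + k];
        int v = val >> 7;
        dst[i] = v > 32767 ? 32767 : v;     /* clip to 15 bits */
    }
}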
hscale_8_to_15__fs_12_dstW_512_c:    6234.1
hscale_8_to_15__fs_12_dstW_512_neon: 1505.6
Signed-off-by: Jonathan Swinney <jswinney@amazon.com>
Signed-off-by: Martin Storsjö <martin@martin.st>
frag_stream_info->index_entry isn't the index of the first sample/trun.
cenc.frag_index_entry_base failed to catch that case, since
current_index > 0.
Fix ticket #9807.
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
frag_index.current is used by cenc_filter, and is updated inside
mov_read_moof. It can get out of sync with mov_read_packet.
Partly fix ticket #9807.
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Convert the input from a scatter to a gather instead,
which is faster and better for SIMD.
Also, add a pre-shuffled exptab version to avoid
gathering there at all. This doubles the exptab size,
but the speedup makes it worth it. In SIMD, the
exptab will likely be purged to a higher cache
anyway because of the FFT in the middle, and
the number of loads stays identical.
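A scalar sketch of the reordering (perm/inv_perm denote a permutation
and its inverse; names are illustrative):

/* scatter: stores land at permuted addresses, which SIMD handles badly */
static void permute_scatter(float *dst, const float *src,
                            const int *perm, int n)
{
    for (int i = 0; i < n; i++)
        dst[perm[i]] = src[i];
}

/* gather: stores are sequential and only the loads are indexed,
 * which maps much better onto SIMD */
static void permute_gather(float *dst, const float *src,
                           const int *inv_perm, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = src[inv_perm[i]];
}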
For a 960-point inverse MDCT, the speedup is 10%.
This makes it possible to write sane and fast SIMD
versions of inverse MDCTs.
A gateway can see everything, and we should not be shipping a hardcoded
default from a third-party company; it's a security risk.
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
mpegvideo uses an array of Pictures and when it is done using them,
it only unreferences them incompletely: Some buffers are kept so that
they can be reused later on if the same slot in the Picture array is
reused, making this a sort of buffer pool.
(Basically, a Picture is considered used if the AVFrame's buf is set.)
Yet given that other pieces of the decoder may have a reference to
these buffers, they need not be writable and are made writable using
av_buffer_make_writable() when preparing a new Picture. This involves
reading the buffer's data, although the old content of the buffer
need not be retained.
Worse, this read can be racy, because the buffer can be used by another
thread at the same time. This happens for Real Video 3 and 4.
This commit fixes this race by no longer copying the data;
instead the old buffer is replaced by a new, zero-allocated buffer.
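A minimal sketch of that replacement, assuming (as stated above) that
the old contents are not needed; the helper name is made up:

#include "libavutil/buffer.h"
#include "libavutil/error.h"

static int replace_with_blank_buffer(AVBufferRef **buf, size_t size)
{
    av_buffer_unref(buf);          /* drop our reference; no data is read */
    *buf = av_buffer_allocz(size); /* fresh zero-initialized buffer */
    return *buf ? 0 : AVERROR(ENOMEM);
}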
(Here are the details of what happens with three or more decoding threads
when decoding rv30.rm from the FATE-suite as happens in the rv30 test:
The first decoding thread uses the first slot of its picture array
to store its current pic; update_thread_context copies this for the
second thread that decodes a P-frame. It uses the second slot in its
Picture array to store its P-frame. This arrangement is then copied
to the third decode thread, which decodes a B-frame. It uses the third
slot in its Picture array for its current frame.
update_thread_context copies this to the next thread. It unreferences
the third slot containing the other B-frame and then it reuses this
slot for its current frame. Because the pic array slots are only
incompletely unreferenced, the buffers of the previous B-frame are
still in there and they are not writable; in fact the previous
thread is concurrently writing to them, causing races when making
the buffer writable.)
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
At this point, active_thread_type is set iff it is set to
FF_THREAD_FRAME, which is the case iff
AVCodecInternal.frame_thread_encoder is set.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Discontinuity detection/correction is left in the main thread, as it is
entangled with InputStream.next_dts and related variables, which may be
set by decoding code.
Fixes races e.g. in fate-ffmpeg-streamloop after
aae9de0cb2.
This will allow moving normal offset handling to the demuxer thread,
since discontinuities currently have to be processed in the main
thread, as that code uses some decoder-produced values.
InputFile.ts_offset can change during transcoding, due to discontinuity
correction. This should not affect the streamcopy starting timestamp.
Cf. bf2590aed3
The AviSynth C API requires using avs_release_video_frame
whenever avs_get_frame has been used, but the recent addition
of frameprop reading to the demuxer was missing this in
avisynth_create_stream_video.
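The required pairing, as a sketch (the helper and its arguments are
illustrative; signatures per avisynth_c.h):

#include <avisynth_c.h>

static void read_one_frame(AVS_Clip *clip, int n)
{
    AVS_VideoFrame *frame = avs_get_frame(clip, n);
    if (!frame)
        return;
    /* ... read frame properties / pixel data ... */
    avs_release_video_frame(frame); /* balances avs_get_frame() */
}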
Signed-off-by: Stephen Hutchinson <qyot27@gmail.com>
This allows users to build FFmpeg against Intel oneVPL. oneVPL 2.6 is
the required minimum version when building Intel oneVPL code.
The configure script will fail if both libmfx and libvpl are enabled.
It is recommended to use oneVPL for new work, even for currently
available hardware [1].
Note that the preferred child device type is d3d11va for libvpl on
Windows. The commands below will use d3d11va if it is available on
Windows.
$ ffmpeg -hwaccel qsv -c:v h264_qsv ...
$ ffmpeg -qsv_device 0 -hwaccel qsv -c:v h264_qsv ...
$ ffmpeg -init_hw_device qsv=qsv:hw_any -hwaccel qsv -c:v h264_qsv ...
$ ffmpeg -init_hw_device qsv=qsv:hw_any,child_device=0 -hwaccel qsv -c:v h264_qsv ...
Users may use the child_device_type option to set the child device
type to dxva2, or derive a qsv device from a dxva2 device:
$ ffmpeg -init_hw_device qsv=qsv:hw_any,child_device=0,child_device_type=dxva2 -hwaccel qsv -c:v h264_qsv ...
$ ffmpeg -init_hw_device dxva2=d3d9:0 -init_hw_device qsv=qsv@d3d9 -hwaccel qsv -c:v h264_qsv ...
[1] https://www.intel.com/content/www/us/en/develop/documentation/upgrading-from-msdk-to-onevpl/top.html
If a qsv hwdevice is available, use the mfxLoader handle in the qsv
hwdevice to create the mfx session. Otherwise create the mfx session
with a new mfxLoader handle.
This is in preparation for oneVPL support
In oneVPL, MFXLoad() and MFXCreateSession() are required to create a
workable mfx session [1].
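A minimal sketch of session creation through the oneVPL dispatcher
(error handling trimmed):

#include <vpl/mfxdispatcher.h>
#include <vpl/mfxvideo.h>

static mfxStatus create_session(mfxSession *session)
{
    mfxLoader loader = MFXLoad(); /* required in oneVPL */
    if (!loader)
        return MFX_ERR_NOT_INITIALIZED;
    /* implementations can be filtered with MFXCreateConfig() /
     * MFXSetConfigFilterProperty() before creating the session */
    return MFXCreateSession(loader, 0, session);
}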
Add config filters for D3D9/D3D11 sessions (galinart).
The default device is changed to d3d11va for oneVPL when both d3d11va
and dxva2 are enabled on Microsoft Windows.
This is in preparation for oneVPL support
[1] https://spec.oneapi.io/versions/latest/elements/oneVPL/source/programming_guide/VPL_prg_session.html#onevpl-dispatcher
Co-authored-by: galinart <artem.galin@intel.com>
Signed-off-by: galinart <artem.galin@intel.com>
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
The following Cflags line has been added to libmfx.pc, so the mfx/
prefix is no longer needed when including mfx headers in FFmpeg.
Cflags: -I${includedir} -I${includedir}/mfx
Some old versions of libmfx have the following Cflags in libmfx.pc
Cflags: -I${includedir}
We may add -I${includedir}/mfx to CFLAGS when running 'configure
--enable-libmfx' for old versions of libmfx; if so, mfx headers
without the mfx/ prefix can be included too.
If libmfx comes without pkg-config support, we may make a small
change to the environment settings (e.g. set
-I/opt/intel/mediasdk/include/mfx instead of
-I/opt/intel/mediasdk/include in CFLAGS); then the build can find the
mfx headers without the mfx/ prefix.
After applying this change, we won't need to change the #include
directives for mfx headers when they are installed under a new
directory. This is in preparation for oneVPL support (mfx headers in
oneVPL are installed under the vpl directory).
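The resulting change to the include style, for example:

/* before: relies on the parent include directory */
#include <mfx/mfxvideo.h>
/* after: found via -I${includedir}/mfx (or the vpl directory in oneVPL) */
#include <mfxvideo.h>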
The data structures for VP9 in mfxvp9.h are wrapped in
MFX_VERSION_NEXT, which means those data structures have never been
available in a public release. Actually, MFX_CODEC_VP9 and the other
VP9 definitions are added in mfxstructures.h. In addition, mfxdefs.h
is included in mfxvp9.h, so we may use the check in this patch for
MFX_CODEC_VP9.
This is in preparation for oneVPL support, because mfxvp9.h was
removed from oneVPL [1].
[1]: https://github.com/oneapi-src/oneVPL
Intel's oneVPL is a successor to MediaSDK, but it removed some
obsolete MediaSDK features [1]; some early versions of oneVPL still
use libmfx as the library name [2]. However, some of the obsolete
features, including OPAQUE memory, multi-frame encode, user plugins
and the LA_EXT rate control mode, have been enabled in QSV, so users
cannot use --enable-libmfx to enable QSV when using an early version
of the oneVPL SDK. In order to ensure users build FFmpeg against the
right version of libmfx, this patch adds a check for version < 2.0
and a warning message about the obsolete features in use.
[1] https://spec.oneapi.io/versions/latest/elements/oneVPL/source/VPL_intel_media_sdk.html
[2] https://github.com/oneapi-src/oneVPL
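A sketch of such a version gate (MFX_VERSION is major * 1000 + minor
in mfxdefs.h; the message text is illustrative):

#include <mfxdefs.h>

#if MFX_VERSION >= 2000
#error "libmfx >= 2.0 is oneVPL, which lacks the obsolete features listed above"
#endif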
It is the proper place to set it, directly beside mb_width and
mb_stride. The reason for doing it the way it is done now seems to be
that the code does not create more slice contexts than necessary
(i.e. not more than one per row), so this number needs to be known
before setting the number of slices. But that can always be arranged
by simply moving the code that sets the number of slices.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>