RDFTs are full of conventions that vary between implementations.
What I've gone for here is the convention shared by FFTW, avcodec's
rdft and our own code: the equivalent of DFT_R2C for the forward
transform and IDFT_C2R for the inverse. The
other 2 conventions (IDFT_R2C and DFT_C2R) were not used at
all in our code, and their names are also not appropriate.
If there's a use for either, we can easily add a flag which
would just flip the sign on one exptab.
For some unknown reason, possibly to allow reusing the FFT's exp tables,
av_rdft's C2R output is scaled to half of what it should be for a
lossless back-and-forth conversion.
This code outputs its real samples at the correct level, which
matches FFTW's level, and allows the user to change the level
and insert arbitrary multiplies for free by setting the scale option.
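As a hedged sketch of the intended round trip (assuming the lavu/tx API with
the AV_TX_FLOAT_RDFT type; the stride values follow the convention used by
other tx callers in the tree, and the 1.0/len inverse scale is just one
example of an "arbitrary multiply"):

    #include <libavutil/error.h>
    #include <libavutil/mem.h>
    #include <libavutil/tx.h>

    /* Sketch only: real-to-complex forward transform, then the complex-to-real
     * inverse scaled by 1.0f/len so the round trip returns the input at the
     * original level. */
    static int rdft_roundtrip(float *data, int len)
    {
        AVTXContext *fwd = NULL, *inv = NULL;
        av_tx_fn fwd_fn, inv_fn;
        AVComplexFloat *spectrum = av_malloc((len / 2 + 1) * sizeof(*spectrum));
        float fwd_scale = 1.0f, inv_scale = 1.0f / len;
        int ret;

        if (!spectrum)
            return AVERROR(ENOMEM);

        if ((ret = av_tx_init(&fwd, &fwd_fn, AV_TX_FLOAT_RDFT, 0, len, &fwd_scale, 0)) < 0 ||
            (ret = av_tx_init(&inv, &inv_fn, AV_TX_FLOAT_RDFT, 1, len, &inv_scale, 0)) < 0)
            goto end;

        fwd_fn(fwd, spectrum, data, sizeof(float));          /* DFT_R2C  */
        inv_fn(inv, data, spectrum, sizeof(AVComplexFloat)); /* IDFT_C2R */

    end:
        av_tx_uninit(&fwd);
        av_tx_uninit(&inv);
        av_free(spectrum);
        return ret;
    }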
This commit rewrites the internal transform code into a constructor
that stitches transforms (codelets).
This allows for transforms to reuse arbitrary parts of other
transforms, and allows transforms to be stacked onto one
another (such as a full iMDCT using a half-iMDCT which in turn
uses an FFT). It also permits each step to be individually
replaced by assembly or a custom implementation (such as an ASIC).
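Purely illustrative sketch of the idea (these are not the actual lavu/tx
internals, just a hypothetical shape of a codelet table):

    /* Each codelet advertises what it can do; the constructor picks the
     * highest-priority match and its init() may in turn request sub-transforms
     * (e.g. a half-iMDCT asking for an FFT), so every step can be swapped
     * independently for assembly or a custom backend. */
    typedef struct Codelet {
        const char *name;          /* e.g. "fft16_asm", "mdct_inv_half" */
        int  (*init)(void *ctx);   /* may construct sub-transforms */
        void (*apply)(void *ctx, void *out, void *in);
        int min_len, max_len;      /* transform lengths this codelet supports */
        int prio;                  /* higher = preferred */
    } Codelet;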
qHD is 960x540 (q stands for quarter) and QHD is 2560x1440 (Q stands for quad).
Use "quadhd" as the abbreviation for QHD.
Fix ticket #9591
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
Trying to write too much will currently overwrite previous data. Trying
to read too much will either trigger an av_assert2() in av_fifo_drain()
or return old data. Trying to peek too much will either trigger an
av_assert2() in av_fifo_generic_peek_at() or return old data.
Return an error code in all these cases, which is safer and more
consistent.
It returns a pointer inside the fifo's buffer, which cannot be safely
used without accessing AVFifoBuffer internals. It is easier and safer to
use av_fifo_generic_peek_at().
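A hedged sketch of how a caller would check these error returns with the old
AVFifoBuffer API (the helper name is just an example):

    #include <stdint.h>
    #include <libavutil/fifo.h>

    /* Sketch: read/peek requests larger than the buffered data should now fail
     * with an error code instead of asserting or returning stale bytes. */
    static int fifo_read_checked(AVFifoBuffer *f, uint8_t *dst, int size, int offset)
    {
        int ret;

        if ((ret = av_fifo_generic_peek_at(f, dst, offset, size, NULL)) < 0)
            return ret; /* e.g. not enough data buffered */

        if ((ret = av_fifo_generic_read(f, dst, size, NULL)) < 0)
            return ret;

        return 0;
    }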
The test /libavutil/tests/hwdevice checks that when deriving a device
from a source device and then deriving back to the type of the source
device, the result matches the original source device, i.e. the
derivation mechanism doesn't create a new device in this case.
Previously, this test usually passed, but only due to two different
kinds of flaws:
1. The test covers only a single level of derivation (and back)
It derives device Y from device X and then Y back to the type of X and
checks whether the result matches X.
What it doesn't check for are longer chains of derivation like:
CUDA1 > OpenCL2 > CUDA3 and then back to OpenCL4
In that case, the second derivation returns the first device (CUDA3 ==
CUDA1), but when deriving OpenCL4, hwcontext.c was creating a new
OpenCL4 context instead of returning OpenCL2, because there was no link
from CUDA1 to OpenCL2 (only a backward link from OpenCL2 to CUDA1).
If the test had checked two levels of derivation, it would have failed.
This patch fixes those (yet untested) cases by introducing forward
references (derived_device) in addition to the existing back references
(source_device).
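A hedged sketch of the two-level case described above (the device types are
just examples; the check compares the underlying contexts):

    #include <libavutil/error.h>
    #include <libavutil/hwcontext.h>

    /* Sketch: derive CUDA1 -> OpenCL2 -> CUDA3 and back to OpenCL4; with the
     * new forward references, cuda3 should be cuda1 and opencl4 should be
     * opencl2 instead of a newly created context. */
    static int check_two_level_derivation(AVBufferRef *cuda1)
    {
        AVBufferRef *opencl2 = NULL, *cuda3 = NULL, *opencl4 = NULL;
        int ret;

        if ((ret = av_hwdevice_ctx_create_derived(&opencl2, AV_HWDEVICE_TYPE_OPENCL, cuda1,   0)) < 0 ||
            (ret = av_hwdevice_ctx_create_derived(&cuda3,   AV_HWDEVICE_TYPE_CUDA,   opencl2, 0)) < 0 ||
            (ret = av_hwdevice_ctx_create_derived(&opencl4, AV_HWDEVICE_TYPE_OPENCL, cuda3,   0)) < 0)
            goto end;

        ret = (cuda3->data == cuda1->data && opencl4->data == opencl2->data)
              ? 0 : AVERROR(EINVAL);

    end:
        av_buffer_unref(&opencl2);
        av_buffer_unref(&cuda3);
        av_buffer_unref(&opencl4);
        return ret;
    }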
2. hwcontext_qsv didn't properly set the source_device
In case of QSV, hwcontext_qsv creates a source context internally
(vaapi, dxva2 or d3d11va) without calling av_hwdevice_ctx_create_derived
and without setting source_device.
This way, the hwcontext test ran successfully, but what practically
happened was that - for example - deriving vaapi from qsv didn't return
the original underlying vaapi device; a new one was created instead:
exactly what the test is intended to detect and prevent. It just
couldn't do so, because the original device was hidden (i.e. not set as
the source_device of the QSV device).
This patch sets source_device properly and fixes all derivation
scenarios.
(at a later stage, /libavutil/tests/hwdevice should be extended to check
longer derivation chains as well)
Reviewed-by: Lynne <dev@lynne.ee>
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Tested-by: Wenbin Chen <wenbin.chen@intel.com>
Signed-off-by: softworkz <softworkz@hotmail.com>
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
This is done a second time for 5.0 because master was
merged into 5.0 so that it contains the recent DOVI additions.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
In order to be able to extend this struct later (as the Dolby Vision RPU
evolves), all of the 'container' structs are considered extensible, and
the individual constituent fields must instead be accessed via offsets.
The precedent for this style of access is set in
<libavutil/detection_bbox.h>
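A hedged sketch of the offset-based access pattern itself (the struct and
helper names below are purely illustrative, modelled on the detection_bbox.h
approach, not the actual DOVI definitions):

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative only: the container records byte offsets to its constituent
     * structs instead of embedding them directly, so any of them can grow new
     * fields later without breaking the layout seen by callers. */
    typedef struct ExampleHeader    { int foo; /* may grow */ } ExampleHeader;
    typedef struct ExampleContainer {
        size_t header_offset; /* offset of the ExampleHeader in the allocation */
    } ExampleContainer;

    static inline ExampleHeader *example_get_header(const ExampleContainer *c)
    {
        return (ExampleHeader *)((uint8_t *)c + c->header_offset);
    }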
Signed-off-by: Niklas Haas <git@haasn.dev>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
We don't use it. It was copied from libplacebo's recommended defaults.
It creates problems with validation on Intel devices, where the driver
still advertises it, even though it's not usable without a swapchain.
Fix #7830
When we upload a frame that is not padded as MSDK requires, we create a
new AVFrame to copy the data into. The frame's padding area is
uninitialized, which causes run-to-run differences. For example, if we
run the following command several times we will get different outputs.
ffmpeg -init_hw_device qsv=qsv:hw -qsv_device /dev/dri/renderD128 \
-filter_hw_device qsv -f rawvideo -s 192x200 -pix_fmt p010 \
-i 192x200_P010.yuv -vf "format=nv12,hwupload=extra_hw_frames=16" \
-c:v hevc_qsv output.265
According to https://github.com/Intel-Media-SDK/MediaSDK/blob/master/doc/mediasdk-man.md#encoding-procedures
"Note: It is the application's responsibility to fill pixels outside
of crop window when it is smaller than frame to be encoded. Especially
in cases when crops are not aligned to minimum coding block size (16
for AVC, 8 for HEVC and VP9)"
Add a function that fills the padding area with the border pixels to fix
this run-to-run problem, and also move the new AVFrame to the global
structure to avoid redundant allocations and improve performance.
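A hedged sketch of the border-replication idea (illustrative only, not the
exact helper added by this patch; it handles a single 8-bit plane, while real
code also has to deal with interleaved NV12 chroma and 16-bit P010 samples):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Extend the rightmost column and bottom row of a plane into the padding
     * area so the encoder never reads uninitialized bytes. Widths are in bytes. */
    static void fill_plane_padding(uint8_t *data, ptrdiff_t linesize,
                                   int used_w, int used_h,
                                   int padded_w, int padded_h)
    {
        /* replicate the last valid byte of each line into the horizontal padding */
        for (int y = 0; y < used_h; y++) {
            uint8_t *line = data + y * linesize;
            memset(line + used_w, line[used_w - 1], padded_w - used_w);
        }
        /* replicate the last valid line into the vertical padding */
        for (int y = used_h; y < padded_h; y++)
            memcpy(data + y * linesize, data + (used_h - 1) * linesize, padded_w);
    }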
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
The previous implementation swapped the two halves of the plaintext. The
existing tests only decrypted data with a plaintext of all zeroes, which is
not affected by swapping the halves. Tests which detect the old buggy behavior
have been added.
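A hedged sketch of the kind of check such a test performs (using the libavutil
AES API in ECB mode on a single block; the key and plaintext values are
arbitrary):

    #include <string.h>
    #include <stdint.h>
    #include <libavutil/aes.h>
    #include <libavutil/mem.h>

    /* Sketch: encrypt a non-zero plaintext and decrypt it again; a decryption
     * that swaps the two halves of each block fails this round trip, which an
     * all-zero plaintext cannot detect. */
    static int aes_roundtrip_check(void)
    {
        static const uint8_t key[16] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 };
        uint8_t src[16], enc[16], dec[16];
        struct AVAES *aes = av_aes_alloc();
        int ret = 0;

        if (!aes)
            return -1;

        for (int i = 0; i < 16; i++)
            src[i] = i + 1; /* deliberately not all zeroes */

        av_aes_init(aes, key, 128, 0);           /* encrypt */
        av_aes_crypt(aes, enc, src, 1, NULL, 0);

        av_aes_init(aes, key, 128, 1);           /* decrypt */
        av_aes_crypt(aes, dec, enc, 1, NULL, 1);

        if (memcmp(src, dec, 16))
            ret = -1;

        av_free(aes);
        return ret;
    }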
Signed-off-by: Sebastian Kirmayer <ffmpeg@kirmayer.eu>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
LSX and LASX are LoongArch SIMD extensions.
They are enabled by default if the compiler supports them, and can be
disabled with '--disable-lsx' and '--disable-lasx'.
Change-Id: Ie2608ea61dbd9b7fffadbf0ec2348bad6c124476
Reviewed-by: Shiyou Yin <yinshiyou-hf@loongson.cn>
Reviewed-by: guxiwei <guxiwei-hf@loongson.cn>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Always require one semaphore per sw_format plane. This is what
the implementation uses and relies upon throughout. This was
a leftover from an earlier revision that was never needed.
When a Vulkan image is exported to DRM, the tiling needs to be
VK_IMAGE_TILING_DRM_FORMAT_MODIFIER_EXT. Add code to create the Vulkan
image using this tiling.
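A hedged sketch of the Vulkan side of this (standard Vulkan structures from
VK_EXT_image_drm_format_modifier; the usage flags and modifier list are just
examples):

    #include <vulkan/vulkan.h>

    /* Sketch: create an image with DRM-format-modifier tiling by chaining
     * VkImageDrmFormatModifierListCreateInfoEXT into VkImageCreateInfo. */
    static VkResult create_drm_tiled_image(VkDevice dev, VkFormat fmt,
                                           uint32_t width, uint32_t height,
                                           const uint64_t *modifiers,
                                           uint32_t nb_modifiers, VkImage *img)
    {
        VkImageDrmFormatModifierListCreateInfoEXT mod_info = {
            .sType                  = VK_STRUCTURE_TYPE_IMAGE_DRM_FORMAT_MODIFIER_LIST_CREATE_INFO_EXT,
            .drmFormatModifierCount = nb_modifiers,
            .pDrmFormatModifiers    = modifiers,
        };
        VkImageCreateInfo create_info = {
            .sType         = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
            .pNext         = &mod_info,
            .imageType     = VK_IMAGE_TYPE_2D,
            .format        = fmt,
            .extent        = { width, height, 1 },
            .mipLevels     = 1,
            .arrayLayers   = 1,
            .samples       = VK_SAMPLE_COUNT_1_BIT,
            .tiling        = VK_IMAGE_TILING_DRM_FORMAT_MODIFIER_EXT, /* needed for DRM export */
            .usage         = VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_TRANSFER_SRC_BIT,
            .sharingMode   = VK_SHARING_MODE_EXCLUSIVE,
            .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED,
        };

        return vkCreateImage(dev, &create_info, NULL, img);
    }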
Now the following command line works:
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format \
vaapi -i input_1080p.264 -vf "hwmap=derive_device=vulkan,format=vulkan, \
scale_vulkan=1920:1080,hwmap=derive_device=vaapi,format=vaapi" -c:v h264_vaapi output.264
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Further-modifications-by: Lynne <dev@lynne.ee>
Add support for mapping Vulkan frames to software frames when
the contiguous_planes flag is used.
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Further-modifications-by: Lynne <dev@lynne.ee>
VAAPI on Intel can import external frames, but the planes of the external
frames must be in the same DRM object. A new option "contiguous_planes"
is added to the device. This flag tells the device to allocate all planes
in one memory allocation. When the device is derived from VAAPI, this flag
is enabled. A new flag frame_flag is also added to AVVulkanFramesContext.
Users can use this flag to force-enable or disable this behaviour.
A new variable "offset" is added to AVVkFrame. It describes the offset
from the memory currently bound to the VkImage.
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Further-modifications-by: Lynne <dev@lynne.ee>
This way we can pass explicit modifiers in. Sometimes the
modifier matters for the number of memory planes that
libva accepts, in particular when dealing with
driver-compressed textures. Furthermore, the driver might
not actually be able to determine the implicit modifier
if all the buffer-passing has used explicit modifiers.
All these issues should be resolved by passing in the
modifier, and for that we switch to using the PRIME_2
memory type.
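A hedged sketch of a PRIME_2 import (using libva's VADRMPRIMESurfaceDescriptor
from va_drmcommon.h; a single-object, single-layer, single-plane case for
brevity, with format and fourcc left to the caller):

    #include <va/va.h>
    #include <va/va_drmcommon.h>

    /* Sketch: import a dmabuf with an explicit modifier via PRIME_2. Unlike the
     * older external-buffer path, the descriptor carries the DRM format
     * modifier, so the driver does not have to guess an implicit layout. */
    static VAStatus import_prime2_surface(VADisplay dpy, int fd, uint64_t modifier,
                                          uint32_t fourcc, uint32_t drm_format,
                                          uint32_t width, uint32_t height,
                                          uint32_t pitch, uint32_t offset,
                                          uint32_t size, VASurfaceID *surface)
    {
        VADRMPRIMESurfaceDescriptor desc = {
            .fourcc      = fourcc,
            .width       = width,
            .height      = height,
            .num_objects = 1,
            .objects[0]  = { .fd = fd, .size = size, .drm_format_modifier = modifier },
            .num_layers  = 1,
            .layers[0]   = { .drm_format = drm_format, .num_planes = 1,
                             .object_index = { 0 }, .offset = { offset }, .pitch = { pitch } },
        };
        VASurfaceAttrib attrs[2] = {
            { .type  = VASurfaceAttribMemoryType, .flags = VA_SURFACE_ATTRIB_SETTABLE,
              .value = { .type = VAGenericValueTypeInteger,
                         .value.i = VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME_2 } },
            { .type  = VASurfaceAttribExternalBufferDescriptor, .flags = VA_SURFACE_ATTRIB_SETTABLE,
              .value = { .type = VAGenericValueTypePointer, .value.p = &desc } },
        };

        return vaCreateSurfaces(dpy, VA_RT_FORMAT_YUV420, width, height,
                                surface, 1, attrs, 2);
    }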
Tested with experimental radeonsi patches for modifiers
and kmsgrab. Also tested with radeonsi without the
patches to double-check it works without PRIME_2 support.
v2:
Cache PRIME_2 support to avoid doing two calls every time on
libva drivers that do not support it.
v3:
Remove prime2_vas usage.
Signed-off-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Currently, it also tests whether extended_data points to something
other than the AVFrame's data array and frees extended_data
if it does. Yet this is only necessary for one of its three
callers, namely av_frame_unref(); meanwhile the other two callers
took measures to avoid this (or rather, to turn it into an av_free(NULL)).
This commit moves this chunk to av_frame_unref() (so that
get_frame_defaults() now treats its input as uninitialized)
and removes the now superfluous code in the other two callers.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The data stored in data[3] of a VAAPI AVFrame is a VASurfaceID, while
the data stored in pair->first is a pointer to a VASurfaceID, so
we need to cast to make the following command line work:
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
-hwaccel_output_format vaapi -i input.264 \
-vf "hwmap=derive_device=qsv,format=qsv" -c:v h264_qsv output.264
Signed-off-by: nyanmisaka <nst799610810@gmail.com>
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>