The "AYUV" format is defined by Microsoft as their preferred format for
4:4:4 content, and so it is the format used by Intel VAAPI and QSV.
As Microsoft like to define their byte ordering in little-endian
fashion, the memory order is reversed, and so our pix_fmt, which
follows memory order, has a reversed name (VUYA).
The only duration field currently present in AVFrame is pkt_duration,
which is semantically restricted to those frames that are output by
decoders.
Add a new field that stores the frame's duration without regard for how
that frame was produced. Deprecate pkt_duration.
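A minimal sketch of the intended usage (the caller's time-base handling is elided; see frame.h for the authoritative documentation):

    #include <libavutil/frame.h>

    static void tag_duration(AVFrame *frame, int64_t dur)
    {
        /* new, origin-agnostic field (expressed in the relevant time base) */
        frame->duration = dur;
    }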
This commit moves some of the functionality from avfilter/colorspace
into avutil/csp and exposes it as a public API so it can be used by
libavcodec and/or libavformat. It also converts those structs from
double values to AVRational to make regression testing easier and
more consistent.
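A hedged sketch of using the new public API; the names (av_csp_luma_coeffs_from_avcsp, AVLumaCoefficients) reflect my reading of avutil/csp.h and should be treated as assumptions:

    #include <libavutil/csp.h>
    #include <libavutil/log.h>

    static void print_bt709_luma(void)
    {
        /* exact rational coefficients instead of doubles */
        const AVLumaCoefficients *luma = av_csp_luma_coeffs_from_avcsp(AVCOL_SPC_BT709);
        if (luma)
            av_log(NULL, AV_LOG_INFO, "Kr = %d/%d\n", luma->cr.num, luma->cr.den);
    }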
Signed-off-by: Ronald S. Bultje <rsbultje@gmail.com>
Since every DLL can use an individual CRT on Windows, having
an exported function that opens a FILE* won't work if that
FILE* is going to be used from a different DLL (or from user
application code).
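A conceptual sketch of the failure mode (not FFmpeg API; dll_a_fopen_utf8 is a hypothetical helper exported from another DLL):

    #include <stdio.h>

    FILE *dll_a_fopen_utf8(const char *path, const char *mode); /* lives in DLL A */

    static void broken_user_code(void)
    {
        /* The FILE* below belongs to DLL A's CRT; this module may be linked
         * against a different CRT, so fprintf/fclose here are undefined. */
        FILE *f = dll_a_fopen_utf8("out.txt", "w");
        fprintf(f, "hello\n");
        fclose(f);
    }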
Internally within the libraries, the issue can be worked around by renaming
the function to ff_fopen_utf8 (so that it doesn't end up exported from the
DLLs) and duplicating it in all libraries that use it (this duplication
already happened implicitly because the function resided in file_open.c).
This makes the avpriv_fopen_utf8 / ff_fopen_utf8 function work in
the exact same way as the existing avpriv_open / ff_open, with the
same setup as introduced in e743e7ae6e.
That mechanism doesn't work for external users, thus deprecate the
existing function.
Signed-off-by: Martin Storsjö <martin@martin.st>
The new API is more extensible and allows for custom layouts.
More accurate information is exported; e.g., for decoders that do not
set a channel layout, lavc will not make one up for them.
Deprecate the old API working with just uint64_t bitmasks.
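A minimal sketch of the new API (error handling omitted; see channel_layout.h):

    #include <libavutil/channel_layout.h>

    static void describe_5point1(void)
    {
        AVChannelLayout layout = { 0 };
        char buf[64];

        av_channel_layout_from_mask(&layout, AV_CH_LAYOUT_5POINT1);
        av_channel_layout_describe(&layout, buf, sizeof(buf)); /* e.g. "5.1(side)" */
        av_channel_layout_uninit(&layout);
    }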
Expanded and completed by Vittorio Giovara <vittorio.giovara@gmail.com>
and James Almer <jamrial@gmail.com>.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Users should switch to the superior AVFifo API.
Unfortunately AVFifoBuffer fields cannot be marked as deprecated because
it would trigger a warning wherever fifo.h is #included, due to
inlined av_fifo_peek2().
Many AVFifoBuffer users operate on fixed-size elements (e.g. pointers),
but the current FIFO API deals exclusively in bytes, requiring extra
complexity in all these callers.
Add a new AVFifo API creating a FIFO with an element size
that may be larger than a byte. All operations on such a FIFO then
operate on complete elements.
This API does not reuse AVFifoBuffer and its API at all, but instead uses
an opaque struct called AVFifo. The AVFifoBuffer API will be deprecated
in a future commit once all of its users have been switched to the new
API.
Not reusing AVFifoBuffer also allowed using the full range of size_t
from the beginning.
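A minimal sketch of the element-based API (error handling trimmed):

    #include <libavutil/fifo.h>
    #include <libavutil/frame.h>

    static void fifo_of_frame_pointers(void)
    {
        AVFrame *in  = av_frame_alloc();
        AVFrame *out = NULL;
        AVFifo  *fifo = av_fifo_alloc2(8, sizeof(AVFrame *), AV_FIFO_FLAG_AUTO_GROW);

        av_fifo_write(fifo, &in, 1);   /* one element == one AVFrame pointer */
        av_fifo_read(fifo, &out, 1);   /* out == in */

        av_fifo_freep2(&fifo);
        av_frame_free(&out);
    }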
av_fifo_peek2() returns a pointer inside the FIFO's buffer, which cannot be
safely used without accessing AVFifoBuffer internals. It is easier and safer
to use av_fifo_generic_peek_at().
This is done a second time for 5.0 because master was
merged into 5.0 so that it contains the recent DOVI additions.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
In order to be able to extend this struct later (as the Dolby Vision RPU
evolves), all of the 'container' structs are considered extensible, and
the individual constituent fields must instead be accessed via offsets.
The precedent for this style of access is set in
<libavutil/detection_bbox.h>
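A hedged sketch of the offset-based access pattern; the accessor names follow dovi_meta.h as I understand it and should be treated as assumptions:

    #include <libavutil/dovi_meta.h>
    #include <libavutil/frame.h>

    static void read_dovi(const AVFrame *frame)
    {
        AVFrameSideData *sd = av_frame_get_side_data(frame, AV_FRAME_DATA_DOVI_METADATA);
        if (sd) {
            const AVDOVIMetadata *dovi = (const AVDOVIMetadata *) sd->data;
            /* constituent structs are reached via offsets, not direct members */
            const AVDOVIRpuDataHeader *hdr = av_dovi_get_header(dovi);
            const AVDOVIDataMapping   *map = av_dovi_get_mapping(dovi);
            const AVDOVIColorMetadata *col = av_dovi_get_color(dovi);
            (void)hdr; (void)map; (void)col;
        }
    }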
Signed-off-by: Niklas Haas <git@haasn.dev>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
There is no reason to wrap them in #ifndef guards, they should only be
defined here and nowhere else. The define guards just add the
possibility of accidentally using the same FF_API name in different
libraries.
The new format (given in big/little endian forms) matches the
existing X2RGB10 format, except with B and R channels switched.
AV_PIX_FMT_X2BGR10 data is often created by OpenGL programs
whose buffers use the GL_RGB10 internal format.
Signed-off-by: Manuel Stoeckl <code@mstoeckl.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
It does the same as av_calloc(), so one of them should be removed.
Given that av_calloc() has the shorter name, it is retained.
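Migration is mechanical (a sketch; nb_items stands in for the caller's element count):

    #include <libavutil/mem.h>

    static int *alloc_table(size_t nb_items)
    {
        /* before: return av_mallocz_array(nb_items, sizeof(int)); */
        return av_calloc(nb_items, sizeof(int)); /* same semantics, shorter name */
    }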
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Microsoft VideoProcessor requires a texture with the D3DUSAGE_RENDERTARGET flag as output.
There is no way to allocate an array of textures with the D3D11_BIND_RENDER_TARGET flag
and .ArraySize > 2 via ID3D11Device_CreateTexture2D due to a Microsoft limitation.
Adding an AVD3D11FrameDescriptors array that stores an array of single textures
instead of one texture with multiple slices resolves this.
Signed-off-by: Artem Galin <artem.galin@intel.com>
In particular, document that av_opt_copy() always disentangles
allocated options even on error; this guarantee is needed to e.g.
properly free duplicated thread contexts in libavcodec on error.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The reason the generic av_image_copy_uc_from() does not really fit the
Vulkan case is that some planes may be copied via other methods (such as
mapping GPU memory), and if they do not satisfy the strict alignment
requirements, a GPU image -> GPU buffer -> CPU RAM copy is performed.
We need this for hwcontext_vulkan, and I think this will also be
useful to API users like libplacebo who would rather not write
a custom SIMD memcpy.
common.h currently contains several things: Math macros, UTF-8 macros,
other fundamental macros; furthermore it also contains miscellaneous
math functions and it (directly and indirectly) includes lots of other
headers.
This commit moves the "other fundamental macros" to macros.h which is
a more fitting place.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Teach AV_HWDEVICE_TYPE_VIDEOTOOLBOX to be able to create AVFrames of type
AV_PIX_FMT_VIDEOTOOLBOX. This can be used to hwupload a regular AVFrame
into its CVPixelBuffer equivalent.
ffmpeg -init_hw_device videotoolbox -f lavfi -i color=black:640x480 -vf hwupload -c:v h264_videotoolbox -f null -y /dev/null
Signed-off-by: Aman Karmani <aman@tmm1.net>
Originally deprecated in 1296b1f6c0631ab79464e22d48a6a1548450b943;
scheduled again for removal in a991526832.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
av_set_cpu_flags_mask() has been deprecated in the commit which merged
it: 6df42f98746be06c883ce683563e07c9a2af983f; av_parse_cpu_flags() has
been deprecated in 4b529edff8.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
av_adler32_update() is used by av_hash_update() which will be switched
to size_t at the next bump. So it also has to be made to use size_t.
This is also necessary for framecrcenc.c, because the size of side data
will become a size_t, too.
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Base escaping only escapes the characters required for base character data
according to section 2.4 of the XML specification; if additional flags are
set, single and double quotes can additionally be escaped in order to handle
single- and double-quoted attributes.
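A hedged sketch; the mode and flag names (AV_ESCAPE_MODE_XML, AV_ESCAPE_FLAG_XML_SINGLE_QUOTES) reflect my reading of avstring.h and should be double-checked:

    #include <libavutil/avstring.h>
    #include <libavutil/mem.h>

    static void escape_attr_value(void)
    {
        char *escaped = NULL;
        /* base mode escapes what character data requires; the flag additionally
         * escapes ' so the result can go into a single-quoted attribute */
        if (av_escape(&escaped, "a < b & 'c'", NULL,
                      AV_ESCAPE_MODE_XML, AV_ESCAPE_FLAG_XML_SINGLE_QUOTES) >= 0)
            av_free(escaped);
    }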
Co-authored-by: Jan Ekström <jan.ekstrom@24i.com>
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
This commit adds support for in-place FFT transforms. Since our
internal transforms were all in-place anyway, this only changes
the permutation on the input.
Unfortunately, research papers were of no help here. All focused
on dry hardware implementations, where permutes are free, or on
software implementations where binary bloat is of no concern, so
storing the transforms a dozen times, once for each permutation and
version, is not considered bad practice.
Still, for a pure C implementation, it's only around 28% slower
than the multi-megabyte FFTW3 in unaligned mode.
Unlike a closed permutation like with PFA, split-radix FFT bit-reversals
contain multiple NOPs, multiple simple swaps, and a few chained swaps,
so regular single-loop single-state permute loops were not possible.
Instead, we filter out parts of the input indices which are redundant.
This allows for a single branch, and with some clever AVX512 asm,
could possibly be SIMD'd without refactoring.
The inplace_idx array is guaranteed to never be larger than the
revtab array, and in practice only requires around log2(len) entries.
The power-of-two MDCTs can be done in-place as well. And it's
possible to eliminate a copy in the compound MDCTs too, however
it'll be slower than doing them out of place, and we'd need to dirty
the input array.
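A minimal sketch of requesting an in-place transform with the new flag:

    #include <libavutil/tx.h>

    static void inplace_fft(AVComplexFloat *buf) /* buf holds 1024 complex samples */
    {
        AVTXContext *ctx = NULL;
        av_tx_fn     fft = NULL;
        float        scale = 1.0f;

        if (av_tx_init(&ctx, &fft, AV_TX_FLOAT_FFT, 0, 1024, &scale, AV_TX_INPLACE) >= 0) {
            fft(ctx, buf, buf, sizeof(AVComplexFloat)); /* out == in is now allowed */
            av_tx_uninit(&ctx);
        }
    }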
This patch adds support for arbitrary-point FFTs and all even MDCT
transforms.
Odd MDCTs are not supported yet as they're based on the DCT-II and DCT-III
and they're very niche.
With this we can now write tests.
Do it only when requested with the AV_CODEC_EXPORT_DATA_VIDEO_ENC_PARAMS
flag.
Drop previous code using the long-deprecated AV_FRAME_DATA_QP_TABLE*
API. Temporarily disable fate-filter-pp, fate-filter-pp7,
fate-filter-spp. They will be re-enabled once these filters are converted
in the following commits.
This patch introduces a new frame side data type AVFilmGrainParams for use
with video codecs which support it.
It can save a lot of memory used for duplicate processed reference frames and
reduce copies when applying film grain during presentation.
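A hedged sketch of a decoder attaching the side data (frame is the decoder's output AVFrame):

    #include <libavutil/film_grain_params.h>

    static void attach_film_grain(AVFrame *frame)
    {
        AVFilmGrainParams *fgp = av_film_grain_params_create_side_data(frame);
        if (fgp) {
            fgp->type = AV_FILM_GRAIN_PARAMS_AV1; /* codec-specific params follow */
            fgp->seed = 0x5555;
        }
    }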
A common pattern e.g. in libavcodec is replacing/updating buffer
references: unref old one, ref new one. This function allows simplifying
such code and avoiding unnecessary refs+unrefs if the references are
already equivalent.
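A minimal sketch of the simplification (dst and src stand in for any two AVBufferRef pointers held by the caller):

    #include <libavutil/buffer.h>

    static int update_ref(AVBufferRef **dst, const AVBufferRef *src)
    {
        /* before: av_buffer_unref(dst); *dst = av_buffer_ref(src); + error check */
        return av_buffer_replace(dst, src); /* no-op if both refer to the same data */
    }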
Requires some additional top side and bottom front channels to be
defined.
According to STD-B59v2, the defined channel layout is:
- FL
- FR
- FC
- LFE1
- BL
- BR
- FLc
- FRc
- BC
- LFE2
- SiL
- SiR
- TpFL
- TpFR
- TpFC
- TpC
- TpBL
- TpBR
- TpSiL
- TpSiR
- TpBC
- BtFC
- BtFL
- BtFR
This utility helps avoid undefined behavior when doing things like
checking how much memory we need to allocate for an image before we have
allocated a buffer.
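A minimal sketch of computing the required image size without allocating anything:

    #include <libavutil/imgutils.h>

    static size_t required_size(enum AVPixelFormat fmt, int w, int h)
    {
        int       linesizes[4];
        ptrdiff_t linesizes1[4];
        size_t    sizes[4], total = 0;

        if (av_image_fill_linesizes(linesizes, fmt, w) < 0)
            return 0;
        for (int i = 0; i < 4; i++)
            linesizes1[i] = linesizes[i];
        if (av_image_fill_plane_sizes(sizes, fmt, h, linesizes1) < 0)
            return 0;
        for (int i = 0; i < 4; i++)
            total += sizes[i]; /* size_t arithmetic, no int overflow */
        return total;
    }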
Signed-off-by: Brian Kim <bkkim@google.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Use opaque iteration state instead of the previous child class. This
mirrors similar changes done in lavf/lavc.
Deprecate the av_opt_child_class_next() API.
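A minimal sketch of the new iteration style (parent is any AVClass with children, e.g. the one returned by avformat_get_class()):

    #include <stdio.h>
    #include <libavutil/opt.h>

    static void list_child_classes(const AVClass *parent)
    {
        void *iter = NULL;
        const AVClass *child;

        while ((child = av_opt_child_class_iterate(parent, &iter)))
            printf("child class: %s\n", child->class_name);
    }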
This allows users who derive devices to set options for the
new device context they derive.
The main use case of this is to allow users to enable extensions
(such as surface drawing extensions) in Vulkan while deriving from
the device their frames are on. That way, users don't need to write
any initialization code themselves, since the Vulkan spec invalidates
mixing instances, physical devices and active devices.
Apart from Vulkan, other hwcontexts ignore the opts argument since they
don't support options at all (or in VAAPI and OpenCL's case, options are
currently only used for device selection, which device_derive overrides).
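A hedged sketch (the option key is hypothetical; drm_dev is an already-created DRM device reference supplied by the caller):

    #include <libavutil/hwcontext.h>
    #include <libavutil/dict.h>

    static int derive_vulkan(AVBufferRef **vk_dev, AVBufferRef *drm_dev)
    {
        AVDictionary *opts = NULL;
        int ret;

        av_dict_set(&opts, "debug", "1", 0); /* hypothetical option name */
        ret = av_hwdevice_ctx_create_derived_opts(vk_dev, AV_HWDEVICE_TYPE_VULKAN,
                                                  drm_dev, opts, 0);
        av_dict_free(&opts);
        return ret;
    }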
This will be used for AVCodecContext->profile. By specifying constants in the
encoders we won't have to use the common AVCodecContext options table and
different encoders can use the same profile name even with different values.
Signed-off-by: Marton Balint <cus@passwd.hu>
This is intended to replace the deprecated AV_FRAME_DATA_QP_TABLE*
API and extend it to a wider range of codecs.
In the future, it may also be extended to support other encoding
parameters such as motion vectors.
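A hedged sketch of the export path in a decoder (names follow video_enc_params.h as I understand it; the type constant is chosen only for illustration):

    #include <libavutil/video_enc_params.h>

    static void export_qp(AVFrame *frame, unsigned nb_blocks, int base_qp)
    {
        AVVideoEncParams *par =
            av_video_enc_params_create_side_data(frame, AV_VIDEO_ENC_PARAMS_MPEG2,
                                                 nb_blocks);
        if (par) {
            par->qp = base_qp;                                /* frame-level QP */
            AVVideoBlockParams *b = av_video_enc_params_block(par, 0);
            b->delta_qp = 2;                                  /* per-block offset */
        }
    }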
Additional changes by Anton Khirnov <anton@khirnov.net> with suggestions
by Lynne <dev@lynne.ee>.
Signed-off-by: Juan De León <juandl@google.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
This solves a huge oversight - it lets users reliably use their own
AVVulkanDeviceContext. Otherwise, the extensions supplied and enabled
are not discoverable by anything outside of hwcontext_vulkan.
Also clarifies that any user-supplied VkInstance must be at least 1.1.
Bump the minor version for the DOVI side data, because dovi_meta.h was added
as part of the lavu public API. Also update APIchanges.
Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
Required minimal changes to the code, so it made sense to implement.
FFT and MDCT tested, the output of both was properly rounded.
Fun fact: the non-power-of-two fixed-point FFT and MDCT are the fastest ever
non-power-of-two fixed-point FFT and MDCT written.
This can replace the power-of-two integer MDCTs in aac and ac3 if the
MIPS optimizations are ported across.
Unfortunately the ac3 encoder uses a 16-bit fixed-point forward transform,
unlike the decoder, which uses a 32-bit inverse transform, so some
modifications might be required there.
The 3-point FFT is somewhat less accurate than it otherwise could be,
having minor rounding errors with bigger transforms. However, this
could be improved later, and the way it's currently written is the way one
would write assembly for it.
Similar rounding errors can be found throughout the power-of-two FFTs
as well, though those are more difficult to correct.
Despite this, the integer transforms are more than accurate enough.
Compared to ad-hoc if(printed) ... code, this allows the user to disable
it by adjusting the log level.
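A hedged sketch; I am assuming the helper is av_log_once() with (avcl, initial_level, subsequent_level, state, fmt, ...) as declared in log.h:

    #include <libavutil/log.h>

    static void warn_deprecated(void *ctx)
    {
        static int state; /* normally kept in the context, not in a static */
        /* first call logs at WARNING, repeats at DEBUG, so raising the log
         * level silences the repeats */
        av_log_once(ctx, AV_LOG_WARNING, AV_LOG_DEBUG, &state,
                    "deprecated option in use\n");
    }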
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
In order to access the original opaque parameter of a buffer in the buffer
pool. (The buffer pool implementation overrides the normal opaque parameter but
also saves it so it is accessible).
v2: add assertion check before dereferencing the BufferPoolEntry.
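A minimal sketch (pool is an AVBufferPool created with av_buffer_pool_init2(), whose alloc callback received the opaque being queried here):

    #include <libavutil/buffer.h>

    static void *peek_opaque(AVBufferPool *pool)
    {
        AVBufferRef *buf = av_buffer_pool_get(pool);
        void *opaque = buf ? av_buffer_pool_buffer_get_opaque(buf) : NULL;
        av_buffer_unref(&buf);
        return opaque;
    }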
Signed-off-by: Marton Balint <cus@passwd.hu>
1) Some filters allow cross-referenced expressions, e.g. x=y+10. In
such cases, filters evaluate expressions multiple times for
successful evaluation of all expressions. If the expression for one or
more variables contains an RNG, the result may vary across evaluations,
leading to inconsistent values across the cross-referenced expressions.
2) A related case is circular expressions, e.g. x=y+10 and y=x+10, which
cannot be successfully resolved.
3) Certain filter variables may only be applicable in specific eval modes
and lead to a failure of evaluation in other modes, e.g. pts is only
relevant for frame eval mode.
At present, there is no reliable means to identify these occurrences and
thus the error messages provided are broad or inaccurate. The helper
function introduced - av_expr_count_vars - allows developers to identify
the use and count of variables in expressions and thus tailor the error
message, allow for a graceful fallback and/or decide evaluation order.
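A hedged sketch; I am assuming the signature int av_expr_count_vars(AVExpr *e, unsigned *counter, int size) from eval.h:

    #include <libavutil/eval.h>

    static const char *const var_names[] = { "x", "y", NULL };

    static int x_references_y(void)
    {
        unsigned counts[2] = { 0 };
        AVExpr *e = NULL;
        int ret = 0;

        if (av_expr_parse(&e, "y+10", var_names, NULL, NULL, NULL, NULL, 0, NULL) >= 0) {
            av_expr_count_vars(e, counts, 2);
            ret = counts[1] > 0; /* "y" is used: evaluate y before x, or report a
                                    precise error for circular references */
            av_expr_free(e);
        }
        return ret;
    }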
This is an alias for JEDEC P22.
The name associated with the value is also changed
from jedec-p22 to ebu3213 to match ITU-T H.273.
Signed-off-by: Raphaël Zumer <rzumer@tebako.net>
Signed-off-by: James Almer <jamrial@gmail.com>
Simply moves and templates the actual transforms to support an
additional data type.
Unlike the float version, which is equal to or better than libfftw3f,
the double precision output is bit-identical with libfftw3.
FF_DECODE_ERROR_CONCEALMENT_ACTIVE is set when the decoded frame has error(s)
but the return value of avcodec_receive_frame() is zero, i.e. the errors were
concealed.
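Usage sketch on the caller side:

    #include <libavcodec/avcodec.h>

    static int got_concealed_frame(AVCodecContext *avctx, AVFrame *frame)
    {
        return avcodec_receive_frame(avctx, frame) == 0 &&
               (frame->decode_error_flags & FF_DECODE_ERROR_CONCEALMENT_ACTIVE);
    }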
Signed-off-by: Amir Pauker <amir@livelyvideo.tv>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
These are the 4:4:4 variants of the semi-planar NV12/NV21 formats.
These formats are not used much, so we've never had a reason to add
them until now. VDPAU recently added support for HEVC 4:4:4 content,
and when you use the OpenGL interop, the returned surfaces are in
NV24 format, so we need the pixel format for media players, even
if there's no direct use within ffmpeg.
Separately, there are apparently webcams that use NV24, but I've
never seen one.
New VdpYCbCr formats VDP_YCBCR_FORMAT_Y_U_V_444 and
VDP_YCBCR_FORMAT_Y_UV_444 have been added in VDPAU with libvdpau-1.2
to be used in get/putbits for YUV 4:4:4 surfaces. The earlier mapping of
AV_PIX_FMT_YUV444P to VDP_YCBCR_FORMAT_YV12 is not valid.
Hence this change maps AV_PIX_FMT_YUV444P to VDP_YCBCR_FORMAT_Y_U_V_444
to access the YUV 4:4:4 surface via the read-back APIs of VDPAU.
Encoders such as libx264 support different QP offsets for different MBs,
which makes ROI-based encoding possible. It makes sense to add support
within ffmpeg to generate/accept ROI info and pass it to encoders.
Typical usage: after an AVFrame is decoded, an ffmpeg filter or the user's
code generates ROI info for that frame, and the encoder finally does the
ROI-based encoding.
The ROI info is maintained as side data of the AVFrame.
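A hedged sketch of attaching ROI info to a frame before encoding (AVRegionOfInterest as declared in frame.h):

    #include <libavutil/frame.h>

    static void mark_roi(AVFrame *frame)
    {
        AVFrameSideData *sd = av_frame_new_side_data(frame,
                                                     AV_FRAME_DATA_REGIONS_OF_INTEREST,
                                                     sizeof(AVRegionOfInterest));
        if (sd) {
            AVRegionOfInterest *roi = (AVRegionOfInterest *)sd->data;
            roi->self_size = sizeof(*roi);
            roi->top     = 0;
            roi->bottom  = 64;
            roi->left    = 0;
            roi->right   = 64;
            roi->qoffset = (AVRational){ -1, 10 }; /* negative -> better quality here */
        }
    }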
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
The dynamic metadata contains data for the color volume transform,
application 4 of the SMPTE 2094-40:2016 standard. The data comes from
HEVC in the SEI_TYPE_USER_DATA_REGISTERED_ITU_T_T35.
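A hedged sketch of attaching the metadata on the decoder side:

    #include <libavutil/hdr_dynamic_metadata.h>

    static void attach_hdr10plus(AVFrame *frame)
    {
        AVDynamicHDRPlus *hdrplus = av_dynamic_hdr_plus_create_side_data(frame);
        if (hdrplus)
            hdrplus->num_windows = 1; /* then fill in the parsed ST 2094-40 values */
    }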
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
This was marked as deprecated (but only in the doxygen, not with an
actual deprecation attribute) in 81c623fae0 in 2011, but was
undeprecated in ad1ee5fa7.
Signed-off-by: Martin Storsjö <martin@martin.st>
This is needed because of 32-bit float formats (which are difficult to
store in 16 bits).
This also fixes undefined behavior found by FATE.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
PSEUDOPAL pixel formats are not paletted, but carried a palette with the
intention of allowing code to treat unpaletted formats as paletted. The
palette simply mapped the byte values to the resulting RGB values,
making it some sort of LUT for RGB conversion.
It was used for 1 byte formats only: RGB4_BYTE, BGR4_BYTE, RGB8, BGR8,
GRAY8. The first 4 are awfully obscure, used only by some ancient bitmap
formats. The last one, GRAY8, is more common, but its treatment is
grossly incorrect. It considers full range GRAY8 only, so GRAY8 coming
from typical Y video planes was not mapped to the correct RGB values.
This cannot be fixed, because AVFrame.color_range can be freely changed
at runtime, and there is nothing to ensure the pseudo palette is
updated.
Also, nothing actually used the PSEUDOPAL palette data, except xwdenc
(trivially changed in the previous commit). All other code had to treat
it as a special case, just to ignore or to propagate palette data.
In conclusion, this was just a very strange old mechanism that has no
real justification to exist anymore (although it may have been nice and
useful in the past). Now it's an artifact that makes the API harder to
use: API users who allocate their own pixel data have to be aware that
they need to allocate the palette, or FFmpeg will crash on them in
_some_ situations. On top of this, there was no API to allocate the
pseudo palette outside of av_frame_get_buffer().
This patch not only deprecates AV_PIX_FMT_FLAG_PSEUDOPAL, but also makes
the pseudo palette optional. Nothing accesses it anymore, though if it's
set, it's propagated. It's still allocated and initialized for
compatibility with API users that rely on this feature. But new API
users do not need to allocate it. This was an explicit goal of this
patch.
Most changes replace AV_PIX_FMT_FLAG_PSEUDOPAL with FF_PSEUDOPAL. I
first tried #ifdefing all code, but it was a mess. The FF_PSEUDOPAL
macro reduces the mess, and still allows defining FF_API_PSEUDOPAL to 0.
Passes FATE with FF_API_PSEUDOPAL enabled and disabled. In addition,
FATE passes with FF_API_PSEUDOPAL set to 1, but with the allocation
functions manually changed to not allocate a palette.
This new side-data will contain info on how a packet is encrypted.
This allows the app to handle packet decryption.
Signed-off-by: Jacob Trimble <modmaker@google.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This adds a way for an API user to transfer QP data and metadata without
having to keep the reference to AVFrame, and without having to
explicitly care about QP APIs. It might also provide a way to finally
remove the deprecated QP related fields. In the end, the QP table should
be handled in a very similar way to e.g. AV_FRAME_DATA_MOTION_VECTORS.
There are two side data types, because I didn't care about having to
repack the QP data so the table and the metadata are in a single
AVBufferRef. Otherwise it would have either required a copy on decoding
(extra slowdown for something as obscure as the QP data), or would have
required making intrusive changes to the codecs which support export of
this data.
The new side data types are added under deprecation guards, because I
don't intend to change the status of the QP export as being deprecated
(as it was before this patch too).
Add av_sat_sub32 and av_sat_dsub32 as the subtraction analogues to
av_sat_add32/av_sat_dadd32.
Also clarify the formulas for dadd32/dsub32.
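A hedged note on the intended equivalence (my reading of the documented dadd32 behaviour; double-check common.h):

    #include <stdint.h>
    #include <libavutil/common.h>

    static void sat_examples(void)
    {
        int32_t a = INT32_MIN + 5, b = 100;
        /* av_sat_dsub32(a, b) behaves like av_sat_sub32(a, av_sat_add32(b, b)),
         * mirroring av_sat_dadd32() on the addition side. */
        int32_t r1 = av_sat_sub32(a, b);   /* saturates to INT32_MIN */
        int32_t r2 = av_sat_dsub32(a, b);  /* also saturates to INT32_MIN */
        (void)r1; (void)r2;
    }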
Signed-off-by: Andrew D'Addesio <modchipv12@gmail.com>
This was added for compatibility with libav, by leaving a space for
formats added in libav to be merged. Since that feature has been
removed, we don't need a gap here.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
The fields can be accessed directly, so these are not needed anymore.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>