av_adler32_update() is used by av_hash_update(), which will be switched
to size_t at the next bump, so it also has to be made to use size_t.
This is also necessary for framecrcenc.c, because the size of side data
will become a size_t, too.
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
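For callers like framecrcenc.c, the post-bump call then looks roughly like this (a sketch, assuming the updated prototype takes a size_t length and returns the AVAdler type):
    #include <stddef.h>
    #include <stdint.h>
    #include "libavutil/adler32.h"

    /* Checksum a buffer; 1 is the conventional Adler-32 seed. */
    static uint32_t checksum(const uint8_t *data, size_t len)
    {
        return av_adler32_update(1, data, len); /* len is now a size_t */
    }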
Base escaping only escapes the characters required for base character
data according to section 2.4 of the XML specification. If additional
flags are added, single and double quotes can additionally be escaped
in order to handle single- and double-quoted attributes.
Co-authored-by: Jan Ekström <jan.ekstrom@24i.com>
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
This commit adds support for in-place FFT transforms. Since our
internal transforms were all in-place anyway, this only changes
the permutation on the input.
Unfortunately, research papers were of no help here. All focused
on dry hardware implementations, where permutes are free, or on
software implementations where binary bloat is of no concern, so
storing the transforms a dozen times over for each permutation and
version is not considered bad practice.
Still, for a pure C implementation, it's only around 28% slower
than the multi-megabyte FFTW3 in unaligned mode.
Unlike a closed permutation like with PFA, split-radix FFT bit-reversals
contain multiple NOPs, multiple simple swaps, and a few chained swaps,
so regular single-loop single-state permute loops were not possible.
Instead, we filter out parts of the input indices which are redundant.
This allows for a single branch, and with some clever AVX512 asm,
could possibly be SIMD'd without refactoring.
The inplace_idx array is guaranteed to never be larger than the
revtab array, and in practice only requires around log2(len) entries.
The power-of-two MDCTs can be done in-place as well. It's also
possible to eliminate a copy in the compound MDCTs, however that
would be slower than doing them out of place, and we'd need to dirty
the input array.
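A rough sketch of how the in-place mode is meant to be driven from the public API, assuming the AV_TX_INPLACE flag and the usual av_tx_init()/av_tx_fn convention:
    #include "libavutil/tx.h"

    /* Run a 1024-point float FFT without a separate output buffer. */
    static int fft_inplace(AVComplexFloat *data /* 1024 entries */)
    {
        AVTXContext *ctx = NULL;
        av_tx_fn     tx  = NULL;
        float        scale = 1.0f;
        int ret = av_tx_init(&ctx, &tx, AV_TX_FLOAT_FFT, 0 /* forward */,
                             1024, &scale, AV_TX_INPLACE);
        if (ret < 0)
            return ret;

        tx(ctx, data, data, sizeof(AVComplexFloat)); /* out == in is permitted */
        av_tx_uninit(&ctx);
        return 0;
    }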
This patch adds support for arbitrary-point FFTs and all even MDCT
transforms.
Odd MDCTs are not supported yet as they're based on the DCT-II and DCT-III
and they're very niche.
With this we can now write tests.
Do it only when requested with the AV_CODEC_EXPORT_DATA_VIDEO_ENC_PARAMS
flag.
Drop previous code using the long-deprecated AV_FRAME_DATA_QP_TABLE*
API. Temporarily disable fate-filter-pp, fate-filter-pp7,
fate-filter-spp. They will be re-enabled once these filters are converted
in the following commits.
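A hedged consumer-side sketch, assuming the data is exported as AV_FRAME_DATA_VIDEO_ENC_PARAMS side data holding an AVVideoEncParams struct:
    #include "libavcodec/avcodec.h"
    #include "libavutil/frame.h"
    #include "libavutil/video_enc_params.h"

    static void dump_enc_params(AVCodecContext *dec_ctx, const AVFrame *frame)
    {
        /* dec_ctx->export_side_data |= AV_CODEC_EXPORT_DATA_VIDEO_ENC_PARAMS;
         * must have been set before avcodec_open2() for the data to appear. */
        AVFrameSideData *sd =
            av_frame_get_side_data(frame, AV_FRAME_DATA_VIDEO_ENC_PARAMS);
        if (sd) {
            const AVVideoEncParams *par = (const AVVideoEncParams *)sd->data;
            av_log(dec_ctx, AV_LOG_INFO, "base QP %d, %u per-block entries\n",
                   (int)par->qp, par->nb_blocks);
        }
    }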
This patch introduces a new frame side data type AVFilmGrainParams for use
with video codecs which support it.
It can save a lot of memory used for duplicate processed reference frames and
reduce copies when applying film grain during presentation.
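A minimal producer-side sketch, assuming the helper ends up as av_film_grain_params_create_side_data() with type and seed fields as used here:
    #include "libavutil/error.h"
    #include "libavutil/film_grain_params.h"

    /* Attach AV1-style grain parameters to a decoded frame. */
    static int attach_film_grain(AVFrame *frame, uint64_t seed)
    {
        AVFilmGrainParams *fgp = av_film_grain_params_create_side_data(frame);
        if (!fgp)
            return AVERROR(ENOMEM);
        fgp->type = AV_FILM_GRAIN_PARAMS_AV1; /* assumed enum value for AV1 grain */
        fgp->seed = seed;
        /* fgp->codec.aom.* would be filled from the bitstream here */
        return 0;
    }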
A common pattern e.g. in libavcodec is replacing/updating buffer
references: unref old one, ref new one. This function allows simplifying
such code and avoiding unnecessary refs+unrefs if the references are
already equivalent.
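A sketch of the before/after pattern, assuming the new helper is av_buffer_replace() with (dst, src) ordering:
    #include "libavutil/buffer.h"
    #include "libavutil/error.h"

    /* Before: manual unref + ref, with an error path. */
    static int update_ref_old(AVBufferRef **dst, AVBufferRef *src)
    {
        av_buffer_unref(dst);
        if (src) {
            *dst = av_buffer_ref(src);
            if (!*dst)
                return AVERROR(ENOMEM);
        }
        return 0;
    }

    /* After: a single call, which is also a no-op when the refs are equivalent. */
    static int update_ref_new(AVBufferRef **dst, AVBufferRef *src)
    {
        return av_buffer_replace(dst, src);
    }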
Requires some additional top side and bottom front channels to be
defined (a short usage sketch follows the channel list below).
According to STD-B59v2, the defined channel layout is:
- FL
- FR
- FC
- LFE1
- BL
- BR
- FLc
- FRc
- BC
- LFE2
- SiL
- SiR
- TpFL
- TpFR
- TpFC
- TpC
- TpBL
- TpBR
- TpSiL
- TpSiR
- TpBC
- BtFC
- BtFL
- BtFR
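A tiny usage sketch, assuming the layout is exposed as AV_CH_LAYOUT_22POINT2:
    #include <stdint.h>
    #include "libavutil/channel_layout.h"

    static int count_22point2(void)
    {
        uint64_t layout = AV_CH_LAYOUT_22POINT2;          /* assumed macro name */
        return av_get_channel_layout_nb_channels(layout); /* expected to be 24 */
    }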
This utility helps avoid undefined behavior when doing things like
checking how much memory we need to allocate for an image before we have
allocated a buffer.
Signed-off-by: Brian Kim <bkkim@google.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Use opaque iteration state instead of the previous child class. This
mirrors similar changes done in lavf/lavc.
Deprecate the av_opt_child_class_next() API.
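A small sketch of the new iteration pattern, assuming the replacement is av_opt_child_class_iterate() with an opaque iterator:
    #include "libavutil/opt.h"

    /* List the names of all child classes of a given AVClass. */
    static void list_children(const AVClass *parent)
    {
        void *iter = NULL;
        const AVClass *child;
        while ((child = av_opt_child_class_iterate(parent, &iter)))
            av_log(NULL, AV_LOG_INFO, "%s\n", child->class_name);
    }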
This allows users who derive devices to set options for the
new device context they derive.
The main use case of this is to allow users to enable extensions
(such as surface drawing extensions) in Vulkan while deriving from
the device their frames are on. That way, users don't need to write
any initialization code themselves, since the Vulkan spec invalidates
mixing instances, physical devices and active devices.
Apart from Vulkan, other hwcontexts ignore the opts argument since they
don't support options at all (or in VAAPI and OpenCL's case, options are
currently only used for device selection, which device_derive overrides).
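A hedged sketch, assuming the new entry point is av_hwdevice_ctx_create_derived_opts() taking an AVDictionary of device-type-specific options:
    #include "libavutil/dict.h"
    #include "libavutil/hwcontext.h"

    /* Derive a Vulkan device from an existing device, forwarding options. */
    static int derive_vulkan(AVBufferRef **out, AVBufferRef *src_device)
    {
        AVDictionary *opts = NULL;
        int ret;

        /* Option keys are device-type specific; this one is purely illustrative. */
        av_dict_set(&opts, "debug", "1", 0);

        ret = av_hwdevice_ctx_create_derived_opts(out, AV_HWDEVICE_TYPE_VULKAN,
                                                  src_device, opts, 0);
        av_dict_free(&opts);
        return ret;
    }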
This will be used for AVCodecContext->profile. By specifying constants in the
encoders we won't have to use the common AVCodecContext options table and
different encoders can use the same profile name even with different values.
Signed-off-by: Marton Balint <cus@passwd.hu>
This is intended to replace the deprecated AV_FRAME_DATA_QP_TABLE*
API and extend it to a wider range of codecs.
In the future, it may also be extended to support other encoding
parameters such as motion vectors.
Additional changes by Anton Khirnov <anton@khirnov.net> with suggestions
by Lynne <dev@lynne.ee>.
Signed-off-by: Juan De León <juandl@google.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
This solves a huge oversight - it lets users reliably use their own
AVVulkanDeviceContext. Otherwise, the extensions supplied and enabled
are not discoverable by anything outside of hwcontext_vulkan.
Also clarifies that any user-supplied VkInstance must be at least 1.1.
Bump the minor version for the DOVI side data, because dovi_meta.h was
added as part of the lavu API. Also update APIchanges.
Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
Required minimal changes to the code, so it made sense to implement.
FFT and MDCT tested, the output of both was properly rounded.
Fun fact: the non-power-of-two fixed-point FFT and MDCT are the fastest ever
non-power-of-two fixed-point FFT and MDCT written.
This can replace the power of two integer MDCTs in aac and ac3 if the
MIPS optimizations are ported across.
Unfortunately the ac3 encoder uses a 16-bit fixed-point forward transform,
unlike the decoder, which uses a 32-bit inverse transform, so some
modifications might be required there.
The 3-point FFT is somewhat less accurate than it otherwise could be,
having minor rounding errors with bigger transforms. However, this
could be improved later, and the way it's currently written is the way one
would write assembly for it.
Similar rounding errors can be found throughout the power-of-two FFTs as
well, though those are more difficult to correct.
Despite this, the integer transforms are more than accurate enough.
Compared to ad-hoc if(printed) ... code, this allows the user to disable
the message by adjusting the log level.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
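A small sketch, assuming the helper introduced here is av_log_once() with separate initial/subsequent levels and a caller-provided state variable:
    #include "libavutil/log.h"

    static void warn_unsupported(void *ctx)
    {
        static int warned = 0;
        /* First call logs at WARNING; repeats are demoted to DEBUG, so they
         * can be silenced (or re-enabled) purely via the log level. */
        av_log_once(ctx, AV_LOG_WARNING, AV_LOG_DEBUG, &warned,
                    "feature X is not supported, ignoring\n");
    }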
In order to access the original opaque parameter of a buffer in the buffer
pool. (The buffer pool implementation overrides the normal opaque parameter but
also saves it so it is accessible).
v2: add assertion check before dereferencing the BufferPoolEntry.
Signed-off-by: Marton Balint <cus@passwd.hu>
1) Some filters allow cross-referenced expressions, e.g. x=y+10. In
such cases, filters evaluate expressions multiple times for
successful evaluation of all expressions. If the expression for one or
more variables contains an RNG, the result may vary across evaluations,
leading to inconsistent values across the cross-referenced expressions.
2) A related case is circular expressions, e.g. x=y+10 and y=x+10, which
cannot be successfully resolved.
3) Certain filter variables may only be applicable in specific eval modes
and lead to a failure of evaluation in other modes, e.g. pts is only
relevant for frame eval mode.
At present, there is no reliable means to identify these occurrences and
thus the error messages provided are broad or inaccurate. The helper
function introduced - av_expr_count_vars - allows developers to identify
the use and count of variables in expressions and thus tailor the error
message, allow for a graceful fallback and/or decide evaluation order.
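A hedged sketch of how a filter could use it, assuming av_expr_count_vars() fills one counter per entry of the variable-name array:
    #include "libavutil/eval.h"

    /* Report how often each named variable appears in "w/2 + random(0)*h". */
    static int count_example(void)
    {
        static const char *const var_names[] = { "w", "h", NULL };
        unsigned counts[2] = { 0 };
        AVExpr *e = NULL;
        int ret = av_expr_parse(&e, "w/2 + random(0)*h", var_names,
                                NULL, NULL, NULL, NULL, 0, NULL);
        if (ret < 0)
            return ret;
        ret = av_expr_count_vars(e, counts, 2); /* counts[0] -> "w", counts[1] -> "h" */
        av_expr_free(e);
        return ret;
    }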
This is an alias for JEDEC P22.
The name associated with the value is also changed
from jedec-p22 to ebu3213 to match ITU-T H.273.
Signed-off-by: Raphaël Zumer <rzumer@tebako.net>
Signed-off-by: James Almer <jamrial@gmail.com>
Simply moves and templates the actual transforms to support an
additional data type.
Unlike the float version, which is equal to or better than libfftw3f,
the double precision output is bit-identical to libfftw3.
FF_DECODE_ERROR_CONCEALMENT_ACTIVE is set when the decoded frame has error(s)
but the value returned from avcodec_receive_frame() is zero, i.e. the errors
were concealed.
Signed-off-by: Amir Pauker <amir@livelyvideo.tv>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
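A minimal sketch of how a caller would test for it (decode_error_flags lives on the AVFrame):
    #include "libavutil/frame.h"

    /* After a successful avcodec_receive_frame() call: */
    static int frame_had_concealed_errors(const AVFrame *frame)
    {
        return !!(frame->decode_error_flags & FF_DECODE_ERROR_CONCEALMENT_ACTIVE);
    }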
These are the 4:4:4 variants of the semi-planar NV12/NV21 formats.
These formats are not used much, so we've never had a reason to add
them until now. VDPAU recently added support for HEVC 4:4:4 content,
and when you use the OpenGL interop, the returned surfaces are in
NV24 format, so we need the pixel format for media players, even
if there's no direct use within ffmpeg.
Separately, there are apparently webcams that use NV24, but I've
never seen one.
New VdpYCbCr formats VDP_YCBCR_FORMAT_Y_U_V_444 and
VDP_YCBCR_FORMAT_Y_UV_444 have been added to VDPAU with libvdpau-1.2,
to be used in get/putbits for YUV 4:4:4 surfaces. The earlier mapping of
AV_PIX_FMT_YUV444P to VDP_YCBCR_FORMAT_YV12 is not valid.
Hence this change maps AV_PIX_FMT_YUV444P to VDP_YCBCR_FORMAT_Y_U_V_444
to access YUV 4:4:4 surfaces via the read-back APIs of VDPAU.
Encoders such as libx264 support different QP offsets for different MBs,
which makes ROI-based encoding possible. It makes sense to add support
within ffmpeg to generate/accept ROI info and pass it to encoders.
Typical usage: after an AVFrame is decoded, an ffmpeg filter or the user's
code generates ROI info for that frame, and the encoder finally does the
ROI-based encoding.
The ROI info is maintained as side data of the AVFrame.
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
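A hedged sketch of attaching one region, assuming the AVRegionOfInterest layout (self_size, bounds, qoffset) and the AV_FRAME_DATA_REGIONS_OF_INTEREST side data type:
    #include "libavutil/error.h"
    #include "libavutil/frame.h"

    /* Mark one rectangle to be encoded at a lower quantizer (better quality). */
    static int add_roi(AVFrame *frame, int top, int bottom, int left, int right)
    {
        AVRegionOfInterest *roi;
        AVFrameSideData *sd =
            av_frame_new_side_data(frame, AV_FRAME_DATA_REGIONS_OF_INTEREST,
                                   sizeof(AVRegionOfInterest));
        if (!sd)
            return AVERROR(ENOMEM);
        roi            = (AVRegionOfInterest *)sd->data;
        roi->self_size = sizeof(*roi);
        roi->top       = top;
        roi->bottom    = bottom;
        roi->left      = left;
        roi->right     = right;
        roi->qoffset   = (AVRational){ -1, 2 }; /* negative offset -> higher quality */
        return 0;
    }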
The dynamic metadata contains data for the color volume transform,
application 4 of the SMPTE ST 2094-40:2016 standard. The data comes from
HEVC SEI messages of type SEI_TYPE_USER_DATA_REGISTERED_ITU_T_T35.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
This was marked as deprecated (but only in the doxygen, not with an
actual deprecation attribute) in 81c623fae0 in 2011, but was
undeprecated in ad1ee5fa7.
Signed-off-by: Martin Storsjö <martin@martin.st>
This is needed because of 32-bit float formats (which are difficult to
store in 16 bits).
This also fixes undefined behavior found by fate.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
PSEUDOPAL pixel formats are not paletted, but carried a palette with the
intention of allowing code to treat unpaletted formats as paletted. The
palette simply mapped the byte values to the resulting RGB values,
making it some sort of LUT for RGB conversion.
It was used for 1 byte formats only: RGB4_BYTE, BGR4_BYTE, RGB8, BGR8,
GRAY8. The first 4 are awfully obscure, used only by some ancient bitmap
formats. The last one, GRAY8, is more common, but its treatment is
grossly incorrect. It considers full range GRAY8 only, so GRAY8 coming
from typical Y video planes was not mapped to the correct RGB values.
This cannot be fixed, because AVFrame.color_range can be freely changed
at runtime, and there is nothing to ensure the pseudo palette is
updated.
Also, nothing actually used the PSEUDOPAL palette data, except xwdenc
(trivially changed in the previous commit). All other code had to treat
it as a special case, just to ignore or to propagate palette data.
In conclusion, this was just a very strange old mechanism that has no
real justification to exist anymore (although it may have been nice and
useful in the past). Now it's an artifact that makes the API harder to
use: API users who allocate their own pixel data have to be aware that
they need to allocate the palette, or FFmpeg will crash on them in
_some_ situations. On top of this, there was no API to allocate the
pseudo palette outside of av_frame_get_buffer().
This patch not only deprecates AV_PIX_FMT_FLAG_PSEUDOPAL, but also makes
the pseudo palette optional. Nothing accesses it anymore, though if it's
set, it's propagated. It's still allocated and initialized for
compatibility with API users that rely on this feature. But new API
users do not need to allocate it. This was an explicit goal of this
patch.
Most changes replace AV_PIX_FMT_FLAG_PSEUDOPAL with FF_PSEUDOPAL. I
first tried #ifdefing all code, but it was a mess. The FF_PSEUDOPAL
macro reduces the mess, and still allows defining FF_API_PSEUDOPAL to 0.
Passes FATE with FF_API_PSEUDOPAL enabled and disabled. In addition,
FATE passes with FF_API_PSEUDOPAL set to 1, but with the allocation
functions manually changed to not allocate a palette.
This new side-data will contain info on how a packet is encrypted.
This allows the app to handle packet decryption.
Signed-off-by: Jacob Trimble <modmaker@google.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This adds a way for an API user to transfer QP data and metadata without
having to keep the reference to AVFrame, and without having to
explicitly care about QP APIs. It might also provide a way to finally
remove the deprecated QP related fields. In the end, the QP table should
be handled in a very similar way to e.g. AV_FRAME_DATA_MOTION_VECTORS.
There are two side data types, because I didn't care about having to
repack the QP data so that the table and the metadata are in a single
AVBufferRef. Otherwise it would have either required a copy on decoding
(extra slowdown for something as obscure as the QP data), or would have
required making intrusive changes to the codecs which support export of
this data.
The new side data types are added under deprecation guards, because I
don't intend to change the status of the QP export as being deprecated
(as it was before this patch too).
Add av_sat_sub32 and av_sat_dsub32 as the subtraction analogues to
av_sat_add32/av_sat_dadd32.
Also clarify the formulas for dadd32/dsub32.
Signed-off-by: Andrew D'Addesio <modchipv12@gmail.com>
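A small sketch of the expected saturation behaviour (the doubled variant is assumed to follow the clarified formula):
    #include <limits.h>
    #include "libavutil/common.h"

    static void sat_sub_examples(void)
    {
        /* Saturated subtraction clips instead of overflowing. */
        int a = av_sat_sub32(INT_MIN, 1);     /* == INT_MIN, no undefined behaviour */
        /* The doubled variant is equivalent to av_sat_sub32(x, av_sat_add32(y, y)). */
        int b = av_sat_dsub32(0, 0x40000000); /* 2^30 doubled saturates to INT_MAX,
                                                 so the result is INT_MIN + 1 */
        (void)a; (void)b;
    }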
This was added for compatibility with libav, by leaving a space for
formats added in libav to be merged. Since that feature has been
removed, we don't need a gap here.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
The fields can be accessed directly, so these are not needed anymore.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
* commit '07a2b155949eb267cdfc7805f42c7b3375f9c7c5':
Bump major versions of all libraries
A few API deprecated ~2 years ago or more are also postponed here for
varying reasons.
FF_API_LOWRES:
Since this functionality depends on AVStream->codec, I figure the two can
be removed at the same time in the next bump or so.
FF_API_AVCTX_TIMEBASE:
Couldn't get this one to work. Not just libavcodec but apparently also
libavformat and ffmpeg.c expect AVCodecContext->time_base to be set for
decoding. Upon removal some tests report a different generic stream time
base (like 1/25), and others lose packet duration values. I guess it's
somehow tied to the AVStream->codec clusterfuck.
It can be dealt with alongside FF_API_LAVF_AVCTX in the next bump.
FF_API_OLD_FILTER_OPTS_ERROR:
This one is meant to remain after FF_API_OLD_FILTER_OPTS is removed.
Its purpose is displaying the corrected command line using the new syntax
as a suggestion as part of the error message.
Merged-by: James Almer <jamrial@gmail.com>
* commit 'e6bff23f1e11aefb16a2b5d6ee72bf7469c5a66e':
cpu: add a function for querying maximum required data alignment
Adapted to work with the arbitrary runtime cpuflag changes av_force_cpu_flags()
can generate.
Merged-by: James Almer <jamrial@gmail.com>
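A minimal usage sketch, assuming the function is av_cpu_max_align() returning the alignment in bytes:
    #include <stdio.h>
    #include "libavutil/cpu.h"

    static void print_alignment(void)
    {
        /* Maximum data alignment the enabled CPU flags may require, e.g. 32
         * when AVX2 is usable. Re-query after av_force_cpu_flags(). */
        printf("required alignment: %zu\n", av_cpu_max_align());
    }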
Black isn't always just memset(ptr, 0, size). Limited YUV in particular
requires relatively non-obvious values, and filling a frame with
repeating 0 bytes is disallowed in some contexts. With component sizes
larger than 8 or packed YUV, this can become relatively complicated. So
having a generic function for this seems helpful.
In order to handle the complex cases in a generic way without destroying
performance, this code attempts to compute a black pixel, and then uses
that value to clear the image data quickly by using a function like
memset.
Common cases like yuv410p10 or rgba can't be handled with a simple
memset, so there is some code to fill memory with 2/4/8 byte patterns.
For the remaining cases, a generic slow fallback is used.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Merged from Libav commit 45df7adc1d.
Black isn't always just memset(ptr, 0, size). Limited YUV in particular
requires relatively non-obvious values, and filling a frame with
repeating 0 bytes is disallowed in some contexts. With component sizes
larger than 8 or packed YUV, this can become relatively complicated. So
having a generic function for this seems helpful.
In order to handle the complex cases in a generic way without destroying
performance, this code attempts to compute a black pixel, and then uses
that value to clear the image data quickly by using a function like
memset.
Common cases like yuv410p10 or rgba can't be handled with a simple
memset, so there is some code to fill memory with 2/4/8 byte patterns.
For the remaining cases, a generic slow fallback is used.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
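A small sketch, assuming the new function is av_image_fill_black() taking plane pointers, ptrdiff_t linesizes, the pixel format, the color range and the dimensions:
    #include "libavutil/frame.h"
    #include "libavutil/imgutils.h"

    /* Clear a frame to black, respecting limited-range YUV where needed. */
    static int clear_frame(AVFrame *frame)
    {
        ptrdiff_t linesize[4];
        for (int i = 0; i < 4; i++)
            linesize[i] = frame->linesize[i];
        return av_image_fill_black(frame->data, linesize,
                                   (enum AVPixelFormat)frame->format,
                                   frame->color_range,
                                   frame->width, frame->height);
    }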
Many image formats support embedding of ICC profiles directly in
their bitstreams. Add a new side data type to allow exposing them to
API users.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
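A minimal sketch of exposing a profile to API users via the new side data type:
    #include <string.h>
    #include "libavutil/error.h"
    #include "libavutil/frame.h"

    /* Expose an ICC profile read from the bitstream as frame side data. */
    static int attach_icc(AVFrame *frame, const uint8_t *profile, size_t size)
    {
        AVFrameSideData *sd =
            av_frame_new_side_data(frame, AV_FRAME_DATA_ICC_PROFILE, size);
        if (!sd)
            return AVERROR(ENOMEM);
        memcpy(sd->data, profile, size);
        return 0;
    }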
Rework it to improve performance. Now the mutex is not shared by workers;
instead each worker has its own mutex and condition variable. This
reduces lock contention between workers. Also use an atomic variable for
the counter.
The interface also allows execute to run a special function on the main
thread, as requested by Ronald.
Signed-off-by: Muhammad Faiz <mfcc64@gmail.com>
To be used with the new d3d11 hwaccel decode API.
With the new hwaccel API, we don't want surfaces to depend on the
decoder (other than the required dimension and format). The old D3D11VA
pixfmt uses ID3D11VideoDecoderOutputView pointers, which include the
decoder configuration, and thus is incompatible with the new hwaccel
API. This patch introduces AV_PIX_FMT_D3D11, which uses ID3D11Texture2D
and an index. It's simpler and compatible with the new hwaccel API.
The introduced hwcontext supports only the new pixfmt.
Frame upload code untested.
Significantly based on work by Steve Lhomme <robux4@gmail.com>, but with
heavy changes/rewrites.
Merges Libav commit fff90422d1.
Signed-off-by: Diego Biurrun <diego@biurrun.de>
Use the flags argument of av_hwframe_ctx_create_derived() to pass the
mapping flags which will be used on allocation. Also, set the format
and hardware context on the allocated frame automatically - the user
should not be required to do this themselves.
(cherry picked from commit c5714b51aa)
Adds functions to convert to/from strings and a function to iterate
over all supported device types. Also adds a new invalid type
AV_HWDEVICE_TYPE_NONE, which acts as a sentinel value.
(cherry picked from commit b7487f4f3c)
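A small usage sketch of the string conversion and iteration helpers:
    #include <stdio.h>
    #include "libavutil/hwcontext.h"

    /* Print the names of all hwdevice types compiled into libavutil. */
    static void list_hwdevice_types(void)
    {
        enum AVHWDeviceType type = AV_HWDEVICE_TYPE_NONE;
        while ((type = av_hwdevice_iterate_types(type)) != AV_HWDEVICE_TYPE_NONE)
            printf("%s\n", av_hwdevice_get_type_name(type));

        /* And the reverse direction: */
        type = av_hwdevice_find_type_by_name("vaapi"); /* NONE if unknown */
        (void)type;
    }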
To be used with the new d3d11 hwaccel decode API.
With the new hwaccel API, we don't want surfaces to depend on the
decoder (other than the required dimension and format). The old D3D11VA
pixfmt uses ID3D11VideoDecoderOutputView pointers, which include the
decoder configuration, and thus is incompatible with the new hwaccel
API. This patch introduces AV_PIX_FMT_D3D11, which uses ID3D11Texture2D
and an index. It's simpler and compatible with the new hwaccel API.
The introduced hwcontext supports only the new pixfmt.
Frame upload code untested.
Significantly based on work by Steve Lhomme <robux4@gmail.com>, but with
heavy changes/rewrites.
Signed-off-by: Diego Biurrun <diego@biurrun.de>
This adds tons of code for no other benefit than making VideoToolbox
support conform with the new hwaccel API (using hw_device_ctx and
hw_frames_ctx).
Since VideoToolbox decoding does not actually require the user to
allocate frames, the new code does mostly nothing.
One benefit is that ffmpeg_videotoolbox.c can be dropped once generic
hwaccel support for ffmpeg.c is merged from Libav.
Does not consider VDA or VideoToolbox encoding.
Fun fact: the frame transfer functions are copied from vaapi, as the
mapping makes copying generic boilerplate. Mapping itself is not
exported by the VT code, because I don't know how to test.
* commit 'e435beb1ea5380a90774dbf51fdc8c941e486551':
crypto: consistently use size_t as type for length parameters
Merged-by: Clément Bœsch <cboesch@gopro.com>
Use the flags argument of av_hwframe_ctx_create_derived() to pass the
mapping flags which will be used on allocation. Also, set the format
and hardware context on the allocated frame automatically - the user
should not be required to do this themselves.
This disables everything that was deprecated at least 18 months ago.
Readjust the minimum API version as needed, postponing any
API-incompatible changes until the next bump.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
Adds functions to convert to/from strings and a function to iterate
over all supported device types. Also adds a new invalid type
AV_HWDEVICE_TYPE_NONE, which acts as a sentinel value.
* commit 'd7bc52bf456deba0f32d9fe5c288ec441f1ebef5':
imgutils: add a function for copying image data from GPU mapped memory
Merged-by: Clément Bœsch <u@pkh.me>
Allow all struct fields to be accessed directly, as long as they're
public.
Before this change, many fields were "public", but could be accessed via
AVOption only. This meant they were effectively not public, but were
present for documentation purposes, which was incredibly confusing at
best.
This is an extended version of the AVFrame.opaque field, which can be
used to attach arbitrary user information to an AVFrame.
The usefulness of the opaque field is rather limited, because it can
store only up to 32 bits of information (or 64 bit on 64 bit systems).
It's not possible to set this field to a memory allocation, because
there is no way to deallocate it correctly.
The opaque_ref field circumvents this by letting the user set an
AVBuffer, which makes the user data refcounted.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Merges Libav commit 04f3bd3496.
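A minimal sketch of attaching refcounted user data via opaque_ref (the struct here is purely illustrative):
    #include "libavutil/buffer.h"
    #include "libavutil/error.h"
    #include "libavutil/frame.h"

    struct my_meta { int64_t capture_id; }; /* illustrative user data */

    /* Attach refcounted user data that survives av_frame_ref() and copies. */
    static int set_user_meta(AVFrame *frame, int64_t capture_id)
    {
        AVBufferRef *buf = av_buffer_alloc(sizeof(struct my_meta));
        if (!buf)
            return AVERROR(ENOMEM);
        ((struct my_meta *)buf->data)->capture_id = capture_id;
        av_buffer_unref(&frame->opaque_ref);
        frame->opaque_ref = buf;
        return 0;
    }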
This is an extended version of the AVFrame.opaque field, which can be
used to attach arbitrary user information to an AVFrame.
The usefulness of the opaque field is rather limited, because it can
store only up to 32 bits of information (or 64 bit on 64 bit systems).
It's not possible to set this field to a memory allocation, because
there is no way to deallocate it correctly.
The opaque_ref field circumvents this by letting the user set an
AVBuffer, which makes the user data refcounted.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
* commit '7d7355aa92bb36ca0765c49a569a999bcb96f332':
x86: Add SSSE3_SLOW CPU flag and related convenience macros
Merged-by: James Almer <jamrial@gmail.com>
Return a channel layout and the number of channels based on the specified name.
This function is similar to av_get_channel_layout(), but can also parse unknown
channel layout specifications.
Unknown channel layout specifications are a decimal number and a capital 'C'
suffix, in order to not break compatibility with the lowercase 'c' suffix,
which is used for a guessed channel layout with the specified number of
channels.
Signed-off-by: Marton Balint <cus@passwd.hu>
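A hedged usage sketch, assuming the function is av_get_extended_channel_layout() returning both the layout and the channel count:
    #include <stdint.h>
    #include "libavutil/channel_layout.h"

    static int parse_layout(const char *spec)
    {
        uint64_t layout;
        int channels;
        int ret = av_get_extended_channel_layout(spec, &layout, &channels);
        if (ret < 0)
            return ret;
        /* "5.1" -> a known layout with 6 channels
         * "13C" -> layout == 0 (unknown), channels == 13 */
        return channels;
    }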
While no decoder currently exports spherical information, this type
represents a frame property that has to be passed through from container
to frames.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
While no decoder currently exports spherical information, this type
represents a frame property that has to be passed through from container
to frames.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
P016 is the 16-bit variant of NV12 (planar luma, packed chroma), using
two bytes per component.
It may, and in fact is most likely to, be used in situations where
there are less than 16 bits of data. It is the responsibility of
the writer to zero out any unused LSBs.
The Intel binary iHD driver does not support the
VASurfaceAttribMemoryType, so surface allocation will fail when using
it.
(cherry picked from commit 2124711b95)
The driver being used is detected inside av_hwdevice_ctx_init() and
the quirks field is then set from a table of known devices. If this
behaviour is unwanted, the user can also set the quirks field
manually.
Also adds the Intel i965 driver quirk (it does not destroy parameter
buffers used in a call to vaRenderPicture()) and detects that driver
to set it.
(cherry picked from commit 4926fa9a4a)
Adds the new av_hwframe_map() function, which allows mapping between
hardware frames and normal memory, along with internal support for
implementing it.
Also adds av_hwframe_ctx_create_derived(), for creating a hardware
frames context associated with one device using frames mapped from
another by some hardware-specific means.
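A hedged sketch of mapping a hardware frame for CPU read access; the software format chosen here is an assumption, not something this message specifies:
    #include "libavutil/frame.h"
    #include "libavutil/hwcontext.h"

    /* Map a hardware frame into CPU-accessible memory for reading, without a
     * copy where the hardware/driver supports it. */
    static int map_for_reading(AVFrame *dst, const AVFrame *hw_frame)
    {
        int ret;
        dst->format = AV_PIX_FMT_NV12; /* assumed software format for this pool */
        ret = av_hwframe_map(dst, hw_frame, AV_HWFRAME_MAP_READ);
        if (ret < 0)
            return ret;
        /* ... read dst->data / dst->linesize ... */
        av_frame_unref(dst);           /* releases the mapping */
        return 0;
    }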
* commit '32c8359093d1ff4f45ed19518b449b3ac3769d27':
lavc: export the timestamps when decoding in AVFrame.pts
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
The driver being used is detected inside av_hwdevice_ctx_init() and
the quirks field is then set from a table of known devices. If this
behaviour is unwanted, the user can also set the quirks field
manually.
Also adds the Intel i965 driver quirk (it does not destroy parameter
buffers used in a call to vaRenderPicture()) and detects that driver
to set it.
P010 is the 10-bit variant of NV12 (planar luma, packed chroma), using two
bytes per component to store 10-bit data plus 6-bit zeroes in the LSBs.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
This fixes part of Ticket5676.
This fixes kodi, mpv, chromium and ffplay when built against 3.0 and linked to 3.1.
This is a similar ABI fix to 1eb43af1a0
Approved-by: BBB
Approved-by: jamrial
Approved-by: BtbN
Approved-by: nevcairiel
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
* commit '0c4468dc185fa8b9e7d6add914595c5e928b24fd':
stereo3d: Add API to get name from value or value from name
Merged-by: Clément Bœsch <clement@stupeflix.com>
Currently it's exported as AVFrame.pkt_pts, which is also the only use
for that field. The reason it is done like this is that lavc used to
export various codec-specific "timing" information in AVFrame.pts, which
is not done anymore.
Since it is confusing to the callers to have a separate field which is
used only for decoder timestamps and nothing else, deprecate pkt_pts and
use just AVFrame.pts everywhere.
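A minimal decode-loop sketch showing the timestamp now arriving in AVFrame.pts:
    #include <inttypes.h>
    #include "libavcodec/avcodec.h"

    /* Drain decoded frames; timestamps now arrive directly in AVFrame.pts. */
    static int receive_frames(AVCodecContext *dec, AVFrame *frame)
    {
        int ret;
        while ((ret = avcodec_receive_frame(dec, frame)) >= 0) {
            av_log(dec, AV_LOG_INFO, "frame pts: %"PRId64"\n", frame->pts);
            av_frame_unref(frame);
        }
        return ret == AVERROR(EAGAIN) || ret == AVERROR_EOF ? 0 : ret;
    }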