av_mastering_display_metadata_alloc() is not useful in scenarios where you need to
know the runtime size of AVMasteringDisplayMetadata.
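A minimal usage sketch, assuming the companion allocator introduced for this
purpose is av_mastering_display_metadata_alloc_size():

    #include "libavutil/frame.h"
    #include "libavutil/mastering_display_metadata.h"
    #include "libavutil/mem.h"

    static int attach_mdm(AVFrame *frame)
    {
        size_t size;
        AVMasteringDisplayMetadata *mdm =
            av_mastering_display_metadata_alloc_size(&size);
        if (!mdm)
            return AVERROR(ENOMEM);

        /* The runtime size lets us wrap the payload in a buffer and
         * attach it as frame side data. */
        AVBufferRef *buf = av_buffer_create((uint8_t *)mdm, size, NULL, NULL, 0);
        if (!buf) {
            av_free(mdm);
            return AVERROR(ENOMEM);
        }

        AVFrameSideData *sd = av_frame_new_side_data_from_buf(
            frame, AV_FRAME_DATA_MASTERING_DISPLAY_METADATA, buf);
        if (!sd) {
            av_buffer_unref(&buf);
            return AVERROR(ENOMEM);
        }
        return 0;
    }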
Signed-off-by: James Almer <jamrial@gmail.com>
* SMPTE ST 2128 IPT-C2 defines the coefficients utilized in DoVi
Profile 5. Profile 5 can thus now be represented in VUI as
{AVCOL_RANGE_JPEG, AVCOL_PRI_BT2020, AVCOL_TRC_SMPTE2084,
AVCOL_SPC_IPT_C2, AVCHROMA_LOC_LEFT} (although other chroma
sample locations are allowed). AVCOL_TRC_SMPTE2084 should in
this case be interpreted as 'PQ with reshaping'.
* YCgCo-Re and YCgCo-Ro define the bitexact YCgCo-R, where the
number of bits added to a source RGB bit depth is 2 (i.e., even)
and 1 (i.e., odd), respectively.
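As an illustrative sketch (not the committed code), a decoder could tag
Profile 5 output with this tuple as follows:

    #include "libavcodec/avcodec.h"

    /* Signal DoVi Profile 5 in the VUI-derived fields; AVCOL_TRC_SMPTE2084
     * is to be read as "PQ with reshaping" here. */
    static void tag_dovi_profile5(AVCodecContext *avctx)
    {
        avctx->color_range            = AVCOL_RANGE_JPEG;
        avctx->color_primaries        = AVCOL_PRI_BT2020;
        avctx->color_trc              = AVCOL_TRC_SMPTE2084;
        avctx->colorspace             = AVCOL_SPC_IPT_C2;
        avctx->chroma_sample_location = AVCHROMA_LOC_LEFT; /* others allowed */
    }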
Also add accessors, plus a function for allocating this struct with
extension blocks.
Definitions generously taken from quietvoid/dovi_tool, which is
assembled as a collection of various patent fragments, as well as output
by the official Dolby Vision bitstream verifier tool.
The NLQ pivots are not documented but should be present in the header
for the profile 7 RPU format. This has been verified using Dolby's
verification toolkit.
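A hedged sketch of reading them back, assuming the new field is an
nlq_pivots array on AVDOVIDataMapping:

    #include <stdio.h>
    #include "libavutil/dovi_meta.h"

    /* Print the NLQ pivots of a profile 7 RPU; nlq_pivots as the field
     * name is an assumption here. */
    static void print_nlq_pivots(const AVDOVIMetadata *metadata)
    {
        const AVDOVIDataMapping *mapping = av_dovi_get_mapping(metadata);
        printf("NLQ pivots: %d %d\n",
               mapping->nlq_pivots[0], mapping->nlq_pivots[1]);
    }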
Signed-off-by: quietvoid <tcChlisop0@gmail.com>
Signed-off-by: Niklas Haas <git@haasn.dev>
Yet another probesize, used to get the durations when
estimate_timings_from_pts is required. It is aimed at users interested
in better duration probing, either for its own sake or because they use
avformat_find_stream_info indirectly and require exact values: for
concatdec for example, especially when streamcopying on top of it.
The current code is a performance trade-off that can fail to get video
stream durations in a scenario with high bitrates and buffering for
files ending cleanly (as opposed to live captures): the physical gap
between the last video packet and the last audio packet is very high in
such a case.
Default behaviour is unchanged: 250k up to 250k << 6 (step by step).
Setting this new option has two effects:
- override the maximum probesize (currently 250k << 6)
- reduce the number of steps to 1 instead of 6, in order to avoid
detecting the audio "too early" and failing to reach a video packet.
Even if a single audio stream duration is found but the other
audio/video stream durations are not, there will be a retry, so at the
end the full user-overridden probesize will be used as expected by the
user.
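A hypothetical usage sketch; "duration_probesize" as the option name (and
the value chosen here) is my assumption, not necessarily what the final
patch exposes:

    #include "libavformat/avformat.h"
    #include "libavutil/dict.h"

    /* Open an input with a large, single-step duration probe
     * ("duration_probesize" is the assumed AVOption name). */
    static int open_with_duration_probe(AVFormatContext **fmt, const char *url)
    {
        AVDictionary *opts = NULL;
        int ret;

        av_dict_set(&opts, "duration_probesize", "50M", 0);
        ret = avformat_open_input(fmt, url, NULL, &opts);
        av_dict_free(&opts);
        if (ret < 0)
            return ret;
        return avformat_find_stream_info(*fmt, NULL);
    }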
Signed-off-by: Nicolas Gaullier <nicolas.gaullier@cji.paris>
av_ts_make_time_string() used the "%.6g" format, but this format was losing
precision even when the timestamp to be printed was not that large. For
example, for 3 hours (10800 seconds), only 1 decimal digit was printed, which
made this format inaccurate when it was used in e.g. the silencedetect filter.
Other detection filters printing timestamps had similar issues. Also, the time
base parameter of the function was AVRational * instead of AVRational.
Resolve these problems by introducing a new function, av_ts_make_time_string2().
We change the format to "%.*f", using a precision of 6, except when printing
values near 0, in which case we calculate the precision dynamically to aim for
a similar precision in normal form as with %.6g. No longer using scientific
representation can make parsing the timestamp easier for users; we can
safely do this because the theoretical maximum of INT64_MAX*INT32_MAX still
fits into the string buffer in normal form.
We somewhat imitate %g by trimming trailing zeroes and the potential decimal
point characters. In order not to trim "inf" as well, we assume that the
decimal point string does not contain the letter "f". Note that depending on
the printf %f implementation, we might trim "infinity" to "inf".
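A small usage sketch of the new function, assuming the signature
char *av_ts_make_time_string2(char *buf, int64_t ts, AVRational tb):

    #include <stdio.h>
    #include "libavutil/timestamp.h"

    /* 10800 s (3 hours) now prints with full precision instead of the
     * single decimal digit "%.6g" would give. */
    static void print_ts(void)
    {
        char buf[AV_TS_MAX_STRING_SIZE];
        AVRational tb = { 1, 1 }; /* time base in seconds */

        av_ts_make_time_string2(buf, 10800, tb);
        printf("%s\n", buf); /* trailing zeroes are trimmed: "10800" */
    }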
Thanks to Allan Cady for bringing up this issue.
Signed-off-by: Marton Balint <cus@passwd.hu>
Common utility function that can be used by all codecs to select the
right (any valid) film grain parameter set. In particular, this is
useful for AFGS1, which has support for multiple parameters.
However, it also performs parameter validation for H274.
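A hedged sketch of the helper in use, assuming it is exposed as
av_film_grain_params_select() taking the frame whose side data is searched:

    #include "libavutil/film_grain_params.h"

    /* Select whichever valid film grain parameter set applies to this
     * frame; with AFGS1, a frame may carry several candidate sets. */
    static int apply_grain(const AVFrame *frame)
    {
        const AVFilmGrainParams *fgp = av_film_grain_params_select(frame);
        if (!fgp)
            return 0; /* no applicable grain */
        /* ... synthesize grain according to fgp->type ... */
        return 1;
    }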
This is needed for AV1 film grain as well, when using AFGS1 streams.
Also add extra width/height and subsampling information, which AFGS1
cares about, as part of the same API bump. (And in principle, H274
should also expose this information, since it is needed downstream to
correctly adjust the chroma grain frequency to the subsampling ratio)
Deprecate the equivalent H274-exclusive fields. To avoid breaking ABI,
add the new fields after the union; but with enough of a paper trail to
hopefully re-order them on the next bump.
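Sketch of what using the new fields might look like; the exact names
(width, height, subsampling_x, subsampling_y) are my reading of the bump
and may differ:

    #include "libavutil/film_grain_params.h"

    /* Adjust the grain plane dimensions to the chroma subsampling ratio
     * (field names are assumptions based on the description above). */
    static void chroma_grain_dims(const AVFilmGrainParams *fgp,
                                  int *chroma_w, int *chroma_h)
    {
        *chroma_w = fgp->width  >> fgp->subsampling_x;
        *chroma_h = fgp->height >> fgp->subsampling_y;
    }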
This will allow users to pass the Android ApplicationContext, which is
mandatory for retrieving the ContentResolver responsible for
resolving/opening Android content URIs.
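A minimal sketch, assuming the entry point for this is
av_jni_set_android_app_ctx() in libavcodec/jni.h:

    #include "libavcodec/jni.h"

    /* Pass a global reference to the ApplicationContext (a jobject)
     * before opening android content: URLs. */
    static int set_app_ctx(void *app_ctx)
    {
        return av_jni_set_android_app_ctx(app_ctx, NULL /* log ctx */);
    }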
av_frame_side_data_get() has a const AVFrameSideData * const *sd
parameter; so calling it with an AVFrameSideData **sd like
AVCodecContext.decoded_side_data (or with an AVFrameSideData * const
*sd) is safe, but the conversion is not performed automatically
in C. All users of this function therefore resort to a cast.
This commit changes this: av_frame_side_data_get() is renamed
to av_frame_side_data_get_c(); furthermore, a static inline
wrapper for it named av_frame_side_data_get() is added
that accepts an AVFrameSideData * const * and converts this
to const AVFrameSideData * const * in a Wcast-qual safe way.
This also allows removing the casts from the current users.
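The wrapper presumably looks roughly like this sketch of the
Wcast-qual-safe conversion (not necessarily the exact committed code):

    static inline
    const AVFrameSideData *av_frame_side_data_get(AVFrameSideData * const *sd,
                                                  const int nb_sd,
                                                  enum AVFrameSideDataType type)
    {
        /* Casting via (const void *) adds the outer const that C does
         * not add implicitly, without triggering -Wcast-qual. */
        return av_frame_side_data_get_c((const AVFrameSideData * const *)
                                        (const void *)sd, nb_sd, type);
    }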
Reviewed-by: Jan Ekström <jeebjp@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The allocators have been superseded by av_vdpau_bind_context(). The
former were only added "to allow multiple forks to add fields to the
structure without breaking ABI" [1], but libav is no more, so this is
not needed any longer.
[1]: https://ffmpeg.org/pipermail/ffmpeg-devel/2013-August/146954.html
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Export each tile as its own stream, and the grid information as a Stream Group
of type TILE_GRID.
This also enables exporting other stream items like thumbnails, which may be
present in non-tiled HEIF images too. For those, the primary stream will be
tagged with the default disposition.
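A hedged sketch of consuming the exported grid, assuming the group type is
AV_STREAM_GROUP_PARAMS_TILE_GRID:

    #include "libavformat/avformat.h"

    /* Find the tile grid stream group of a demuxed HEIF file. */
    static const AVStreamGroup *find_tile_grid(const AVFormatContext *s)
    {
        for (unsigned i = 0; i < s->nb_stream_groups; i++)
            if (s->stream_groups[i]->type == AV_STREAM_GROUP_PARAMS_TILE_GRID)
                return s->stream_groups[i];
        return NULL;
    }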
Based on a patch by Swaraj Hota
Signed-off-by: James Almer <jamrial@gmail.com>
It used to be used with preallocated packet buffers with
the old encode API, but said API is no more, so there is
no reason for this to remain public.
Therefore deprecate it and use an internal replacement
in the encoders that use it as an upper bound for the
size of their headers.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The existing (and upcoming) group types are meant to combine several
streams for presentation, with the result being treated as if it were a
stream itself.
For example, a file could export two stream groups of the same type with one of
them as the "default".
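For example (a hedged sketch, assuming AVStreamGroup gained a disposition
field mirroring AVStream's):

    #include "libavformat/avformat.h"

    /* Pick the default group among several groups of the same type. */
    static const AVStreamGroup *default_group(const AVFormatContext *s)
    {
        for (unsigned i = 0; i < s->nb_stream_groups; i++)
            if (s->stream_groups[i]->disposition & AV_DISPOSITION_DEFAULT)
                return s->stream_groups[i];
        return NULL;
    }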
Signed-off-by: James Almer <jamrial@gmail.com>
This was accidentally removed in a prior revision of the series,
alongside the corresponding (separate) version bump. Instead coalesce it
into the follow-up commit's entry, since that's the lowest version
actually supporting the new fields.
Motivated by YUVJ removal. This change will allow full negotiation
between color ranges and matrices as needed. By default, all ranges and
matrices are marked as supported.
Because grayscale formats are currently handled very inconsistently (and
in particular, assumed to be forced full range by swscale), we exclude
them from negotiation altogether for the time being, in order to get this
API merged. Once filter negotiation is available, we can relax the
grayscale-is-forced-jpeg restriction again, at which point it will be more
feasible to do so without breaking a million test cases.
Note that this commit updates one FATE test as a consequence of the
sanity fallback for non-YUV formats. In particular, the test case now
writes rgb24(pc, gbr/unspecified/unspecified) to the matroska file,
instead of rgb24(unspecified/unspecified/unspecified) as before.