Yet another probesize, used to get the durations when
estimate_timings_from_pts is required. It is aimed at users interested
in better duration probing for its own sake, or at users calling
avformat_find_stream_info indirectly and requiring exact values: the
concat demuxer for example, especially when stream copying on top of it.
The current code is a performance trade-off that can fail to get the
video stream duration in scenarios with high bitrates and buffering,
for files ending cleanly (as opposed to live captures): the physical
gap between the last video packet and the last audio packet is very
large in such a case.
Default behaviour is unchanged: 250k up to 250k << 6 (step by step).
Setting this new option has two effects:
- override the maximum probesize (currently 250k << 6)
- reduce the number of steps to 1 instead of 6; this avoids detecting
the audio "too early" and failing to reach a video packet.
Even if a single audio stream duration is found but the other
audio/video stream durations are not, there will be a retry, so in the
end the full user-overridden probesize will be used as the user expects.
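For API users, a minimal sketch of how such an option would be set before
probing (the option name "duration_probesize" and the value used here are
assumptions for illustration, not taken from this patch):

    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>

    int probe_with_larger_duration_probesize(const char *filename)
    {
        AVFormatContext *fmt = NULL;
        AVDictionary *opts = NULL;
        int ret;

        /* option name assumed for illustration; check the AVFormatContext
         * option list of your FFmpeg version for the actual name */
        av_dict_set(&opts, "duration_probesize", "50M", 0);

        ret = avformat_open_input(&fmt, filename, NULL, &opts);
        av_dict_free(&opts);
        if (ret < 0)
            return ret;

        /* duration estimation (estimate_timings_from_pts) happens in here */
        ret = avformat_find_stream_info(fmt, NULL);

        avformat_close_input(&fmt);
        return ret < 0 ? ret : 0;
    }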
Signed-off-by: Nicolas Gaullier <nicolas.gaullier@cji.paris>
There are lots of files that don't need it: the number of object
files that actually need it went down from 2011 to 884 here.
Keep it for external users in order not to cause breakages.
Also improve the other headers a bit while at it.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Depending on the filters used, the filtergraph may produce trailing data
after feeding it the last input frame. Update the example to include the
necessary loop for draining the filtergraph.
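For reference, the drain step looks roughly like this (a sketch only;
buffersrc_ctx, buffersink_ctx and filt_frame are assumed to be set up as in
the existing example code):

    #include <libavfilter/buffersink.h>
    #include <libavfilter/buffersrc.h>
    #include <libavutil/error.h>
    #include <libavutil/frame.h>

    static int drain_filtergraph(AVFilterContext *buffersrc_ctx,
                                 AVFilterContext *buffersink_ctx,
                                 AVFrame *filt_frame)
    {
        int ret;

        /* pushing a NULL frame signals EOF to the filtergraph */
        ret = av_buffersrc_add_frame_flags(buffersrc_ctx, NULL, 0);
        if (ret < 0)
            return ret;

        /* pull the remaining (trailing) frames out of the graph */
        while (1) {
            ret = av_buffersink_get_frame(buffersink_ctx, filt_frame);
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                break;
            if (ret < 0)
                return ret;
            /* ... consume filt_frame here (display, encode, ...) ... */
            av_frame_unref(filt_frame);
        }
        return 0;
    }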
Reviewed-by: Stefano Sabatini <stefasab@gmail.com>
Signed-off-by: Tobias Rapp <t.rapp@noa-archive.com>
av_ts_make_time_string() used the "%.6g" format, but this format was losing
precision even when the timestamp to be printed was not that large. For example,
for 3 hours (10800 seconds), only 1 decimal digit was printed, which made this
format inaccurate when it was used in e.g. the silencedetect filter. Other
detection filters printing timestamps had similar issues. Also, the time base
parameter of the function was *AVRational instead of AVRational.
Resolve these problems by introducing a new function, av_ts_make_time_string2().
We change the format to "%.*f" and use a precision of 6, except when printing
values near 0, in which case we calculate the precision dynamically to aim for
a precision in normal form similar to that of %.6g. No longer using scientific
representation can make parsing the timestamp easier for users; we can
safely do this because the theoretical maximum of INT64_MAX*INT32_MAX still
fits into the string buffer in normal form.
We somewhat imitate %g by trimming trailing zeroes and the potential decimal
point character. In order not to trim "inf" as well, we assume that the
decimal point string does not contain the letter "f". Note that depending on
the printf %f implementation, we might trim "infinity" to "inf".
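A rough sketch of the dynamic-precision idea (not the actual libavutil code;
the function below is made up for illustration and assumes "." as the decimal
separator):

    #include <math.h>
    #include <stdio.h>
    #include <string.h>

    static void format_seconds(char *buf, size_t size, double t)
    {
        int prec = 6;

        /* below 1.0, raise the precision so that roughly 6 significant
         * digits survive in normal form, as "%.6g" would give (real code
         * must also bound the precision to the buffer size) */
        if (isfinite(t) && t != 0.0 && fabs(t) < 1.0)
            prec = 6 - (int)floor(log10(fabs(t))) - 1;

        snprintf(buf, size, "%.*f", prec, t);

        /* imitate %g: drop trailing zeroes and a trailing decimal point;
         * "inf"/"nan" are left alone since they contain no '.' */
        if (strchr(buf, '.')) {
            char *p = buf + strlen(buf);
            while (p > buf && p[-1] == '0')
                *--p = '\0';
            if (p > buf && p[-1] == '.')
                *--p = '\0';
        }
    }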
Thanks to Allan Cady for bringing up this issue.
Signed-off-by: Marton Balint <cus@passwd.hu>
Common utility function that can be used by all codecs to select the
right (any valid) film grain parameter set. In particular, this is
useful for AFGS1, which has support for multiple parameter sets.
However, it also performs parameter validation for H274.
This is needed for AV1 film grain as well, when using AFGS1 streams.
Also add extra width/height and subsampling information, which AFGS1
cares about, as part of the same API bump. (And in principle, H274
should also expose this information, since it is needed downstream to
correctly adjust the chroma grain frequency to the subsampling ratio.)
Deprecate the equivalent H274-exclusive fields. To avoid breaking ABI,
add the new fields after the union; but with enough of a paper trail to
hopefully re-order them on the next bump.
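A sketch of the resulting layout (the existing members are as in
film_grain_params.h; the names of the appended fields are assumptions here):

    typedef struct AVFilmGrainParams {
        enum AVFilmGrainParamsType type;
        uint64_t seed;
        union {
            AVFilmGrainAOMParams aom;
            AVFilmGrainH274Params h274;
        } codec;

        /* New members are appended after the union, so existing members keep
         * their offsets and binaries built against the old layout still work;
         * they can be re-ordered at the next major bump. */
        int width, height;                /* assumed field names */
        int subsampling_x, subsampling_y; /* assumed field names */
    } AVFilmGrainParams;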
This will allow users to pass the Android ApplicationContext, which is mandatory
to retrieve the ContentResolver responsible for resolving/opening Android content URIs.
av_frame_side_data_get() has a const AVFrameSideData * const *sd
parameter; so calling it with an AVFrameSideData **sd like
AVCodecContext.decoded_side_data (or with an AVFrameSideData * const
*sd) is safe, but the conversion is not performed automatically
in C. All users of this function therefore resort to a cast.
This commit changes this: av_frame_side_data_get() is renamed
to av_frame_side_data_get_c(); furthermore, a static inline
wrapper for it named av_frame_side_data_get() is added
that accepts an AVFrameSideData * const * and converts this
to const AVFrameSideData * const * in a Wcast-qual safe way.
This also allows removing the casts from the current users.
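The resulting pattern looks roughly like this (a sketch of the declarations
in libavutil/frame.h, not verbatim):

    const AVFrameSideData *av_frame_side_data_get_c(const AVFrameSideData * const *sd,
                                                    int nb_sd,
                                                    enum AVFrameSideDataType type);

    static inline
    const AVFrameSideData *av_frame_side_data_get(AVFrameSideData * const *sd,
                                                  int nb_sd,
                                                  enum AVFrameSideDataType type)
    {
        /* the intermediate (const void *) cast only adds const qualifiers,
         * but keeps -Wcast-qual from warning about the conversion */
        return av_frame_side_data_get_c((const AVFrameSideData * const *)(const void *)sd,
                                        nb_sd, type);
    }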
Reviewed-by: Jan Ekström <jeebjp@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The documentation correctly states that rdiv is a multiplier, but incorrectly states that the default behavior is to multiply by the sum of all matrix elements; it actually multiplies by 1/sum.
This changes the documentation to match the code.
Address trac #10889
Signed-off-by: Marton Balint <cus@passwd.hu>
The samples I found all have 2000-sample packets, and by forcing the packet
size with a bsf we could automagically make muxing work for packets containing
more than 3640 samples.
Signed-off-by: Marton Balint <cus@passwd.hu>