Use the next I/P/B picture or start code as the end of the current
frame. Before this patch, extension start codes, user data start codes,
the sequence end code and so on were treated as the start of the next
frame.
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Since this is an external encoder not under our control, we cannot test
the encoded output exactly as is done for internal encoders. We can
still test, however, that the output is decodable and produces the
expected number of frames with the expected dimensions, pixel formats, and
timestamps.
Currently those are set in different ways depending on whether the
stream is decoded or not, using some values from the decoder if it is.
This is wrong, because there may be an arbitrary amount of delay between
input packets and output frames (depending e.g. on the thread count when
frame threading is used).
Always use the path that was previously used only for streamcopy. This
should not cause any issues, because these values are now used only for
streamcopy and discontinuity handling.
This change will make it possible to decouple discontinuity processing
from decoding and move it to ffmpeg_demux. It also makes the code simpler.
Changes output in fate-cover-art-aiff-id3v2-remux and
fate-cover-art-mp3-id3v2-remux, where attached pictures are now written
in the correct order. This happens because InputStream.dts is no longer
reset to AV_NOPTS_VALUE after decoding, so streamcopy actually sees
valid dts values.
Stop using InputStream.dts for generating missing timestamps for decoded
frames, because it contains pre-decoding timestamps and there may be
an arbitrary amount of delay between input packets and output frames (e.g.
dependent on the thread count when frame threading is used). It is also
in AV_TIME_BASE (i.e. microseconds), which may introduce unnecessary
rounding issues.
New code maintains a timebase that is the inverse of the LCM of all the
samplerates seen so far, and thus can accurately represent every audio
sample. This timebase is used to generate missing timestamps after
decoding.
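A minimal sketch of the idea, assuming a hypothetical update_sample_tb()
helper (the actual ffmpeg code differs):

    #include <limits.h>
    #include <libavutil/mathematics.h>
    #include <libavutil/rational.h>

    /* Fold a newly seen sample rate into a timebase that is the inverse
     * of the LCM of all sample rates seen so far, so that every audio
     * sample falls on an integer timestamp. */
    static AVRational update_sample_tb(AVRational tb, int sample_rate)
    {
        if (tb.den == 0)
            return (AVRational){ 1, sample_rate };
        /* lcm(a, b) = a / gcd(a, b) * b */
        int64_t lcm = tb.den / av_gcd(tb.den, sample_rate) * sample_rate;
        if (lcm <= INT_MAX) /* AVRational holds ints; keep old tb on overflow */
            tb = (AVRational){ 1, (int)lcm };
        return tb;
    }

E.g. starting from 1/44100 and then seeing 48000 yields 1/7056000, which
represents samples at both rates exactly.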
Changes the results of the following FATE tests:
* pcm_dvd-16-5.1-96000
* lavf-smjpeg
* adpcm-ima-smjpeg
In all of these the timestamps now better correspond to actual frame
durations.
Previously they would only be used with trivial filtergraphs, because
filters did not handle frame durations. That is no longer true - most
filters process frame durations properly (there may still be some that
don't - this change will help find and fix them).
Improves output video frame durations in a number of FATE tests.
Adds JPEG 2000 decoder tests using materials from the conformance suite specified in
Rec. ITU-T T.803 | ISO/IEC 15444-4.
The test materials are available at https://gitlab.com/wg1/htj2k-codestreams
Signed-off-by: Pierre-Anthony Lemieux <pal@palemieux.com>
When lcms2 is enabled, fate-png-icc-parse fails as below.
TEST png-icc-parse
This is because of the recent change to how PNG handles colors (no
longer using mastering display metadata for that).
Reviewed-by: Leo Izen <leo.izen@gmail.com>
Signed-off-by: Steven Liu <liuqi05@kuaishou.com>
Remove now-obsolete code setting packet durations pre-muxing for CFR
encoded video.
Changes output in the following FATE tests:
* numerous adpcm tests
* ffmpeg-filter_complex_audio
* lavf-asf
* lavf-mkv
* lavf-mkv_attachment
* matroska-encoding-delay
All of these change due to the fact that the output duration is now
the actual input data duration and does not include padding added by
the encoder.
* apng-osample: less wrong packet durations are now passed to the muxer.
They are not entirely correct, because the first frame duration should
be 3 rather than 2. This is caused by the vsync code and should be
addressed later, but this change is a step in the right direction.
* tscc2-mov: last output frame has a duration of 11 rather than 1 - this
corresponds to the duration actually returned by the demuxer.
* film-cvid: video frame durations are now 2 rather than 1 - this
corresponds to durations actually returned by the demuxer and matches
the timestamps.
* mpeg2-ticket6677: durations of some video frames are now 2 rather than
1 - this matches the timestamps.
When no packet dts values are available from the container, video
decoding code will currently use its own guessed values, which will then
be propagated to pkt_dts on decoded frames and used as pts in certain
cases. This is inaccurate, fragile, and unnecessarily convoluted.
Simply removing this allows the extrapolation code introduced in the
previous commit to do a better job.
Changes the results of numerous h264 and hevc FATE tests, where no
spurious timestamp gaps are generated anymore. Several tests no longer
need -vsync passthrough.
When no timestamps are available from the container, the video decoding
code will currently use fake dts values - generated in
process_input_packet() based on a combination of information from the
decoder and the parser (obtained via the demuxer) - to generate
timestamps during decoder flushing. This is fragile, hard to follow, and
unnecessarily convoluted, since more reliable information can be
obtained directly from post-decoding values.
The new code keeps track of the last decoded frame pts and estimates its
duration based on a number of heuristics. Timestamps generated when both
pts and pkt_dts are missing are then simply pts + duration of the last frame.
The heuristics are somewhat complicated by the fact that lavf insists on
making up packet timestamps based on its highly incomplete information.
That should be removed in the future, allowing this code to be
simplified further.
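Roughly, the idea looks like this (a minimal sketch with illustrative
names, not the actual ffmpeg.c variables):

    #include <libavutil/frame.h>

    typedef struct LastFrame {
        int64_t pts;      /* pts of the last decoded frame, in stream tb */
        int64_t duration; /* its estimated duration, in the same tb */
    } LastFrame;

    static void fill_missing_pts(LastFrame *lf, AVFrame *frame)
    {
        /* no usable timestamp at all: extrapolate from the last frame */
        if (frame->pts == AV_NOPTS_VALUE && frame->pkt_dts == AV_NOPTS_VALUE &&
            lf->pts != AV_NOPTS_VALUE)
            frame->pts = lf->pts + lf->duration;

        if (frame->pts != AV_NOPTS_VALUE) {
            lf->pts = frame->pts;
            if (frame->duration > 0) /* duration heuristics omitted here */
                lf->duration = frame->duration;
        }
    }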
The results of the following tests change:
* h264-3386 now requires -fps_mode passthrough to avoid dropping frames
at the end; this is a pathology of the interaction of the new and old
code, and the fact that the sample switches from field to frame coding
in the last packet, and will be fixed in following commits
* hevc-conformance-DELTAQP_A_BRCM_4 stops inventing an arbitrary
timestamp gap at the end
* hevc-small422chroma - the single frame output by this test now has a
timestamp of 0, rather than an arbitrary 7
Changes the result of the h264_redundant_pps-mov test, where the output
timebase is now 1001/24000 instead of 1/24. This is more correct, as the
source file is actually 23.98 fps.
Timestamps in two FATE H.264 conformance tests now start at 1 instead
of 0, which also happens in some other H.264 tests before this commit
and so is not a big issue.
Conversely, timestamps in some HEVC conformance tests start from a
smaller value now.
Ideally this should be addressed later in a more general way.
h264-conformance-frext-frext2_panasonic_b no longer requires -vsync
passthrough.
For audio AVFrames, nb_samples is typically more trustworthy than
duration. Since sync queues look at durations, make sure they match the
sample count.
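A minimal sketch of the adjustment, assuming 'tb' is the timebase in
which frame durations are expressed:

    #include <libavutil/frame.h>
    #include <libavutil/mathematics.h>

    static void match_duration_to_samples(AVFrame *frame, AVRational tb)
    {
        /* derive the duration from the sample count, overriding whatever
         * (possibly inconsistent) duration the frame arrived with */
        if (frame->sample_rate > 0 && frame->nb_samples > 0)
            frame->duration = av_rescale_q(frame->nb_samples,
                                           (AVRational){ 1, frame->sample_rate },
                                           tb);
    }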
The last audio frame in the fate-shortest test is now gone. This is more
correct, since it outlasts the last video frame.
These fields are ad-hoc and will be deprecated. Use the recently-added
AV_CODEC_FLAG_COPY_OPAQUE to pass arbitrary user data from packets to
frames.
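For reference, a minimal decoding-side usage sketch of
AV_CODEC_FLAG_COPY_OPAQUE (the user-data value is illustrative):

    #include <stdint.h>
    #include <libavcodec/avcodec.h>

    static int decode_with_opaque(AVCodecContext *dec, AVPacket *pkt,
                                  AVFrame *frame)
    {
        /* must be set before avcodec_open2(); shown inline for brevity */
        dec->flags |= AV_CODEC_FLAG_COPY_OPAQUE;

        pkt->opaque = (void *)(intptr_t)42; /* arbitrary per-packet data */

        int ret = avcodec_send_packet(dec, pkt);
        if (ret < 0)
            return ret;

        ret = avcodec_receive_frame(dec, frame);
        /* on success, frame->opaque carries the value from the matching
         * packet */
        return ret;
    }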
Changes the result of the flcl1905 test, which uses ffprobe to decode
wmav2 with multiple frames per packet. Such packets are handled
internally by calling the decoder's decode callback multiple times,
offsetting the internal packet's data pointer and decreasing its size
after each call. The output pkt_size value before this commit is then
the remaining internal packet size at the time of each internal decode
call.
After this commit, output pkt_size is simply the size of the full packet
submitted by the caller to the decoder. This is more correct, since
internal packets are never seen by the caller and should have no
observable outside effects.
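To illustrate the mechanism described above, a toy model of the internal
loop (not the actual lavc code; decode_one() stands in for the codec's
decode callback):

    typedef struct Pkt { const unsigned char *data; int size; } Pkt;

    /* stand-in for the decode callback; returns the bytes it consumed */
    static int decode_one(const Pkt *pkt)
    {
        return pkt->size < 4 ? pkt->size : 4;
    }

    static void decode_all(Pkt pkt)
    {
        int full_size = pkt.size; /* after this commit, pkt_size is this */
        while (pkt.size > 0) {
            int consumed = decode_one(&pkt);
            pkt.data += consumed;
            pkt.size -= consumed; /* before this commit, this shrinking
                                   * remainder leaked out as pkt_size */
        }
        (void)full_size;
    }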
ISOBMFF (14496-12) made this field ('channelcount') in the
AudioSampleEntry structure non-template¹ somewhere before the
release of the 2022 edition. As for ETSI TS 126 244 AKA 3GPP
file format (V16.1.0, 2020-10), it does not seem to contain any
references limiting the channelcount entry in AudioSampleEntry
or in its own definition of EVSSampleEntry.
The fate-mov-mp4-chapters test had to be adjusted as it outputs a
mono vorbis stream, which is now properly marked as such
in the container.
1: As per 14496-12:
Fields shown as “template” in the box descriptions are fields
which are coded with a default value unless a derived
specification defines their use and permits writers to use
other values than the default.
Splits the currently handled subtitle at random access point
packets that can be configured to follow a specific output stream.
Currently only subtitle streams which are directly mapped into the
same output in which the heartbeat stream resides are affected.
This way the subtitle - which is known to be shown at this time -
can be split and passed to the muxer before its full duration is
known. This is also a drawback, as this essentially outputs
multiple subtitles from a single input subtitle that continues
over multiple random access points. Thus, this feature should not
be utilized in cases where subtitle output latency does not matter.
Co-authored-by: Andrzej Nadachowski <andrzej.nadachowski@24i.com>
Co-authored-by: Bernard Boulay <bernard.boulay@24i.com>
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
The cHRM chunk is descriptive. That is, it describes the primaries that should
be used to interpret the pixel data in the PNG file. This is notably different
from Mastering Display Metadata, which describes which subset of the presented
gamut is relevant. MDM describes a gamut and says colors outside the gamut are
not required to be preserved, but it does not actually describe the gamut that
the pixel data from the frame resides in. Thus, to decode a cHRM chunk present
in a PNG file to Mastering Display Metadata is incorrect.
This commit changes this behavior so the cHRM chunk, if present, is decoded to
color metadata. For example, if the cHRM chunk describes BT.709 primaries, the
resulting AVFrame will be tagged with AVCOL_PRI_BT709, as a description of its
pixel data. To do this, it utilizes libavutil/csp.h, which exposes a function,
av_csp_primaries_id_from_desc, to detect which enum value accurately describes
the white point and primaries represented by the cHRM chunk.
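A minimal sketch of that detection step, assuming the chromaticities have
already been parsed out of the cHRM chunk into an AVColorPrimariesDesc:

    #include <libavutil/csp.h>

    static enum AVColorPrimaries
    primaries_from_chrm(const AVColorPrimariesDesc *desc)
    {
        /* returns e.g. AVCOL_PRI_BT709 when the white point and primaries
         * match BT.709, or AVCOL_PRI_UNSPECIFIED when nothing matches */
        return av_csp_primaries_id_from_desc(desc);
    }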
This commit also changes pngenc.c to utilize the libavutil/csp.h API, since it
previously duplicated code contained in that API. Instead, taking advantage of
the API that exists makes more sense. pngenc.c does properly utilize the color
tags rather than incorrectly using MDM, so that required no change.
Signed-off-by: Leo Izen <leo.izen@gmail.com>
segment_time and segment_times are defined as duration specifications, not
timestamps, so the calculation of segment duration must account for the
initial timestamp. Fixed.
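In other words (a sketch with illustrative names):

    #include <stdint.h>

    /* next cut point, in the same time units as first_pts: segment_time
     * is a duration, so it applies relative to the initial timestamp */
    static int64_t next_cut_ts(int64_t first_pts, int64_t segment_time,
                               int seg_idx)
    {
        return first_pts + (seg_idx + 1) * segment_time;
        /* before the fix, first_pts was effectively omitted */
    }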
The FATE ref for segment-mp4-to-ts changed on account of avoiding a premature
segment cut at the end of the first segment.
Defined by H.274, this SEI message is utilized by iPhones to save
the nominal ambient viewing environment for the display of recorded
HDR content. The contents of the message are exposed to API users
as AVFrame side data containing AVAmbientViewingEnvironment.
As the DV RPU test sample is from an iPhone and includes Ambient
Viewing Environment SEI messages, its test result gets updated.
Parsing should probably be enabled for all codecs, at least for headers,
but e.g. the AAC parser produces 1-byte packets of zero padding with it,
so I'm just enabling it for EAC3 for the moment.
Current code may, depending on the muxer, decide to use VSYNC_VFR tagged
with the specified framerate, without actually performing framerate
conversion. This is clearly wrong and against the documentation, which
states unambiguously that -r should produce CFR output for video
encoding.
FATE test changes:
* nuv-rtjpeg: replace -r with '-enc_time_base -1', which keeps the
original timebase. Output frames are now produced with proper
durations.
* filter-mpdecimate: just drop the -r option, it is unnecessary
* filter-fps-r: remove, this test makes no sense and actually
produces broken VFR output (with incorrect frame durations).