AVID streams - currently handled by the AVRN decoder - can be (depending
on extradata contents) either MJPEG or raw video. To decode the MJPEG
variant, the AVRN decoder currently instantiates an MJPEG decoder
internally and forwards decoded frames to the caller (possibly after
cropping them).
This is suboptimal, because the AVRN decoder does not forward all the
features of the internal MJPEG decoder, such as direct rendering.
Handling such forwarding in a full and generic manner would be quite
hard, so it is simpler to just handle those streams in the MJPEG decoder
directly.
The AVRN decoder, which now handles only the raw streams, can therefore
be marked as supporting direct rendering.
This also removes the last remaining internal use of the obsolete
decoding API.
This provides coverage for writing BlockGroups with BlockAdditional
and ReferenceBlock elements. It also tests setting the hearing-impaired
disposition (which fits, given that this video has no audio, so one needs
to be able to read lips to understand anything).
Reviewed-by: Ridley Combs <rcombs@rcombs.me>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The FATE suite already contains a file containing mastering display
and content light level metadata: Meridian-Apple_ProResProxy-HDR10.mxf
This file is used to test both the Matroska muxer and demuxer.
Reviewed-by: Ridley Combs <rcombs@rcombs.me>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The mxf_d10 muxer is very picky regarding the input it accepts:
The only video accepted is MPEG-2 with absolutely constant bitrate,
i.e. all packets need to have exactly the same size; and only a few
bitrates are accepted.
The sample file used did not abide by this: Writing the first packet
(a video packet) errored out and afterwards an audio packet from the
muxing queue was written. That was all that got written besides metadata
(which this test is about). The FFmpeg CLI returned an error, but said
error was ignored by the md5 test.
This commit changes the test to actually send a compliant stream to the
muxer, so that it does not error out; furthermore, the test is changed
to explicitly check the metadata instead of it only being implicitly
included in the md5 checksum. The compliant stream is created by our
encoder at runtime.
Finally, the test now also covers writing user-specified
product/company/version identification.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Also, test modifying colorspace properties and the default_mode
passthrough which is used here to create a file that has no default
track at all.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
It furthermore tests the demuxer's handling of chained SeekHeads,
level 1 elements after the Clusters and the muxer's capability of
writing huge TrackNumbers as well as expanding the Cues' length field
by one byte if necessary to fill the reserved space. It also tests
propagation of metadata.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
No longer used by anything.
Unfortunately, the old FFT_FLOAT/FFT_FIXED_32 is left as-is. It's
simply too much work for code that is meant to be removed entirely anyway.
The AC3 encoder used to be a separate library called "Aften", which
got merged into libavcodec (literally, SVN commits and all).
The merge preserved as many features from the library as possible.
The code had two versions - a fixed-point version and a floating-point
version. FFmpeg had floating-point DSP code used by other codecs,
including the AC3 decoder, so the floating-point DSP was simply
replaced with FFmpeg's own functions.
However, FFmpeg had no fixed-point audio code at that point. So
the encoder brought along its own fixed-point DSP functions,
including a fixed-point MDCT.
The fixed-point MDCT itself is trivially just a float MDCT with a
different type and each multiply being a fixed-point multiply.
So over time, it got refactored, and the FFT used for all other codecs
was templated.
Due to design decisions at the time, the fixed-point version of the
encoder operates at 16 bits of precision. Although convenient, this was
inadequate and inefficient even at the time. The encoder is noisy,
does not produce output comparable to the float encoder, and even
rings at higher frequencies due to the badly approximated window function.
Enter MIPS (owned by Imagination Technologies at the time). They wanted
quick fixed-point decoding on their FPUless cores. So they contributed
patches to template the AC3 decoder so it had both a fixed-point
and a floating-point version. They also did the same for the AAC decoder.
They, however, used 32-bit samples, not 16-bit. And we did not have
32-bit fixed-point DSP functions, including an MDCT. But instead of
templating our MDCT to output 3 versions (float, 32-bit fixed and 16-bit fixed),
they simply copy-pasted their own MDCT into ours and completely
ifdeffed our own MDCT code out whenever a 32-bit fixed-point MDCT was selected.
This is also the status quo nowadays - 2 separate MDCTs, one templated
into floating-point and 16-bit fixed-point versions, and one sort-of
integrated one that produces the 32-bit fixed-point version.
MIPS weren't all that interested in encoding, so they left the encoder
as-is, and they didn't care much about the ifdeffery, mess or quality - it
wasn't their problem.
So the MDCT/FFT code has always been a thorn in the eye of anyone looking
to clean up the code.
Backstory over. Internally AC3 operates on 25-bit fixed-point coefficients.
So for the floating point version, the encoder simply runs the float MDCT,
and converts the resulting coefficients to 25-bit fixed-point, as AC3 is inherently
a fixed-point codec. For the fixed-point version, the input is 16-bit samples,
so to maximize precision the frame samples are analyzed and the highest set
bit is detected via ac3_max_msb_abs_int16(), and the coefficients are then
scaled up via ac3_lshift_int16(), so the input for the FFT is always at least 14 bits,
computed in normalize_samples(). After FFT, the coefficients are scaled up to 25 bits.
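For illustration, the old 16-bit normalization amounts to roughly the
following (a simplified, hypothetical sketch - the real code is templated
and has DSP-optimized variants):

    #include "libavutil/common.h"   /* FFABS, FFMAX, av_log2 */

    static int normalize_block_sketch(int16_t *samples, int n)
    {
        unsigned max = 0;
        int msb, shift, i;

        for (i = 0; i < n; i++)          /* cf. ac3_max_msb_abs_int16() */
            max |= FFABS(samples[i]);

        msb   = av_log2(max | 1);        /* position of the highest set bit */
        shift = FFMAX(13 - msb, 0);      /* target: at least 14 significant bits */

        for (i = 0; i < n; i++)          /* cf. ac3_lshift_int16() */
            samples[i] *= 1 << shift;

        return shift;                    /* must be undone after the transform */
    }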
This patch simply changes the encoder to accept 32-bit samples, reusing
the already well-optimized 32-bit MDCT code, allowing us to clean up and drop
a large part of our very messy code, as well as prepare for the future lavu/tx
conversion. The coefficients are simply scaled down to 25 bits during windowing,
skipping 2 separate scalings, as the hacks to extend precision are simply no longer
necessary. There's no point in running the MDCT always at 32 bits when you're
going to drop 6 bits off anyway, the headroom is plenty, and the MDCT rounds
properly.
This also makes the encoder even slightly more accurate than the float version,
as no coefficient conversion step is necessary.
SIZE SAVINGS:
ARM32:
HARDCODED TABLES:
BASE - 10709590
DROP DSP - 10702872 - diff: -6.56KiB
DROP MDCT - 10667932 - diff: -34.12KiB - both: -40.68KiB
DROP FFT - 10336652 - diff: -323.52KiB - all: -364.20KiB
SOFTCODED TABLES:
BASE - 9685096
DROP DSP - 9678378 - diff: -6.56KiB
DROP MDCT - 9643466 - diff: -34.09KiB - both: -40.65KiB
DROP FFT - 9573918 - diff: -67.92KiB - all: -108.57KiB
ARM64:
HARDCODED TABLES:
BASE - 14641112
DROP DSP - 14633806 - diff: -7.13KiB
DROP MDCT - 14604812 - diff: -28.31KiB - both: -35.45KiB
DROP FFT - 14286826 - diff: -310.53KiB - all: -345.98KiB
SOFTCODED TABLES:
BASE - 13636238
DROP DSP - 13628932 - diff: -7.13KiB
DROP MDCT - 13599866 - diff: -28.38KiB - both: -35.52KiB
DROP FFT - 13542080 - diff: -56.43KiB - all: -91.95KiB
x86:
HARDCODED TABLES:
BASE - 12367336
DROP DSP - 12354698 - diff: -12.34KiB
DROP MDCT - 12331024 - diff: -23.12KiB - both: -35.46KiB
DROP FFT - 12029788 - diff: -294.18KiB - all: -329.64KiB
SOFTCODED TABLES:
BASE - 11358094
DROP DSP - 11345456 - diff: -12.34KiB
DROP MDCT - 11321742 - diff: -23.16KiB - both: -35.50KiB
DROP FFT - 11276946 - diff: -43.75KiB - all: -79.25KiB
PERFORMANCE (10min random s32le):
ARM32 - before - 39.9x - 0m15.046s
ARM32 - after - 28.2x - 0m21.525s
Speed: -30%
ARM64 - before - 36.1x - 0m16.637s
ARM64 - after - 36.0x - 0m16.727s
Speed: -0.5%
x86 - before - 184x - 0m3.277s
x86 - after - 190x - 0m3.187s
Speed: +3%
Do it only when requested with the AV_CODEC_EXPORT_DATA_VIDEO_ENC_PARAMS
flag.
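A minimal sketch of what gating on that flag looks like in a decoder
(the MPEG-2 params type, nb_blocks and frame_qp are placeholders; the
side-data helpers are the public libavutil API):

    #include "libavcodec/avcodec.h"
    #include "libavutil/video_enc_params.h"

    /* Attach AVVideoEncParams side data only when the caller requested it. */
    static int export_qp_sketch(AVCodecContext *avctx, AVFrame *frame,
                                unsigned nb_blocks, int frame_qp)
    {
        AVVideoEncParams *par;

        if (!(avctx->export_side_data & AV_CODEC_EXPORT_DATA_VIDEO_ENC_PARAMS))
            return 0;

        par = av_video_enc_params_create_side_data(frame, AV_VIDEO_ENC_PARAMS_MPEG2,
                                                    nb_blocks);
        if (!par)
            return AVERROR(ENOMEM);
        par->qp = frame_qp;   /* per-block deltas would be filled via
                                 av_video_enc_params_block() */
        return 0;
    }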
Drop previous code using the long-deprecated AV_FRAME_DATA_QP_TABLE*
API. Temporarily disable fate-filter-pp, fate-filter-pp7,
fate-filter-spp. They will be reenabled once these filters are converted
in following commits.
One of the inputs to the fate test has an rgba pixel format which needs
to be converted to rgb32 (argb on big-endian) for the hqx filter. Because auto
scaling in the fate test is disabled, this needs a separate scale
filter.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Andriy Gelman <andriy.gelman@gmail.com>
It depends on the muxer generating the timestamps, which is deprecated
and scheduled for removal on next bump.
A bunch of tests change timestamps, because ffmpeg.c is not
generating them correctly. This should be fixed later.
While the FATE suite contains a sample file for Musepack 8, it did not
use it to test the decoder; it is only used in the mpc8-demux test that
tests the demuxer via streamcopy. Therefore this commit adds an actual
decoder test.
The test uses the framecrc output, because Musepack SV8 is a codec whose
decoder returns multiple frames for a single packet, so that timing
information in the test output is valuable. Output seeking has been
used in order to limit the size of the ref file as well as to test this
codepath for the first time.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Ever since 05872c67a4, av1dec should no longer attempt to output empty
frames if another decoder was used for probing and it successfully set a
pix_fmt, so we can re-add the AV_CODEC_CAP_AVOID_PROBING cap.
Signed-off-by: James Almer <jamrial@gmail.com>
changes since v1:
- made into fate test
- fixed c90 warnings
- tests more intermediate formats
- tested on BE mips too
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Filters mostly work in native endianness, but they must output
a specified endianness, usually little: that requires a final
conversion for big endian.
I do not know what the deal with gif-deal is: explicitly inserting
the filters that are implicitly inserted results in fewer frames in the
output. Probably a strange problem with durations.
This is utilized by various media ingests to figure out the bit
rate of the content you are pushing towards them, so write it for
video, audio and subtitle tracks in case at least one nonzero value
is available. It is only mentioned for timed metadata sample
descriptions in QTFF, so limit it only to ISOBMFF (MODE_MP4) mode.
Updates the FATE tests which have their results changed due to the
20 extra bytes being written per track.
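For reference, the element in question is presumably the 20-byte ISOBMFF
BitRateBox ('btrt'); writing it amounts to roughly the following sketch
(illustrative function and variable names, using the usual avio helpers):

    #include "libavformat/avio_internal.h"   /* ffio_wfourcc() */

    /* 8 bytes of box header plus three 32-bit fields = 20 bytes per track. */
    static void write_btrt_sketch(AVIOContext *pb, uint32_t buffer_size_db,
                                  uint32_t max_bitrate, uint32_t avg_bitrate)
    {
        avio_wb32(pb, 20);               /* box size */
        ffio_wfourcc(pb, "btrt");        /* box type */
        avio_wb32(pb, buffer_size_db);   /* bufferSizeDB */
        avio_wb32(pb, max_bitrate);      /* maxBitrate */
        avio_wb32(pb, avg_bitrate);      /* avgBitrate */
    }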
Explicitly insert the scale or aresample filter where it would
have been inserted by the negotiation.
Re-enable conversions where explicit insertion cannot be done easily.
If a conversion is needed in a test, we want to know about it.
If the negotiation changes and makes new conversion necessary,
we want to know about it even more.
The dvbsubtest_filter.ts sample is a version of the VideoLAN sample
database's samples/sub/dvbsub/dvbsubtest.ts filtered using Project X. It
originates from ticket #8844.
Previously, the hls-fmp4 and hls-fmp4_ac3 tests used the same file
names for init and segment files, which occasionally could cause
corruption and failed tests, if the input files for both tests are
generated in parallel, as they could overwrite each other.
This happened to work some of the time, as the fmp4_ac3 test actually
only checked the init segment file (which the fmp4 test case never
wrote, due to using the incorrect hls_segment_type option) and the
fmp4 test case always regenerated the input files due to mismatched
target and file names.
Signed-off-by: Martin Storsjö <martin@martin.st>
Previously, with the file name not matching the target, the files
were regenerated every time fate is rerun - contrary to the other
test targets in the same file. (While regenerating it every time
might be desirable, as that's what the test is about, the file
at least has a dependency on the ffmpeg executable, making them
regenerated every time the executable is updated - and this change
at least makes it consistent with the rest.)
Signed-off-by: Martin Storsjö <martin@martin.st>
Will prevent FATE from breaking once LIBAVCODEC_VERSION_MINOR is bumped to 100.
Reported-by: zane
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Add probeaudiostream to get an audio stream's codec_name, codec_time_base,
sample_fmt, channels and channel_layout.
Signed-off-by: Steven Liu <lq@chinaffmpeg.org>
changes since v1
- default behavior, no longer hidden behind decoder parameter
- updated tests to reflect change
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The Matroska muxer writes the Chapters early when chapters were already
available when writing the header; in this case any tags pertaining to
these chapters get written, too.
Yet if no chapters had been supplied before writing the header, Chapters
can also be written when writing the trailer if any are supplied. Tags
belonging to these chapters were up until now completely ignored.
This commit changes this: Writing the tags belonging to chapters has
been moved to mkv_write_chapters(). If mkv_write_tags() has not been
called yet (i.e. when chapters are written when writing the header),
the AVIOContext for writing the ordinary Tags element is used, but not
output, as this is left to mkv_write_tags() in order to only write one
Tags element. Yet if mkv_write_tags() has already been called,
mkv_write_chapters() will output a Tags element of its own which only
contains the tags for chapters.
When chapters are available initially, the corresponding tags will now
be the first tags in the Tags element; but the ordering of tags in Tags
is irrelevant anyway.
This commit also makes chapter_id_offset local to mkv_write_chapters()
as it is used only there and not reused at all.
Potentially writing a second Tags element means that the maximum number
of SeekHead entries had to be incremented. All the changes to FATE
result from the ensuing increase in the amount of space reserved for the
SeekHead (21 bytes more).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
We won't be able to seek back to write the actual duration anyway.
FATE-tests using the md5pipe command had to be updated due to this change.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Tested on Linux x86_32/64, mingw32/64, arm & mips qemu
Tested-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
Tested on x86-32/64, mingw32/64, arm & mips qemu
Tested-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
Several EBML Master elements for which a good upper bound of the final
length was available were nevertheless written without passing that
upper bound to start_ebml_master(), so that their length fields were
eight bytes long. This has been changed.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Moreover, putting the Cues in front of the Clusters by reserving space
in advance is also tested.
The new capability of using ffprobe during a remux/transcode test is
used here for information about the chapters.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Using random values for TrackUID and FileUID (as happens when the
AVFMT_FLAG_BITEXACT flag is not set) has the obvious downside of making
the output indeterministic. This commit mitigates this by writing the
potentially random values with a fixed size of eight bytes, even if their
actual values would fit into fewer than eight bytes. This ensures that
even in non-bitexact mode, the differences between two files generated
with the same settings are restricted to a few bytes in the header.
(Namely the SegmentUID, the TrackUIDs (in Tracks as well as when
referencing them via TagTrackUID), the FileUIDs (in Attachments as
well as in TagAttachmentUID) as well as the CRC-32 checksums of the
Info, Tracks, Attachments and Tags level 1 elements.) Without this
patch, there might be an offset/a size difference between two such
files.
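For illustration, forcing the eight-byte representation just means writing
the element ID, a one-byte length descriptor of 0x88 and the value
zero-padded to 64 bits (a sketch; put_ebml_id() stands in for the muxer's
existing ID helper):

    static void put_ebml_uid_sketch(AVIOContext *pb, uint32_t elementid,
                                    uint64_t uid)
    {
        put_ebml_id(pb, elementid);   /* element ID, variable length as usual */
        avio_w8(pb, 0x80 | 8);        /* length descriptor: payload is 8 bytes */
        avio_wb64(pb, uid);           /* value, zero-padded to 64 bits */
    }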
The FATE-tests had to be updated because the fixed-sized UIDs are also
used in bitexact mode.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
containing updated extradata, in this case a new FLAC streaminfo.
Furthermore, it also tests that the Matroska muxer is able to preserve
uncommon channel layouts by adding Vorbis comments to the CodecPrivate.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
It might be used by the Matroska muxer. This is also the reason why the
FATE-tests for muxing WavPack into Matroska needed to be updated: They
now write the correct version 4.07 and not 4.03 as before.
Reviewed-by: David Bryant <david@wavpack.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
mkvmerge versions 6.2 to 40.0 had a bug that made it not propagate the
WavPack extradata (containing the WavPack version) during remuxing from
a Matroska file; currently our demuxer would treat every WavPack block
encountered as invalid data (unless the WavPack stream is to be
discarded (i.e. the stream's discard is >= AVDISCARD_ALL)) and try to
resync to the next level 1 element.
Luckily, the WavPack version is currently not really important; so we
fix this problem by assuming a version. David Bryant, the creator of
WavPack, recommended using version 0x410 (the most recent version) for
this. And this is what this commit does.
A FATE-test for this has been added.
Reviewed-by: David Bryant <david@wavpack.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Up until e7ddafd5, the Matroska muxer wrote two SeekHeads: One at the
beginning referencing the main level 1 elements (i.e. not the Clusters)
and one at the end, referencing the Clusters. This second SeekHead was
useless and has therefore been removed. Yet the SeekHead-related
functions and structures are still geared towards this use case: They
are built around an allocated array of variable size that gets
reallocated every time an element is added to it although the maximum
number of Seek entries is a small compile-time constant, so that one should
rather include the array in the SeekHead structure itself; and said
structure should be contained in the MatroskaMuxContext instead of being
allocated separately.
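In other words, something along these lines (field and constant names are
purely illustrative):

    typedef struct seekhead_entry_sketch {
        uint32_t elementid;
        uint64_t segmentpos;
    } seekhead_entry_sketch;

    typedef struct seekhead_sketch {
        int64_t               filepos;
        int                   num_entries;
        seekhead_entry_sketch entries[MAX_SEEKHEAD_ENTRIES]; /* small constant */
    } seekhead_sketch;   /* embedded in MatroskaMuxContext, not allocated */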
The earlier code reserved space for a SeekHead with 10 entries, although
we currently write at most 6. Reducing said number means that every
Matroska/WebM file will be 84 bytes smaller and required adapting
several FATE tests; furthermore, the reserved amount overestimated the
space needed for the SeekHead's length field and how many bytes need to
be reserved to write an EBML Void element, bringing the total reduction
to 89 bytes.
This also fixes a potential segfault: If !mkv->is_live and if the
AVIOContext is initially unseekable when writing the header, the
SeekHead is already written when writing the header and this used to
free the SeekHead-related structures that had been allocated. But if
the AVIOContext happens to be seekable when writing the trailer, the muxer
will attempt to write the SeekHead again, which will lead to segfaults
because the corresponding structures have already been freed.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The WebM DASH Manifest muxer can write manifests for live streams and
these contain an entry that depends on the time the manifest is written;
an AVOption to make the output reproducible has been added for tests.
But this is unnecessary, as there already is a method for reproducible
output: The AVFMT_FLAG_BITEXACT-flag of the AVFormatContext. Therefore
this commit removes the custom option.
Given that the description of said option contained "private option -
users should never set this" and that it was not documented in
muxers.texi, no deprecation period for this option seemed necessary.
The commands of the FATE-tests for this muxer have been changed to no
longer use this option.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Utilizes a subpicture sample with one decodable subpicture for the
test.
Based on a failing test case reported by Michael in
https://ffmpeg.org/pipermail/ffmpeg-devel/2019-February/240398.html
which at the time had no test case for it.
Additionally, this is the first test case for the presentation
graphics format.
When a Matroska Block is only stored in compressed form, the size of
the uncompressed block is not explicitly coded and therefore not known
before decompressing it. Therefore the demuxer uses a guess for the
uncompressed size: The first guess is three times the compressed size,
and if this is not enough, it is repeatedly increased by a factor of
three. But when this happens with lzo, the decompression is neither
resumed nor started again. Instead, when av_lzo1x_decode indicates that x
bytes of input data could not be decoded, because the output buffer is
already full, the first (not the last) x bytes of the input buffer are
resent for decoding in the next try; they overwrite already decoded
data.
This commit fixes this by instead restarting the decompression anew,
just with a bigger buffer.
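A sketch of the fixed retry logic (illustrative only; the real code also
has to handle the other compression methods, header stripping and the
remaining error flags):

    #include "libavutil/error.h"
    #include "libavutil/lzo.h"
    #include "libavutil/mem.h"

    static int lzo_decompress_sketch(uint8_t **out, int *out_size,
                                     const uint8_t *in, int in_size)
    {
        int size = 3 * in_size;

        for (;;) {
            int in_len = in_size, out_len = size, ret;

            if ((ret = av_reallocp(out, size + AV_LZO_OUTPUT_PADDING)) < 0)
                return ret;
            ret = av_lzo1x_decode(*out, &out_len, in, &in_len);
            if (!(ret & AV_LZO_OUTPUT_FULL)) {
                *out_size = size - out_len;          /* bytes actually produced */
                return ret ? AVERROR_INVALIDDATA : 0;
            }
            size *= 3;   /* guess was too small: enlarge and restart anew */
        }
    }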
This seems to be a regression since 935ec5a1.
A FATE-test for this has been added.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
This test tests that demuxing ProRes that is muxed like it should be in
Matroska (i.e. with the first header ("icpf") atom stripped away) works;
it also tests bz2 decompression as well as the handling of
unknown-length clusters.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
In these cases, we must pass the full path of the file to ffprobe
(as the current working dir on the remote system, e.g. when invoked
with "ssh remote ffprobe ..." isn't the wanted one).
The input filename passed to ffprobe is also included in the output,
which is part of the reference test data. Add a new option to
ffprobe to allow overriding what path is printed, to keep the
original relative path in the tests.
An alternative approach could be an option to allow requesting omitting
the file name from the dumped data, and updating the test references
accordingly.
Signed-off-by: Martin Storsjö <martin@martin.st>
5 cabac states for cbf_cb and cbf_cr are supported according to
Table 9-4.
Add a test for 64x64 4:4:4 8bit HEVC clips with TUDepth = 4, cbf_cr > 0.
Signed-off-by: Xu Guangxin <guangxin.xu@intel.com>
Signed-off-by: Linjie Fu <linjie.fu@intel.com>
Signed-off-by: James Almer <jamrial@gmail.com>
When testing on a memory limited system, these tests consume a
significant amount of memory and can often fail if testing by running
multiple processes in parallel.
Signed-off-by: Martin Storsjö <martin@martin.st>
The IVF muxer autoinserts the av1_metadata filter unconditionally, which is
not desirable for these tests.
Signed-off-by: James Almer <jamrial@gmail.com>
These dependencies are evaluated by make and must be expressed with
the paths as in the local filesystem.
Signed-off-by: Martin Storsjö <martin@martin.st>
The tremolo filter uses floating point internally, and uses
multiplication factors derived from sin(fmod()), neither of
which is bitexact for use with framecrc.
This fixes running this test when built for mingw/x86_32
with clang.
In this case, a 1 ulp difference in the output from fmod() would
end up in an output from the filter that differs by 1 ulp, but
which makes the lrint() in swresample/audioconvert.c round in a
different direction.
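A toy example of the failure mode (not the filter code itself): under the
default round-to-nearest-even mode, two doubles one ulp apart can be
rounded by lrint() to different integers.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double a = 0.5;                   /* halfway case, rounds to even: 0 */
        double b = nextafter(0.5, 1.0);   /* one ulp above, rounds to 1 */
        printf("%ld %ld\n", lrint(a), lrint(b));
        return 0;
    }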
Signed-off-by: Martin Storsjö <martin@martin.st>
contained in Vorbis comments in the CodecPrivate of flac tracks.
Moreover, it also tests header removal compression.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
This test contains a track with zlib compressed CodecPrivate in addition
to compressed frames; the former was unchecked before.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
The flac parser uses a fifo to buffer its data. Consequently, when
searching for sync codes of flac packets, one needs to take care of
the possibility of wraparound. This is done by using an optimized start
code search that works on each of the continuous buffers separately and
by explicitly checking whether the last pre-wrap byte and the first
post-wrap byte constitute a valid sync code.
Moreover, the last MAX_FRAME_HEADER_SIZE - 1 bytes ought not to be searched
for (the start of) a sync code because a header that might be found in this
region might not be completely available. These bytes ought to be searched
later on, when more data is available or when flushing.
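Conceptually, the wraparound-aware search looks something like the sketch
below (heavily simplified; the real parser works on its own fifo and
header-scoring structures, and found() is just a placeholder for recording
a candidate header):

    #include <stdint.h>

    #define MAX_FRAME_HEADER_SIZE 16   /* mirrors the parser's constant */

    /* 14-bit FLAC frame sync code plus the mandatory-zero reserved bit */
    static int is_sync_pair(uint8_t a, uint8_t b)
    {
        return a == 0xFF && (b & 0xFE) == 0xF8;
    }

    /* The fifo's contents are exposed as two contiguous chunks p1/p2;
     * positions in the last MAX_FRAME_HEADER_SIZE - 1 bytes are skipped,
     * as a header starting there might not be completely buffered yet. */
    static void search_fifo_sketch(const uint8_t *p1, int len1,
                                   const uint8_t *p2, int len2,
                                   void (*found)(int pos))
    {
        int limit = len1 + len2 - (MAX_FRAME_HEADER_SIZE - 1);
        int i;

        for (i = 0; i + 1 < len1 && i < limit; i++)        /* pre-wrap chunk */
            if (is_sync_pair(p1[i], p1[i + 1]))
                found(i);

        if (len1 && len2 && len1 - 1 < limit &&            /* pair across the wrap */
            is_sync_pair(p1[len1 - 1], p2[0]))
            found(len1 - 1);

        for (i = 0; i + 1 < len2 && len1 + i < limit; i++) /* post-wrap chunk */
            if (is_sync_pair(p2[i], p2[i + 1]))
                found(len1 + i);
    }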
Unfortunately there was an off-by-one error in the calculation of the
length to search of the post-wrap buffer: It was too large, because the
calculation was based on the amount of bytes available in the fifo from
the last pre-wrap byte onwards. This meant that a header might be
parsed twice (once prematurely and once regularly when more data is
available); it could also mean that an invalid header will be treated as
valid (namely if the length of said invalid header is
MAX_FRAME_HEADER_SIZE and the invalid byte that will be treated as the
last byte of this potential header happens to be the right CRC-8).
Should a header be parsed twice, the second instance will be the best child
of the first instance; the first instance's score will be
FLAC_HEADER_BASE_SCORE - FLAC_HEADER_CHANGED_PENALTY ( = 3) higher than
the second instance's score. So the frame belonging to the first
instance will be output and it will be done as a zero length frame (the
difference of the header's offset and the child's offset). This has
serious consequences when flushing, as returning a zero length buffer
signals to the caller that no more data will be output; consequently the
last frames not yet output will be dropped.
Furthermore, a "sample/frame number mismatch in adjacent frames" warning
got output when returning the zero-length frame belonging to the first
header, because the child's sample/frame number of course didn't match
the expected sample/frame number given its parent.
filter/hdcd-mix.flac from the FATE-suite was affected by this (the last
frame was omitted) which is the reason why several FATE-tests needed to
be updated.
Fixes ticket #5937.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Right now, the concat filter does not set the frame_rate value on any of
the out links. As a result, the default ffmpeg behaviour kicks in - to
copy the framerate from the first input to the outputs.
If a later input is higher framerate, this results in dropped frames; if
a later input is lower framerate it might cause judder.
This patch checks if all of the video inputs have the same framerate, and
if not it sets the out link to use '1/0' as the frame rate, the value
meaning "unknown/vfr".
A test is added to verify the VFR behaviour. The existing test for CFR
behaviour passes unchanged.
This fixes a make fate issue with frame-threaded scale in my local testing.
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
Why change 0.4 to 0.25:
one scenecut (pkt_pts=20040) isn't detected with a threshold of 0.4.
Why not change it to 0.3 instead of 0.25:
it would miss the scenecut (pkt_pts=20040) after applying the next
patch, which enables yuvj420.
For fate testing, it's better to catch all scenecut scenes.
Reviewed-by: Marton Balint <cus@passwd.hu>
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
The tests previously rounded the timestamps. It's better in a fate test to preserve
the data from the demuxer and decoder.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Up until now, the length field of most level 1 elements has been written
using eight bytes, although it is known in advance how much space the
content of said elements will take up so that it would be possible to
determine the minimal amount of bytes for the length field. This
commit changes this.
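For reference, the minimal width follows directly from the EBML coding:
each length-descriptor byte provides seven usable bits, and the all-ones
pattern of a given width is reserved for "unknown length". A sketch (not
the muxer's actual helper):

    static int ebml_length_size_sketch(uint64_t length)
    {
        int bytes = 1;
        /* a width is usable only if length is strictly below the all-ones
         * value of that width, hence the "+ 1" */
        while ((length + 1) >> (7 * bytes))
            bytes++;
        return bytes;
    }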
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>