Do this by making this test a transcode test.
Also fix the test requirements and don't add this test to FATE_AFILTER;
instead use a new variable and a new target for flvenc-tests.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Also add a fate-filter-overlays target containing all these tests
and fix the requirements of the tests; furthermore, remove
unnecessary scale filters from filter-overlay-rgba?_rgba.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Also fix the requirements of these tests: Only the anaglyph
tests need a scale filter, yet it has been inserted for all tests
without any check for its presence.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
filter-pp and filter-pp7 are the only filter-pp* tests
that use the file generated by fate-vsynth1-mpeg4-qprd.
Also combine the dependency on this test for all the tests that need it.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The temporary fate-lavf files can easily be removed
if they are not needed as inputs for other tests (mainly
fate-seek-tests). This commit implements this.
The size of the remaining files decreases from 260890083B
to 79481793B.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Extend the ordinary mechanism to signal KEEP for this.
This also allows removing the keep parameter from enc_dec,
transcode and stream_remux, so that several empty parameters
'""' could be removed.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The output of this test is merely a file containing the positions
of peaks; it is not a WAVE file, and trying to demux it simply
returns AVERROR_INVALIDDATA. This error went unnoticed because
the return value of do_avconv_crc is the return value of echo.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It seems that fate-gif-color was intended to be declared a
prerequisite of the fate-gifenc% tests. Yet the latter do not need
anything from
the former, so this would be unnecessary. Furthermore, given that this
line has no associated recipe, it actually cancels implicit rules for
fate-gifenc% instead of adding a prerequisite.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
These tests have basically nothing to do with VPX (they do not even
require the decoder).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The tests in concatdec.mak reuse files created by tests
from lavf-container. Therefore these tests have the other tests
as prerequisites and mostly duplicate their CONFIG requirements.
(The mxf_d10 tests did this incorrectly, as they only required
the MXF muxer.) This duplication is of course bad as usual,
so stop it by using the variable that contains the enabled
lavf-container tests to filter out all the concat tests
that lack a corresponding enabled non-concat test.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
These changes are automatically inherited by the fate-seek-tests
based upon lavf-audio.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The new requirements are also automatically inherited
by the FATE_SEEK_LAVF_VIDEO seek-tests.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This automatically fixes the requirements of the fate-seek-acodec*
tests (e.g. 16 of the 27 such tests are now automatically disabled
if the aresample filter is disabled).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This automatically fixes the requirements of the fate-seek-vsynth*
tests (e.g. 16 of the 49 such tests are now automatically disabled
if the scale filter is disabled).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
If one uses the -s option, a scale filter is inserted
even when doing so is redundant. This patch stops inserting it
in that case, which makes the tests that don't need libswscale
actually succeed when it is disabled (only 315 of the 470 tests
need it).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Most of the tests in seek.mak use files created by other tests
as input. Therefore these tests have the other tests as prerequisites
and duplicate their CONFIG requirements. This duplication is of course
bad as usual, so stop it by using the variable that contains
the enabled non-seek tests to filter out all the seek tests
that lack a corresponding enabled non-seek test.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The output files of the lavf tests are highly regular,
allowing the use of rules for the src files instead of a list.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Each of the intermediate lena-*.fits files is used by exactly
one test, so it can be deleted right after that test.
Switching to a transcode test (which is also more natural) achieves
this. It also adds checksums of the intermediate files to the ref-file.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
In particular, add the missing dependencies on the scale and
aresample filters (and therefore on libswscale and libswresample,
respectively).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
In particular, add the missing dependency on the scale filter
(and therefore on libswscale).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Add a parameter that allows adding additional requirements.
Also add FILE_PROTOCOL to all the auxiliary functions
that use a demuxer.
Also fix the requirements for the fate-mpegts-probe-(latm|program)
tests: they misused DEMDEC.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
In particular, add the missing dependency on the scale filter
(and therefore on libswscale).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Also fix the requirements of fate-mov-channel-description:
It needs the pcm_s16le decoder and the mov demuxer.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
And drop the FATE_CAF_REMUX variables which only existed
to avoid having to repeat the common FILE_PROTOCOL PIPE_PROTOCOL
FRAMECRC_MUXER stuff.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It also adds the missing dependencies on the file and pipe protocols
and the framecrc muxer.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The test also requires a png decoder, which can often be disabled
in cross-building setups, where zlib might be missing.
Signed-off-by: Martin Storsjö <martin@martin.st>
This is mostly straightforward. The major complication is that, as a
result of the 16-bit chunk size limitation, ICC profiles may need to be
split up into multiple chunks.
We also need to make sure to allocate enough extra space in the packet
to fit the ICC profile, so modify both mpegvideo_enc.c and ljpegenc.c to
take into account this extra overhead, failing cleanly if necessary.
Also add a FATE transcode test to ensure that the ICC profile gets
written (and read) correctly. Note that this ICC profile is smaller than
64 kB, so this doesn't test the APP2 chunk re-arranging code at all.
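For orientation, here is a rough, hypothetical sketch of such splitting
as the ICC/JPEG conventions prescribe it (identifier "ICC_PROFILE\0",
1-based chunk number, total chunk count); the helper name and the
FILE-based output are made up and this is not the encoder's actual code:

    #include <stdint.h>
    #include <stdio.h>

    /* Each APP2 segment payload is at most 65533 bytes (16-bit length
     * field, which counts its own two bytes); 12 bytes go to
     * "ICC_PROFILE\0" and 2 bytes to chunk number/count, leaving
     * 65519 bytes of profile data per segment. */
    #define ICC_HDR       14
    #define ICC_MAX_SLICE (65533 - ICC_HDR)

    static void put_icc_app2(FILE *f, const uint8_t *profile, size_t size)
    {
        size_t nb_chunks = (size + ICC_MAX_SLICE - 1) / ICC_MAX_SLICE;

        for (size_t i = 0; i < nb_chunks; i++) {
            size_t slice = size - i * ICC_MAX_SLICE;
            if (slice > ICC_MAX_SLICE)
                slice = ICC_MAX_SLICE;

            unsigned seg_len = 2 + ICC_HDR + slice;  /* includes the length field */
            fputc(0xFF, f); fputc(0xE2, f);          /* APP2 marker               */
            fputc(seg_len >> 8, f); fputc(seg_len & 0xFF, f);
            fwrite("ICC_PROFILE\0", 1, 12, f);
            fputc((int)(i + 1), f);                  /* chunk number, 1-based     */
            fputc((int)nb_chunks, f);                /* total number of chunks    */
            fwrite(profile + i * ICC_MAX_SLICE, 1, slice, f);
        }
    }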
Signed-off-by: Niklas Haas <git@haasn.dev>
We re-use the PNGEncContext.zstream for deflate-related operations.
Other than that, the code is pretty straightforward. Special care needs
to be taken to avoid writing more than 79 characters of the profile
description (the maximum supported).
To write the (dynamically sized) deflate-encoded data, we allocate extra
space in the packet and use that directly as a scratch buffer. Modify
png_write_chunk slightly to allow pre-writing the chunk contents like
this.
Also add a FATE transcode test to ensure that the ICC profile gets
encoded correctly.
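For orientation, a minimal, hypothetical sketch of what an iCCP payload
looks like (profile name of at most 79 Latin-1 characters, NUL
separator, compression method 0, deflated data); the helper below uses
plain zlib compress() and is not the muxer's actual code:

    #include <stdint.h>
    #include <string.h>
    #include <zlib.h>

    /* Build the payload of a PNG iCCP chunk into dst; returns the
     * payload size or -1 on error. */
    static long build_iccp_payload(uint8_t *dst, size_t dst_size, const char *name,
                                   const uint8_t *profile, size_t profile_size)
    {
        size_t name_len = strlen(name);
        uLongf comp_size;

        if (name_len == 0 || name_len > 79 || dst_size < name_len + 2)
            return -1;

        memcpy(dst, name, name_len);
        dst[name_len]     = 0;            /* NUL separator               */
        dst[name_len + 1] = 0;            /* compression method: deflate */

        comp_size = dst_size - (name_len + 2);
        if (compress(dst + name_len + 2, &comp_size, profile, profile_size) != Z_OK)
            return -1;

        return (long)(name_len + 2 + comp_size);
    }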
Signed-off-by: Niklas Haas <git@haasn.dev>
Calculate Spatial Info (SI) and Temporal Info (TI) scores for a video, as defined
in ITU-T P.910: Subjective video quality assessment methods for multimedia
applications.
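For reference, the P.910 definitions amount to the following (standard
formulation of the recommendation, not a quote of the filter's code),
with F_n denoting frame n:

    SI = \max_n \left( \operatorname{std}_{\text{space}} \left[ \operatorname{Sobel}(F_n) \right] \right)
    TI = \max_n \left( \operatorname{std}_{\text{space}} \left[ F_n(i,j) - F_{n-1}(i,j) \right] \right)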
Note that the benchmarking results for these functions are highly dependent
upon the input data. Therefore, each function is benchmarked twice,
corresponding to the best and worst case complexity of the reference C
implementation. The performance of a real stream decode will fall somewhere
between these two extremes.
Signed-off-by: Ben Avison <bavison@riscosopen.org>
Signed-off-by: Martin Storsjö <martin@martin.st>
The range parameters need to be set up before calling
sws_init_context (which selects which fastpaths can be used;
this gets called by sws_getContext); solely passing them via
sws_setColorspaceDetails isn't enough.
This fixes producing full range YUV output when doing
YUV->YUV conversions between different YUV color spaces.
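A minimal sketch of the pattern this implies for API users (assuming
the usual libswscale AVOption names such as "src_range"/"dst_range";
illustrative, not the patched vf_scale code):

    #include <libavutil/opt.h>
    #include <libavutil/pixfmt.h>
    #include <libswscale/swscale.h>

    /* YUV->YUV conversion with full range input and output. The range
     * must be known before sws_init_context(), which is where the
     * fastpaths are selected; calling sws_setColorspaceDetails()
     * afterwards is not sufficient. */
    static struct SwsContext *full_range_yuv_scaler(int w, int h)
    {
        struct SwsContext *sws = sws_alloc_context();
        if (!sws)
            return NULL;

        av_opt_set_int(sws, "sws_flags",  SWS_BILINEAR,       0);
        av_opt_set_int(sws, "srcw",       w,                  0);
        av_opt_set_int(sws, "srch",       h,                  0);
        av_opt_set_int(sws, "src_format", AV_PIX_FMT_YUV420P, 0);
        av_opt_set_int(sws, "dstw",       w,                  0);
        av_opt_set_int(sws, "dsth",       h,                  0);
        av_opt_set_int(sws, "dst_format", AV_PIX_FMT_YUV444P, 0);
        av_opt_set_int(sws, "src_range",  1,                  0);  /* full range in  */
        av_opt_set_int(sws, "dst_range",  1,                  0);  /* full range out */

        if (sws_init_context(sws, NULL, NULL) < 0) {
            sws_freeContext(sws);
            return NULL;
        }
        return sws;
    }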
Signed-off-by: Martin Storsjö <martin@martin.st>
The sample mpeg4/mpeg4_sstp_dpcm.m4v existed in the FATE-suite,
but it was surprisingly unused.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This long-existing feature calculates subtitle durations by keeping
a decoded subtitle around until the following subtitle is decoded,
and then utilizing the following subtitle's pts as the end point
of the previous one.
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
Peeking into the muxing queue can improve the estimate of
the lowest timestamp needed for avoid_negative_ts in case
the lowest timestamp is in a packet other than the first packet
to be muxed.
This fixes tickets #4536 and #5784 as well as the output from
the matroska-avoid-negative-ts FATE-test.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
write_packet() has code to shift the packets' timestamps
to make them nonnegative or even make them start at ts zero;
this code inspects every packet that is written, and if a packet
with a negative timestamp (whether this is dts or pts depends upon
another flag; basically, Matroska uses pts, everyone else dts)
is encountered, an offset is applied to make that timestamp zero.
All further packets will be offset accordingly (with the offset
converted according to the streams' timebases).
This is based around the assumption that the timestamps
are indeed non-decreasing, so that the first packet with negative
timestamps is also the very first packet with timestamps.
This assumption is often fulfilled, given that the default
interleaving function interleaves per dts; yet there are scenarios
in which it may not hold:
a) av_write_frame() instead of av_interleaved_write_frame() is used.
b) The audio_preload option is used.
c) When the timestamps that are made nonnegative/zero are pts
(i.e. with Matroska), because the packet with the smallest dts
is not necessarily the packet with the smallest pts.
d) Possibly with custom interleavement functions.
In these cases the relative sync of the first few packet(s) is offset
relative to the later packets. This contradicts the documentation
("When shifting is enabled, all output timestamps are shifted by
the same amount").
Therefore this commit changes this: As soon as the first packet
with valid timestamps is output, it is checked and recorded whether
the timestamps need to be shifted. Further packets are no longer
checked for needing to be offset; instead they are simply offset.
In the cases above this leads to packets with negative timestamps
(and the appropriate warnings) instead of desync. This will mostly
be fixed in the next commit.
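As a standalone toy illustration of the new behaviour (not the
libavformat code): the offset is fixed once, from the first packet
carrying timestamps, and applied to everything that follows, so a
later, more negative timestamp now stays negative:

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        /* e.g. what audio_preload can produce: the second packet has the
         * smallest dts, yet only the first one determines the offset */
        int64_t dts[] = { -3, -7, 0, 5 };
        int64_t offset = 0;
        int offset_known = 0;

        for (int i = 0; i < 4; i++) {
            if (!offset_known) {
                offset = dts[i] < 0 ? -dts[i] : 0;
                offset_known = 1;
            }
            printf("%" PRId64 "\n", dts[i] + offset);  /* prints 0, -4, 3, 8 */
        }
        return 0;
    }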
This commit also factors handling the avoid_negative_ts stuff out
of write_packet() in order to be able to return immediately.
Tickets #4536 and #5784 as well as the matroska-avoid-negative-ts-test
are examples of c); as has been said, some timestamps are now negative,
yet the ref file update does not show it because ffmpeg.c sanitizes
the timestamps (-copyts disables it; ffprobe and mkvinfo also show
the original timestamps).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This tests the issue from tickets #4536, #5784;
the output of this test is currently broken.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Tests the parsing and writing of AVDOVIDecoderConfigurationRecord,
when it is present as a Dolby Vision configuration block addition mapping.
Signed-off-by: quietvoid <tcChlisop0@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
To trigger this bug, use `paletteuse=dither=bayer:bayer_scale=0`; you will see
that adjacent pixel lines will use the same dither pattern, instead of being
shifted from each other by 32 units (0x20).
One way to demonstrate the bug is:
$ convert -size 64x256 gradient:black-white -rotate 270 grad.png
$ echo 'P2 2 1 255 0 255' > bw.pnm
$ ffmpeg -i grad.png -filter_complex 'movie=bw.pnm,scale=256x1[bw]; [0:v][bw]paletteuse=dither=bayer:bayer_scale=0' gradbw.png
Previously: https://www.rm.cloudns.org/img/uploaded/0bd152c11b9cd99e5945115534b1bdde.png
Now: https://www.rm.cloudns.org/img/uploaded/89caaa5e36c38bc2c01755b30811f969.png
This was caused by passing inconsistent color vs. (a,r,g,b) parameters
to color_get(); with NBITS being 5, actually hitting the same cache
node does happen in this case, but ONLY if bayer_scale is zero.
The fix is passing the correct color value to color_get().
Also added a previously failing FATE test; image comparison of the first frame:
Previously: https://www.rm.cloudns.org/img/uploaded/d0ff9db8d8a7d8a3b8b88bbe92bf5fed.png
Now: https://www.rm.cloudns.org/img/uploaded/a72389707e719b5cd1c58916a9e79ca8.png
(on this less synthetic test image, the bug basically causes noise from cache
hits vs misses)
Tested: FATE passes, which exercises this filter but at the default bayer_scale.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
This is similar to the faststart option of the mov muxer, yet
in contrast to it, this works together with reserve_index_space
(the equivalent of reserved_moov_size): if the reserved space
does not suffice, the data is shifted; otherwise, the Cues are
written at the front without shifting the data.
Several tests that cover (not only) this have been added.
Implements #7017.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
All the AMRWB samples are in a mov container.
Also use FATE_SAMPLES_FFMPEG instead of FATE_SAMPLES_AVCONV.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
To avoid the ref for this growing to a very large size when attaching
the parsed RPU side data. Since this sample does not have any dynamic
metadata, two frames will serve just as well as 100.
Signed-off-by: Niklas Haas <git@haasn.dev>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
They test libavfilter internal API, so they should be libavfilter
test programs (which implies: linked statically to libavfilter
to access internal APIs and linked normally (statically or dynamically
depending upon the build configuration) against all the other libs).
Right now, they are always linked statically against all libs,
which is a significant size waste compared to shared libs, as all
of libavcodec has been pulled in despite not really being used.
This also leads to linking failures on systems for which av_export_avutil
is intended: libavcodec does not expect to be linked statically
against the library providing avpriv_(cga|vga16)_font in this case.
This is fixed by this commit.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Fixes FATE failures if e.g. libavdevice is disabled.
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The mpeg4 encoder is slice-threaded and its output depends upon
the number of threads used. Therefore all tests of this encoder
use a hardcoded number of threads (ENC_OPTS in fate-run.sh contains
"-threads 1"; only the vsynth%-mpeg4-thread tests override this
for the mpeg4 encoder, but they also use a hardcoded value to
be consistent across different systems); only the new shortest
and copy-shortest[12] (implicitly due to the sample used) tests
don't, and this leads to FATE failures.
Fix this by explicitly setting the thread count.
Also switch the shortest test to framecrc, because hashing side data
is fragile even though the side data used here (AV_PKT_DATA_QUALITY_STATS)
has a defined endianness.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Also covers muxing and demuxing of nonstandard FLAC channel layouts
and the multi-dim-quant option of the FLAC encoder
(all of which was hitherto uncovered).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Provides coverage for the muxer.
(Thanks to tresh for modifying the whitespace commit hook
to allow pushing this ref file with tabs.)
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It uses the test-lrc.lrc sample which was added years ago, but never
used until now.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This information is coded in a standard MP4 KindBox and utilizes the
scheme and values as per the DASH role scheme defined in MPEG-DASH.
Other schemes are technically allowed, but where multiple schemes
define the same concepts, the DASH scheme should be utilized.
Such flagging is additionally utilized by the DASH-IF CMAF ingest
specification, enabling an encoder to inform the following component
of the roles of the incoming media streams.
A test is added for this functionality in a similar manner to the
matroska test.
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
They already uncovered an uninitialized-value bug in the ATRAC3 code
in the demuxer, and they provide coverage for ID3v2.3.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The current name comes from a time in which libavcodec/utils.c
contained the whole core of libavcodec.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
When a color indexing transform with 16 or fewer colors is used,
WebP uses "pixel packing", i.e. storing several pixels in one byte,
which virtually reduces the width of the image (see WebPContext's
reduced_width field). This reduced_width should always be used when
reading and applying subsequent transforms.
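As a hedged sketch of what the packing amounts to (following the WebP
lossless specification; the function and variable names are
illustrative, not the decoder's fields):

    /* With a palette of at most 16 colors, several pixels share one byte,
     * so the coded ("reduced") width is derived from the real width. */
    static int webp_reduced_width(int width, int palette_size)
    {
        int width_bits = palette_size <= 2  ? 3 :   /* 8 pixels per byte */
                         palette_size <= 4  ? 2 :   /* 4 pixels per byte */
                         palette_size <= 16 ? 1 :   /* 2 pixels per byte */
                                              0;    /* no packing        */
        return (width + (1 << width_bits) - 1) >> width_bits;
    }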
Updated patch with added fate test.
The source image dual_transform.webp can be downloaded by cloning
https://chromium.googlesource.com/webm/libwebp-test-data/
Fixes: 9368
Signed-off-by: James Zern <jzern@google.com>
This muxer was untested up until now; had it been tested, it would
have been obvious that it has been broken for years.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
And remove the unnecessary ffmpeg dependencies while at it.
Reviewed-by: Soft Works <softworkz@hotmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Fixes trac issue #7473.
Removes encoder delay (skip samples) and writes remaining frame samples after EOF to get correct sample count.
Output is now accurate vs players that use Microsoft's codecs (Windows Media Format Runtime).
Tested vs encode>decode WMAv2 with MS's codecs and most sample rate/bit rate/channel/mode combinations in ASF/XWMA.
WMAv1 appears to use the same delay, from FFmpeg samples.
Signed-off-by: bnnm <bananaman255@gmail.com>
subtitles.mak's fate-sub tests utilize a stricter comparator
("rawdiff"), which causes the tests to fail in case of whitespace
differences, such as CRLF vs LF. This in turn causes these
ffprobe-using TTML-in-MP4 tests to fail on non-LF systems such as
Windows or wine.
Includes basic support for both the ISMV ('dfxp') and MP4 ('stpp')
methods. This initial version also foregoes fragmentation support
in case the built-in sample squashing is to be utilized, as this
eases the initial review.
Additionally, add basic tests for both muxing modes in MP4.
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
Up until now, the Matroska muxer did not use the dispositions it is
given as-is; instead it by default overrode the disposition of the first
track of a kind (audio, video, subtitles) if no track of this kind has
the default disposition set. And up until recently, it also enforced
by default that no more than one track of each kind be marked as
default.
The rationale for the former is that there are lots of containers which
lack the concept of default streams, so that it is not uncommon for no
stream to be marked as default at all; the rationale for the latter was
that up until recently, it was dubious whether the Matroska specification
allowed more than one default stream per track type (e.g. mkvmerge
disallowed it). It was this point which led to the implementation of
the above-mentioned behaviour inspired by mkvmerge.
Yet the Matroska specifications have changed and now explicitly allow
setting more than one track of each type as default, so that the main
reason for not using the dispositions as-is was rendered moot. Therefore
this commit changes the default to pass the disposition through.
The matroska-mpegts-remux FATE-test has been updated to still use the
old "infer" mode so that it is still covered by FATE; the
matroska-zero-length-block test has also been updated to cover
the infer_no_subs mode. The references for lots of other FATE tests
needed to be updated because of a newly added FlagDefault element with
value zero (whereas a FlagDefault with value 1 needn't be coded at all,
as it coincided with the default value of said element).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Also adapt some FATE tests to already cover this.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Adds schema validation for ffprobe XML output so that updating the
ffprobe.xsd file upon changes to ffprobe is not forgotten. This was
suggested by Marton Balint in:
http://ffmpeg.org/pipermail/ffmpeg-devel/2021-March/278428.html
The schema FATE test is only run if the xmllint command is available.
Signed-off-by: Tobias Rapp <t.rapp@noa-archive.com>
After fixing AV_PKT_DATA_SKIP_SAMPLES for reading vorbis packets from ogg,
fewer samples are actually decoded. Three FATE tests are failing:
fate-vorbis-20:
The samples in 6.ogg are not frame aligned. 6.pcm file was generated by
ffmpeg before the fix. After the fix, the decoded pcm file does not match
anymore. Ideally the ref file 6.pcm should be updated, but it is
probably not worth including another copy of the same file, only smaller.
SIZE_TOLERANCE is added for this test case.
fate-webm-dash-chapters:
The original vorbis_chapter_extension_demo.ogg is transmuxed to dash-webm.
The ref file webm-dash-chapters needs to be updated.
fate-vorbis-encode:
This exposes another bug in the vorbis encoder: initial_padding is not
set correctly. It is fixed in the previous patch.
Signed-off-by: Guangyu Sun <gsun@roblox.com>
The twoloop coder is highly loaded with (pseudo-)perceptual metrics,
and the aim of the tests is to test each function of the encoder
piece by piece, for which the 'fast' coder is perfect, since it only
decides which scalefactors to use rather than enabling or disabling
encoder features.
This simply performs a second pass if an LSE is encountered with GRAY8.
Fixes: tickets/3933/128.jls
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Deprecated in c29038f304.
The resample filter based upon this library has been removed as well.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Signed-off-by: James Almer <jamrial@gmail.com>
This sadly required making changes to the code itself,
due to the same context needing to be reused for both versions.
The lookup table had to be duplicated for both versions.
Notice that the order of the APIC tracks is currently wrong. This is
a superposition of two bugs: (i) Both muxers write the attached
pictures in the order they arrive in the muxer and not in the
stream_index order, leading to attached pictures that are copied being
written earlier because their timestamp is AV_NOPTS_VALUE, whereas the
timestamp of the encoded pictures is 0. (ii) A bug in the id3v2 parsing
code reverses the order of the parsed pictures.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Specifically test that the WebVTT flavour is correctly mapped to
the Matroska/WebM CodecID and back; and test that dispositions
unsupported by WebM are discarded even when they would be supported
by Matroska.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This makes av_read_frame() return packets with proper timestamps.
As a result, seeking now works in combination with streamcopy.
A FATE-test for this has been added.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The test sample must have no file extension, otherwise probing
happens to work based on the file extension alone, and we want to
test the actual probing function.
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Enables writing TTML documents or encoded TTML paragraphs as such
documents.
Additionally, a test for the combined TTML encoder and muxer has
been added to validate that the components still work.
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
Some FATE tests use files created by other FATE tests as input files;
this mostly affects the seek tests which use files from vsynth_lena as
well as acodec-pcm as input files. In order to make this possible the
temporary files of all the vsynth* and all acodec-pcm tests are kept.
Yet only a fraction of these files are actually used. This commit
changes this to only keep the files that are actually needed for other
tests. This reduces the size of the tests/data/fate folder after a full
FATE run from 2024727441B to 138739312B.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
AVID streams - currently handled by the AVRN decoder - can be (depending
on extradata contents) either MJPEG or raw video. To decode the MJPEG
variant, the AVRN decoder currently instantiates a MJPEG decoder
internally and forwards decoded frames to the caller (possibly after
cropping them).
This is suboptimal, because the AVRN decoder does not forward all the
features of the internal MJPEG decoder, such as direct rendering.
Handling such forwarding in a full and generic manner would be quite
hard, so it is simpler to just handle those streams in the MJPEG decoder
directly.
The AVRN decoder, which now handles only the raw streams, can now be
marked as supporting direct rendering.
This also removes the last remaining internal use of the obsolete
decoding API.
This provides coverage for writing BlockGroups with BlockAdditional
and ReferenceBlock elements. It also tests setting the hearing impaired
disposition (it fits given that this video has no audio so one needs to
be able to read lips to understand anything).
Reviewed-by: Ridley Combs <rcombs@rcombs.me>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The FATE suite already contains a file containing mastering display
and content light level metadata: Meridian-Apple_ProResProxy-HDR10.mxf
This file is used to test both the Matroska muxer and demuxer.
Reviewed-by: Ridley Combs <rcombs@rcombs.me>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The mxf_d10 muxer is very picky regarding the input it accepts:
The only video accepted is MPEG-2 with absolutely constant bitrate,
i.e. all packets need to have exactly the same size; and only a few
bitrates are accepted.
The sample file used did not abide by this: writing the first packet
(a video packet) errored out, and afterwards an audio packet from the
muxing queue was written. That was all that got written besides
metadata (which this test is about). The FFmpeg CLI returned an error,
but said error was ignored by the md5 test.
This commit changes the test to actually send a compliant stream to the
muxer, so that it does not error out; furthermore, the test is changed
to explicitly check the metadata instead of it only being implicitly
included in the md5 checksum. The compliant stream is created by our
encoder at runtime.
Finally, the test now also covers writing user-specified
product/company/version identification.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Also, test modifying colorspace properties and the default_mode
passthrough, which is used here to create a file that has no default
track at all.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
It furthermore tests the demuxer's handling of chained SeekHeads
and level-1 elements after the Clusters, as well as the muxer's
capability of writing huge TrackNumbers and expanding the Cues' length field
by one byte if necessary to fill the reserved space. It also tests
propagation of metadata.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>