Also add a fate-filter-overlays target containing all these tests
and fix the requirements of the tests; furthermore, remove
unnecessary scale filters from filter-overlay-rgba?_rgba.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Also fix the requirements of these tests: Only the anaglyph
tests need a scale filter, yet it has been inserted for all tests
without any check for its presence.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Lots of tests use the framecrc command together with some filters,
so adding a special function for it seems worthwhile. This commit
adds one new one and modifies an already existing one:
All users of FILTERDEMDEC already use framecrc, and the more general
FILTERDEMDECENCMUX can be used in scenarios where more control over
the encoders/muxers used is needed, so use this in cases where
an actual input file is involved.
Furthermore, add FILTERFRAMECRC for the cases where no demuxing/decoding
occurs, because the input is generated via lavfi.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It is unused, and given that one needs an encoder to produce
packets from AVFrames (as output by filters), this is likely
to remain so, because FILTERDEMDECENCMUX is better for these
scenarios.
The only case where one can use filters without encoders is
with the lavfi input device: It outputs AVPackets which could
be copied without converting them back to AVFrames. Yet the variable
to check for this is CONFIG_LAVFI_INDEV, whereas FILTERDEMDECMUX
is designed to work with demuxers (i.e. CONFIG_*_DEMUXER).
So there is no use case for FILTERDEMDECMUX.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
filter-pp and filter-pp7 are the only ones of the filter-pp* tests
that use the file generated by fate-vsynth1-mpeg4-qprd.
Also combine the dependency on this test for all the tests that need it.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The temporary fate-lavf files can easily be removed
if they are not needed as inputs for other tests (mainly
fate-seek-tests). This commit implements this.
The size of the remaining files decreases from 260890083B
to 79481793B.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Extend the ordinary mechanism to signal KEEP for this.
This also allows removing the keep parameter from enc_dec,
transcode and stream_remux, so that several empty '""' parameters
could be dropped.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
These CRC-only files (the output of the CRC-muxer) are only used once,
so they need not be preserved. Furthermore, errors from ffmpeg (used
for creating the CRC) are no longer ignored with this patch.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The output of this test is just a file containing the positions
of peaks; it is not a wave file and trying to demux it just
returns AVERROR_INVALIDDATA; said error has just been ignored
as the return value from do_avconv_crc is the return value from echo.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavc/v210_enc.o
and also allows ff_v210enc_init() to be inlined irrespective of
interposing.
This dependency pulled basically all of libavcodec into checkasm,
in particular all codecs.
This also makes checkasm work when using shared Windows builds:
On Windows, it needs to be known to the compiler whether a data
symbol is external to the library/executable or not; hence the
need for av_export_avutil. checkasm needs access to the internals
of the libraries it tests and is therefore linked statically to all
the libraries. This means that the users of avpriv_cga_font and
avpriv_vga16_font in libavcodec (namely ansi.o, bintext.o, tmv.o)
end up in the same executable as the symbols, although they have
been compiled as if these symbols were external, leading to linker
errors. With this commit said files are discarded by the linker,
bypassing this problem.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavc/v210_dec.o
and also allows ff_v210dec_init() to be inlined irrespective of
interposing.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavfi/vf_threshold.o
and also allows ff_threshold_init() to be inlined irrespective of
interposing.
With this patch checkasm no longer pulls all of lavfi and lavf in.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavfi/vf_nlmeans.o
and also allows ff_nlmeans_init() to be inlined irrespective of
interposing.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavfi/vf_hflip.o
and also allows ff_hflip_init() to be inlined irrespective of
interposing.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavfi/vf_gblur.o
and also allows ff_gblur_init() to be inlined irrespective of
interposing.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavfi/vf_blend.o
and also allows ff_blend_init() to be inlined irrespective of
interposing.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Only the AudioFIRDSPContext and the functions for its initialization
are needed outside of lavfi/af_afir.c.
Also rename the header to af_afirdsp.h to reflect the change.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Accidentally resurrected in fc49f22c3b
and 7711f19eda,
forgotten in 6ebc71847e and
1a6a088f7c or never needed
(filter-aemphasis).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It seems as if it was intended to declare fate-gif-color as a prerequisite
of the fate-gifenc% tests. Yet the latter do not need anything from
the former, so this would be unnecessary. Furthermore, given that this
line has no associated recipe, it actually cancels implicit rules for
fate-gifenc% instead of adding a prerequisite.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
These tests have basically nothing to do with VPX (they do not even
require the decoder).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The tests in concatdec.mak reuse files created by tests
from lavf-container. Therefore these tests have the other tests
as prerequisites and mostly duplicate their CONFIG-requirements.
(The mxf_d10 tests got this wrong, as they only required
the MXF muxer.) This duplication is of course bad as usual,
so stop it by using the corresponding variable that contains
the enabled lavf-container tests to filter out all
the concat tests without a corresponding enabled non-concat test.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
These changes are automatically inherited by the fate-seek-tests
based upon lavf-audio.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The new requirements are also automatically inherited
by the FATE_SEEK_LAVF_VIDEO seek-tests.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This automatically fixes the requirements of the fate-seek-acodec*
tests (e.g. 16 of the 27 such tests are now automatically disabled
if the aresample filter is disabled).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This automatically fixes the requirements of the fate-seek-vsynth*
tests (e.g. 16 of the 49 such tests are now automatically disabled
if the scale filter is disabled).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
If one uses a -s command, a scale filter is inserted
even when doing so is redundant. This patch stops
doing so. This makes the tests that don't need libswscale
actually succeed in case it is disabled (only 315 of 470 tests
need it).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Most of the tests in seek.mak use files created by other tests
as input. Therefore these tests have the other tests as prerequisites
and duplicate their CONFIG-requirements. This duplication is of course
bad as usual, so stop it by using the corresponding variable
that contains the enabled non-seek tests to filter out all
the seek tests without a corresponding enabled non-seek test.
The output files of the lavf tests are highly regular,
which allows using rules for the src files instead of a list.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Each of the intermediately generated lena-*.fits files is only used
for exactly one test, so it can be deleted right after that test.
Switching to a transcode test (which is also more natural) achieves
this. It also adds checksums of the intermediate files to the ref-file.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
In particular, add the missing dependency on the scale and
aresample filters (and therefore on libswscale and libswresample, respectively).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
In particular, add the missing dependency on the scale filter
(and therefore on libswscale).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Intended for scenarios that currently use DEMDEC, but are missing
the requirements that are implicitly needed by framecrc.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Add a parameter that allows further requirements to be added.
Also add FILE_PROTOCOL to all the auxiliary functions
that use a demuxer.
Also fix the requirements for the fate-mpegts-probe-(latm|program)
tests; they misused DEMDEC.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
In particular, add the missing dependency on the scale filter
(and therefore on libswscale).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Also fix the requirements of fate-mov-channel-description:
It needs the pcm_s16le decoder and the mov demuxer.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
And drop the FATE_CAF_REMUX variables which only existed
to avoid having to repeat the common FILE_PROTOCOL PIPE_PROTOCOL
FRAMECRC_MUXER stuff.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It also adds the missing dependencies on the file and pipe protocols
and the framecrc muxer.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Tests using the transcode and stream_remux functions have some common
requirements (namely the file and pipe protocols as well as the framecrc
muxer) and also other commonalities: They create a file and read it back
immediately afterwards, so they typically rely on a corresponding
muxer+demuxer pair which typically shares the same name; for transcode
(if it does not use stream copy) the same is true for encoders and
decoders. This means that using special Makefile functions instead
of the general ALLYES is worthwhile. This commit adds such functions.
These functions allow arbitrary CONFIG checks to be added on top of the
aforementioned ones in order to satisfy the special needs (e.g. for
parsers or filters) that several intended users have.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The test also requires a png decoder, which often can be disabled in
cross building setups, where zlib might be missing.
Signed-off-by: Martin Storsjö <martin@martin.st>
This is mostly straightforward. The major complication is that, as a
result of the 16-bit chunk size limitation, ICC profiles may need to be
split up into multiple chunks.
We also need to make sure to allocate enough extra space in the packet
to fit the ICC profile, so modify both mpegvideo_enc.c and ljpegenc.c to
take this extra overhead into account, failing cleanly if necessary.
Also add a FATE transcode test to ensure that the ICC profile gets
written (and read) correctly. Note that this ICC profile is smaller than
64 kB, so this doesn't test the APP2 chunk re-arranging code at all.
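As a rough illustration of the chunking (a hedged sketch with assumed
constants, not the actual mjpegenc code): each APP2 segment carries the
"ICC_PROFILE\0" identifier plus a 1-based chunk index and the total chunk
count, which leaves roughly 65519 bytes of profile data per segment, so the
chunk count is a ceiling division by that limit.

    #include <stddef.h>

    /* Hedged sketch, not the FFmpeg implementation. */
    #define ICC_IDENT_SIZE 12                          /* "ICC_PROFILE\0"        */
    #define ICC_HDR_SIZE   (ICC_IDENT_SIZE + 2)        /* + chunk index + count  */
    #define ICC_MAX_DATA   (65535 - 2 - ICC_HDR_SIZE)  /* 16-bit segment length  */

    static size_t icc_chunk_count(size_t profile_size)
    {
        return (profile_size + ICC_MAX_DATA - 1) / ICC_MAX_DATA;
    }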
Signed-off-by: Niklas Haas <git@haasn.dev>
We re-use the PNGEncContext.zstream for deflate-related operations.
Other than that, the code is pretty straightforward. Special care needs
to be taken to avoid writing more than 79 characters of the profile
description (the maximum supported).
To write the (dynamically sized) deflate-encoded data, we allocate extra
space in the packet and use that directly as a scratch buffer. Modify
png_write_chunk slightly to allow pre-writing the chunk contents like
this.
Also add a FATE transcode test to ensure that the ICC profile gets
encoded correctly.
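For reference, a minimal sketch of the iCCP chunk payload layout handled
here (illustrative names, not the actual pngenc.c code): profile name of at
most 79 bytes, a NUL separator, compression method 0 (deflate), then the
compressed profile.

    #include <stdint.h>
    #include <string.h>

    /* Hedged sketch: build the iCCP payload in dst (assumed large enough);
     * deflated/deflated_len is the already zlib-compressed profile. */
    static size_t build_iccp_payload(uint8_t *dst, const char *name,
                                     const uint8_t *deflated, size_t deflated_len)
    {
        size_t name_len = strlen(name);
        if (name_len > 79)          /* maximum name length in the PNG spec */
            name_len = 79;
        memcpy(dst, name, name_len);
        dst[name_len]     = 0;      /* NUL separator        */
        dst[name_len + 1] = 0;      /* compression method 0 */
        memcpy(dst + name_len + 2, deflated, deflated_len);
        return name_len + 2 + deflated_len;
    }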
Signed-off-by: Niklas Haas <git@haasn.dev>
On empty input the awk script was always successful, which caused the
filter-refcmp tests to always succeed.
Also fix the command lines for the refcmp_metadata compare function because
it needs auto conversion filters, and update the reference of the
filter-refcmp-psnr-rgb test because it was missed in
a7fc78c1a6 but was never noticed due to the
original issue...
Signed-off-by: Marton Balint <cus@passwd.hu>
Calculate Spatial Info (SI) and Temporal Info (TI) scores for a video, as defined
in ITU-T P.910: Subjective video quality assessment methods for multimedia
applications.
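For reference, the P.910 definitions (paraphrased here, so treat this as a
hedged summary rather than the normative text), with F_n the luma plane of
frame n:

    SI = \max_{n} \{ \operatorname{std}_{space}[\, \mathrm{Sobel}(F_n) \,] \}
    TI = \max_{n} \{ \operatorname{std}_{space}[\, F_n(i,j) - F_{n-1}(i,j) \,] \}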
This test deliberately doesn't exercise the full range of inputs described in
the committee draft VC-1 standard. It says:
input coefficients in frequency domain, D, satisfy -2048 <= D < 2047
intermediate coefficients, E, satisfy -4096 <= E < 4095
fully inverse-transformed coefficients, R, satisfy -512 <= R < 511
For one thing, the inequalities look odd. Did they mean them to go the
other way round? That would make more sense because the equations generally
both add and subtract coefficients multiplied by constants, including powers
of 2. Requiring the most-negative values to be valid extends the number of
bits to represent the intermediate values just for the sake of that one case!
For another thing, the extreme values don't appear to occur in real streams -
both in my experience and supported by the following comment in the AArch32
decoder:
tNhalf is half of the value of tN (as described in vc1_inv_trans_8x8_c).
This is done because sometimes files have input that causes tN + tM to
overflow. To avoid this overflow, we compute tNhalf, then compute
tNhalf + tM (which doesn't overflow), and then we use vhadd to compute
(tNhalf + (tNhalf + tM)) >> 1 which does not overflow because it is
one instruction.
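A scalar sketch of that trick (assuming tN is even, i.e. tN == 2 * tNhalf;
illustrative only, not the FFmpeg code):

    #include <stdint.h>

    /* NEON's vhadd computes (a + b) >> 1 without the intermediate sum
     * overflowing the 16-bit lane; plain C widens instead, to show the
     * identity (tNhalf + (tNhalf + tM)) >> 1 == (tN + tM) >> 1. */
    static int16_t halving_add(int16_t tNhalf, int16_t tM)
    {
        int16_t partial = tNhalf + tM;                       /* fits in 16 bits */
        return (int16_t)(((int32_t)tNhalf + partial) >> 1);
    }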
My AArch64 decoder goes further than this. It calculates tNhalf and tM,
then does an SRA (essentially a fused halve and add) to compute
(tN + tM) >> 1 without ever having to hold (tNhalf + tM) in a 16-bit element
and without overflowing. It only encounters difficulties if either tNhalf or
tM overflows in isolation.
I haven't had sight of the final standard, so it's possible that these
issues were dealt with during finalisation, which could explain the lack
of usage of extreme inputs in real streams. Or a preponderance of decoders
that only support 16-bit intermediate values in their inverse transforms
might have caused encoders to steer clear of such cases.
I have effectively followed this approach in the test, and limited the
scale of the coefficients sufficiently that both the existing AArch32 decoder
and my new AArch64 decoder pass.
Signed-off-by: Ben Avison <bavison@riscosopen.org>
Signed-off-by: Martin Storsjö <martin@martin.st>
Note that the benchmarking results for these functions are highly dependent
upon the input data. Therefore, each function is benchmarked twice,
corresponding to the best and worst case complexity of the reference C
implementation. The performance of a real stream decode will fall somewhere
between these two extremes.
Signed-off-by: Ben Avison <bavison@riscosopen.org>
Signed-off-by: Martin Storsjö <martin@martin.st>
tiny_ssim is built for the build host, not for the target platform.
Therefore, it mustn't include the config.h header, which is set up
specifically for the target platform and compiler.
This fixes cross building for older WinStore platforms, where
config.h contains "#define getenv(x) NULL".
Signed-off-by: Martin Storsjö <martin@martin.st>
This avoids unnecessary rebuilds of most source files if only the
list of enabled components has changed, but not the other properties
of the build, set in config.h.
Signed-off-by: Martin Storsjö <martin@martin.st>
The range parameters need to be set up before calling
sws_init_context (which selects which fastpaths can be used;
this gets called by sws_getContext); solely passing them via
sws_setColorspaceDetails isn't enough.
This fixes producing full-range YUV output when doing
YUV->YUV conversions between different YUV color spaces.
Signed-off-by: Martin Storsjö <martin@martin.st>
The IMF demuxer does not set the DTS and PTS of packets accurately in all
scenarios. Moreover, audio packets are not trimmed when they exceed the
duration of the underlying resource.
The imf-cpl-with-repeat FATE ref file is regenerated.
Addresses https://trac.ffmpeg.org/ticket/9611
The sample mpeg4/mpeg4_sstp_dpcm.m4v existed in the FATE-suite,
but it was surprisingly unused.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This long-existing feature calculates subtitle durations by keeping
a decoded subtitle around until the following subtitle is decoded, and then
utilizing the following subtitle's pts as the end point of the previous one.
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
Peeking into the muxing queue can improve the estimate of
the lowest timestamp needed for avoid_negative_ts in case
the lowest timestamp is in a packet other than the first packet
to be muxed.
This fixes tickets #4536 and #5784 as well as the output from
the matroska-avoid-negative-ts FATE-test.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
write_packet() has code to shift the packets' timestamps
to make them nonnegative or even make them start at ts zero;
this code inspects every packet that is written and if a packet
with negative timestamp (whether this is dts or pts depends upon
another flag; basically: Matroska uses pts, everyone else dts)
is encountered, this is offset to make the timestamp zero.
All further packets will be offset accordingly (with the offset
converted according to the streams' timebases).
This is based around an assumption, namely that the timestamps
are indeed non-decreasing, so that the first packet with negative
timestamps is the first packet with timestamps. This assumption
is often fulfilled given that the default interleavement function
by default interleaves per dts; yet there are scenarios in which
it may not be fulfilled:
a) av_write_frame() instead of av_interleaved_write_frame() is used.
b) The audio_preload option is used.
c) When the timestamps that are made nonnegative/zero are pts
(i.e. with Matroska), because the packet with the smallest dts
is not necessarily the packet with the smallest pts.
d) Possibly with custom interleavement functions.
In these cases the relative sync of the first few packet(s) is offset
relative to the later packets. This contradicts the documentation
("When shifting is enabled, all output timestamps are shifted by
the same amount").
Therefore this commit changes this: As soon as the first packet
with valid timestamps is output, it is checked and recorded whether
the timestamps need to be shifted. Further packets are no longer
checked for needing to be offset; instead they are simply offset.
In the cases above this leads to packets with negative timestamps
(and the appropriate warnings) instead of desync. This will mostly
be fixed in the next commit.
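In toy form, the new behaviour looks roughly like this (invented names,
single time base, no rescaling; the real logic lives in libavformat/mux.c
and is considerably more involved):

    #include <stdint.h>

    typedef struct ToyPacket { int64_t dts, pts; } ToyPacket;

    static int     shift_decided;
    static int64_t ts_offset;

    /* Decide the offset once, from the first packet with valid timestamps,
     * then apply it unconditionally to every later packet (which may thus
     * still come out negative, as described above). */
    static void shift_timestamps(ToyPacket *pkt)
    {
        if (!shift_decided) {
            ts_offset     = pkt->dts < 0 ? -pkt->dts : 0;
            shift_decided = 1;
        }
        pkt->dts += ts_offset;
        pkt->pts += ts_offset;
    }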
This commit also factors handling the avoid_negative_ts stuff out
of write_packet() in order to be able to return immediately.
Tickets #4536 and #5784 as well as the matroska-avoid-negative-ts-test
are examples of c); as has been said, some timestamps are now negative,
yet the ref file update does not show it because ffmpeg.c sanitizes
the timestamps (-copyts disables it; ffprobe and mkvinfo also show
the original timestamps).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This tests the issue from tickets #4536, #5784;
the output of this test is currently broken.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Tests the parsing and writing of AVDOVIDecoderConfigurationRecord,
when it is present as a Dolby Vision configuration block addition mapping.
Signed-off-by: quietvoid <tcChlisop0@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Up until now, the WebM variant of WebVTT subtitles has been handled
specially: It had its own function to write it, because the data
had to be reformatted before writing. But given that other codecs
also need reformatting, this is no good reason to also duplicate the
generic stuff for writing Block(Group)s.
This commit therefore uses an ordinary reformatting function for
this task; writing WebVTT subtitles now uses the generic code
and therefore automatically uses the least amount of bytes
for its BlockGroup length fields whereas the earlier code used
an overestimation for the length of the Duration element.
This is the reason for the changes to the webm-webvtt-remux FATE-test.
(This commit does not implement support for Matroska's way of muxing
WebVTT; it also does not add checks to ensure that WebM-style subtitles
don't get muxed in Matroska. But the function for reformatting gets a
webm prefix to indicate that this is for WebM.)
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This commit uses the new EbmlWriter API to write the length fields
of the BlockGroup and its descendants that are themselves Master
elements (namely BlockAdditions and BlockMore) on the least amount of
bytes.
This fixes regressions introduced when the special code for writing
general subtitles was removed. Accordingly, the binsub-mksenc and
matroska-zero-length-block FATE-tests have now been reverted back
to their old state again; the advantages of this approach are evident
with the matroska-vp8-alpha-remux test which up until now wrote
all the length fields of all BlockGroups, BlockAdditions and BlockMore
on eight bytes.
Using the EbmlWriter API also made it possible to improve locality in
mkv_write_block(): E.g. both DiscardPadding as well as the
BlockAdditional side-data are now directly used to add elements
to the writer whereas the earlier code had to first check
for whether a BlockGroup should be used and then check again
(after the place where a BlockGroup would be opened if one were
used) for whether there is DiscardPadding or BlockAdditional
side-data to write.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Once upon a time, mkv_write_block() only wrote a (Simple)Block,
not a BlockGroup which is needed for subtitles to convey
the duration. But with the introduction of support for writing
BlockAdditions and DiscardPadding (both of which require a BlockGroup),
mkv_write_block() can also open and close a BlockGroup of its own. This
naturally led to some code duplication which is removed in this commit.
This new code leads to one regression: It always uses eight bytes for
the BlockGroup's length field, whereas the earlier code usually used the
lowest amount of bytes needed. This will be fixed in a future commit.
This temporary regression is also the reason for changes to the
binsub-mksenc and matroska-zero-length-block fate tests.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Also check the (user-provided) tags for being overlong; the earlier
code had an implicit unchecked size_t->int conversion.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
To trigger this bug, use `paletteuse=dither=bayer:bayer_scale=0`; you will see
that adjacent pixel lines will use the same dither pattern, instead of being
shifted from each other by 32 units (0x20).
One way to demonstrate the bug is:
$ convert -size 64x256 gradient:black-white -rotate 270 grad.png
$ echo 'P2 2 1 255 0 255' > bw.pnm
$ ffmpeg -i grad.png -filter_complex 'movie=bw.pnm,scale=256x1[bw]; [0:v][bw]paletteuse=dither=bayer:bayer_scale=0' gradbw.png
Previously: https://www.rm.cloudns.org/img/uploaded/0bd152c11b9cd99e5945115534b1bdde.png
Now: https://www.rm.cloudns.org/img/uploaded/89caaa5e36c38bc2c01755b30811f969.png
This was caused by passing inconsistent color vs (a,r,g,b) parameters to
color_get(); with NBITS being 5, actually hitting the same cache node
does happen in this case, but ONLY if bayer_scale is zero.
The fix is passing the correct color value to color_get().
Also added a previously-failing FATE test; image comparison of the first frame:
Previously: https://www.rm.cloudns.org/img/uploaded/d0ff9db8d8a7d8a3b8b88bbe92bf5fed.png
Now: https://www.rm.cloudns.org/img/uploaded/a72389707e719b5cd1c58916a9e79ca8.png
(on this less synthetic test image, the bug basically causes noise from cache
hits vs misses)
Tested: FATE passes, which exercises this filter but at the default bayer_scale.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
This is similar to the faststart option of the mov muxer, yet
in contrast to it, this one works together with reserve_index_space
(the equivalent of reserved_moov_size): If the reserved space
does not suffice, the data is shifted; otherwise, the Cues are
written at the front without shifting the data.
Several tests that cover (not only) this have been added.
Implements #7017.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It returns a pointer inside the fifo's buffer, which cannot be safely
used without accessing AVFifoBuffer internals. It is easier and safer to
use av_fifo_generic_peek_at().
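A hedged usage sketch (surrounding context assumed): the element is copied
out of the fifo at a byte offset instead of keeping a pointer into it.

    #include <libavutil/fifo.h>

    /* Copy `size` bytes located `offset` bytes into the fifo's data into a
     * caller-provided buffer; no pointer into the fifo is retained. */
    static int peek_copy(AVFifoBuffer *f, void *dst, int offset, int size)
    {
        return av_fifo_generic_peek_at(f, dst, offset, size, NULL);
    }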
mvhd and tkhd present the post-editlist duration, while mdhd should
have the pre-editlist duration. Regression since c2424b1f3.
Signed-off-by: Martin Storsjö <martin@martin.st>
All the AMRWB samples are in a mov container.
Also use FATE_SAMPLES_FFMPEG instead of FATE_SAMPLES_AVCONV.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
- No longer mixes u8 and u16 component accesses (this was UB)
- De-duplicated 8->16 conversion
- De-duplicated component -> plane+offset conversion
- De-duplicated planar + packed RGB
- No longer calls ff_fill_rgba_map
- Removed redundant comp_mask data member
- RGB0 and related formats no longer write an alpha value to the 0 byte
- Non-planar YA formats now work correctly
- High-bit-depth semi-planar YUV now works correctly
And expose the parsed values as frame side data. Update FATE results to
match.
It's worth documenting that this relies on the dovi configuration record
being present on the first AVPacket fed to the decoder, which in
practice is the case if the API user has called something like
av_format_inject_global_side_data, which is unfortunately not the
default.
This commit is not the time and place to change that behavior, though.
Signed-off-by: Niklas Haas <git@haasn.dev>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
To avoid the ref for this growing to a very large size when attaching
the parsed RPU side data. Since this sample does not have any dynamic
metadata, two frames will serve just as well as 100.
Signed-off-by: Niklas Haas <git@haasn.dev>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The values for the essence element type were updated in the spec
from 0x05/0x06 (ST2019-4 2008) to 0x0C/0x0D (ST2019-4 2009).
Fixes ticket #6380.
Thanks-to: Philip de Nier <philip.denier@bbc.co.uk>
Thanks-to: Matthieu Bouron <matthieu.bouron@gmail.com>
Reviewed-by: Matthieu Bouron <matthieu.bouron@gmail.com>
Reviewed-by: Tomas Härdin <tjoppen@acc.umu.se>
Signed-off-by: Nicolas Gaullier <nicolas.gaullier@cji.paris>
Signed-off-by: Marton Balint <cus@passwd.hu>
Adds support for the concat demuxer to copy the side data information
from the input file to the resulting file. It will behave like the
metadata copy, where the metadata of the first file is kept in
the output file.
Extract the current code that already performs the stream side_data
copy into a separate method and reuse the method in the concat demuxer.
Signed-off-by: Gerard Sole <g.sole.ca@gmail.com>
They test the libavfilter internal API, so they should be libavfilter
test programs (which implies: linked statically to libavfilter
to access internal APIs and linked normally (statically or dynamically,
depending upon the build configuration) against all the other libs).
Right now, they are always linked statically against all libs,
which is a significant size waste compared to shared libs, as all
of libavcodec has been pulled in despite not really being used.
This also leads to linking failures on systems for which av_export_avutil
is intended: libavcodec does not expect to be linked statically
against the library providing avpriv_(cga|vga16)_font in this case.
This is fixed by this commit.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This fixes FATE so that it passes under 64-bit Windows.
These functions replace all ff_hscale8to15_*_ssse3 when avx2 is available.
Signed-off-by: James Almer <jamrial@gmail.com>
LSX and LASX are LoongArch SIMD extensions.
They are enabled by default if the compiler supports them, and can be
disabled with '--disable-lsx' and '--disable-lasx'.
Change-Id: Ie2608ea61dbd9b7fffadbf0ec2348bad6c124476
Reviewed-by: Shiyou Yin <yinshiyou-hf@loongson.cn>
Reviewed-by: guxiwei <guxiwei-hf@loongson.cn>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes FATE failures if e.g. libavdevice is disabled.
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The mpeg4 encoder is slice-threaded and its output depends upon
the number of threads used. Therefore all tests of this encoder
use a hardcoded number of threads (ENC_OPTS in fate-run.sh contains
"-threads 1"; only the vsynth%-mpeg4-thread tests override this
for the mpeg4 encoder, but they also use a hardcoded value to
be consistent across different systems); only the new shortest
and copy-shortest[12] (implicitly, due to the sample used) tests
don't, and this leads to FATE failures.
Fix this by explicitly setting the thread count.
Also switch the shortest test to framecrc, because hashing side data
is iffy even though the side data used here (AV_PKT_DATA_QUALITY_STATS)
has a defined endianness.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Currently, the code doing this is spread over several places and may
behave in unexpected ways. E.g. automatic 'default' marking is only done
for streams fed by complex filtergraphs. It is also applied in the order
in which the output streams are initialized, which is effectively
random.
Move processing the dispositions at the end of open_output_file(), when
we already have all the necessary information.
Apply the automatic default marking only if no explicit -disposition
options were supplied by the user, and apply it to the first stream of
each type (excluding attached pics) when there is more than one stream
of that type and no default markings were copied from the input streams.
Explicitly document the new behavior.
Changes the results of some tests, where the output file gets a default
disposition, while it previously did not.
Also covers muxing and demuxing of nonstandard FLAC channel layouts
and the multi-dim-quant option of the FLAC encoder
(all of which was hitherto uncovered).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This allows nicer tests by having a greater range of inputs available
(without requiring further samples to be added to the fate-suite).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Provides coverage for the muxer.
(Thanks to tresh for modifying the whitespace commit hook
to allow pushing this ref file with tabs.)
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It uses the test-lrc.lrc sample which was added years ago, but never
used until now.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This information is coded in a standard MP4 KindBox and utilizes the
scheme and values as per the DASH role scheme defined in MPEG-DASH.
Other schemes are technically allowed, but where multiple schemes
define the same concepts, the DASH scheme should be utilized.
Such flagging is additionally utilized by the DASH-IF CMAF ingest
specification, enabling an encoder to inform the following component
of the roles of the incoming media streams.
A test is added for this functionality in a similar manner to the
matroska test.
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
This patch increases several stack buffers in order to fix
stack-buffer-overflows (e.g. in put_hevc_qpel_uni_hv_9 in
line 814 of hevcdsp_template.c) detected with ASAN in the hevc_pel
checkasm test.
The buffers are increased by the minimal amount necessary
in order not to mask potential future bugs.
Reviewed-by: Martin Storsjö <martin@martin.st>
Reviewed-by: "zhilizhao(赵志立)" <quinkblack@foxmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
They already uncovered an uninitialized-value bug in the ATRAC3 code
in the demuxer and provide coverage for ID3v2.3.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The new format (given in big/little endian forms) matches the
existing X2RGB10 format, except with B and R channels switched.
AV_PIX_FMT_X2BGR10 data often is created by OpenGL programs
whose buffers use the GL_RGB10 internal format.
Signed-off-by: Manuel Stoeckl <code@mstoeckl.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This resolves a problem where conversions from YUV to X2RGB10LE
would produce color values a factor 4 too small, because an 8-bit
value was placed in a 10-bit channel.
Signed-off-by: Manuel Stoeckl <code@mstoeckl.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The current name comes from a time in which libavcodec/utils.c
contained the whole core of libavcodec.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
When a color indexing transform with 16 or fewer colors is used,
WebP uses "pixel packing", i.e. storing several pixels in one byte,
which virtually reduces the width of the image (see WebPContext's
reduced_width field). This reduced_width should always be used when
reading and applying subsequent transforms.
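A sketch of the width reduction (constants taken from the WebP lossless
format description; treat as illustrative rather than the decoder's exact
code): with at most 2, 4 or 16 palette entries, 8, 4 or 2 pixels
respectively share one byte.

    /* Hedged sketch: packed image width after the color indexing transform. */
    static int webp_reduced_width(int width, int palette_size)
    {
        int width_bits = palette_size <= 2  ? 3 :
                         palette_size <= 4  ? 2 :
                         palette_size <= 16 ? 1 : 0;
        return (width + (1 << width_bits) - 1) >> width_bits;
    }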
Updated patch with added fate test.
The source image dual_transform.webp can be downloaded by cloning
https://chromium.googlesource.com/webm/libwebp-test-data/
Fixes: 9368
Signed-off-by: James Zern <jzern@google.com>
This muxer was untested up until now; had it been tested, it would
have been obvious that it has been broken for years.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
And remove the unnecessary ffmpeg dependencies while at it.
Reviewed-by: Soft Works <softworkz@hotmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Fixes trac issue #7473.
Removes the encoder delay (skip samples) and writes the remaining frame samples after EOF to get the correct sample count.
Output is now accurate vs. players that use Microsoft's codecs (Windows Media Format Runtime).
Tested vs. encode>decode of WMAv2 with MS's codecs and most sample rate/bit rate/channel/mode combinations in ASF/XWMA.
Judging from FFmpeg samples, WMAv1 appears to use the same delay.
Signed-off-by: bnnm <bananaman255@gmail.com>
subtitles.mak's fate-sub tests utilize a stricter comparator
("rawdiff"), which causes the tests to fail in case of whitespace
differences, such as CRLF vs LF. This in turn causes these
ffprobe-using TTML-in-MP4 tests to fail on non-LF systems such as
Windows or wine.
Includes basic support for both the ISMV ('dfxp') and MP4 ('stpp')
methods. This initial version also foregoes fragmentation support
in case the built-in sample squashing is to be utilized, as this
eases the initial review.
Additionally, add basic tests for both muxing modes in MP4.
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
Up until now, the Matroska muxer did not use the dispositions it is
given as-is; instead it by default overrode the disposition of the first
track of a kind (audio, video, subtitles) if no track of this kind has
the default disposition set. And up until recently, it also enforced
by default that no more than one track of each kind be marked as
default.
The rationale for the former is that there are lots of containers which
lack the concept of default streams, so that it is not uncommon for no
stream to be marked as default at all; the rationale for the latter was
that up until recently, it was dubious whether the Matroska specification
allowed more than one default stream per track type (e.g. mkvmerge
disallowed it). It was this point which led to the implementation of
the above-mentioned behaviour inspired by mkvmerge.
Yet the Matroska specifications have changed and now explicitly allow
setting more than one track of each type as default, so that the main
reason for not using the dispositions as-is was rendered moot. Therefore
this commit changes the default to pass the disposition through.
The matroska-mpegts-remux FATE-test has been updated to still use the
old "infer" mode so that it is still covered by FATE; the
matroska-zero-length-block test has also been updated to cover
the infer_no_subs mode. The references for lots of other FATE tests
needed to be updated because of a newly added FlagDefault element with
value zero (whereas a FlagDefault with value 1 needn't be coded at all,
as it coincided with the default value of said element).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The Matroska specifications have evolved and now allow marking
multiple tracks of the same kind as default (whether this was legal or
not before was dubious; e.g. mkvmerge disallowed it). Yet when the
Matroska muxer is set to infer default dispositions if absent, it also
enforced the now outdated restriction. So update this.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The MOV muxer can store streamids as track ids, but they aren't
visible when probing the result via lavf/dump or ffprobe due to
the lack of this flag in the demuxer.
Also adapt some FATE tests to already cover this.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
677a030b26 introduced more printable
side data types in ffprobe; however, the Audio Service Type side data
'type' field that was introduced aliases an existing field of the same
name within the side data array, which can lead to JSON output like:
"side_data_list": [
{
"side_data_type": "Audio Service Type",
"type": 0
},
{
"side_data_type": "Stereo 3D",
"type": "side by side",
"inverted": 1
}
]
This, while technically valid JSON, is considered bad practice, since it
forces all downstream users to manually parse it and check all types;
it makes simple deserialization impossible. Worse, in some loosely
typed languages, it can lead to silent bugs if existing code assumed
it was a different type.
As such, rename this second "type" field to "service_type".
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Adds schema validation for ffprobe XML output so that updating the
ffprobe.xsd file upon changes to ffprobe is not forgotten. This was
suggested by Marton Balint in:
http://ffmpeg.org/pipermail/ffmpeg-devel/2021-March/278428.html
The schema FATE test is only run if the xmllint command is available.
Signed-off-by: Tobias Rapp <t.rapp@noa-archive.com>
After fixing AV_PKT_DATA_SKIP_SAMPLES for reading vorbis packets from ogg,
fewer samples are actually decoded. Three fate tests are failing:
fate-vorbis-20:
The samples in 6.ogg are not frame aligned. The 6.pcm file was generated by
ffmpeg before the fix. After the fix, the decoded pcm file does not match
anymore. Ideally the ref file 6.pcm should be updated, but it is probably
not worth including another copy of the same file, only smaller.
SIZE_TOLERANCE is added for this test case.
fate-webm-dash-chapters:
The original vorbis_chapter_extension_demo.ogg is transmuxed to dash-webm.
The ref file webm-dash-chapters needs to be updated.
fate-vorbis-encode:
This exposes another bug in the vorbis encoder that initial_padding is not
correctly set. It is fixed in the previous patch.
Signed-off-by: Guangyu Sun <gsun@roblox.com>
Call the scaler function directly rather than through a function
pointer. Drop the now-unused return value from ff_getSwsFunc() and
rename the function to reflect its new role.
This will be useful in the following commits, where it will become
important that the amount of output is different for the scaled vs.
unscaled case.
The twoloop coder is highly loaded with (pseudo-)perceptual metrics,
and the aim of the tests is to test each function of the encoder
piece-wise, for which the 'fast' coder is perfect, since it only decides
which scalefactors to use, rather than enabling or disabling encoder
features.
Also use a helper function to set the timestamp. Maybe we could also use
nanosecond precision, but there were some float rounding concerns.
Signed-off-by: Marton Balint <cus@passwd.hu>
Export them in UTC, not the local timezone. This way the output is
the same everywhere. The timezone information stored in the file is
still ignored, since there seems to be no simple way to export it
correctly.
Format them according to ISO 8601, which we generally use for exporting
dates.
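A generic sketch of the formatting (plain C with POSIX gmtime_r, not the
lavf code): render a UNIX timestamp in UTC so the result is identical on
every machine.

    #include <stdio.h>
    #include <time.h>

    /* Writes e.g. "2012-11-20T07:17:05Z" into buf. */
    static void iso8601_utc(time_t t, char *buf, size_t size)
    {
        struct tm tm;
        gmtime_r(&t, &tm);                        /* UTC, not local time */
        strftime(buf, size, "%Y-%m-%dT%H:%M:%SZ", &tm);
    }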
Fixes fate-flv-demux, which was broken since
958bea5248 on some platforms.
There are no guarantees that all side data types have the same
representation on all platforms.
Tests that change output due to this:
id3v2-priv-remux, cover-art-mp3-id3v2-remux, gapless-mp3: SKIP_SAMPLES,
which is tested by fate-gapless-mp3-side-data
matroska-vp8-alpha-remux: MATROSKA_BLOCKADDITIONAL, which is tested by
remux itself (side data is written into output)
matroska-mastering-display-metadata: MASTERING_DISPLAY_METADATA and
CONTENT_LIGHT_LEVEL, which are tested by ffprobe invocation in the same
test
matroska-spherical-mono-remux: STEREO3D and SPHERICAL, which are tested
by ffprobe invocation in the same test
segment-mp4-to-ts: MPEGTS_STREAM_ID, which is tested by ts remuxing
tests
webm-webvtt-remux: WEBVTT_IDENTIFIER/SETTINGS, which is tested by the
ffprobe invocation in the same test
mxf-d10-user-comments: CPB_PROPERTIES, which is tested by mxf-probe-d10
mov-cover-image: SKIP_SAMPLES, which is tested for mov by
mov-aac-2048-priming
copy-trac3074: AUDIO_SERVICE_TYPE, which is tested by fate-hls-fmp4_ac3
This simply performs a 2nd pass if an LSE is encountered with GRAY8.
Fixes: tickets/3933/128.jls
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Deprecated in c29038f304.
The resample filter based upon this library has been removed as well.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Deprecated in ddef3d902f.
(The reference file of the mov-zombie test needed to be updated, because
a rotate metadata tag is no longer exported; the side-data is of course
still present.)
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Announced in 2e8b0446c6.
Two FATE-tests needed to be updated because the checksums of
side data containing an AVCPBProperties struct changed.
buffer_size has also been switched to 64 bits because it is a size in bits.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
The ASS margins are utilized to generate percentage values, as
the usage of cell-based sizing and offsetting seems to be not too
well supported by renderers.
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
Attempts to utilize the TTML cell resolution as a mapping to the
reference resolution, and maps font size to cell size. Additionally
sets the display and text alignment according to the ASS alignment
number.
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
This sadly required making changes to the code itself,
due to the same context needing to be reused for both versions.
The lookup table had to be duplicated for both versions.
When parsing ID3v2 tags, special (non-text) metadata is not applied
directly and unconditionally; instead it is stored in a linked list
in which elements are prepended. When traversing the list to add APICs
(or private tags) at the end, the order is reversed. The same also
happens for chapters and therefore the chapter parsing code already
reverses the chapters.
This commit changes this: By keeping pointers to both head and tail
of the linked list one can preserve the order of the entries and
remove the reordering code for chapters. Only the pointer to head
will be exported: No current caller uses a nonempty list, so exporting
both head and tail is unnecessary. This removes the functionality
to combine the lists of special metadata read from different ID3v2 tags,
but that doesn't really make much sense anyway (and would be trivial
to implement if desired) and allows removing the now unnecessary
initializations performed by the callers.
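Schematically, the head+tail append looks like this (generic sketch with
made-up type names, not the actual ID3v2ExtraMeta code):

    typedef struct MetaEntry {
        struct MetaEntry *next;
        /* payload ... */
    } MetaEntry;

    typedef struct MetaList {
        MetaEntry *head, *tail;
    } MetaList;

    /* Appending at the tail preserves the order in which the entries were
     * read, so no reversal pass is needed afterwards. */
    static void meta_list_append(MetaList *list, MetaEntry *e)
    {
        e->next = NULL;
        if (list->tail)
            list->tail->next = e;
        else
            list->head = e;
        list->tail = e;
    }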
The FATE-reference for the id3v2-priv test had to be updated
because the order of the tags read into the dict is reversed;
for id3v2-priv-remux only the md5 and not the ffprobe output
of the remuxed file changes because the order of the private tags
has up until now been reversed twice.
The references for the aiff/mp3 cover-art tests needed to be updated,
because the order of the attached pics is reversed upon reading.
It is still not correct, because the muxers write the pics in the order
in which they arrive at the muxer instead of the order given by
pkt->stream_index.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Notice that the order of the APIC tracks is currently wrong. This is
a superposition of two bugs: (i) Both muxers write the attached
pictures in the order they arrive in the muxer and not in the
stream_index order, leading to attached pictures that are copied being
written earlier because their timestamp is AV_NOPTS_VALUE, whereas the
timestamp of the encoded pictures is 0. (ii) A bug in the id3v2 parsing
code reverses the order of the parsed pictures.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Specifically test that the WebVTT flavour is correctly mapped to
the Matroska/WebM CodecID and back; and test that dispositions
unsupported by WebM are discarded even when they would be supported
by Matroska.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This makes av_read_frame() return packets with proper timestamps.
As a result, seeking now works in combination with streamcopy.
A FATE-test for this has been added.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The test sample has to have no file extension, otherwise probing
happens to work based on the file extension alone, and we want to
test the actual probing function.
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Enables writing TTML documents or encoded TTML paragraphs as such
documents.
Additionally, a test for the combined TTML encoder and muxer has
been added to validate that the components still work.
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
Some FATE tests use files created by other FATE tests as input files;
this mostly affects the seek tests which use files from vsynth_lena as
well as acodec-pcm as input files. In order to make this possible the
temporary files of all the vsynth* and all acodec-pcm tests are kept.
Yet only a fraction of these files are actually used. This commit
changes this to only keep the files that are actually needed for other
tests. This reduces the size of the tests/data/fate folder after a full
FATE run from 2024727441B to 138739312B.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
It only got added recently, and the new name makes it consistent with
product_version_num in the next patch.
Signed-off-by: Marton Balint <cus@passwd.hu>
AVID streams - currently handled by the AVRN decoder - can be (depending
on extradata contents) either MJPEG or raw video. To decode the MJPEG
variant, the AVRN decoder currently instantiates a MJPEG decoder
internally and forwards decoded frames to the caller (possibly after
cropping them).
This is suboptimal, because the AVRN decoder does not forward all the
features of the internal MJPEG decoder, such as direct rendering.
Handling such forwarding in a full and generic manner would be quite
hard, so it is simpler to just handle those streams in the MJPEG decoder
directly.
The AVRN decoder, which now handles only the raw streams, can now be
marked as supporting direct rendering.
This also removes the last remaining internal use of the obsolete
decoding API.
The FF_API macros are private and must not be used by external callers.
As the fields in question are to be removed without replacement, just
drop them.
The fields are:
AVPacket.convergence_duration
AVCodecContext.time_base
AVCodecContext.timecode_frame_start
AV_PIX_FMT_FLAG_PSEUDOPAL pixel descriptor flag
This provides coverage for writing BlockGroups with BlockAdditional
and ReferenceBlock elements. It also tests setting the hearing impaired
disposition (it fits given that this video has no audio so one needs to
be able to read lips to understand anything).
Reviewed-by: Ridley Combs <rcombs@rcombs.me>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The FATE suite already contains a file containing mastering display
and content light level metadata: Meridian-Apple_ProResProxy-HDR10.mxf
This file is used to test both the Matroska muxer and demuxer.
Reviewed-by: Ridley Combs <rcombs@rcombs.me>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Up until now, the md5 test ignored errors from ffmpeg (the cli) and just
md5'ed whatever ffmpeg had output; while testing scenarios in which
ffmpeg fails has its merits, errors should not be overlooked by default;
doing so also reduces the effectiveness of sanitizers as errors from
them are ignored. This has happened with a memleak in the AV1 decoder.
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The mxf_d10 muxer is very picky regarding the input it accepts:
The only video accepted is MPEG-2 with absolutely constant bitrate,
i.e. all packets need to have exactly the same size; and only a few
bitrates are accepted.
The sample file used did not abide by this: Writing the first packet
(a video packet) errored out, and afterwards an audio packet from the
muxing queue was written. That was all besides metadata (which this
test is about). The FFmpeg cli returned an error, but said error has
been ignored by the md5 test.
This commit changes the test to actually send a compliant stream to the
muxer, so that it does not error out; furthermore, the test is changed
to explicitly check the metadata instead of it only being implicitly
included in the md5 checksum. The compliant stream is created by our
encoder at runtime.
Finally, the test now also covers writing user-specified
product/company/version identification.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Also, test modifying colorspace properties and the default_mode
passthrough which is used here to create a file that has no default
track at all.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
It furthermore tests the demuxer's handling of chained SeekHeads and
of level 1 elements after the Clusters, and the muxer's capability of
writing huge TrackNumbers as well as expanding the Cues' length field
by one byte if necessary to fill the reserved space. It also tests
propagation of metadata.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
MJPEG does not have a single quantiser scale, so this does not fit into
the intended API use.
This removes the last use of the long-deprecated QP table API.