The output of this test is just a file containing the positions
of peaks; it is not a wave file, and trying to demux it merely
returns AVERROR_INVALIDDATA. This error has so far been ignored,
because the return value of do_avconv_crc is the return value of echo.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavc/v210_enc.o
and also allows ff_v210enc_init() to be inlined regardless of
interposing.
This dependency pulled basically all of libavcodec into checkasm,
in particular all codecs.
This also makes checkasm work when using shared Windows builds:
On Windows, it needs to be known to the compiler whether a data
symbol is external to the library/executable or not; hence the
need for av_export_avutil. checkasm needs access to the internals
of the libraries it tests and is therefore linked statically to all
the libraries. This means that the users of avpriv_cga_font and
avpriv_vga16_font in libavcodec (namely ansi.o, bintext.o, tmv.o)
end up in the same executable as the symbols, although they have
been compiled as if these symbols were external, leading to linker
errors. With this commit said files are discarded by the linker,
bypassing this problem.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavc/v210_dec.o
and also allows ff_v210dec_init() to be inlined regardless of
interposing.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavfi/vf_threshold.o
and also allows ff_threshold_init() to be inlined regardless of
interposing.
With this patch checkasm no longer pulls in all of lavfi and lavf.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavfi/vf_nlmeans.o
and also allows ff_nlmeans_init() to be inlined regardless of
interposing.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavfi/vf_hflip.o
and also allows ff_hflip_init() to be inlined regardless of
interposing.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavfi/vf_gblur.o
and also allows ff_gblur_init() to be inlined regardless of
interposing.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavfi/vf_blend.o
and also allows ff_blend_init() to be inlined regardless of
interposing.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Only the AudioFIRDSPContext and the functions for its initialization
are needed outside of lavfi/af_afir.c.
Also rename the header to af_afirdsp.h to reflect the change.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Accidentally resurrected in fc49f22c3b
and 7711f19eda,
forgotten in 6ebc71847e and
1a6a088f7c or never needed
(filter-aemphasis).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It seems as if it was intended to declare fate-gif-color as a prerequisite
of the fate-gifenc% tests. Yet the latter do not need anything from
the former, so this would be unnecessary. Furthermore, given that this
line has no associated recipe, it actually cancels implicit rules for
fate-gifenc% instead of adding a prerequisite.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
These tests have basically nothing to do with VPX (they do not even
require the decoder).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The tests in concatdec.mak reuse files created by tests
from lavf-container. Therefore these tests have the other tests
as prerequisite and mostly duplicate their CONFIG-requirements.
(The mxf_d10 tests did this incorrectly, as they only required
the MXF muxer.) This duplication is of course bad as usual,
so stop it by using the corresponding variable that contains
the enabled non-lavf-container tests to filter out all
the concat tests without a corresponding enabled non-concat test.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
These changes are automatically inherited by the fate-seek-tests
based upon lavf-audio.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The new requirements are also automatically inherited
by the FATE_SEEK_LAVF_VIDEO seek-tests.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This automatically fixes the requirements of the fate-seek-acodec*
tests (e.g. 16 of the 27 such tests are now automatically disabled
if the aresample filter is disabled).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This automatically fixes the requirements of the fate-seek-vsynth*
tests (e.g. 16 of the 49 such tests are now automatically disabled
if the scale filter is disabled).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
If the -s option is used, a scale filter is inserted
even when doing so is redundant. This patch stops
doing that, which makes the tests that don't need libswscale
actually succeed when it is disabled (only 315 of the 470 tests
need it).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Most of the tests in seek.mak use files created by other tests
as input. Therefore these tests have the other tests as prerequisite
and duplicate their CONFIG-requirements. This duplication is of course
bad as usual, so stop it by using the corresponding variable
that contains the enabled non-seek tests to filter out all
the seek tests without a corresponding enabled non-seek test.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The output files of the lavf tests are highly regular,
making it possible to use rules for the src files instead of a list.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Each of the intermediate lena-*.fits files generated is only used
for exactly one test, so it could be deleted right after the test.
Switching to a transcode test (which is also more natural) achieves
this. It also adds checksums of the intermediate files to the ref-file.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
In particular, add the missing dependencies on the scale and
aresample filters (and therefore on libswscale and libswresample, respectively).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
In particular, add the missing dependency on the scale filter
(and therefore on libswscale).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Intended for scenarios that currently use DEMDEC, but are missing
the requirements that are implicitly needed by framecrc.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Add a parameter that allows additional requirements to be added.
Also add FILE_PROTOCOL to all the auxiliary functions
that use a demuxer.
Also fix the requirements of the fate-mpegts-probe-(latm|program)
tests, which misused DEMDEC.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
In particular, add the missing dependency on the scale filter
(and therefore on libswscale).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Also fix the requirements of fate-mov-channel-description:
It needs the pcm_s16le decoder and the mov demuxer.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
And drop the FATE_CAF_REMUX variables which only existed
to avoid having to repeat the common FILE_PROTOCOL PIPE_PROTOCOL
FRAMECRC_MUXER stuff.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It also adds the missing dependencies on the file and pipe protocols
and the framecrc muxer.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Tests using the transcode and stream_remux functions have some common
requirements (namely the file and pipe protocols as well as the framecrc
muxer) and also other commonalities: They create a file and read it
back immediately afterwards, so they typically rely on a corresponding
muxer+demuxer pair that usually shares the same name; for transcode
(if it does not use stream copy) the same is true for encoders and
decoders. This means that using special Makefile functions instead
of the general ALLYES is worthwhile. This commit adds such functions.
These functions allow adding arbitrary CONFIG checks on top of the
aforementioned ones in order to satisfy special needs (e.g. for parsers
or filters) that several intended users have.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The test also requires a png decoder, which can often be disabled in
cross-building setups, where zlib might be missing.
Signed-off-by: Martin Storsjö <martin@martin.st>
This is mostly straightforward. The major complication is that, as a
result of the 16-bit chunk size limitation, ICC profiles may need to be
split up into multiple chunks.
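As a self-contained illustration of the resulting APP2 layout (a sketch only,
assuming the standard JPEG ICC embedding framing; this is not the actual
mjpegenc code, which uses FFmpeg's own bytestream writers):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Each APP2 segment's 16-bit length field counts itself, so at most
     * 65533 payload bytes fit, 14 of which are taken by the "ICC_PROFILE\0"
     * identifier plus the sequence-number and chunk-count bytes. */
    enum { ICC_SEG_HDR = 14, ICC_SEG_MAX_DATA = 65533 - ICC_SEG_HDR };

    static size_t write_icc_app2(uint8_t *dst, const uint8_t *icc, size_t icc_len)
    {
        size_t nb_chunks = (icc_len + ICC_SEG_MAX_DATA - 1) / ICC_SEG_MAX_DATA;
        uint8_t *p = dst;

        for (size_t i = 0; i < nb_chunks; i++) {
            size_t len = icc_len - i * ICC_SEG_MAX_DATA;
            if (len > ICC_SEG_MAX_DATA)
                len = ICC_SEG_MAX_DATA;

            *p++ = 0xFF; *p++ = 0xE2;                   /* APP2 marker          */
            *p++ = (2 + ICC_SEG_HDR + len) >> 8;        /* segment length, high */
            *p++ = (2 + ICC_SEG_HDR + len) & 0xFF;      /* segment length, low  */
            memcpy(p, "ICC_PROFILE\0", 12); p += 12;    /* identifier           */
            *p++ = (uint8_t)(i + 1);                    /* 1-based chunk index  */
            *p++ = (uint8_t)nb_chunks;                  /* total chunk count    */
            memcpy(p, icc + i * ICC_SEG_MAX_DATA, len); p += len;
        }
        return p - dst;
    }

In this sketch the per-chunk overhead is 18 bytes (marker, length field and
ICC segment header) on top of the profile data itself.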
We also need to make sure to allocate enough extra space in the packet
to fit the ICC profile, so modify both mpegvideo_enc.c and ljpegenc.c to
take into account this extra overhead, failing cleanly if necessary.
Also add a FATE transcode test to ensure that the ICC profile gets
written (and read) correctly. Note that this ICC profile is smaller than
64 kB, so this doesn't test the APP2 chunk re-arranging code at all.
Signed-off-by: Niklas Haas <git@haasn.dev>
We re-use the PNGEncContext.zstream for deflate-related operations.
Other than that, the code is pretty straightforward. Special care needs
to be taken to avoid writing more than 79 characters of the profile
description (the maximum supported).
To write the (dynamically sized) deflate-encoded data, we allocate extra
space in the packet and use that directly as a scratch buffer. Modify
png_write_chunk slightly to allow pre-writing the chunk contents like
this.
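For reference, a hedged sketch of the iCCP chunk payload being written
(illustration only: it uses plain zlib compress2() instead of the re-used
PNGEncContext.zstream, and leaves the chunk length/type/CRC framing to the
caller):

    #include <stdint.h>
    #include <string.h>
    #include <zlib.h>

    /* iCCP payload: profile name (1-79 bytes) + NUL + compression method (0)
     * + zlib-compressed ICC data.  Returns the payload size, or 0 on error. */
    static size_t build_iccp_payload(uint8_t *dst, size_t dst_size,
                                     const char *name,
                                     const uint8_t *icc, size_t icc_len)
    {
        size_t name_len = strlen(name);
        uLongf comp_len;

        if (name_len > 79)
            name_len = 79;                  /* PNG caps the name at 79 chars */
        if (dst_size < name_len + 2)
            return 0;

        memcpy(dst, name, name_len);
        dst[name_len]     = 0;              /* NUL terminator                */
        dst[name_len + 1] = 0;              /* compression method 0: deflate */

        comp_len = dst_size - (name_len + 2);
        if (compress2(dst + name_len + 2, &comp_len,
                      icc, icc_len, Z_DEFAULT_COMPRESSION) != Z_OK)
            return 0;
        return name_len + 2 + comp_len;
    }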
Also add a FATE transcode test to ensure that the ICC profile gets
encoded correctly.
Signed-off-by: Niklas Haas <git@haasn.dev>
On empty input the awk script always succeeded, which caused the
filter-refcmp tests to always succeed.
Also fix the command lines for the refcmp_metadata compare function, because it
needs auto conversion filters, and update the reference of the
filter-refcmp-psnr-rgb test, because it was missed in
a7fc78c1a6 but never noticed due to the
original issue...
Signed-off-by: Marton Balint <cus@passwd.hu>
Calculate Spatial Info (SI) and Temporal Info (TI) scores for a video, as defined
in ITU-T P.910: Subjective video quality assessment methods for multimedia
applications.
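For orientation (paraphrasing the P.910 definitions from memory, so treat
this as a sketch rather than a restatement of the standard):

    SI = max over time of stddev over space [ Sobel(F_n) ]
    TI = max over time of stddev over space [ F_n - F_(n-1) ]

where F_n is the luma plane of frame n.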
This test deliberately doesn't exercise the full range of inputs described in
the committee draft VC-1 standard. It says:
input coefficients in frequency domain, D, satisfy -2048 <= D < 2047
intermediate coefficients, E, satisfy -4096 <= E < 4095
fully inverse-transformed coefficients, R, satisfy -512 <= R < 511
For one thing, the inequalities look odd. Did they mean them to go the
other way round? That would make more sense because the equations generally
both add and subtract coefficients multiplied by constants, including powers
of 2. Requiring the most-negative values to be valid extends the number of
bits needed to represent the intermediate values just for the sake of that one case!
For another thing, the extreme values don't appear to occur in real streams -
both in my experience and as supported by the following comment in the AArch32
decoder:
tNhalf is half of the value of tN (as described in vc1_inv_trans_8x8_c).
This is done because sometimes files have input that causes tN + tM to
overflow. To avoid this overflow, we compute tNhalf, then compute
tNhalf + tM (which doesn't overflow), and then we use vhadd to compute
(tNhalf + (tNhalf + tM)) >> 1 which does not overflow because it is
one instruction.
My AArch64 decoder goes further than this. It calculates tNhalf and tM
then does an SRA (essentially a fused halve and add) to compute
(tN + tM) >> 1 without ever having to hold (tNhalf + tM) in a 16-bit element,
so there is no risk of overflow at that step. It only encounters difficulties
if either tNhalf or tM overflows in isolation.
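As a scalar sketch of the trick quoted above (assumptions: the decoder's value
ranges keep tNhalf + tM representable, as the AArch32 comment states; the real
code does the final step with a single NEON vhadd on vector elements):

    #include <stdint.h>

    static int16_t halving_add(int16_t tN, int16_t tM)
    {
        int16_t tNhalf = tN >> 1;        /* always representable           */
        int16_t sum    = tNhalf + tM;    /* in range for real-world inputs */
        /* (tNhalf + sum) >> 1 == (tN + tM) >> 1 up to the rounding of tN's
         * low bit; on NEON this halving add is one instruction and never
         * needs a wider element. */
        return (tNhalf + sum) >> 1;
    }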
I haven't had sight of the final standard, so it's possible that these
issues were dealt with during finalisation, which could explain the lack
of usage of extreme inputs in real streams. Or a preponderance of decoders
that only support 16-bit intermediate values in their inverse transforms
might have caused encoders to steer clear of such cases.
I have effectively followed this approach in the test, and limited the
scale of the coefficients sufficiently that both the existing AArch32 decoder
and my new AArch64 decoder pass.
Signed-off-by: Ben Avison <bavison@riscosopen.org>
Signed-off-by: Martin Storsjö <martin@martin.st>
Note that the benchmarking results for these functions are highly dependent
upon the input data. Therefore, each function is benchmarked twice,
corresponding to the best and worst case complexity of the reference C
implementation. The performance of a real stream decode will fall somewhere
between these two extremes.
Signed-off-by: Ben Avison <bavison@riscosopen.org>
Signed-off-by: Martin Storsjö <martin@martin.st>
tiny_ssim is built for the build host, not for the target platform.
Therefore, it mustn't include the config.h header, which is set up
specifically for the target platform and compiler.
This fixes cross building for older WinStore platforms, where
config.h contains "#define getenv(x) NULL".
Signed-off-by: Martin Storsjö <martin@martin.st>
This avoids unnecessary rebuilds of most source files if only the
list of enabled components has changed, but not the other properties
of the build, set in config.h.
Signed-off-by: Martin Storsjö <martin@martin.st>
The range parameters need to be set up before calling
sws_init_context (which selects which fastpaths can be used;
this gets called by sws_getContext); solely passing them via
sws_setColorspaceDetails isn't enough.
This fixes producing full-range YUV output when doing
YUV->YUV conversions between different YUV color spaces.
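A minimal sketch of the usage pattern this implies (assumed API usage, not the
actual test or fix):

    #include <libavutil/opt.h>
    #include <libswscale/swscale.h>

    static struct SwsContext *limited_to_full_range_scaler(int w, int h)
    {
        struct SwsContext *sws = sws_alloc_context();
        if (!sws)
            return NULL;
        av_opt_set_int(sws, "srcw", w, 0);
        av_opt_set_int(sws, "srch", h, 0);
        av_opt_set_int(sws, "src_format", AV_PIX_FMT_YUV420P, 0);
        av_opt_set_int(sws, "dstw", w, 0);
        av_opt_set_int(sws, "dsth", h, 0);
        av_opt_set_int(sws, "dst_format", AV_PIX_FMT_YUV420P, 0);
        av_opt_set_int(sws, "src_range", 0, 0);   /* limited-range input */
        av_opt_set_int(sws, "dst_range", 1, 0);   /* full-range output   */
        /* ranges are set *before* init, so the right fastpath is chosen */
        if (sws_init_context(sws, NULL, NULL) < 0) {
            sws_freeContext(sws);
            return NULL;
        }
        /* sws_setColorspaceDetails() can still be called afterwards to
         * select the BT.601/BT.709 coefficients. */
        return sws;
    }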
Signed-off-by: Martin Storsjö <martin@martin.st>
The IMF demuxer does not set the DTS and PTS of packets accurately in all
scenarios. Moreover, audio packets are not trimmed when they exceed the
duration of the underlying resource.
The imf-cpl-with-repeat FATE ref file is regenerated.
Addresses https://trac.ffmpeg.org/ticket/9611
The sample mpeg4/mpeg4_sstp_dpcm.m4v existed in the FATE-suite,
but it was surprisingly unused.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This long-existing feature calculates subtitle durations by keeping
a decoded subtitle around until the following subtitle is decoded, and then
utilizing the following subtitle's pts as the end point of the previous one.
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
Peeking into the muxing queue can improve the estimate of
the lowest timestamp needed for avoid_negative_ts in case
the lowest timestamp is in a packet other than the first packet
to be muxed.
This fixes tickets #4536 and #5784 as well as the output from
the matroska-avoid-negative-ts FATE-test.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
write_packet() has code to shift the packets' timestamps
to make them nonnegative or even make them start at ts zero;
this code inspects every packet that is written, and if a packet
with a negative timestamp (whether this is dts or pts depends upon
another flag; basically: Matroska uses pts, everyone else dts)
is encountered, it is offset to make the timestamp zero.
All further packets will be offset accordingly (with the offset
converted according to the streams' timebases).
This is based around an assumption, namely that the timestamps
are indeed non-decreasing, so that the first packet with negative
timestamps is also the first packet with timestamps at all. This assumption
is often fulfilled given that the default interleavement function
interleaves per dts; yet there are scenarios in which
it may not be fulfilled:
a) av_write_frame() instead of av_interleaved_write_frame() is used.
b) The audio_preload option is used.
c) When the timestamps that are made nonnegative/zero are pts
(i.e. with Matroska), because the packet with the smallest dts
is not necessarily the packet with the smallest pts.
d) Possibly with custom interleavement functions.
In these cases the relative sync of the first few packet(s) is offset
relative to the later packets. This contradicts the documentation
("When shifting is enabled, all output timestamps are shifted by
the same amount").
Therefore this commit changes this: As soon as the first packet
with valid timestamps is output, it is checked and recorded whether
the timestamps need to be shifted. Further packets are no longer
checked for needing to be offset; instead they are simply offset.
In the cases above this leads to packets with negative timestamps
(and the appropriate warnings) instead of desync. This will mostly
be fixed in the next commit.
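In pseudo-C, the new behaviour boils down to something like this (hypothetical,
simplified names; timebase conversion and the different avoid_negative_ts modes
are glossed over):

    #include <stdint.h>

    #define NOPTS INT64_MIN          /* stand-in for AV_NOPTS_VALUE here */

    typedef struct ShiftState {
        int     offset_valid;        /* set once the shift has been decided */
        int64_t ts_offset;           /* applied to all subsequent packets   */
    } ShiftState;

    /* ts is the timestamp the muxer keys on (pts for Matroska, dts otherwise). */
    static int64_t shift_ts(ShiftState *s, int64_t ts, int make_zero)
    {
        if (!s->offset_valid && ts != NOPTS) {
            /* previously this check was redone for every packet */
            s->ts_offset    = (ts < 0 || make_zero) ? -ts : 0;
            s->offset_valid = 1;
        }
        return ts == NOPTS ? ts : ts + s->ts_offset;
    }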
This commit also factors handling the avoid_negative_ts stuff out
of write_packet() in order to be able to return immediately.
Tickets #4536 and #5784 as well as the matroska-avoid-negative-ts-test
are examples of c); as has been said, some timestamps are now negative,
yet the ref file update does not show it because ffmpeg.c sanitizes
the timestamps (-copyts disables it; ffprobe and mkvinfo also show
the original timestamps).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This tests the issue from tickets #4536, #5784;
the output of this test is currently broken.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Tests the parsing and writing of AVDOVIDecoderConfigurationRecord,
when it is present as a Dolby Vision configuration block addition mapping.
Signed-off-by: quietvoid <tcChlisop0@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Up until now, the WebM variant of WebVTT subtitles has been handled
specially: It had its own function for writing it, because the data
had to be reformatted before writing. But given that other codecs
also need reformatting, this is no good reason to also duplicate the
generic code for writing Block(Group)s.
This commit therefore uses an ordinary reformatting function for
this task; writing WebVTT subtitles now uses the generic code
and therefore automatically uses the smallest number of bytes
for its BlockGroup length fields, whereas the earlier code used
an overestimate for the length of the Duration element.
This is the reason for the changes to the webm-webvtt-remux FATE-test.
(This commit does not implement support for Matroska's way of muxing
WebVTT; it also does not add checks to ensure that WebM-style subtitles
don't get muxed in Matroska. But the function for reformatting gets a
webm prefix to indicate that this is for WebM.)
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This commit uses the new EbmlWriter API to write the length fields
of the BlockGroup and its descendants that are themselves Master
elements (namely BlockAdditions and BlockMore) using the smallest
number of bytes.
This fixes regressions introduced when the special code for writing
general subtitles was removed. Accordingly, the binsub-mksenc and
matroska-zero-length-block FATE-tests have now reverted
to their old state; the advantages of this approach are evident
with the matroska-vp8-alpha-remux test, which up until now wrote
all the length fields of all BlockGroups, BlockAdditions and BlockMore
on eight bytes.
Using the EbmlWriter API also made it possible to improve locality in
mkv_write_block(): e.g. both DiscardPadding and the
BlockAdditional side data are now used directly to add elements
to the writer, whereas the earlier code first had to check
whether a BlockGroup should be used and then check again
(after the place where a BlockGroup would be opened if one were
used) whether there is DiscardPadding or BlockAdditional
side data to write.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Once upon a time, mkv_write_block() only wrote a (Simple)Block,
not a BlockGroup which is needed for subtitles to convey
the duration. But with the introduction of support for writing
BlockAdditions and DiscardPadding (both of which require a BlockGroup),
mkv_write_block() can also open and close a BlockGroup of its own. This
naturally led to some code duplication which is removed in this commit.
This new code leads to one regression: It always uses eight bytes for
the BlockGroup's length field, whereas the earlier code usually used the
smallest number of bytes needed. This will be fixed in a future commit.
This temporary regression is also the reason for changes to the
binsub-mksenc and matroska-zero-length-block fate tests.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Also check whether the (user-provided) tags are overlong; the earlier
code had an implicit, unchecked size_t->int conversion.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
To trigger this bug, use `paletteuse=dither=bayer:bayer_scale=0`; you will see
that adjacent pixel lines will use the same dither pattern, instead of being
shifted from each other by 32 units (0x20).
One way to demonstrate the bug is:
$ convert -size 64x256 gradient:black-white -rotate 270 grad.png
$ echo 'P2 2 1 255 0 255' > bw.pnm
$ ffmpeg -i grad.png -filter_complex 'movie=bw.pnm,scale=256x1[bw]; [0:v][bw]paletteuse=dither=bayer:bayer_scale=0' gradbw.png
Previously: https://www.rm.cloudns.org/img/uploaded/0bd152c11b9cd99e5945115534b1bdde.png
Now: https://www.rm.cloudns.org/img/uploaded/89caaa5e36c38bc2c01755b30811f969.png
This was caused by passing inconsistent color vs. (a,r,g,b) parameters to
color_get(); with NBITS being 5, actually hitting the same cache node
does happen in this case, but ONLY if bayer_scale is zero.
The fix is to pass the correct color value to color_get().
Also added a previously failing FATE test; image comparison of the first frame:
Previously: https://www.rm.cloudns.org/img/uploaded/d0ff9db8d8a7d8a3b8b88bbe92bf5fed.png
Now: https://www.rm.cloudns.org/img/uploaded/a72389707e719b5cd1c58916a9e79ca8.png
(on this less synthetic test image, the bug basically causes noise from cache
hits vs misses)
Tested: FATE passes, which exercises this filter but at the default bayer_scale.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
This is similar to the faststart option of the mov muxer, yet
in contrast to it, it works together with reserve_index_space
(the equivalent of reserved_moov_size): If the reserved space
does not suffice, the data is shifted; if it does, the Cues are
written at the front without shifting the data.
Several tests that cover (not only) this have been added.
Implements #7017.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It returns a pointer inside the fifo's buffer, which cannot be safely
used without accessing AVFifoBuffer internals. It is easier and safer to
use av_fifo_generic_peek_at().
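A hedged example of the safer pattern (assuming, as for the muxing queue, that
the FIFO stores AVPacket structs by value):

    #include <libavcodec/packet.h>
    #include <libavutil/avutil.h>
    #include <libavutil/fifo.h>

    static int64_t peek_queued_dts(AVFifoBuffer *fifo, int idx)
    {
        AVPacket tmp;
        /* copy the element out instead of keeping a pointer into the
         * FIFO's internal (possibly wrapping) buffer */
        if (av_fifo_generic_peek_at(fifo, &tmp, idx * (int)sizeof(tmp),
                                    (int)sizeof(tmp), NULL) < 0)
            return AV_NOPTS_VALUE;
        return tmp.dts;   /* tmp is a shallow copy; nothing to free */
    }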
mvhd and tkhd present the post-editlist duration, while mdhd should
have the pre-editlist duration. Regression since c2424b1f3.
Signed-off-by: Martin Storsjö <martin@martin.st>
All the AMRWB samples are in a mov container.
Also use FATE_SAMPLES_FFMPEG instead of FATE_SAMPLES_AVCONV.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
- No longer mixes u8 and u16 component accesses (this was UB)
- De-duplicated 8->16 conversion
- De-duplicated component -> plane+offset conversion
- De-duplicated planar + packed RGB
- No longer calls ff_fill_rgba_map
- Removed redundant comp_mask data member
- RGB0 and related formats no longer write an alpha value to the 0 byte
- Non-planar YA formats now work correctly
- High-bit-depth semi-planar YUV now works correctly
And expose the parsed values as frame side data. Update FATE results to
match.
It's worth documenting that this relies on the dovi configuration record
being present on the first AVPacket fed to the decoder, which in
practice is the case if the API user has called something like
av_format_inject_global_side_data, which is unfortunately not the
default.
This commit is not the time and place to change that behavior, though.
Signed-off-by: Niklas Haas <git@haasn.dev>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
To avoid the ref file for this test growing to a very large size when attaching
the parsed RPU side data. Since this sample does not have any dynamic
metadata, two frames will serve just as well as 100.
Signed-off-by: Niklas Haas <git@haasn.dev>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>