filter-pp and filter-pp7 are the only filter-pp* tests
that use the file generated by fate-vsynth1-mpeg4-qprd.
Also combine the dependencies on this test for all the tests that need it.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The temporary fate-lavf files can easily be removed
if they are not needed as inputs for other tests (mainly
fate-seek-tests). This commit implements this.
The size of the remaining files decreases from 260890083B
to 79481793B.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Extend the ordinary mechanism for signalling KEEP to cover this.
This also allows removing the keep parameter from enc_dec,
transcode and stream_remux, so that several empty '""'
parameters could be dropped.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
These CRC-only files (the output of the CRC muxer) are only used once,
so they need not be preserved. Furthermore, with this patch errors from
ffmpeg (used for creating the CRC) are no longer ignored.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The output of this test is just a file containing the positions
of peaks; it is not a WAVE file, and trying to demux it just
returns AVERROR_INVALIDDATA; said error was simply ignored
because the return value of do_avconv_crc is the return value of echo.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavc/v210_enc.o
and also allows inlining ff_v210enc_init() irrespective of
interposing.
This dependency pulled basically all of libavcodec into checkasm,
in particular all codecs.
This also makes checkasm work when using shared Windows builds:
On Windows, it needs to be known to the compiler whether a data
symbol is external to the library/executable or not; hence the
need for av_export_avutil. checkasm needs access to the internals
of the libraries it tests and is therefore linked statically to all
the libraries. This means that the users of avpriv_cga_font and
avpriv_vga16_font in libavcodec (namely ansi.o, bintext.o, tmv.o)
end up in the same executable as the symbols, although they have
been compiled as if these symbols were external, leading to linker
errors. With this commit said files are discarded by the linker,
bypassing this problem.
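A minimal sketch of the pattern relied upon here (the names of the pack
functions are illustrative, not necessarily the actual FFmpeg code): the init
function moves into a header as a static function, so every includer, checkasm
included, compiles its own copy instead of linking against lavc/v210_enc.o.

    /* hypothetical v210enc_init.h */
    #ifndef AVCODEC_V210ENC_INIT_H
    #define AVCODEC_V210ENC_INIT_H

    #include "config.h"
    #include "libavutil/attributes.h"
    #include "v210enc.h"

    static av_unused av_cold void ff_v210enc_init(V210EncContext *s)
    {
        s->pack_line_8  = v210_planar_pack_8_c;   /* C fallbacks first      */
        s->pack_line_10 = v210_planar_pack_10_c;
    #if ARCH_X86
        ff_v210enc_init_x86(s);                   /* may override with SIMD */
    #endif
    }

    #endif /* AVCODEC_V210ENC_INIT_H */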
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavc/v210_dec.o
and also allows inlining ff_v210dec_init() irrespective of
interposing.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavfi/vf_threshold.o
and also allows inlining ff_threshold_init() irrespective of
interposing.
With this patch, checkasm no longer pulls in all of lavfi and lavf.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavfi/vf_nlmeans.o
and also allows inlining ff_nlmeans_init() irrespective of
interposing.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavfi/vf_hflip.o
and also allows inlining ff_hflip_init() irrespective of
interposing.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavfi/vf_gblur.o
and also allows inlining ff_gblur_init() irrespective of
interposing.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavfi/vf_blend.o
and also allows inlining ff_blend_init() irrespective of
interposing.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Only the AudioFIRDSPContext and the functions for its initialization
are needed outside of lavfi/af_afir.c.
Also rename the header to af_afirdsp.h to reflect the change.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Accidentally resurrected in fc49f22c3b
and 7711f19eda,
forgotten in 6ebc71847e and
1a6a088f7c or never needed
(filter-aemphasis).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It seems as if it was intended to declare fate-gif-color as a prerequisite
of the fate-gifenc% tests. Yet the latter do not need anything from
the former, so this would be unnecessary. Furthermore, given that this
line has no associated recipe, it actually cancels implicit rules for
fate-gifenc% instead of adding a prerequisite.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
These tests have basically nothing to do with VPX (they do not even
require the decoder).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The tests in concatdec.mak reuse files created by tests
from lavf-container. Therefore these tests have the other tests
as prerequisites and mostly duplicate their CONFIG requirements.
(The mxf_d10 tests got this wrong, as they only required
the MXF muxer.) This duplication is of course bad as usual,
so stop it by using the corresponding variable
that contains the enabled non-concat tests
to filter out all the concat tests without a corresponding enabled
non-concat test.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
These changes are automatically inherited by the fate-seek-tests
based upon lavf-audio.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The new requirements are also automatically inherited
by the FATE_SEEK_LAVF_VIDEO seek-tests.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This automatically fixes the requirements of the fate-seek-acodec*
tests (e.g. 16 of the 27 such tests are now automatically disabled
if the aresample filter is disabled).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This automatically fixes the requirements of the fate-seek-vsynth*
tests (e.g. 16 of the 49 such tests are now automatically disabled
if the scale filter is disabled).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
If one uses the -s option, a scale filter is inserted
even when doing so is redundant. This patch stops
doing so. This makes the tests that don't need libswscale
actually succeed in case it is disabled (only 315 of the 470 tests
need it).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Most of the tests in seek.mak use files created by other tests
as input. Therefore these tests have the other tests as prerequisites
and duplicate their CONFIG requirements. This duplication is of course
bad as usual, so stop it by using the corresponding variable
that contains the enabled non-seek tests to filter out all
the seek tests without a corresponding enabled non-seek test.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The output files of the lavf tests are highly regular,
making it possible to use rules for the src files instead of a list.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Each of the intermediately generated lena-*.fits files is only used
for exactly one test, so it can be deleted right after that test.
Switching to a transcode test (which is also more natural) achieves
this. It also adds checksums of the intermediate files to the ref file.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
In particular, add the missing dependencies on the scale and
aresample filters (and therefore on libswscale and libswresample, respectively).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
In particular, add the missing dependency on the scale filter
(and therefore on libswscale).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Intended for scenarios that currently use DEMDEC, but are missing
the requirements that are implicitly needed by framecrc.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Add a parameter that allows adding additional requirements.
Also add FILE_PROTOCOL to all the auxiliary functions
that use a demuxer.
Also fix the requirements for the fate-mpegts-probe-(latm|program)
tests: they misused DEMDEC.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
In particular, add the missing dependency on the scale filter
(and therefore on libswscale).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Also fix the requirements of fate-mov-channel-description:
It needs the pcm_s16le decoder and the mov demuxer.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
And drop the FATE_CAF_REMUX variables which only existed
to avoid having to repeat the common FILE_PROTOCOL PIPE_PROTOCOL
FRAMECRC_MUXER stuff.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It also adds the missing dependencies on the file and pipe protocols
and the framecrc muxer.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Tests using the transcode and stream_remux functions have some common
requirements (namely the file and pipe protocols as well as the framecrc
muxer) and also other commonalities: They create a file and read it back
immediately afterwards, so that they typically rely on a corresponding
muxer+demuxer pair which usually shares the same name; for transcode
(if it does not use stream copy) the same is true for encoders and
decoders. This means that using special Makefile functions instead
of the general ALLYES is worthwhile. This commit adds such functions.
These functions allow adding arbitrary CONFIG checks on top of the
aforementioned ones in order to satisfy special needs (e.g. for parsers
or filters) that several intended users have.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The test also requires a png decoder, which can often be disabled in
cross-building setups where zlib might be missing.
Signed-off-by: Martin Storsjö <martin@martin.st>
This is mostly straightforward. The major complication is that, as a
result of the 16-bit chunk size limitation, ICC profiles may need to be
split up into multiple chunks.
We also need to make sure to allocate enough extra space in the packet
to fit the ICC profile, so modify both mpegvideo_enc.c and ljpegenc.c to
take this extra overhead into account, failing cleanly if necessary.
Also add a FATE transcode test to ensure that the ICC profile gets
written (and read) correctly. Note that this ICC profile is smaller than
64 kB, so this doesn't test the APP2 chunk re-arranging code at all.
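For reference, a minimal sketch of the APP2 splitting scheme (an illustration,
not the actual mjpegenc code; it assumes the destination buffer is large
enough and that at most 255 chunks are needed):

    #include <stdint.h>
    #include <string.h>

    #define ICC_TAG      "ICC_PROFILE"        /* 11 chars + NUL = 12 bytes */
    #define ICC_MAX_DATA (65535 - 2 - 12 - 2) /* payload per APP2 segment  */

    static uint8_t *write_icc_app2(uint8_t *p, const uint8_t *icc, size_t size)
    {
        int count = (int)((size + ICC_MAX_DATA - 1) / ICC_MAX_DATA);

        for (int i = 0; i < count; i++) {
            size_t   chunk = size > ICC_MAX_DATA ? ICC_MAX_DATA : size;
            uint16_t len   = (uint16_t)(2 + 12 + 2 + chunk);

            *p++ = 0xFF; *p++ = 0xE2;            /* APP2 marker               */
            *p++ = len >> 8; *p++ = len & 0xFF;  /* 16-bit segment length     */
            memcpy(p, ICC_TAG, 12); p += 12;     /* "ICC_PROFILE\0" signature */
            *p++ = i + 1;                        /* 1-based chunk index       */
            *p++ = count;                        /* total number of chunks    */
            memcpy(p, icc, chunk);
            p += chunk; icc += chunk; size -= chunk;
        }
        return p;
    }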
Signed-off-by: Niklas Haas <git@haasn.dev>
We re-use the PNGEncContext.zstream for deflate-related operations.
Other than that, the code is pretty straightforward. Special care needs
to be taken to avoid writing more than 79 characters of the profile
description (the maximum supported).
To write the (dynamically sized) deflate-encoded data, we allocate extra
space in the packet and use that directly as a scratch buffer. Modify
png_write_chunk slightly to allow pre-writing the chunk contents like
this.
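A rough sketch of the iCCP payload layout being written (using plain zlib
compress2() for brevity, whereas the actual code reuses PNGEncContext.zstream;
the helper name is hypothetical and error handling is minimal):

    #include <stdint.h>
    #include <string.h>
    #include <zlib.h>

    static int build_iccp_payload(uint8_t *dst, size_t dst_size, const char *name,
                                  const uint8_t *icc, size_t icc_size,
                                  size_t *out_size)
    {
        size_t name_len = strnlen(name, 79);  /* 79 chars is the spec maximum */
        uLongf comp_size;

        if (dst_size < name_len + 2)
            return -1;
        comp_size = dst_size - name_len - 2;  /* space left for the deflate data */

        memcpy(dst, name, name_len);
        dst[name_len]     = 0;                /* NUL terminator                  */
        dst[name_len + 1] = 0;                /* compression method 0 = deflate  */
        if (compress2(dst + name_len + 2, &comp_size,
                      icc, icc_size, Z_DEFAULT_COMPRESSION) != Z_OK)
            return -1;
        *out_size = name_len + 2 + comp_size;
        return 0;
    }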
Also add a FATE transcode test to ensure that the ICC profile gets
encoded correctly.
Signed-off-by: Niklas Haas <git@haasn.dev>
On empty input the awk script was always successful which caused the
filter-refcmp tests to always succeed.
Also fix the command lines for refcmp_metadata compare function because it
needs auto conversion filters, and update reference of test
filter-refcmp-psnr-rgb because it was missed in
a7fc78c1a6 but was never noticed due to the
original issue...
Signed-off-by: Marton Balint <cus@passwd.hu>
Calculate Spatial Info (SI) and Temporal Info (TI) scores for a video, as defined
in ITU-T P.910: Subjective video quality assessment methods for multimedia
applications.
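For reference, P.910 defines these roughly as follows, with F_n the luma plane
of frame n and \sigma_{space} the per-frame spatial standard deviation:

    SI = \max_{time} \{ \sigma_{space} [ \mathrm{Sobel}(F_n) ] \}
    TI = \max_{time} \{ \sigma_{space} [ F_n(i,j) - F_{n-1}(i,j) ] \}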
This test deliberately doesn't exercise the full range of inputs described in
the committee draft of the VC-1 standard. It says:
input coefficients in frequency domain, D, satisfy -2048 <= D < 2047
intermediate coefficients, E, satisfy -4096 <= E < 4095
fully inverse-transformed coefficients, R, satisfy -512 <= R < 511
For one thing, the inequalities look odd. Did they mean them to go the
other way round? That would make more sense because the equations generally
both add and subtract coefficients multiplied by constants, including powers
of 2. Requiring the most-negative values to be valid extends the number of
bits needed to represent the intermediate values just for the sake of that one case!
For another thing, the extreme values don't seem to occur in real streams -
both in my experience and as supported by the following comment in the AArch32
decoder:
tNhalf is half of the value of tN (as described in vc1_inv_trans_8x8_c).
This is done because sometimes files have input that causes tN + tM to
overflow. To avoid this overflow, we compute tNhalf, then compute
tNhalf + tM (which doesn't overflow), and then we use vhadd to compute
(tNhalf + (tNhalf + tM)) >> 1 which does not overflow because it is
one instruction.
My AArch64 decoder goes further than this. It calculates tNhalf and tM,
then does an SRA (essentially a fused halve and add) to compute
(tN + tM) >> 1 without ever having to hold (tNhalf + tM) in a 16-bit element
at all. It only encounters difficulties if either tNhalf or
tM overflows in isolation.
I haven't had sight of the final standard, so it's possible that these
issues were dealt with during finalisation, which could explain the lack
of usage of extreme inputs in real streams. Or a preponderance of decoders
that only support 16-bit intermediate values in their inverse transforms
might have caused encoders to steer clear of such cases.
I have effectively followed this approach in the test, and limited the
scale of the coefficients sufficiently that both the existing AArch32 decoder
and my new AArch64 decoder pass.
Signed-off-by: Ben Avison <bavison@riscosopen.org>
Signed-off-by: Martin Storsjö <martin@martin.st>
Note that the benchmarking results for these functions are highly dependent
upon the input data. Therefore, each function is benchmarked twice,
corresponding to the best and worst case complexity of the reference C
implementation. The performance of a real stream decode will fall somewhere
between these two extremes.
Signed-off-by: Ben Avison <bavison@riscosopen.org>
Signed-off-by: Martin Storsjö <martin@martin.st>
tiny_ssim is built for the build host, not for the target platform.
Therefore, it mustn't include the config.h header, which is set up
specifically for the target platform and compiler.
This fixes cross building for older WinStore platforms, where
config.h contains "#define getenv(x) NULL".
Signed-off-by: Martin Storsjö <martin@martin.st>
This avoids unnecessary rebuilds of most source files if only the
list of enabled components has changed, but not the other properties
of the build, set in config.h.
Signed-off-by: Martin Storsjö <martin@martin.st>
The range parameters need to be set up before calling
sws_init_context (which selects which fastpaths can be used;
this gets called by sws_getContext); solely passing them via
sws_setColorspaceDetails isn't enough.
This fixes producing full-range YUV output when doing
YUV->YUV conversions between different YUV color spaces.
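A minimal sketch of the setup order this relies on (an illustration rather
than the patched code itself; it assumes the usual AVOption names exposed by
libswscale): the range flags are configured before sws_init_context(), while
sws_setColorspaceDetails() is only used for the matrices afterwards.

    #include <libswscale/swscale.h>
    #include <libavutil/opt.h>
    #include <libavutil/pixfmt.h>

    static struct SwsContext *full_range_yuv_converter(int w, int h)
    {
        struct SwsContext *sws = sws_alloc_context();
        if (!sws)
            return NULL;

        av_opt_set_int(sws, "srcw", w, 0);
        av_opt_set_int(sws, "srch", h, 0);
        av_opt_set_int(sws, "dstw", w, 0);
        av_opt_set_int(sws, "dsth", h, 0);
        av_opt_set_pixel_fmt(sws, "src_format", AV_PIX_FMT_YUV420P, 0);
        av_opt_set_pixel_fmt(sws, "dst_format", AV_PIX_FMT_YUV420P, 0);
        av_opt_set_int(sws, "sws_flags", SWS_BILINEAR, 0);
        av_opt_set_int(sws, "src_range", 1, 0);   /* full range in ...        */
        av_opt_set_int(sws, "dst_range", 1, 0);   /* ... and out, before init */

        if (sws_init_context(sws, NULL, NULL) < 0) {
            sws_freeContext(sws);
            return NULL;
        }

        /* The colorspace matrices can still be adjusted afterwards: */
        sws_setColorspaceDetails(sws, sws_getCoefficients(SWS_CS_ITU601), 1,
                                 sws_getCoefficients(SWS_CS_ITU709), 1,
                                 0, 1 << 16, 1 << 16);
        return sws;
    }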
Signed-off-by: Martin Storsjö <martin@martin.st>
The IMF demuxer does not set the DTS and PTS of packets accurately in all
scenarios. Moreover, audio packets are not trimmed when they exceed the
duration of the underlying resource.
The imf-cpl-with-repeat FATE ref file is regenerated.
Addresses https://trac.ffmpeg.org/ticket/9611
The sample mpeg4/mpeg4_sstp_dpcm.m4v existed in the FATE-suite,
but it was surprisingly unused.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This long-existing feature calculates subtitle durations by keeping
a decoded subtitle around until the following subtitle is decoded, and then
utilizes the following subtitle's pts as the end point of the previous one.
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
Peeking into the muxing queue can improve the estimate of
the lowest timestamp needed for avoid_negative_ts in case
the lowest timestamp is in a packet other than the first packet
to be muxed.
This fixes tickets #4536 and #5784 as well as the output from
the matroska-avoid-negative-ts FATE-test.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
write_packet() has code to shift the packets' timestamps
to make them nonnegative or even make them start at ts zero;
this code inspects every packet that is written, and if a packet
with a negative timestamp (whether this is dts or pts depends upon
another flag; basically: Matroska uses pts, everyone else dts)
is encountered, it is offset to make the timestamp zero.
All further packets will be offset accordingly (with the offset
converted according to the streams' timebases).
This is based around an assumption, namely that the timestamps
are indeed non-decreasing, so that the first packet with a negative
timestamp is also the first packet with timestamps at all. This assumption
is often fulfilled given that the default interleaving function
interleaves by dts; yet there are scenarios in which
it may not be fulfilled:
a) av_write_frame() instead of av_interleaved_write_frame() is used.
b) The audio_preload option is used.
c) When the timestamps that are made nonnegative/zero are pts
(i.e. with Matroska), because the packet with the smallest dts
is not necessarily the packet with the smallest pts.
d) Possibly with custom interleavement functions.
In these cases the relative sync of the first few packet(s) is offset
relative to the later packets. This contradicts the documentation
("When shifting is enabled, all output timestamps are shifted by
the same amount").
Therefore this commit changes this: As soon as the first packet
with valid timestamps is output, it is checked and recorded whether
the timestamps need to be shifted. Further packets are no longer
checked for needing to be offset; instead they are simply offset.
In the cases above this leads to packets with negative timestamps
(and the appropriate warnings) instead of desync. This will mostly
be fixed in the next commit.
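A condensed sketch of the changed logic (names are hypothetical, not the
actual mux.c code): the shift is decided once, at the first packet carrying a
valid timestamp, and every later packet is offset unconditionally.

    #include <stdint.h>
    #include "libavutil/avutil.h"       /* AV_NOPTS_VALUE */

    typedef struct TsShift {
        int     decided;
        int64_t offset;                 /* in the stream's timebase */
    } TsShift;

    static void shift_ts(TsShift *st, int make_zero, int64_t *ts)
    {
        if (*ts == AV_NOPTS_VALUE)
            return;
        if (!st->decided) {             /* only the first valid ts is inspected */
            if (*ts < 0 || make_zero)
                st->offset = -*ts;      /* shift it to exactly zero             */
            st->decided = 1;
        }
        *ts += st->offset;              /* later packets are merely offset and  */
    }                                   /* may still end up negative (warning)  */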
This commit also factors handling the avoid_negative_ts stuff out
of write_packet() in order to be able to return immediately.
Tickets #4536 and #5784 as well as the matroska-avoid-negative-ts-test
are examples of c); as has been said, some timestamps are now negative,
yet the ref file update does not show it because ffmpeg.c sanitizes
the timestamps (-copyts disables it; ffprobe and mkvinfo also show
the original timestamps).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This tests the issue from tickets #4536, #5784;
the output of this test is currently broken.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Tests the parsing and writing of AVDOVIDecoderConfigurationRecord,
when it is present as a Dolby Vision configuration block addition mapping.
Signed-off-by: quietvoid <tcChlisop0@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Up until now, the WebM variant of WebVTT subtitles has been handled
specially: It had its own function to write it, because the data
had to be reformatted before writing. But given that other codecs
also need reformatting, this is no reason to also duplicate the
generic code for writing Block(Group)s.
This commit therefore uses an ordinary reformatting function for
this task; writing WebVTT subtitles now uses the generic code
and therefore automatically uses the least amount of bytes
for its BlockGroup length fields whereas the earlier code used
an overestimation for the length of the Duration element.
This is the reason for the changes to the webm-webvtt-remux FATE-test.
(This commit does not implement support for Matroska's way of muxing
WebVTT; it also does not add checks to ensure that WebM-style subtitles
don't get muxed in Matroska. But the function for reformatting gets a
webm prefix to indicate that this is for WebM.)
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This commit uses the new EbmlWriter API to write the length fields
of the BlockGroup and its descendants that are themselves Master
elements (namely BlockAdditions and BlockMore) on the least amount of
bytes.
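For reference, a self-contained sketch of what writing a length field on the
least amount of bytes means for EBML (an illustration, not the actual
EbmlWriter code; n octets can hold any length below 2^(7n) - 1, the all-ones
pattern being reserved for unknown size, and lengths are assumed to fit into
the eight octets EBML allows):

    #include <stdint.h>

    static int put_ebml_length_min(uint8_t *p, uint64_t len)
    {
        int bytes = 1;

        while ((len + 1) >> (7 * bytes))   /* smallest n with len < 2^(7n) - 1 */
            bytes++;
        for (int i = 0; i < bytes; i++)    /* big endian, length-descriptor bit
                                            * set in the first byte            */
            p[i] = (uint8_t)((len | (1ULL << (7 * bytes))) >> (8 * (bytes - 1 - i)));
        return bytes;
    }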
This fixes regressions introduced when the special code for writing
general subtitles was removed. Accordingly, the binsub-mksenc and
matroska-zero-length-block FATE-tests have now been reverted
to their old state; the advantages of this approach are evident
with the matroska-vp8-alpha-remux test, which up until now wrote
all the length fields of all BlockGroups, BlockAdditions and BlockMore
on eight bytes.
Using the EbmlWriter API also made it possible to improve locality in
mkv_write_block(): E.g. both DiscardPadding and the
BlockAdditional side data are now directly used to add elements
to the writer, whereas the earlier code had to first check
whether a BlockGroup should be used and then check again
(after the place where a BlockGroup would be opened if one were
used) whether there is DiscardPadding or BlockAdditional
side data to write.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Once upon a time, mkv_write_block() only wrote a (Simple)Block,
not a BlockGroup which is needed for subtitles to convey
the duration. But with the introduction of support for writing
BlockAdditions and DiscardPadding (both of which require a BlockGroup),
mkv_write_block() can also open and close a BlockGroup of its own. This
naturally led to some code duplication which is removed in this commit.
This new code leads to one regression: It always uses eight bytes for
the BlockGroup's length field, whereas the earlier code usually used the
lowest amount of bytes needed. This will be fixed in a future commit.
This temporary regression is also the reason for changes to the
binsub-mksenc and matroska-zero-length-block fate tests.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Also check the (user-provided) tags for being overlong; the earlier
code had an implicit unchecked size_t->int conversion.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
To trigger this bug, use `paletteuse=dither=bayer:bayer_scale=0`; you will see
that adjacent pixel lines will use the same dither pattern, instead of being
shifted from each other by 32 units (0x20).
One way to demonstrate the bug is:
$ convert -size 64x256 gradient:black-white -rotate 270 grad.png
$ echo 'P2 2 1 255 0 255' > bw.pnm
$ ffmpeg -i grad.png -filter_complex 'movie=bw.pnm,scale=256x1[bw]; [0:v][bw]paletteuse=dither=bayer:bayer_scale=0' gradbw.png
Previously: https://www.rm.cloudns.org/img/uploaded/0bd152c11b9cd99e5945115534b1bdde.png
Now: https://www.rm.cloudns.org/img/uploaded/89caaa5e36c38bc2c01755b30811f969.png
This was caused by passing inconsistent color vs (a,r,g,b) parameters to
color_get(); with NBITS being 5, actually hitting the same cache node
does happen in this case, but ONLY if bayer_scale is zero.
The fix is passing the correct color value to color_get().
Also added a previously failing FATE test; image comparison of the first frame:
Previously: https://www.rm.cloudns.org/img/uploaded/d0ff9db8d8a7d8a3b8b88bbe92bf5fed.png
Now: https://www.rm.cloudns.org/img/uploaded/a72389707e719b5cd1c58916a9e79ca8.png
(on this less synthetic test image, the bug basically causes noise from cache
hits vs misses)
Tested: FATE passes, which exercises this filter but at the default bayer_scale.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
This is similar to the faststart option of the mov muxer, yet
in contrast to it, it works together with reserve_index_space
(the equivalent of reserved_moov_size): If the reserved space
does not suffice, the data is shifted; otherwise, the Cues are
written at the front without shifting the data.
Several tests that cover (not only) this have been added.
Implements #7017.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It returns a pointer inside the fifo's buffer, which cannot be safely
used without accessing AVFifoBuffer internals. It is easier and safer to
use av_fifo_generic_peek_at().
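A minimal usage sketch (the helper name is hypothetical): copy the i-th queued
element out of the FIFO instead of keeping a pointer into its internal storage.

    #include "libavcodec/packet.h"
    #include "libavutil/error.h"
    #include "libavutil/fifo.h"

    static int peek_nth_packet(AVFifoBuffer *fifo, AVPacket *out, int i)
    {
        if (av_fifo_size(fifo) < (int)((i + 1) * sizeof(*out)))
            return AVERROR(EINVAL);
        /* copies sizeof(AVPacket) bytes starting at the given offset; the
         * FIFO's own buffer is never exposed to the caller */
        return av_fifo_generic_peek_at(fifo, out, i * sizeof(*out),
                                       sizeof(*out), NULL);
    }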
mvhd and tkhd present the post-editlist duration, while mdhd should
have the pre-editlist duration. Regression since c2424b1f3.
Signed-off-by: Martin Storsjö <martin@martin.st>
All the AMRWB samples are in a mov container.
Also use FATE_SAMPLES_FFMPEG instead of FATE_SAMPLES_AVCONV.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
- No longer mixes u8 and u16 component accesses (this was UB)
- De-duplicated 8->16 conversion
- De-duplicated component -> plane+offset conversion
- De-duplicated planar + packed RGB
- No longer calls ff_fill_rgba_map
- Removed redundant comp_mask data member
- RGB0 and related formats no longer write an alpha value to the 0 byte
- Non-planar YA formats now work correctly
- High-bit-depth semi-planar YUV now works correctly
And expose the parsed values as frame side data. Update FATE results to
match.
It's worth documenting that this relies on the dovi configuration record
being present on the first AVPacket fed to the decoder, which in
practice is the case if the API user has called something like
av_format_inject_global_side_data(), which is unfortunately not the
default.
This commit is not the time and place to change that behavior, though.
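A brief sketch of the call in question (helper name hypothetical; error
handling trimmed):

    #include "libavformat/avformat.h"

    static int open_with_injected_side_data(AVFormatContext **fmt, const char *url)
    {
        int ret = avformat_open_input(fmt, url, NULL, NULL);
        if (ret < 0)
            return ret;
        /* Ask the demuxer to attach stream-global side data (such as the DOVI
         * configuration record) to the first packet of each stream. */
        av_format_inject_global_side_data(*fmt);
        return avformat_find_stream_info(*fmt, NULL);
    }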
Signed-off-by: Niklas Haas <git@haasn.dev>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
To avoid the ref for this growing to a very large size when attaching
the parsed RPU side data. Since this sample does not have any dynamic
metadata, two frames will serve just as well as 100.
Signed-off-by: Niklas Haas <git@haasn.dev>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The values for the essence element type were updated in the spec
from 0x05/0x06 (ST2019-4 2008) to 0x0C/0x0D (ST2019-4 2009).
Fixes ticket #6380.
Thanks-to: Philip de Nier <philip.denier@bbc.co.uk>
Thanks-to: Matthieu Bouron <matthieu.bouron@gmail.com>
Reviewed-by: Matthieu Bouron <matthieu.bouron@gmail.com>
Reviewed-by: Tomas Härdin <tjoppen@acc.umu.se>
Signed-off-by: Nicolas Gaullier <nicolas.gaullier@cji.paris>
Signed-off-by: Marton Balint <cus@passwd.hu>
Adds support for the concat demuxer to copy the side data information
from the input file to the resulting file. It will behave like the
metadata copy, where the metadata of the first file is kept in
the output file.
Extract the current code that already performs the stream side_data
copy into a separate method and reuse the method in the concat demuxer.
Signed-off-by: Gerard Sole <g.sole.ca@gmail.com>
They test libavfilter-internal API, so they should be libavfilter
test programs (which implies: linked statically to libavfilter
to access internal APIs and linked normally (statically or dynamically
depending upon the build configuration) against all the other libs).
Right now, they are always linked statically against all libs,
which is a significant waste of size compared to shared libs, as all
of libavcodec is pulled in despite not really being used.
This also leads to linking failures on systems for which av_export_avutil
is intended: libavcodec does not expect to be linked statically
against the library providing avpriv_(cga|vga16)_font in this case.
This is fixed by this commit.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Fixes fate under 64-bit Windows.
These functions replace all ff_hscale8to15_*_ssse3 functions when AVX2 is available.
Signed-off-by: James Almer <jamrial@gmail.com>
LSX and LASX are LoongArch SIMD extensions.
They are enabled by default if the compiler supports them, and can be disabled
with '--disable-lsx' and '--disable-lasx'.
Change-Id: Ie2608ea61dbd9b7fffadbf0ec2348bad6c124476
Reviewed-by: Shiyou Yin <yinshiyou-hf@loongson.cn>
Reviewed-by: guxiwei <guxiwei-hf@loongson.cn>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes FATE failures if e.g. libavdevice is disabled.
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The mpeg4 encoder is slice-threaded and its output depends upon
the number of threads used. Therefore all tests of this encoder
use a hardcoded number of threads (ENC_OPTS in fate-run.sh contains
"-threads 1"; only the vsynth%-mpeg4-thread tests override this
for the mpeg4 encoder, but they also use a hardcoded value to
be consistent across different systems); only the new shortest
and copy-shortest[12] (implicitly due to the sample used) tests
don't, and this leads to FATE failures.
Fix this by explicitly setting the thread count.
Also switch the shortest test to framecrc, because hashing side data
is tricky even though the side data used here (AV_PKT_DATA_QUALITY_STATS)
has a defined endianness.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Currently, the code doing this is spread over several places and may
behave in unexpected ways. E.g. automatic 'default' marking is only done
for streams fed by complex filtergraphs. It is also applied in the order
in which the output streams are initialized, which is effectively
random.
Move processing the dispositions at the end of open_output_file(), when
we already have all the necessary information.
Apply the automatic default marking only if no explicit -disposition
options were supplied by the user, and apply it to the first stream of
each type (excluding attached pics) when there is more than one stream
of that type and no default markings were copied from the input streams.
Explicitly document the new behavior.
Changes the results of some tests, where the output file gets a default
disposition, while it previously did not.
Also covers muxing and demuxing of nonstandard FLAC channel layouts
and the multi-dim-quant option of the FLAC encoder
(all of which was hitherto uncovered).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This allows nicer tests by having a greater range of inputs available
(without requiring further samples to be added to the fate-suite).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Provides coverage for the muxer.
(Thanks to tresh for modifying the whitespace commit hook
to allow pushing this ref file with tabs.)
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It uses the test-lrc.lrc sample which was added years ago, but never
used until now.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This information is coded in a standard MP4 KindBox and utilizes the
scheme and values as per the DASH role scheme defined in MPEG-DASH.
Other schemes are technically allowed, but where multiple schemes
define the same concepts, the DASH scheme should be utilized.
Such flagging is additionally utilized by the DASH-IF CMAF ingest
specification, enabling an encoder to inform the following component
of the roles of the incoming media streams.
A test is added for this functionality in a similar manner to the
matroska test.
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
This patch increases several stack buffers in order to fix
stack-buffer-overflows (e.g. in put_hevc_qpel_uni_hv_9 in
line 814 of hevcdsp_template.c) detected with ASAN in the hevc_pel
checkasm test.
The buffers are increased by the minimal amount necessary
in order not to mask potential future bugs.
Reviewed-by: Martin Storsjö <martin@martin.st>
Reviewed-by: "zhilizhao(赵志立)" <quinkblack@foxmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
They already uncovered an uninitialized-value bug in the ATRAC3 code
in the demuxer, and provide coverage for ID3v2.3.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The new format (given in big/little endian forms) matches the
existing X2RGB10 format, except with B and R channels switched.
AV_PIX_FMT_X2BGR10 data is often created by OpenGL programs
whose buffers use the GL_RGB10 internal format.
Signed-off-by: Manuel Stoeckl <code@mstoeckl.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This resolves a problem where conversions from YUV to X2RGB10LE
would produce color values a factor of 4 too small, because an 8-bit
value was placed in a 10-bit channel.
Signed-off-by: Manuel Stoeckl <code@mstoeckl.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The current name comes from a time in which libavcodec/utils.c
contained the whole core of libavcodec.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
When a color indexing transform with 16 or fewer colors is used,
WebP uses "pixel packing", i.e. storing several pixels in one byte,
which virtually reduces the width of the image (see WebPContext's
reduced_width field). This reduced_width should always be used when
reading and applying subsequent transforms.
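A small sketch of the width reduction in question (an illustration, not the
exact decoder code): with at most 2 palette entries 8 pixels share a byte,
with at most 4 entries 4 pixels, and with at most 16 entries 2 pixels.

    static int webp_reduced_width(int width, int palette_size)
    {
        int bits = palette_size <= 2  ? 3 :
                   palette_size <= 4  ? 2 :
                   palette_size <= 16 ? 1 : 0;   /* pixels per byte = 1 << bits */
        return bits ? (width + (1 << bits) - 1) >> bits : width;
    }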
Updated patch with added fate test.
The source image dual_transform.webp can be downloaded by cloning
https://chromium.googlesource.com/webm/libwebp-test-data/
Fixes: 9368
Signed-off-by: James Zern <jzern@google.com>
This muxer was untested up until now; had it been tested, it would
have been obvious that it has been broken for years.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
And remove the unnecessary ffmpeg dependencies while at it.
Reviewed-by: Soft Works <softworkz@hotmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Fixes trac issue #7473.
Removes encoder delay (skip samples) and writes the remaining frame samples after EOF to get the correct sample count.
Output is now accurate compared to players that use Microsoft's codecs (Windows Media Format Runtime).
Tested against encode>decode of WMAv2 with MS's codecs and most sample rate/bit rate/channel/mode combinations in ASF/XWMA.
WMAv1 appears to use the same delay, judging from the FFmpeg samples.
Signed-off-by: bnnm <bananaman255@gmail.com>
subtitles.mak's fate-sub tests utilize a stricter comparator
("rawdiff"), which causes the tests to fail in case of white space
differences, such as CRLF vs LF. This in turn causes these
ffprobe-using TTML-in-MP4 tests to fail on non-LF systems such as
Windows or Wine.
Includes basic support for both the ISMV ('dfxp') and MP4 ('stpp')
methods. This initial version also foregoes fragmentation support
in case the built-in sample squashing is to be utilized, as this
eases the initial review.
Additionally, add basic tests for both muxing modes in MP4.
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
Up until now, the Matroska muxer did not use the dispositions it is
given as-is; instead it by default overrode the disposition of the first
track of a kind (audio, video, subtitles) if no track of this kind has
the default disposition set. And up until recently, it also enforced
by default that no more than one track of each kind be marked as
default.
The rationale for the former is that there are lots of containers which
lack the concept of default streams, so that it is not uncommon for no
stream to be marked as default at all; the rationale for the latter was
that up until recently, it was dubious whether the Matroska specification
allowed more than one default stream per track type (e.g. mkvmerge
disallowed it). It was this point which led to the implementation of
the above-mentioned behaviour inspired by mkvmerge.
Yet the Matroska specifications have changed and now explicitly allow
setting more than one track of each type as default, so that the main
reason for not using the dispositions as-is has been rendered moot. Therefore
this commit changes the default to passing the dispositions through.
The matroska-mpegts-remux FATE-test has been updated to still use the
old "infer" mode so that it is still covered by FATE; the
matroska-zero-length-block test has also been updated to cover
the infer_no_subs mode. The references for lots of other FATE tests
needed to be updated because of a newly added FlagDefault element with
value zero (whereas a FlagDefault with value 1 needn't be coded at all,
as it coincided with the default value of said element).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The Matroska specifications have evolved and now allow marking
multiple tracks of the same kind as default (whether this was legal
before was dubious; e.g. mkvmerge disallowed it). Yet when the
Matroska muxer is set to infer default dispositions if absent, it still
enforced the now outdated restriction. So update this.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The MOV muxer can store streamids as track ids, but they aren't
visible when probing the result via lavf/dump or ffprobe due to
the lack of this flag in the demuxer.
Also adapt some FATE tests to already cover this.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
677a030b26 introduced more printable
side data types in ffprobe; however, the Audio Service Type side data
'type' field that was introduced aliases an existing field of the same
name within the side data array, which can lead to JSON output like:
"side_data_list": [
{
"side_data_type": "Audio Service Type",
"type": 0
},
{
"side_data_type": "Stereo 3D",
"type": "side by side",
"inverted": 1
}
]
This, while technically valid JSON, is considered bad practice, since it
forces all downstream users to manually parse it and check all types;
it makes simple deserialization impossible. Worse, in some loosely
typed languages, it can lead to silent bugs if existing code assumed
it was a different type.
As such, rename this second "type" field to "service_type".
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Adds schema validation for ffprobe XML output so that updating the
ffprobe.xsd file upon changes to ffprobe is not forgotten. This was
suggested by Marton Balint in:
http://ffmpeg.org/pipermail/ffmpeg-devel/2021-March/278428.html
The schema FATE test is only run if the xmllint command is available.
Signed-off-by: Tobias Rapp <t.rapp@noa-archive.com>
After fixing AV_PKT_DATA_SKIP_SAMPLES for reading vorbis packets from ogg,
fewer samples are actually decoded. Three fate tests are failing:
fate-vorbis-20:
The samples in 6.ogg are not frame aligned. The 6.pcm file was generated by
ffmpeg before the fix. After the fix, the decoded pcm file does not match
anymore. Ideally the ref file 6.pcm should be updated, but it is probably
not worth including another copy of the same file, only smaller.
SIZE_TOLERANCE is added for this test case.
fate-webm-dash-chapters:
The original vorbis_chapter_extension_demo.ogg is transmuxed to dash-webm.
The ref file webm-dash-chapters needs to be updated.
fate-vorbis-encode:
This exposes another bug in the vorbis encoder: initial_padding is not
correctly set. It is fixed in the previous patch.
Signed-off-by: Guangyu Sun <gsun@roblox.com>
Call the scaler function directly rather than through a function
pointer. Drop the now-unused return value from ff_getSwsFunc() and
rename the function to reflect its new role.
This will be useful in the following commits, where it will become
important that the amount of output differs between the scaled and the
unscaled case.