This check is intended to avoid buffer overflows,
yet there are four problems with it:
1. It has a built-in off-by-one error: len == out_end - out
is perfectly fine and nothing to worry about (a corrected
check is sketched after the footnotes below).
This off-by-one error led to the pixel in the lower-right corner
not being set properly for the back frame of the sample from
the rl2 FATE-test. This pixel is copied to every frame which
is the reason for the update to the reference file of said test.
With this patch, the output of the decoder matches the output
as captured from the reference decoder* (apart from the fact
that said reference somehow lacks the top part of the frame
(copied over from the background frame)).
2. Given that the stride of the buffer may be different
from the width of the video (despite one pixel taking one byte),
there is a second check later on making the first check redundant
(if one returns immediately; a simple break at the second check
is not sufficient, because it only exits the inner loop).
3. The check is based around the assumption of the stride being
positive (it has this in common with the other check which
will be fixed in a future commit).
4. Even after fixing the off-by-one error, the check in
question is still triggered by all the non-background frames
in the FATE sample as well as by A1100100.RL2. In all these
cases, they use len == 255 and val == 128. For videos with
a background frame this just means "copy from the background
frame", which would be done anyway later on.** Yet for videos
without one, copying is necessary to avoid leaving
uninitialized parts in the video.
*: Available in https://samples.mplayerhq.hu/game-formats/voyeur-rl2/
**: Due to this, the code that copies the rest from the
back frame is no longer executed for any of the samples
available on the sample server. Given that these are only
the files from the demo version of this game, I don't know
whether this code is executed for any file in existence or not.
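A minimal sketch of the corrected check from point 1, assuming the pointer names out and out_end used above (illustrative only, not the exact decoder code):

    /* A run of len bytes still fits if it ends exactly at out_end;
     * only reject it if it would write past the end. Returning (instead
     * of breaking out of the inner loop) also addresses point 2. */
    if (len > out_end - out)
        return AVERROR_INVALIDDATA;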
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
RVV defines a total of 12 different extensions, including:
- 5 different instruction subsets:
- Zve32x: 8-, 16- and 32-bit integers,
- Zve32f: Zve32x plus single precision floats,
- Zve64x: Zve32x plus 64-bit integers,
- Zve64f: Zve32f plus Zve64x,
- Zve64d: Zve64f plus double precision floats.
- 6 different vector lengths:
- Zvl32b (embedded only),
- Zvl64b (embedded only),
- Zvl128b,
- Zvl256b,
- Zvl512b,
- Zvl1024b,
- and the V extension proper: equivalent to Zve64d and Zvl128b.
In total, there are 6 different possible sets of supported instructions
(including the empty set), but for convenience we allocate one bit for
each type of set: up-to-32-bit ints (RVV_I32), floats (RVV_F32),
64-bit ints (RVV_I64) and doubles (RVV_F64).
Where the vector size is needed, it can be retrieved by reading the
unprivileged read-only vlenb CSR. This should probably become a separate
helper macro if needed at a later point.
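For illustration, reading that CSR from C could look roughly like this (a sketch assuming GCC/Clang-style inline assembly and an assembler that knows the vlenb CSR name; the helper name is made up):

    #include <stddef.h>

    /* Hypothetical helper: vector register length in bytes (VLEN / 8). */
    static inline size_t rvv_vlenb(void)
    {
        size_t vlenb;
        __asm__ ("csrr %0, vlenb" : "=r" (vlenb));
        return vlenb;
    }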
This introduces compile-time and run-time CPU detection on RISC-V. In
practice, I doubt that FFmpeg will ever see a RISC-V CPU without all of
I, F and D extensions, and if it does, it probably won't have run-time
detection. So the flags are essentially always set.
But as things stand, checkasm wants them that way. Compare the ARMV8
flag on AArch64. We are nowhere near running short on CPU flag bits.
This also tests writing slice data in the unaligned mode
(some of these files use CAVLC) as well as updating
side data and parsing ISOBMFF avcc extradata.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
~4x faster than the C version.
The shuffles in the 15pt dim1 are seriously expensive. Not happy with it,
but I'm content.
Can be easily converted to pure AVX by removing all vpermpd/vpermps
instructions.
The old one was written with the assumption that only even inputs would be given.
This very messy replacement supports even and odd inputs, and supports
AVX2 for extra speed. The buffers given are usually quite big (4k samples),
so the speedup is worth it.
The new SSE version is still faster than the old inline asm version by 33%.
A checkasm test is also provided to make sure this monstrosity works.
This fixes some FATE tests.
The aim of this test is to show the interleaving
of the file generated in the first pass; so make the
interleaving queue in the framecrc muxer in the second
pass as small as possible so that the framecrc muxer does not
fix wrong interleaving of the input file behind our backs.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
enc_dec is designed for raw input and output and computes
the PSNR between these two. The input of the shortest-sub
test is the idx file of a vobsub sub+idx combination
and the output is the output of framecrc of said vobsub
subtitle muxed into Matroska together with a synthesized
video. Calculating the PSNR between these two files makes
no sense, therefore switch to a transcode test, where
the ref file contains the output of framecrc directly,
making the interleaving better visible in the ref file
at the cost of a larger ref file (>400 lines).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Also covers writing mastering display metadata.
Reviewed-by: Tomas Härdin <tjoppen@acc.umu.se>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Do this by setting AVCodecInternal.pad_samples.
This prevents reading into the frame's padding and writing
into the packet's padding.
This actually happened in our FATE tests (where the number of samples
is 2 mod 4), which therefore needed to be updated.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
APTX decodes four bytes of input to four stereo samples; APTX HD
does the same with six bytes of input. So it can be easily supported
in av_get_audio_frame_duration().
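A sketch of the resulting duration computation (a hypothetical helper mirroring what av_get_audio_frame_duration() would do; rounding of truncated packets is glossed over):

    /* aptX maps 4 input bytes to 4 stereo samples,
     * aptX HD maps 6 input bytes to 4 stereo samples. */
    static int aptx_duration(int frame_bytes, int hd)
    {
        return hd ? frame_bytes / 6 * 4 : frame_bytes;
    }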
This fixes invalid durations and (derived) timestamps of demuxed
APTX HD packets and therefore fixes the timestamp in the aptx-hd
FATE test.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
We have de- and encoders for APTX and APTX HD, yet no FATE tests.
This commit therefore adds a transcoding test to utilize them.
Furthermore, during the creation of these tests it turned out that
the duration is set incorrectly for APTX HD. This will be fixed
in a future commit.
(Thanks to Andriy Gelman for finding an issue in an earlier version
that used a 192kHz input sample which does not work reliably across
platforms.)
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Since introducing the various packed formats used by VAAPI (and p012),
we've noticed that there's actually a gap in how
av_find_best_pix_fmt_of_2 works. It doesn't actually assign any value
to having the same bit depth as the source format, when comparing
against formats with a higher bit depth. This usually doesn't matter,
because av_get_padded_bits_per_pixel() will account for it.
However, as many of these formats use padding internally, we find that
av_get_padded_bits_per_pixel() actually returns the same value for the
10 bit, 12 bit, 16 bit flavours, etc. In these tied situations, we end
up just picking the first of the two provided formats, even if the
second one should be preferred because it matches the actual bit depth.
This bug already existed if you tried to compare yuv420p10 against p016
and p010, for example, but it simply hadn't come up before so we never
noticed.
But now, we actually got a situation in the VAAPI VP9 decoder where it
offers both p010 and p012, because Profile 3 content could be either
depth, and it ends up picking p012 for 10 bit content due to the
ordering of the testing.
In addition, in the process of testing the fix, I realised we have the
same gap when it comes to chroma subsampling - we do not favour a
format that has exactly the same subsampling vs one with less
subsampling when all else is equal.
To fix this, I'm introducing a small score penalty if the bit depth or
subsampling doesn't exactly match the source format. This will break
the tie in favour of the format with the exact match, but not offset
any of the other scoring penalties we already have.
I have added a set of tests around these formats which will fail
without this fix.
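The idea of the fix, as a rough sketch (the penalty constant, variable names and the exact place in the scoring are illustrative, not the actual libavutil code):

    /* Break ties in favour of exact matches with the source format. */
    if (dst_desc->comp[0].depth != src_desc->comp[0].depth)
        score -= 2;   /* bit depth differs from the source          */
    if (dst_desc->log2_chroma_w != src_desc->log2_chroma_w ||
        dst_desc->log2_chroma_h != src_desc->log2_chroma_h)
        score -= 2;   /* chroma subsampling differs from the source */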
These tests cover both the demuxer and the muxer
wherever possible. It is not always possible due to the fact
that the muxer supports more codecs than the demuxer.
The spdif demuxer currently does not set the need_parsing flag.
If one were to set this to AVSTREAM_PARSE_FULL, the test results
would change as follows:
- For spdif-aac-remux, the packets are currently padded to 16 bits,
i.e. if the actual packet size is odd, there is a padding byte.
The parser splits this byte away into a one byte packet of its own.
Insanely, these one byte packets get the same duration as normal
packets, i.e. timing is ruined.
- The DCA-remux tests get proper duration/timestamps.
- In the spdif-mp2-remux test the demuxer marks the stream as
being MP2; the parser sets it to MP3 and this triggers
the "Codec change in IEC 61937" codepath; this test therefore
returns only two packets with the parser.
- For spdif-mp3-remux some bytes end up in different packets:
Some input packets of this file have an odd length (417B instead
of 418B like all the other packets) and are padded to 418B.
Without a parser, all returned packets from the spdif-demuxer
are 418B. With a parser, the packets that were originally 417B
are 417B again, but the padding byte has not been discarded,
but added to the next packet which is now 419B.
This fixes "Multiple frames in a packet" warning and avoids
an "Invalid data found when processing input" error when decoding.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This duration is equal to the longest duration in all tracks' tkhd atoms, which
may comprise the sum of all edit lists in each track. Empty edit lists
in tracks represent start_time, and the actual media duration is stored in the
mdhd atom.
This change lets the generic demux code derive the longest track duration taken
from mdhd atoms, so the correct duration and start_time combination will be
reported.
Should fix ticket #9775.
Reviewed-by: zhilizhao(赵志立) <quinkblack@foxmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
The field is not specific to Opus.
The mp2fixed encoder signals initial_padding and is used
by both the matroska-encoding-delay test as well as
the lavf-mkv tests which necessitated several FATE ref changes.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Matroska generally requires timestamps to be nonnegative, but
there is an exception: Data that corresponds to encoder delay
and is not supposed to be output anyway can have a negative
timestamp. This is achieved by using the CodecDelay header
field: The demuxer has to subtract this value from the raw
(nonnegative) timestamps of the corresponding track.
Therefore the muxer has to add this value first to write
this raw timestamp.
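In other words (a tiny sketch; the timestamps and codec_delay are assumed to be in the track timebase and the variable names are made up):

    int64_t raw_ts = pts + codec_delay;     /* muxer: write the raw timestamp    */
    int64_t out_ts = raw_ts - codec_delay;  /* demuxer: recover the original pts */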
Support for writing CodecDelay has been added in FFmpeg commit
d92b1b1bab and in Libav commit
a1aa37dd0b. The former simply
wrote the header field and did not apply any timestamp offsets,
leading to desynchronisation (if one uses multiple tracks).
The latter applied it at two places, but not at the one where
it actually matters, namely in mkv_write_block(), leading to
the same desynchronisation as with the former commit. It furthermore
used the wrong stream timebase to convert the delay to the
stream's timebase, as the conversion used the timebase from
before avpriv_set_pts_info().
When the latter was merged in 82e4f39883,
it was only done in a deactivated state that still did not
offset the timestamps when muxing due to "assertion failures
and av sync errors". a1aa37dd0b
made it definitely more likely to run into assertion failures
(namely if the relative block timestamp doesn't fit into an int16_t).
Yet all of the above issues have been fixed (in commits
962d631573,
5d3953a5dc and
4ebeab15b0). This commit therefore
enables applying CodecDelay, fixing ticket #7182.
There is just one slight regression from this: If one has input
with encoder delay where the first timestamp is negative, but
the pts of the part of the data that is actually intended to be
output is nonnegative, then the timestamps will currently by default
be shifted to make them nonnegative before they reach the muxer;
the muxer will then ensure that the shifted timestamps are retained.
Before this commit, the muxer did not ensure this; instead the
timestamps that the demuxer will output were shifted and
if the first timestamp of the actually intended output was zero
before shifting, then this unintentional shift just cancels
the shift performed before the packet reached the muxer.
(But notice that this only applies if all the tracks use the same
CodecDelay, or the relative sync between tracks will be impaired.)
This happens in the matroska-opus-remux and matroska-ogg-opus-remux
FATE tests. Future commits will forward the information that
the Matroska muxer has a limited capability to handle negative
timestamps so that the shifting in libavformat can take advantage
of it.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It is possible for the trailing padding to be zero,
e.g. if the AV_PKT_DATA_SKIP_SAMPLES side data is used
for leading padding. Matroska supports this (use a negative
DiscardPadding), but players do not; at least Firefox refuses
to play such a file. So for now only write DiscardPadding
if it is trailing padding and nonzero.
The fate-matroska-ogg-opus-remux test was affected by this.
(I wish CodecDelay would not exist and DiscardPadding would
be used to instead trim the codec delay away (with the Block
timestamp corresponding to the time at which the actually
output audio is output).)
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The function contains only two assignments, setting DVVideoContext.avctx
and AVCodecContext.chroma_sample_location. However, the decoder does not
use the former, and the encoder should not be setting the latter.
Therefore move the first assignment to dvenc and the second to dvdec.
Make the encoder warn if the user-signalled chroma sample location does
not match the supported one, and return an error on higher compliance
levels.
These are the formats we want/need to use when dealing with the Intel
VAAPI decoder for 12bit 4:2:0, 12bit 4:2:2, 10bit 4:4:4 and 12bit 4:4:4
respectively.
As with the already supported Y210 and YUVX (XVUY) formats, they are
based on formats Microsoft picked as their preferred 4:2:2 and 4:4:4
video formats, and Intel ran with it.
P12 and Y212 are simply an extension of 10 bit formats to say 12 bits
will be used, with 4 unused bits instead of 6.
XV30 and XV36, as exotic as they sound, are variants of Y410 and Y412
where the alpha channel is left formally undefined. We prefer these
over the alpha versions because the hardware cannot actually do
anything with the alpha channel and respecting it is just overhead.
Y412/XV36 is a normal looking packed 4 channel format where each
channel is 16bits wide but only the 12msb are used (like P012).
Y410/XV30 packs three 10bit channels in 32bits with 2bits of alpha,
like A/X2RGB10 style formats. This annoying layout forced me to define
the BE version as a bitstream format. It seems like our pixdesc
infrastructure can handle the LE version being byte-defined, but not
when it's reversed. If there's a better way to handle this, please
let me know. Our existing X2 formats all have the 2 bits at the MSB
end, but this format places them at the LSB end and that seems to be
the root of the problem.
As we already have support for VUYA, I figured I should do the small
amount of work to support VUYX as well. That means a little refactoring
to share code.
This is the alphaless version of VUYA that I introduced recently. After
further discussion and noting that the Intel vaapi driver explicitly
lists XYUV as a supported format for encoding and decoding 8bit 444
content, we decided to switch our usage and avoid the overhead of
having a declared alpha channel around.
Note that I am not removing VUYA, as this turned out to have another
use, which was to replace the need for v408enc/dec when dealing with
the format.
The vaapi switching will happen in the next change.
IEEE-754 differentiates two different kinds of NaNs:
quiet and signaling ones. They are differentiated by the MSB of the
mantissa.
For whatever reason, actual hardware conversion of half to single always
sets the signaling bit to 1 if the mantissa is != 0, and to 0 if it's 0.
So our code has to follow suit or fate-testing hardware float16 will be
impossible.
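A sketch of the corresponding behaviour in a half-to-single conversion (assuming the usual IEEE binary16/binary32 layouts; this is not the actual table-based code):

    #include <stdint.h>

    /* Convert the NaN/Inf case of an IEEE binary16 value to binary32 bits,
     * mirroring the hardware behaviour described above: for a NaN
     * (mantissa != 0) the MSB of the 23-bit mantissa is forced to 1. */
    static uint32_t half_nan_or_inf_to_float_bits(uint16_t h)
    {
        uint32_t sign = (uint32_t)(h >> 15) << 31;
        uint32_t mant = h & 0x3ff;            /* 10-bit mantissa   */
        uint32_t bits = sign | 0x7f800000;    /* all-ones exponent */
        if (mant)                             /* NaN, not infinity */
            bits |= (mant << 13) | (1u << 22);
        return bits;
    }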
This avoids triggering overflows in the filters, and avoids stray
test failures in the approximate functions on x86; due to rounding
differences, one implementation might overflow while another one
doesn't.
Signed-off-by: Martin Storsjö <martin@martin.st>
It is done later in ff_mpv_frame_start() (and nobody uses
current_picture_ptr before it is set in ff_mpv_frame_start()).
(The reason the vsynth*-h263-obmc ref files change is because
the call to ff_find_unused_picture() now happens after the older
pictures have been unreferenced in ff_mpv_frame_start(),
so that their slots in the picture array can be immediately
reused; the obmc code is somehow buggy and changes its output
depending on the earlier contents of the motion_val buffer.)
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Previously, the checkasm test always passed h=8, so no other cases
were tested.
Out of the me_cmp functions, in practice, some functions are hardcoded
to always assume an 8x8 block (ignoring the h parameter), while others
do use the parameter. For those with hardcoded height, both the
reference C function and the assembly implementations ignore the
parameter similarly.
The documentation for the functions indicates that heights between
w/2 and 2*w, within the range of 4 to 16, should be supported. This
patch just tests random heights in that range, without knowing what
width the current function actually uses.
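A sketch of how such a height can be picked (using checkasm's rnd(); the bounds just restate the documented constraints, and w stands for the block width):

    /* Random height in [max(4, w/2), min(16, 2*w)]. */
    const int hmin = FFMAX(4,  w / 2);
    const int hmax = FFMIN(16, 2 * w);
    const int h    = hmin + rnd() % (hmax - hmin + 1);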
Signed-off-by: Martin Storsjö <martin@martin.st>
Change the reference to exactly match the C reference in swscale,
instead of exactly matching the x86 SIMD implementations (which
differ slightly). Test with and without SWS_ACCURATE_RND - if this
flag isn't set, the output must match the C reference exactly,
otherwise it is allowed to be off by 2.
Mark a couple x86 functions as unavailable when SWS_ACCURATE_RND
is set - apparently this discrepancy hasn't been noticed in other
exact tests before.
Add a test for yuv2plane1.
Signed-off-by: Jonathan Swinney <jswinney@amazon.com>
Signed-off-by: Martin Storsjö <martin@martin.st>
Up until now, ff_wmv2_decode_secondary_picture_header() only
set the mb_type array for non I-pictures, so that the decoding
process uses the earlier values of this array; this affects
the output of the wmv8-x8intra FATE-test (which this patch
therefore updates). These earlier values were set when decoding
earlier frames or when the buffer was initially zero-allocated.
A consequence of this is that the output of this test would be
random if ff_find_unused_picture() selected the unused picture
to return at random. Furthermore decoding from a keyframe onwards
depends upon the earlier state of the decoder.
This patch therefore zeroes said array when decoding an I picture.
(It is not claimed that zero is the right value to fill the array with.
I just don't know.)
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
These tables are only used by encoders and only for the current picture;
ergo they need not be put into the picture at all, but rather into
the encoder's context. They also don't need to be refcounted,
because there is only one owner.
In contrast to this, the earlier code refcounts them which
incurs unnecessary overhead. These references are not unreferenced
in ff_mpeg_unref_picture() (they are kept in order to have something
like a buffer pool), so that several buffers are kept at the same
time, although only one is needed, thereby wasting memory.
The code also propagates references to other pictures not part of
the pictures array (namely the copy of the current/next/last picture
in the MpegEncContext which get references of their own). These
references are not unreferenced in ff_mpeg_unref_picture() (the
buffers are probably kept in order to have something like a pool),
yet if the current picture is a B-frame, it gets unreferenced
at the end of ff_mpv_encode_picture() and its slot in the picture
array will therefore be reused the next time; but the copy of the
current picture also still has its references and therefore
these buffers will be duplicated in order to make them writable
in the next call to ff_mpv_encode_picture(). This is of course
unnecessary.
Finally, ff_find_unused_picture() is supposed to just return
any unused picture and the code is supposed to work with it;
yet for the vsynth*-mpeg4-adap tests the result depends upon
the content of these buffers; given that this patchset
changes the content of these buffers (the initial content is now
the state of these buffers after encoding the last frame;
before this patch the buffers used came from the last picture
that occupied the same slot in the picture array) their ref-files
needed to be changed. This points to a bug somewhere (if one removes
the initialization, one gets uninitialized reads in
adaptive_quantization in ratecontrol.c).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This codepath is enabled by default on arm, if the linux perf API
is available, unless disabled with --disable-linux-perf.
Signed-off-by: Martin Storsjö <martin@martin.st>
The "AYUV" format is defined by Microsoft as their preferred format for
4:4:4 content, and so it is the format used by Intel VAAPI and QSV.
As Microsoft like to define their byte ordering in little-endian
fashion, the memory order is reversed, and so our pix_fmt, which
follows memory order, has a reversed name (VUYA).
Firstly, the timestamps generated from framerate are inaccurate for
variable framerate mode.
Secondly, the timestamps always start from zero, while pts/dts can
start from nonzero. The FLV demuxer rejects such an index with the
message: "Found invalid index entries, clearing the index".
The generated files are endian-dependent, so no checksums
may be part of the ref files.
Fixes ticket #9854.
Tested-by: Sebastian Ramacher <sramacher@debian.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The output files (even the filesizes) of the recently added
EXR tests depend on the endianness; therefore checksums
of these files must not be part of the ref file. This commit
adds an option (unused for now) to disable these
checksums on a per-test basis.
In order to avoid having to check twice, the checksum and
the filesize info are moved to immediately follow one another;
this results in updates to the ref files of all lavf-image tests.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Necessitated by 6ca43a9675
and 425b309fa4.
Reviewed-by: Martin Storsjö <martin@martin.st>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This tests the new "-flags2 icc_profiles" option by making sure the
embedded ICC profile gets correctly detected as sRGB.
Signed-off-by: Niklas Haas <git@haasn.dev>
It conflicts with the name of the test using the testtool
in libavformat.mak.
Fixes ticket #9841.
Reviewed-by: Pierre-Anthony Lemieux <pal@sandflow.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The same issues apply to it as to -shortest.
Changes the results of the following tests:
- matroska-flac-extradata-update
The test reencodes two input FLAC streams into three output FLAC
streams. The last output stream is limited to 8 frames. The current
code results in the first two output streams having 12 frames, after
this commit all three streams have 8 frames and are the same length.
This new result is better, since it is predictable.
- mkv-1242
The test streamcopies one video and one audio stream, video is limited
to 11 frames. The new result shortens the audio stream so that it is
not longer than the video.
The -shortest option (which finishes the output file at the time the
shortest stream ends) is currently implemented by faking the -t option
when an output stream ends. This approach is fragile, since it depends
on the frames/packets being processed in a specific order. E.g. there
are currently some situations in which the output file length will
depend unpredictably on unrelated factors like encoder delay. More
importantly, the present work aiming at splitting various ffmpeg
components into different threads will make this approach completely
unworkable, since the frames/packets will arrive in effectively random
order.
This commit introduces a "sync queue", which is essentially a collection
of FIFOs, one per stream. Frames/packets are submitted to these FIFOs
and are then released for further processing (encoding or muxing) when
it is ensured that the frame in question will not cause its stream to
get ahead of the other streams (the logic is similar to libavformat's
interleaving queue).
These sync queues are then used for encoding and/or muxing when the
-shortest option is specified.
A new option – -shortest_buf_duration – controls the maximum number of
queued packets, to avoid runaway memory usage.
This commit changes the results of the following tests:
- copy-shortest[12]: the last audio frame is now gone. This is
correct, since it actually outlasts the last video frame.
- shortest-sub: the video packets following the last subtitle packet are
now gone. This is also correct.
The amount of padding samples reported by containers takes into account the
extended samplerate in HE-AAC.
Fixes ticket #9671.
Signed-off-by: James Almer <jamrial@gmail.com>
wrapped_avframe_decode() uses an AVFrame as dst in av_frame_move_ref()
after having called ff_decode_frame_props() to attach side data
to this very frame. This leaks all the side data and metadata
that ff_decode_frame_props() has attached.
This happens in various fate-filter-metadata tests since
6ca43a9675.
These particular leaks (which affect only metadata)
could be fixed by not adding metadata side-data to AVPackets
in libavdevice if they are also available from the AVFrames.
Yet this would break users that extract the metadata from
AVPackets.
The changes to FATE happen because of the way av_dict_set()
works when it overwrites an already existing entry:
It overwrites the entry to be overwritten with the last entry
and adds the new entry at the end. The end result is that
the first entry of the dict is the second-to-last entry of
the original dict, the last entry of the dict is the last
entry of the old dict and the first count - 2 entries
of the original dict are at positions 1..count - 2 in their
original order.
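To illustrate the resulting order (a self-contained sketch relying on av_dict_set()'s replace-with-last-entry-then-append behaviour; keys and values are arbitrary):

    #include <libavutil/dict.h>

    /* Start with the order a, b, c, d, then re-set every key in that
     * order. Each overwrite moves the then-last entry into the slot of
     * the overwritten one and appends the new value, so the final order
     * is c, a, b, d: second-to-last entry first, the first count - 2
     * entries next in their original order, last entry still last. */
    static void illustrate_reordering(void)
    {
        static const char *const keys[] = { "a", "b", "c", "d" };
        AVDictionary *dict = NULL;
        for (int i = 0; i < 4; i++)
            av_dict_set(&dict, keys[i], "old", 0);
        for (int i = 0; i < 4; i++)
            av_dict_set(&dict, keys[i], "new", 0); /* overwrite in original order */
        av_dict_free(&dict);
    }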
Reviewed-by: Timo Rothenpieler <timo@rothenpieler.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This avoids an extra copy of potentially quite big video frames.
Instead of copying the entire frame's data into a rawvideo packet it
packs the frame into a wrapped avframe packet and passes it through
as-is.
Unfortunately, wrapped avframes are set up to be video frames, so the
audio frames continue to be copied.
Additionally, this enables passing through video frames that previously
were impossible to process, like hardware frames or other special
formats that couldn't be packed into a rawvideo packet.
For 422 frames we should not use a hardcoded 8 to calculate the mb size for
the uv plane. The chroma shift should be taken into consideration to be
compatible with different sampling formats.
The error is reported by a fate test when av_cpu_max_align() returns 64
on a platform supporting AVX512. This is a hidden error and it is
exposed after commit 17a59a634c.
mpeg2enc has a mechanism to reuse frames. When it computes the SSE (sum of
squared error) on the current mb, the reconstructed mb is written to the
previous mb's space, so that memory can be saved. However if the alignment
is 64, the frame is shared somewhere else, so the frame cannot be
reused and a new frame to store the reconstructed data is created. Because the
height of the mb is wrong when computing the SSE on a 422 frame, starting from
the second line of the macroblock, changed data is read when the frame is reused
(we need to read row 16 rather than row 8 if the frame is 422), and unchanged
data is read when the frame is not reused (a new frame is created so the
original frame will not be changed).
That is why commit 17a59a634c exposes this
issue: it added av_cpu_max_align(), and this function returns 64 on
platforms supporting AVX512, which leads to creating a new frame in mpeg2enc
and thus to different outputs.
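The essence of the fix, sketched (the field name follows MpegEncContext; the surrounding SSE code is not shown):

    /* Chroma block height for the uv planes: 8 for 4:2:0
     * (chroma_y_shift == 1), 16 for 4:2:2 (chroma_y_shift == 0). */
    const int chroma_mb_height = 16 >> s->chroma_y_shift;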
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
Some samples contain Active Format Descriptors, yet no test's output
depends upon them, so they are de facto untested.
So add a dedicated test for them.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Update the still AVIF parser to only read the primary item. With this
patch, AVIF still images with exif/icc/alpha channel will no longer
fail to parse.
For example, this patch enables parsing of files in:
https://github.com/AOMediaCodec/av1-avif/tree/master/testFiles/Microsoft
Adding two fate tests:
1) demuxing of still image with 1 item - this test will pass regardless
of this patch.
2) demuxing of still image with 2 items - this test will fail without
this patch and will pass with patch applied.
Partially fixes trac ticket #7621
Signed-off-by: Vignesh Venkatasubramanian <vigneshv@google.com>
Signed-off-by: James Zern <jzern@google.com>
- ff_pix_abs16_neon
- ff_pix_abs16_xy2_neon
In direct micro benchmarks of these ff functions versus their C implementations,
these functions performed as follows on AWS Graviton 3.
ff_pix_abs16_neon:
pix_abs_0_0_c: 141.1
pix_abs_0_0_neon: 19.6
ff_pix_abs16_xy2_neon:
pix_abs_0_3_c: 269.1
pix_abs_0_3_neon: 39.3
Tested with:
./tests/checkasm/checkasm --test=motion --bench --disable-linux-perf
Signed-off-by: Jonathan Swinney <jswinney@amazon.com>
Signed-off-by: Martin Storsjö <martin@martin.st>
Up until now, updating extradata was very ad-hoc: The amount of
space reserved for extradata was not recorded when writing the
header; instead the AAC code simply presumed that it was enough.
This commit changes this by recording how much space is available.
As a consequence, the code for writing and reserving space
for the CodecPrivate and the code for updating it diverge. They are therefore
split; this allows putting other common tasks like seeking to the
right offset as well as writing padding (in case the new extradata did
not fill the whole reserved space) into a common function.
The code for filling up the reserved space is smarter than the code
it replaces; therefore it is no longer necessary to reserve more
than necessary just to be sure that one can add an EBML Void element
(whose minimum size is two) later on. This is the reason for the change
to the aac-autobsf-adtstoasc test.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This is possible by using a dynamic buffer to write them;
said dynamic buffer is (re)used and reset as appropriate.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The movdqa instruction used in ff_yuv2yuvX_sse3() expects a 16-byte aligned memory address, or else a segfault is generated.
The src_pixels buffer below was not necessarily aligned to 16 bytes on the stack, so we got segfaults during fate-checkasm-sw_scale.
Therefore 16-byte align all of these local variables; over-aligning them shouldn't hurt.
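One way to do this, sketched with libavutil's stack-alignment macro (the buffer size constant is illustrative):

    #include "libavutil/mem_internal.h"

    /* Before: uint8_t src_pixels[INPUT_SIZE]; - stack alignment not guaranteed. */
    LOCAL_ALIGNED_16(uint8_t, src_pixels, [INPUT_SIZE]);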
- Introduce ff_draw_init2, which takes explicit colorspace and range
args
- Use lavu/csp and lavfi/colorspace for conversion, rather than the
lavu/colorspace.h macros
- Use the passed-in colorspace when performing RGB->YUV conversions
The upshot of this is:
- Support for YUV spaces other than BT601
- Better rounding for all conversions
- Particular rounding improvements in >8-bit formats, which previously
used simple left-shifts
- Support for limited-range RGB
- Support for full-range YUV in non-J pixfmts
Due to the rounding improvements, this results in a large number of
minor changes to FATE tests.
Signed-off-by: rcombs <rcombs@rcombs.me>
Fixes CID1396405
MSE and PSNR are slightly improved, and some noticeable corruptions disappear as
well.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Marton Balint <cus@passwd.hu>
The cue_sheet.wv sample contains a cue sheet as APE tags,
yet this is not really covered by fate-wavpack-cuesheet
because the metadata does not affect the output of said test.
So add a proper test for this.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Use the md5 protocol instead of creating a file just to calculate
its MD5 checksum. This is possible because there are no output seeks
involved in any of these tests.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
log2() remains; this can either be replaced by an integer implementation or the table
can be hardcoded if needed.
Tested-by: Anton Khirnov <anton@khirnov.net>
Tested-by: Martin Storsjö <martin@martin.st>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The HEVC decoder can call these functions with smaller widths than the
functions themselves are designed to operate on, so we should only check
the relevant output.
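A sketch of what that means in the test (the names are illustrative; the real test compares the buffers of the DSP function's element type):

    /* Compare only the width actually processed, not the full padded stride. */
    for (int y = 0; y < height; y++)
        if (memcmp(&dst_ref[y * stride], &dst_new[y * stride],
                   width * sizeof(*dst_ref)))
            fail();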
Signed-off-by: J. Dekker <jdek@itanimul.li>
enc_dec() performs two ffmpeg runs - the first one encoding a source
file into a specified output format, the second one decoding previously
encoded file.
The arguments to this function currently have confusing names - e.g.
dec_opt contains _output_ (i.e. encoding) options for the second
(decoding) ffmpeg invocation. It is also possible to supply _input_
(i.e. decoding) options for the second ffmpeg run, but the argument
is currently unnamed and referred to by number.
Add an _in/_out suffix to argument names to make it clear what they are
used for. Give a name to input options for the decoding ffmpeg run.
Since every DLL can use an individual CRT on Windows, having
an exported function that opens a FILE* won't work if that
FILE* is going to be used from a different DLL (or from user
application code).
Internally within the libraries, the issue can be worked around
by renaming the function to ff_fopen_utf8 (so that it doesn't end up
exported from the DLLs) and duplicating it in all libraries that use it
(this duplication already happened implicitly because the function
resided in file_open.c).
This makes the avpriv_fopen_utf8 / ff_fopen_utf8 function work in
the exact same way as the existing avpriv_open / ff_open, with the
same setup as introduced in e743e7ae6e.
That mechanism doesn't work for external users, thus deprecate the
existing function.
Signed-off-by: Martin Storsjö <martin@martin.st>
Do this by making this test a transcode test.
Also fix the test requirements and don't add this test to FATE_AFILTER;
instead use a new variable and a new target for flvenc-tests.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Also add a fate-filter-overlays target containing all these tests
and fix the requirements of the tests; furthermore, remove
unnecessary scale filters from filter-overlay-rgba?_rgba.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Also fix the requirements of these tests: Only the anaglyph
tests need a scale filter, yet it has been inserted for all tests
without any check for its presence.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Lots of tests use the framecrc command together with some filters,
so adding a special function for it seems worthwhile. This commit
adds one new one and modifies an already existing one:
All users of FILTERDEMDEC already use framecrc and the more general
FILTERDEMDECENCMUX can be used in scenarios where more control over
the used encoders/muxers is needed, so use this in cases where
an actual input file is involved.
Furthermore, add FILTERFRAMECRC for the cases where no demuxing/decoding
occurs, because the input is generated via lavfi.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It is unused and given that one needs an encoder to produce
packets from AVFrames (as output by filters) this is likely
to remain so, because FILTERDEMDECENCMUX is better for these
scenarios.
The only case where one can use filters without encoders is
with the lavfi input device: It outputs AVPackets which could
be copied without another conversion to AVFrames. Yet the variable
to check for this is CONFIG_LAVFI_INDEV, but FILTERDEMDECMUX
is designed to work with demuxers (i.e. CONFIG_*_DEMUXER).
So there is no use case for FILTERDEMDECMUX.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
filter-pp and filter-pp7 are the only ones of the filter-pp* tests
that use the file generated by fate-vsynth1-mpeg4-qprd.
Also combine the dependency on this test for all the tests that need it.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The temporary fate-lavf files can easily be removed
if they are not needed as inputs for other tests (mainly
fate-seek-tests). This commit implements this.
The size of the remaining files decreases from 260890083B
to 79481793B.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Extend the ordinary mechanism to signal KEEP for this.
This also allows removing the keep-parameter from enc_dec,
transcode and stream_remux, so that several empty parameters
'""' could be removed.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
These CRC-only files (the output of the CRC-muxer) are only used once,
so they need not be preserved. Furthermore, errors from ffmpeg (used
for creating the CRC) are no longer ignored with this patch.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The output of this test is just a file containing the positions
of peaks; it is not a wave file and trying to demux it just
returns AVERROR_INVALIDDATA; said error has just been ignored
as the return value from do_avconv_crc is the return value from echo.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavc/v210_enc.o
and also allows inlining ff_v210enc_init() irrespective of
interposing.
This dependency pulled basically all of libavcodec into checkasm,
in particular all codecs.
This also makes checkasm work when using shared Windows builds:
On Windows, it needs to be known to the compiler whether a data
symbol is external to the library/executable or not; hence the
need for av_export_avutil. checkasm needs access to the internals
of the libraries it tests and is therefore linked statically to all
the libraries. This means that the users of avpriv_cga_font and
avpriv_vga16_font in libavcodec (namely ansi.o, bintext.o, tmv.o)
end up in the same executable as the symbols, although they have
been compiled as if these symbols were external, leading to linker
errors. With this commit said files are discarded by the linker,
bypassing this problem.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavc/v210_dec.o
and also allows inlining ff_v210dec_init() irrespective of
interposing.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavfi/vf_threshold.o
and also allows inlining ff_threshold_init() irrespective of
interposing.
With this patch checkasm no longer pulls all of lavfi and lavf in.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavfi/vf_nlmeans.o
and also allows inlining ff_nlmeans_init() irrespective of
interposing.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavfi/vf_hflip.o
and also allows inlining ff_hflip_init() irrespective of
interposing.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavfi/vf_gblur.o
and also allows inlining ff_gblur_init() irrespective of
interposing.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This removes a dependency of checkasm on lavfi/vf_blend.o
and also allows inlining ff_blend_init() irrespective of
interposing.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Only the AudioFIRDSPContext and the functions for its initialization
are needed outside of lavfi/af_afir.c.
Also rename the header to af_afirdsp.h to reflect the change.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Accidentally resurrected in fc49f22c3b
and 7711f19eda,
forgotten in 6ebc71847e and
1a6a088f7c or never needed
(filter-aemphasis).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It seems as if it was intended to declare fate-gif-color as prerequisite
of the fate-gifenc% tests. Yet the latter do not need anything from
the former, so this would be unnecessary. Furthermore, given that this
line has no associated recipe, it actually cancels implicit rules for
fate-gifenc% instead of adding a prerequisite.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
These tests have basically nothing to do with VPX (they do not even
require the decoder).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The tests in concatdec.mak reuse files created by tests
from lavf-container. Therefore these tests have the other tests
as prerequisite and mostly duplicate their CONFIG-requirements.
(The mxf_d10 tests did it incorrectly, as they only required
the MXF muxer.) This duplication is of course bad as usual,
so stop it by using the corresponding variable
that contains the non-lavf-container-tests that are enabled
to filter out all the concat-tests without a corresponding enabled
non-concat test.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
These changes are automatically inherited by the fate-seek-tests
based upon lavf-audio.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The new requirements are also automatically inherited
by the FATE_SEEK_LAVF_VIDEO seek-tests.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This automatically fixes the requirements of the fate-seek-acodec*
tests (e.g. 16 of the 27 such tests are now automatically disabled
if the aresample filter is disabled).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This automatically fixes the requirements of the fate-seek-vsynth*
tests (e.g. 16 of the 49 such tests are now automatically disabled
if the scale filter is disabled).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
If one uses a -s option, a scale filter is inserted
even when doing so is redundant. This patch stops
doing so. This makes the tests that don't need libswscale
actually succeed in case it is disabled (only 315 of 470 tests
need it).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Most of the tests in seek.mak use files created by other tests
as input. Therefore these tests have the other tests as prerequisite
and duplicate their CONFIG-requirements. This duplication is of course
bad as usual, so stop it by using the corresponding variable
that contains the non-seek-tests that are enabled to filter out all
the seek-tests without a corresponding enabled non-seek test.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The output files of the lavf tests are highly regular,
allowing the use of rules for the src files instead of a list.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Each of the intermediately generated lena-*.fits files is only used
for exactly one test; so it could be deleted right after the test.
Switching to a transcode test (which is also more natural) achieves
this. It also adds checksums of the intermediate files to the ref-file.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
In particular, add the missing dependency on the scale and
aresample filters (and therefore on libswscale resp. libswresample).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
In particular, add the missing dependency on the scale filter
(and therefore on libswscale).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Intended for scenarios that currently use DEMDEC, but are missing
the requirements that are implicitly needed by framecrc.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Add a parameter that allows adding additional requirements.
Also add FILE_PROTOCOL to all the auxiliary functions
that use a demuxer.
Also fix the requirements for the fate-mpegts-probe-(latm|program)
tests. They have misused DEMDEC.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
In particular, add the missing dependency on the scale filter
(and therefore on libswscale).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Also fix the requirements of fate-mov-channel-description:
It needs the pcm_s16le decoder and the mov demuxer.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
And drop the FATE_CAF_REMUX variables which only existed
to avoid having to repeat the common FILE_PROTOCOL PIPE_PROTOCOL
FRAMECRC_MUXER stuff.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It also adds the missing dependencies on the file and pipe protocols
and the framecrc muxer.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Tests using the transcode and stream_remux functions have some common
requirements (namely the file and pipe protocols as well as the framecrc
muxer) and also other commonalities: They create a file and read it
immediately afterwards, so that they typically rely on a corresponding
muxer+demuxer pair which typically shares the same name; for transcode
(if it does not use stream copy) the same is true for encoders and
decoders. This means that using special Makefile-functions instead
of the general ALLYES is worthwhile. This commit adds such functions.
These functions allow adding arbitrary CONFIG-checks on top of the
aforementioned ones in order to satisfy special needs (for e.g. parsers,
filters) that several intended users have.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The test also requires a png decoder, which often can be disabled in
cross building setups, where zlib might be missing.
Signed-off-by: Martin Storsjö <martin@martin.st>
This is mostly straightforward. The major complication is that, as a
result of the 16-bit chunk size limitation, ICC profiles may need to be
split up into multiple chunks.
We also need to make sure to allocate enough extra space in the packet
to fit the ICC profile, so modify both mpegvideo_enc.c and ljpegenc.c to
take into account this extra overhead, failing cleanly if necessary.
Also add a FATE transcode test to ensure that the ICC profile gets
written (and read) correctly. Note that this ICC profile is smaller than
64 kB, so this doesn't test the APP2 chunk re-arranging code at all.
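For reference, the split is forced by the APP2 payload layout: a 2-byte segment length, the 12-byte "ICC_PROFILE\0" tag, a 1-byte chunk index (1-based) and a 1-byte chunk count, leaving at most 65535 - 2 - 12 - 2 = 65519 profile bytes per segment. A sketch of the chunking arithmetic (not the exact mjpegenc code; profile_size is assumed to hold the ICC profile size in bytes):

    #define ICC_CHUNK_SIZE (65535 - 2 - 12 - 2)

    int nb_chunks = (profile_size + ICC_CHUNK_SIZE - 1) / ICC_CHUNK_SIZE;
    for (int i = 0; i < nb_chunks; i++) {
        int len = FFMIN(ICC_CHUNK_SIZE, profile_size - i * ICC_CHUNK_SIZE);
        /* write APP2 marker, segment length, "ICC_PROFILE\0",
         * chunk index i + 1, chunk count nb_chunks, then len profile bytes */
    }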
Signed-off-by: Niklas Haas <git@haasn.dev>
We re-use the PNGEncContext.zstream for deflate-related operations.
Other than that, the code is pretty straightforward. Special care needs
to be taken to avoid writing more than 79 characters of the profile
description (the maximum supported).
To write the (dynamically sized) deflate-encoded data, we allocate extra
space in the packet and use that directly as a scratch buffer. Modify
png_write_chunk slightly to allow pre-writing the chunk contents like
this.
Also add a FATE transcode test to ensure that the ICC profile gets
encoded correctly.
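For reference, the iCCP chunk body consists of the profile name (1-79 bytes, hence the truncation of the description), a NUL terminator, one compression-method byte (0 = deflate) and the compressed profile. A sketch of writing that header (not the actual pngenc code; p is assumed to point into the scratch buffer and name to hold the description):

    int name_len = FFMIN((int)strlen(name), 79);
    bytestream_put_buffer(&p, (const uint8_t *)name, name_len);
    bytestream_put_byte(&p, 0);   /* name terminator             */
    bytestream_put_byte(&p, 0);   /* compression method: deflate */
    /* ... deflate-compressed ICC profile data follows ... */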
Signed-off-by: Niklas Haas <git@haasn.dev>
On empty input the awk script was always successful which caused the
filter-refcmp tests to always succeed.
Also fix the command lines for the refcmp_metadata compare function because it
needs auto conversion filters, and update the reference of the
filter-refcmp-psnr-rgb test because it was missed in
a7fc78c1a6 but was never noticed due to the
original issue...
Signed-off-by: Marton Balint <cus@passwd.hu>
Calculate Spatial Info (SI) and Temporal Info (TI) scores for a video, as defined
in ITU-T P.910: Subjective video quality assessment methods for multimedia
applications.
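For reference, P.910 defines these per frame and then takes the maximum over time (F_n being the luma plane of frame n):

    SI = max over n { stddev over space [ Sobel(F_n) ] }
    TI = max over n { stddev over space [ F_n - F_(n-1) ] }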
This test deliberately doesn't exercise the full range of inputs described in
the committee draft VC-1 standard. It says:
input coefficients in frequency domain, D, satisfy -2048 <= D < 2047
intermediate coefficients, E, satisfy -4096 <= E < 4095
fully inverse-transformed coefficients, R, satisfy -512 <= R < 511
For one thing, the inequalities look odd. Did they mean them to go the
other way round? That would make more sense because the equations generally
both add and subtract coefficients multiplied by constants, including powers
of 2. Requiring the most-negative values to be valid extends the number of
bits to represent the intermediate values just for the sake of that one case!
For another thing, the extreme values don't look to occur in real streams -
both in my experience and supported by the following comment in the AArch32
decoder:
tNhalf is half of the value of tN (as described in vc1_inv_trans_8x8_c).
This is done because sometimes files have input that causes tN + tM to
overflow. To avoid this overflow, we compute tNhalf, then compute
tNhalf + tM (which doesn't overflow), and then we use vhadd to compute
(tNhalf + (tNhalf + tM)) >> 1 which does not overflow because it is
one instruction.
My AArch64 decoder goes further than this. It calculates tNhalf and tM
then does an SRA (essentially a fused halve and add) to compute
(tN + tM) >> 1 without ever having to hold (tNhalf + tM) in a 16-bit
element. It only encounters difficulties if either tNhalf or
tM overflows in isolation.
I haven't had sight of the final standard, so it's possible that these
issues were dealt with during finalisation, which could explain the lack
of usage of extreme inputs in real streams. Or a preponderance of decoders
that only support 16-bit intermediate values in their inverse transforms
might have caused encoders to steer clear of such cases.
I have effectively followed this approach in the test, and limited the
scale of the coefficients sufficient that both the existing AArch32 decoder
and my new AArch64 decoder both pass.
Signed-off-by: Ben Avison <bavison@riscosopen.org>
Signed-off-by: Martin Storsjö <martin@martin.st>
Note that the benchmarking results for these functions are highly dependent
upon the input data. Therefore, each function is benchmarked twice,
corresponding to the best and worst case complexity of the reference C
implementation. The performance of a real stream decode will fall somewhere
between these two extremes.
Signed-off-by: Ben Avison <bavison@riscosopen.org>
Signed-off-by: Martin Storsjö <martin@martin.st>
tiny_ssim is built for the build host, not for the target platform.
Therefore, it mustn't include the config.h header, which is set up
specifically for the target platform and compiler.
This fixes cross building for older WinStore platforms, where
config.h contains "#define getenv(x) NULL".
Signed-off-by: Martin Storsjö <martin@martin.st>
This avoids unnecessary rebuilds of most source files if only the
list of enabled components has changed, but not the other properties
of the build, set in config.h.
Signed-off-by: Martin Storsjö <martin@martin.st>