This reverts the parts of d6d5ef5534 that didn't work correctly. (The
tests that were added failed on big endian, and the output looked
garbled on little endian as well.)
This is because the intermediate scaling values (from e.g. hScale8To19_c
or hScale16To19_c) are stored as int32_t and thus require a separate
output function, while yuv2gbrp_full_X_c interprets them as int16_t.
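As a minimal standalone illustration (not the actual swscale code) of why
that reinterpretation garbles the output and differs between endiannesses:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* A 19-bit intermediate value as produced by the 8/16 -> 19 bit
         * horizontal scalers, stored as int32_t. */
        union { int32_t i32; int16_t i16[2]; } v = { .i32 = 0x0004cafe };

        /* Interpreting it as int16_t reads the low half-word on little
         * endian (0xcafe -> -13570) but the high half-word on big endian
         * (0x0004 -> 4): garbled output, and different per endianness. */
        printf("%d\n", v.i16[0]);
        return 0;
    }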
Signed-off-by: Martin Storsjö <martin@martin.st>
The FATE sample contains some pixels with value 0, but the palette
stored in the file contains only values from 16 up. Because the default
and cmdutils get_buffer() initialize the data to 0x80, those pixels
appear as gray dots.
After this commit they change to black dots, which is probably still
incorrect but less visible and doesn't rely on get_buffer() initializing
the data.
This allows us to remove FF_IDCT_WMV2, which serves no practical purpose
other than to be able to select the WMV2 IDCT for MPEG (or vice versa)
and get corrupt output.
All wmv2-related FATE tests change, because (for some obscure
reason) they forced use of the MPEG IDCT. You would previously get the
same changes by not using -idct simple in the FATE test (or replacing it
with -idct auto).
Add some additional checks for EOF and print error messages on an incomplete
header or packet.
FATE reference updated for id-cin-video due to the demuxer no longer
returning a partial video packet at EOF.
Do not overwrite linesize set by get_buffer().
The last frame in the FATE test is not decoded anymore, since the file
is cut and a part of it is missing.
The initial testing of the VFW binary codec was flawed,
likely due to an AviSynth bug.
Re-testing using VirtualDub and various professional editing
applications has revealed it should have been flipped.
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Each fate-seek test now depends only on the corresponding fate-acodec,
fate-vsynth2 or fate-lavf test, which creates the file the seek test
operates on. The tests and references are renamed to match the test they
depend on.
The following commit will make it useless.
The crop_scale_vflip FATE test changes because of off-by-one differences
in output when vflipped slices are passed to sws.
Otherwise, scaling will interpret the input in the wrong way, which
leads to the test results disagreeing on different platforms and with
different optimizations.
Signed-off-by: Diego Biurrun <diego@biurrun.de>
This is consistent with stdio and is what we want to do in all cases.
Fixes a bug in the voc muxer, which previously didn't flush in
write_trailer(). This is the cause of the change in the test results.
Previously, the value given to put_bits was 10 bits long for positive
predictors, even though 9 bits were to be written. The extra bit could
in some cases overwrite existing bits in the bitstream writer cache.
This fixes a failed assert in put_bits.h, when running a version
built with -DDEBUG.
The FATE test result gets slightly improved, thanks to getting rid
of the overwritten bits in the bitstream writer cache.
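A toy sketch (not the actual put_bits.h code) of how an over-wide value
clobbers bits already in the writer cache:

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t cache;  /* accumulated bits, MSB first */
    static int      filled; /* number of valid bits in cache */

    /* Like many bitstream writers, this ORs the raw value into the cache
     * and trusts the caller that value < (1 << n). */
    static void put_bits_toy(int n, uint32_t value)
    {
        cache  |= value << (32 - filled - n);
        filled += n;
    }

    int main(void)
    {
        put_bits_toy(3, 0x4);   /* writes 100 */
        put_bits_toy(9, 0x200); /* 10-bit value passed with n = 9 */
        /* Bit 9 of 0x200 spills into the previous field, which now
         * reads back as 101 instead of 100. */
        printf("cache = 0x%08x\n", cache); /* 0xa0000000 */
        return 0;
    }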
Signed-off-by: Martin Storsjö <martin@martin.st>
The failures on various architectures and compilers on the RGB(A)
tests seem to have been caused by off-by-one YCbCr->RGB conversion
results. This should make the conversion results match on most if
not all code paths.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
According to its description, it is supposed to be the LCM of all the
frame durations. The usefulness of such a value is vanishingly small,
especially since we cannot determine it with any amount of reliability.
Therefore get rid of it after the next bump.
Replace it with the average framerate where it makes sense.
FATE results for the wtv and xmv demux tests change. In the wtv case
this is caused by the file being corrupted (or possibly badly cut) and
containing invalid timestamps. This results in lavf estimating the
framerate incorrectly and making up wrong frame durations.
In the xmv case the file contains pts jumps, so again the estimated
framerate is far from anything sane and lavf again makes up different
frame durations.
In some other tests lavf starts making up frame durations from a
different frame.
MMX-enabled systems by default use some dsputil functions that differ
from the C versions. Adding these flags ensures the accurate ones are
used everywhere.
Signed-off-by: Mans Rullgard <mans@mansr.com>
Invented timestamps for the h264 tests return to something resembling
sanity.
In the idroq-video-encode test when converting 25 fps -> 30 fps the
fifth frame gets duplicated instead of the sixth.
diff -w is not a standard option. This fixes the reference files
to match what the tests actually output and switches to using the
standard diff -b, which is sufficient to handle different line-ending
styles.
Signed-off-by: Mans Rullgard <mans@mansr.com>
The codec (adpcm-ima-ws) is tested elsewhere. Using framecrc output
provides more information than a single md5 if something goes wrong.
Signed-off-by: Mans Rullgard <mans@mansr.com>
Converting the double to float for lrintf() loses precision when
the value is not exactly representable as a single-precision float.
Apart from being inaccurate, this causes discrepancies in some
configurations due to differences in rounding.
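A minimal sketch of the discrepancy; the constant is illustrative, not
taken from the affected code:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double ts = 16777217.0; /* 2^24 + 1: not representable as float */

        /* Narrowing to float first rounds ts to 16777216.0, so lrintf()
         * loses the last unit while lrint() keeps full precision. */
        printf("%ld %ld\n", lrintf((float)ts), lrint(ts));
        /* prints: 16777216 16777217 */
        return 0;
    }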
Note that the changed timestamp in the vc1-ism test is a bogus,
made-up value.
Signed-off-by: Mans Rullgard <mans@mansr.com>
This partially reverts acb1730218
which would only have needed to change the checksums if channel mixing had
been properly avoided. This changes the output file size reference and the
seek test reference back to the previous values.
Reduces the amount of upfront data required for cluster parsing,
thus decreasing latency on seek and startup.
The change in the seek-lavf_mkv FATE test is due to incremental
parsing no longer reading as much data as the old parser and
thus not having that additional data to generate index entries
based on keyframes. Index entries are added correctly as the
file is parsed.
All FATE tests pass and Chrome has been using this patch for ~6
months without issue.
Currently incremental parsing is not supported for files with
SSA tracks since they require merging packets between clusters.
In this case the code falls back to non-incremental parsing.
Signed-off-by: Aaron Colwell <acolwell@chromium.org>
Signed-off-by: Dale Curtis <dalecurtis@chromium.org>
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Avoids resampling and channel mixing. This only tests the behavior
with respect to input and output audio rather than also testing changes
to the encoder or muxer that do not affect the resulting decoded output.
This way we don't require a clearly defined corresponding input stream.
The result for the xwd test changes because rgb24 is now chosen instead
of bgra.
If either input or output layout is known and the channel counts match,
use the known layout for both. Otherwise choose the default layout based on
av_get_default_channel_layout().
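A hedged sketch of that selection rule; av_get_default_channel_layout()
is the libavutil function named above, while the helper and its callers
are hypothetical:

    #include <libavutil/channel_layout.h>

    static void choose_layouts(uint64_t *in_layout,  int in_channels,
                               uint64_t *out_layout, int out_channels)
    {
        /* If either side already knows its layout and the channel counts
         * match, propagate the known layout to the other side. */
        if (in_channels == out_channels) {
            if (*in_layout && !*out_layout)
                *out_layout = *in_layout;
            else if (*out_layout && !*in_layout)
                *in_layout = *out_layout;
        }
        /* Otherwise fall back to the default layout for the count. */
        if (!*in_layout)
            *in_layout  = av_get_default_channel_layout(in_channels);
        if (!*out_layout)
            *out_layout = av_get_default_channel_layout(out_channels);
    }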
Changed some FATE references due to some WAVE files now having a non-zero
channel mask.
Also, do not give AVCodecContext.frame_size priority for muxing.
Updated 2 FATE references:
dxa-feeble - adds 1 audio frame that is still within 2 seconds as specified
by -t 2 in the FATE test
wmv8-drm-nodec - durations are not needed. Previously they were estimated
using the packet size and average bit rate.
It is unnecessary. Also, for some codecs we're reading more than 1 frame per
packet. Instead we use a private context variable to calculate the bit rate,
stream duration, and packet durations.
Updated FATE seek test, which has slightly different timestamps due to a
more accurate bit rate calculation.
Split off packet parsing into a separate function. Parse full packets at
once and store them in a queue, eliminating the need for tracking
parsing state in AVStream.
The horrible unreadable loop in read_frame_internal() now isn't weirdly
ordered and doesn't contain evil gotos, so it should be much easier to
understand.
compute_pkt_fields() now invents slightly different timestamps for two
raw vc1 tests, due to has_b_frames being set a bit later. They shouldn't
be more wrong (or right) than previous ones.
We need to set ms_stereo in encode_init() in order to avoid incorrectly
encoding the first frame as non-m/s while flagging it as m/s. Fixes an
uncomfortable pop in the left channel at the start of playback.
CC:libav-stable@libav.org
Fixes timestamp calculation.
The FATE reference is updated because timestamp calculations are now more
accurate. Previous timestamps were based on average bit rate.
This fixes clipping if the encoder input used the full 16 bit
input range (samples with a magnitude below 16383 worked fine).
The filtered subband samples should be 15 bit maximum, while
the code earlier produced them scaled to 16 bit.
This makes the decoder output have double the magnitude
compared to before.
The spec reference samples don't test the QMF at all, which
was why this part slipped past initially.
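A hypothetical sketch of the kind of rescaling this implies; the helper
and constants are illustrative, not the actual code:

    #include <stdint.h>

    /* Clamp a filtered subband sample to the 15-bit range the rest of
     * the codec expects; the earlier code left it at 16-bit scale, so
     * full-range 16-bit input overflowed. Purely illustrative. */
    static int16_t subband_scale_sketch(int32_t filtered)
    {
        int32_t s = filtered >> 1;  /* 16-bit scale -> 15-bit scale */
        if (s >  16383) s =  16383; /* 15-bit signed maximum */
        if (s < -16384) s = -16384;
        return (int16_t)s;
    }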
Signed-off-by: Martin Storsjö <martin@martin.st>
The container has no timestamps and the framerate isn't stored in the
data either.
The decoder sets the codec timebase to the experimentally found value
of 1/15. Do the same in the demuxer too; it should at least be better
than the default of 1/90000.
ProRes codes chroma blocks in 444 mode in a different order than luma
blocks, so make both the decoder and encoder read/write chroma blocks in
the right order.
Reported by Phil Barrett
WavPack has a comprehensive test suite, and a bunch
of corner cases.
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Signed-off-by: Ronald S. Bultje <rsbultje@gmail.com>
r_frame_rate should in theory have something to do with input framerate,
but in practice it is often made up from thin air by lavf. So unless we
are targeting a constant output framerate, it's better to just use the
input stream timebase.
Brings back the frames in the nuv and cscd tests that were dropped
since cd1ad18a65.
It is not supposed to be done outside lavc.
This is basically a revert of 818062f2f3.
It is unclear what issue this was supposed to fix; if it reappears,
it will have to be fixed in a more proper place.
The wtv-demux test change is because the sample starts with a B-frame.
According to unofficial documentation, the video rate is locked to the audio
sample rate. This results in proper synchronization of audio and video
timestamps from the demuxer. This only works if the first audio packet occurs
before the first video packet or the audio sample rate is the default rate of
11111 Hz, both of which are true for all samples in our archive.
Update FATE reference to account for now non-existent palette packet.
This also fixes the FATE test if frame data is not initialized in
get_buffer(), so update comment in avconv accordingly.
This changes a number of FATE results, since before this commit, the
timestamps in all tests using rawenc were made up by lavf.
In most cases, the previous timestamps were completely bogus.
In some other cases -- raw formats, mostly h264 -- the new timestamps
are bogus as well. The only difference is that timestamps invented by
the muxer are replaced by timestamps invented by the demuxer.
cscd -- avconv sets output codec timebase from r_frame_rate
and r_frame_rate is in this case some guessed number 31.42 (377/12),
which is not accurate enough to represent all timestamps. This results
in some frames having duplicate pts. Therefore, vsync 0 needs to be
changed to vsync 2 and avconv drops two frames. A proper fix in the
future would be to set output timebase to something saner in avconv.
nuv -- previous timestamps for video were wrong AND the cscd
comment applies, one frame is dropped.
vp8-signbias -- the file contains two frames with identical timestamps,
so -vsync 0 needs to be removed/changed to -vsync 2 and avconv drops one
frame.
vc1-ism -- apparently either the demuxer lies about timestamps or the
file is broken, since dts == pts on all packets, but reordering clearly
takes place.
Current code compares the desired recording time with InputStream.pts,
which has a very unclear meaning. Change the code to use actual
timestamps of the frames passed to the encoder.
In several tests, one less frame is encoded, which is more correct.
In the idroq test one more frame is encoded, which is again more
correct.
Behavior with stream copy should be unchanged.
The output is obviously not supposed to contain video (since only
-acodec copy is specified), but that only happens because of the way -t
handling is currently implemented.
Right now those muxers use the default timebase in all cases (1/90000).
This patch avoids unnecessary rescaling and makes the printed timestamps
more readable.
Also, extend the printed information to include the timebases and packet
pts/duration and align the columns.
Obviously changes the results of all fate tests which use those two
muxers.
Return the correct number of consumed bytes and set *data_size = 0.
The returned size was 1 byte too small, leading to that byte being read
as the start of the next frame, which resulted in an extra blank frame at
the beginning of the stream.
get_ue_golomb_long() is only tested for values up to 2^15 - 2, since
we cannot write larger values.
Silence the test on success and return a non-zero value on error.
Use a heap scratch buffer instead of a large stack buffer.
Remove unneeded includes.
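For reference, a minimal standalone sketch of the unsigned Exp-Golomb
(ue(v)) coding that get_ue_golomb_long() reads; the bit reader here is
hypothetical and only illustrates the value/length relationship being
tested:

    #include <stdint.h>
    #include <stdio.h>

    static unsigned decode_ue(const uint8_t *buf, unsigned *bitpos)
    {
        /* Count leading zero bits up to the terminating 1 bit. */
        int leading_zeros = 0;
        while (!((buf[*bitpos >> 3] >> (7 - (*bitpos & 7))) & 1)) {
            leading_zeros++;
            (*bitpos)++;
        }
        (*bitpos)++; /* skip the terminating 1 bit */

        /* Read as many suffix bits as there were leading zeros. */
        unsigned value = 0;
        for (int i = 0; i < leading_zeros; i++) {
            value = (value << 1) |
                    ((buf[*bitpos >> 3] >> (7 - (*bitpos & 7))) & 1);
            (*bitpos)++;
        }
        return (1u << leading_zeros) - 1 + value;
    }

    int main(void)
    {
        /* 0001 010... -> 3 leading zeros, suffix 010 -> 2^3 - 1 + 2 = 9 */
        const uint8_t buf[] = { 0x14, 0x00 };
        unsigned pos = 0;
        printf("%u\n", decode_ue(buf, &pos)); /* 9 */
        return 0;
    }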
This uses the old demuxing code for OP1a and separate demuxing code for OPAtom.
Timestamp output is added to the old demuxing code.
The seeking code is made to seek to the start of the desired EditUnit only,
from which the normal demuxing code takes over (if OP1a). This means we
do not use delta entries or slices, only StreamOffsets. OPAtom seeking
basically works like before.
This also makes D-10 seeking behave the same way as OP1a and OPAtom. In other
words, we allow seeking before the start or past the end for D-10 too.
Based on several patches by Tomas Härdin <tomas.hardin@codemill.se> and
Reimar Döffinger <Reimar.Doeffinger@gmx.de>.
Changed av_calloc to av_mallocz, added overflow checks.
(Does not attempt to decode the perceptual audio data inside.)
Code coverage: libavformat/xwma.c: 3% -> 75%
Signed-off-by: Ronald S. Bultje <rsbultje@gmail.com>
(Don't attempt to decode JPEG data.)
Code coverage: libavformat/smjpeg.c: 0% -> 69%
libavcodec/adpcm.c: 0% -> 10% (fresh run); 92.4% -> 93% following a FATE run
Signed-off-by: Ronald S. Bultje <rsbultje@gmail.com>
The previous sample used for this test only contained type 0 frames.
Replace it with a sample that also features type 1 frames.
Code coverage:
libavcodec/xxan.c: 72% -> 89%
Signed-off-by: Ronald S. Bultje <rsbultje@gmail.com>
The palette is stored in native endianness, as intended. Converting the
pal8 output to rgb24 is thus necessary to get identical CRCs on big- and
little-endian systems.
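A hedged sketch of why the conversion makes the CRCs endian-independent;
the helper is hypothetical and assumes 0xXXRRGGBB palette entries:

    #include <stddef.h>
    #include <stdint.h>

    /* Expanding pal8 to rgb24 extracts the palette components by value,
     * so the byte order of the native-endian uint32_t entries no longer
     * affects the bytes that get hashed. */
    static void pal8_to_rgb24(uint8_t *dst, const uint8_t *src,
                              const uint32_t *pal, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            uint32_t c = pal[src[i]]; /* 0xXXRRGGBB, native endianness */
            *dst++ = c >> 16;         /* R */
            *dst++ = c >>  8;         /* G */
            *dst++ = c;               /* B */
        }
    }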
Some libavfilter tests use NUT as the output format even though the
produced files were not decodable. The 10-bit support introduced in
432f0e5b7d and 91b1e6f0c changed the hashes.
The sample has an incomplete last frame. Decoding it is pointless.
The garbage produced was changed by the bitstream reader now
protecting against over-reads.
Signed-off-by: Mans Rullgard <mans@mansr.com>
This patch is a generalization of what Michael Niedermayer
fixed in a single case.
The wmv8-drm FATE test has been updated accordingly.
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Use Sound Sample Description Version 2 for all MOV files.
Updated FATE references accordingly.
Note that ADPCM is treated as compressed audio in version 2.
AVFMT_NOTIMESTAMPS for crc, as it ignores the timestamps.
AVFMT_VARIABLE_FPS for framecrc, as it prints dts.
Many FATE changes, because avconv is no longer duplicating frames in
those tests.
Also added -vsync 0 for some tests to prevent avconv from dropping
frames until it can be fixed more properly.
The Zork PCM decoder does not correctly decode the one sample we have; therefore
the encoder based on the decoder is also incorrect. There is no good reason to
keep the encoder.