Most formats do not support negative timestamps, so shift them to avoid
unexpected behaviour and a number of bad crashes.
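A minimal sketch of the idea (the names and the packet structure here are
illustrative, not the actual muxing code):

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical packet holding only the fields relevant here. */
    typedef struct Packet {
        int64_t pts;
        int64_t dts;
    } Packet;

    /* Shift all timestamps by a constant offset so none of them is
     * negative; the relative timing between packets is preserved. */
    static void shift_timestamps(Packet *pkts, size_t nb_pkts)
    {
        int64_t min_ts = 0;

        for (size_t i = 0; i < nb_pkts; i++)
            if (pkts[i].dts < min_ts)
                min_ts = pkts[i].dts;   /* dts is never later than pts */

        if (min_ts >= 0)
            return;                     /* nothing to shift */

        for (size_t i = 0; i < nb_pkts; i++) {
            pkts[i].pts -= min_ts;
            pkts[i].dts -= min_ts;
        }
    }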
CC:libav-stable@libav.org
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
The sample is already included in the FATE suite, but is not tested
because cropping wasn't fully supported before.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Based on the 2007 GSoC project by Kamil Nowosad <k.nowosad@students.mimuw.edu.pl>.
Updated to current programming standards and style, with many more small
fixes, by Diego Biurrun <diego@biurrun.de>.
Signed-off-by: Diego Biurrun <diego@biurrun.de>
This reverts the parts of d6d5ef5534 that didn't work right. (The
tests that were added failed on big endian, and the output looked
garbled on little endian as well.)
This is because the intermediate scaling values (from
e.g. hScale8To19_c or hScale16To19_c) are stored as int32_t and
thus require a separate output function, while yuv2gbrp_full_X_c
interprets them as int16_t.
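As a toy illustration of the mismatch (not the actual swscale code), reading
32-bit intermediates through a 16-bit pointer picks up half-words instead of
whole values:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* 19-bit intermediates (as produced by e.g. hScale8To19_c),
         * stored in 32-bit elements. */
        int32_t intermediate[4] = { 70000, 120000, 250000, 500000 };

        /* An output function written for 16-bit intermediates would read
         * the same buffer like this; the values it sees are garbage (and
         * depend on endianness). */
        const int16_t *misread = (const int16_t *)intermediate;

        for (int i = 0; i < 4; i++)
            printf("stored:        %d\n", intermediate[i]);
        for (int i = 0; i < 4; i++)
            printf("read as int16: %d\n", misread[i]);
        return 0;
    }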
Signed-off-by: Martin Storsjö <martin@martin.st>
The FATE sample contains some pixels with value 0, but the palette
stored in the file contains only values from 16 up. Because the default
and cmdutils get_buffer() initialize the data to 0x80, those pixels appear
as gray dots.
After this commit they change to black dots, which is probably still
incorrect but less visible and doesn't rely on get_buffer() initializing
the data.
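A rough model of the effect, assuming the file only defines palette entries
from index 16 upwards, so entry 0 keeps whatever the buffer was initialized
with (names and layout here are illustrative):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        uint32_t palette[256];

        /* Old behaviour: get_buffer() filled the data with 0x80 bytes,
         * so the never-written entry 0 reads as 0x808080 (gray). */
        memset(palette, 0x80, sizeof(palette));
        printf("pixel value 0 -> 0x%06X\n", (unsigned)(palette[0] & 0xFFFFFF));

        /* After this commit the unset entries read as zero (black),
         * independent of how get_buffer() initialized the data. */
        memset(palette, 0x00, sizeof(palette));
        printf("pixel value 0 -> 0x%06X\n", (unsigned)(palette[0] & 0xFFFFFF));

        /* Entries 16..255 would be filled from the file here; pixels
         * with value 0 still index entry 0, which the file never sets. */
        return 0;
    }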
This allows us to remove FF_IDCT_WMV2, which serves no practical purpose
other than to be able to select the WMV2 IDCT for MPEG (or vice versa)
and get corrupt output.
FATE results for all wmv2-related tests change, because (for some obscure
reason) they forced use of the MPEG IDCT. You would previously get the same
changes by not using -idct simple in the FATE test (or replacing it
with -idct auto).
Add some additional checks for EOF and print error messages on an incomplete
header or packet.
FATE reference updated for id-cin-video due to the demuxer no longer
returning a partial video packet at EOF.
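A generic sketch of the kind of check being added, using libavformat's
avio_read(); the actual checks in the id CIN demuxer are specific to its
header and packet layout:

    #include <libavformat/avformat.h>

    /* Read a fixed-size header, failing cleanly on a short read instead
     * of silently processing a truncated one. */
    static int read_header_checked(AVFormatContext *s, uint8_t *hdr, int hdr_size)
    {
        int ret = avio_read(s->pb, hdr, hdr_size);
        if (ret != hdr_size) {
            av_log(s, AV_LOG_ERROR, "incomplete header\n");
            return ret < 0 ? ret : AVERROR_EOF;
        }
        return 0;
    }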
Do not overwrite linesize set by get_buffer().
The last frame in the FATE test is not decoded anymore, since the file
is cut and a part of it is missing.
The initial testing of the VFW binary codec was flawed,
likely due to an AviSynth bug.
Re-testing using VirtualDub and various professional editing
applications has revealed that it should have been flipped.
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Each fate-seek test now depends only on the corresponding fate-acodec,
fate-vsynth2 or fate-lavf test, which creates the file the seek test
operates on. The tests and references are renamed to match the test they
depend on.
The following commit will make it useless.
The crop_scale_vflip FATE test changes because of off-by-one differences
in output when vflipped slices are passed to sws.
Otherwise, during scaling it will try to interpret the input in the wrong
way, which leads to the test results disagreeing on different platforms
and with different optimizations.
Signed-off-by: Diego Biurrun <diego@biurrun.de>
This is consistent with stdio and is what we want to do in all cases.
This fixes a bug in the voc muxer, which previously didn't flush in
write_trailer(); that fix is the cause of the change in the test results.
Previously, the value given to put_bits was 10 bits long for positive
predictors, even though 9 bits were to be written. The extra bit could
in some cases overwrite existing bits in the bitstream writer cache.
This fixes a failed assert in put_bits.h when running a version
built with -DDEBUG.
The FATE test result improves slightly, since bits in the bitstream
writer cache are no longer overwritten.
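A standalone illustration of the failure mode with a toy MSB-first
bit-writer cache (not the actual put_bits code): writing a value wider than
the declared bit count ORs its extra top bit onto bits already in the
cache, whereas keeping the value within the declared width leaves them
intact.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy bit writer: pending bits are kept MSB-first in a 32-bit cache. */
    typedef struct {
        uint32_t cache;
        int      bits_used;
    } BitWriter;

    static void write_bits(BitWriter *bw, int n, uint32_t value)
    {
        /* No check that 'value' fits in 'n' bits: if it doesn't, the extra
         * high bits land on top of bits already in the cache. */
        bw->cache    |= value << (32 - bw->bits_used - n);
        bw->bits_used += n;
    }

    int main(void)
    {
        BitWriter a = { 0, 0 }, b = { 0, 0 };

        write_bits(&a, 4, 0xA);            /* 1010 already in the cache     */
        write_bits(&a, 9, 0x225);          /* 10-bit value declared as 9:   */
                                           /* its top bit corrupts the 1010 */

        write_bits(&b, 4, 0xA);
        write_bits(&b, 9, 0x225 & 0x1FF);  /* kept within 9 bits: no damage */

        printf("over-wide value: %08X\n", (unsigned)a.cache);  /* B1280000 */
        printf("9-bit value:     %08X\n", (unsigned)b.cache);  /* A1280000 */
        return 0;
    }

The real put_bits cache works differently, but the effect is the same: a
stray high bit can silently change an earlier field, which is also what the
assert in put_bits.h catches in -DDEBUG builds.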
Signed-off-by: Martin Storsjö <martin@martin.st>
The failures on various architectures and compilers on the RGB(A)
tests seem to have been caused by off-by-one YCbCr->RGB conversion
results. This should make the conversion results match on most if
not all code paths.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
According to its description, it is supposed to be the LCM of all the
frame durations. The usability of such a thing is vanishingly small,
especially since we cannot determine it with any amount of reliability.
Therefore get rid of it after the next bump.
Replace it with the average framerate where it makes sense.
FATE results for the wtv and xmv demux tests change. In the wtv case
this is caused by the file being corrupted (or possibly badly cut) and
containing invalid timestamps. This results in lavf estimating the
framerate incorrectly and making up wrong frame durations.
In the xmv case the file contains pts jumps, so again the estimated
framerate is far from anything sane and lavf again makes up different
frame durations.
In some other tests lavf starts making up frame durations from a different
frame.
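A sketch of the average-framerate idea mentioned above, assuming it is
simply the number of frame intervals divided by the spanned duration (the
actual estimation in lavf is more involved):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Framerates are expressed as rationals, e.g. 30000/1001. */
    typedef struct { int64_t num, den; } Rational;

    /* Average framerate from the first and last timestamps (in units of
     * 'tb' seconds, e.g. 1/90000) and the number of frames seen. */
    static Rational avg_frame_rate(int64_t first_ts, int64_t last_ts,
                                   int64_t nb_frames, Rational tb)
    {
        Rational r = { 0, 1 };
        if (nb_frames > 1 && last_ts > first_ts) {
            r.num = (nb_frames - 1) * tb.den;
            r.den = (last_ts - first_ts) * tb.num;
        }
        return r;                      /* left unreduced for brevity */
    }

    int main(void)
    {
        /* 10 frames spanning 0.36 s in a 1/90000 timebase -> 25 fps. */
        Rational tb  = { 1, 90000 };
        Rational fps = avg_frame_rate(0, 32400, 10, tb);
        printf("%" PRId64 "/%" PRId64 " fps\n", fps.num, fps.den);
        return 0;
    }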
MMX-enabled systems by default use some dsputil functions differing
from the C versions. Adding these flags ensures accurate ones are
used everywhere.
Signed-off-by: Mans Rullgard <mans@mansr.com>
Invented timestamps for the h264 tests return to something resembling
sanity.
In the idroq-video-encode test, when converting 25 fps -> 30 fps, the
fifth frame gets duplicated instead of the sixth.
diff -w is not a standard option. This fixes the reference files
to match what the tests actually output and switches to using the
standard diff -b, which is sufficient to handle different line-ending
styles.
Signed-off-by: Mans Rullgard <mans@mansr.com>