Originally written by Ronald S. Bultje <rsbultje@gmail.com> and
Clément Bœsch <u@pkh.me>
Further contributions by:
Anton Khirnov <anton@khirnov.net>
Diego Biurrun <diego@biurrun.de>
Luca Barbato <lu_zero@gentoo.org>
Martin Storsjö <martin@martin.st>
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
This changes the tests that used the internal hevc checksum to use framecrc
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
Conflicts:
tests/fate/hevc.mak
tests/ref/fate/hevc-conformance-DBLK_A_SONY_3
tests/ref/fate/hevc-conformance-DBLK_B_SONY_3
tests/ref/fate/hevc-conformance-DBLK_C_SONY_3
tests/ref/fate/hevc-conformance-DELTAQP_B_SONY_3
tests/ref/fate/hevc-conformance-DELTAQP_C_SONY_3
tests/ref/fate/hevc-conformance-POC_A_Bossen_3
Merged-by: Michael Niedermayer <michaelni@gmx.at>
The tests are disabled as two of them do not pass yet
(fate-hevc-conformance-PPS_A_qualcomm_7 and fate-hevc-conformance-RAP_A_docomo_4)
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
Here is an extract of fate-samples/sub/vobsub.idx, with additional
text at the end of each line to better identify each bitmap:
timestamp: 00:04:55:445, filepos: 00001b000 Ace!
timestamp: 00:05:00:049, filepos: 00001b800 Wake up, honey!
timestamp: 00:05:02:018, filepos: 00001c800 I gotta go to work.
timestamp: 00:05:02:035, filepos: 00001d000 <???>
timestamp: 00:05:04:203, filepos: 00001d800 Look after Clayton, okay?
timestamp: 00:05:05:947, filepos: 00001e800 I'll be back tonight.
timestamp: 00:05:07:957, filepos: 00001f800 Bye! Love you.
timestamp: 00:05:21:295, filepos: 000020800 Hey, Ace! What's up?
timestamp: 00:05:23:356, filepos: 000021800 Hey, how's it going?
timestamp: 00:05:24:640, filepos: 000022800 Remember what today is? The 3rd!
timestamp: 00:05:27:193, filepos: 000023800 Look over there!
timestamp: 00:05:28:369, filepos: 000024800 Where are they going?
timestamp: 00:05:28:361, filepos: 000025000 <???>
timestamp: 00:05:29:946, filepos: 000025800 Let's go see.
timestamp: 00:05:31:230, filepos: 000026000 I can't, man. I got Clayton.
Note the two "<???>" entries: they are subtitles split from the previous
one, which the dvdsub decoder is now supposed to reconstruct thanks to a
previous commit. But also note that while the first pair has increasing
timestamps,
timestamp: 00:05:02:018, filepos: 00001c800
timestamp: 00:05:02:035, filepos: 00001d000
...this is not the case for the second one (and it is not an exception in the
original file):
timestamp: 00:05:28:369, filepos: 000024800
timestamp: 00:05:28:361, filepos: 000025000
For the dvdsub decoder, they need to be ordered by file position, but the
FFDemuxSubtitlesQueue is ordered by timestamps. This is the reason for
introducing a sub sort method in the context, which allows giving priority
to the position and then to the timestamps. With that change, the dvdsub
decoder gets fed with ordered packets.
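As a rough sketch of that ordering (the struct and field names below are
illustrative, not the actual FFDemuxSubtitlesQueue internals), the sort boils
down to a comparator giving priority to the file position and falling back to
the timestamp:

    #include <stdint.h>

    typedef struct SubEntry {
        int64_t pos;    /* file position, the filepos field of the .idx */
        int64_t pts;    /* timestamp */
    } SubEntry;

    /* Order by file position first, then by timestamp. */
    static int cmp_pos_then_pts(const void *a, const void *b)
    {
        const SubEntry *s1 = a, *s2 = b;
        if (s1->pos != s2->pos)
            return s1->pos < s2->pos ? -1 : 1;
        if (s1->pts != s2->pts)
            return s1->pts < s2->pts ? -1 : 1;
        return 0;
    }

    /* e.g. qsort(entries, nb_entries, sizeof(*entries), cmp_pos_then_pts); */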
The packet size estimation was also broken: the filepos differences in the
vobsub index define the full amount of data read between two subtitle chunks,
and it is necessary to take into account what is read by the
mpegps_read_pes_header() function, since the length returned by that
function doesn't include the size of the data it reads itself. This is fixed
with the introduction of total_read and {old,new}_pos. With this change,
we can drop the unreliable len16 heuristic and simplify the whole loop.
Note that mpegps_read_pes_header() often reads more than one PES packet
(typically, in one call it can read the 0x1ba and 0x1be chunks along with the
relevant 0x1bd packet), which triggers the "total_read + pkt_size >
psize" check. This is expected behaviour, which could be avoided by
having a more chunked version of mpegps_read_pes_header().
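A schematic version of the resulting loop (illustrative names, a stand-in
read_pes_header() instead of the real mpegps_read_pes_header(), and no real
error handling), just to show how total_read and {old,new}_pos account for
the bytes consumed by the header parser:

    #include <stdint.h>

    /* Stand-in for mpegps_read_pes_header(): consumes header bytes (and
     * possibly whole 0x1ba/0x1be chunks), advances *pos past them, and
     * returns the payload size, which does NOT include those bytes. */
    int read_pes_header(int64_t *pos, int *start_code);

    /* psize is the filepos difference between two vobsub index entries,
     * i.e. everything stored between two subtitle chunks. */
    static void read_one_chunk(int64_t pos, int64_t psize)
    {
        int64_t total_read = 0;

        while (total_read < psize) {
            int64_t old_pos = pos;
            int start_code;
            int pkt_size = read_pes_header(&pos, &start_code);

            if (pkt_size < 0)
                break;
            total_read += pos - old_pos;        /* bytes eaten by the header parser */

            if (total_read + pkt_size > psize)  /* parser already crossed into the
                                                   next chunk: expected, stop here */
                break;

            /* ... append pkt_size bytes of 0x1bd payload to the packet ... */
            pos        += pkt_size;
            total_read += pkt_size;
        }
    }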
The last change is the extraction of each stream into its own subtitles
queue. Without this, the maximum size for a subtitle chunk is wrong, and
the previous changes cannot work. Having each stream in a different queue
requires a few small adjustments in the seek code of the demuxer.
This commit is only meaningful as a whole and cannot easily be split. The
FATE test changes because it uses the vobsub demuxer.
Fixes sync in some samples (e.g. bugs 7581 and 8374 in VLC).
Based on a commit by Matthieu Bouron <matthieu.bouron@gmail.com>
Reported-by: Jean-Baptiste Kempf <jb@videolan.org>
CC: libav-stable@libav.org
This may improve compatibility of ljpegs generated by libavcodec.
Also, encoded ljpegs become slightly smaller.
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
* commit 'f1eac2b8a0370b908cd691086d11f51342054730':
movenc: Use keyframes as default fragmentation point in ismv
Merged-by: Michael Niedermayer <michaelni@gmx.at>
For codecs where decoding of a whole plane can simply
be skipped, we should offer applications the option to not decode
alpha, for better performance (ca. 30% less CPU usage
and 40% reduced memory bandwidth).
It also means applications do not need to implement support
(even if it is rather simple) for YUVA formats in order to be
able to play these files.
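A minimal usage sketch from the application side, assuming the feature is
exposed as the "skip_alpha" codec option (set before opening the decoder):

    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>

    static int open_decoder_without_alpha(AVCodecContext *avctx, const AVCodec *codec)
    {
        /* Ask the decoder to skip the alpha plane; decoders that cannot do
         * this cheaply simply ignore the request. */
        av_opt_set_int(avctx, "skip_alpha", 1, 0);
        return avcodec_open2(avctx, codec, NULL);
    }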
Signed-off-by: Reimar Döffinger <Reimar.Doeffinger@gmx.de>
Use it only on subtitle CuePoints.
With proper demuxer/splitter support this should improve the display
of subtitles right after seeking to a given point in the stream.
Signed-off-by: James Almer <jamrial@gmail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
Files won't validate with mkvalidator if these two elements are missing.
Use a const "Lavf" string that won't change with library version bumps.
Signed-off-by: James Almer <jamrial@gmail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
The muxer has been creating files with v4 elements for some time now,
and especially now that we can mux non-experimental Opus files, reporting
the DocTypeVersion as 2 is not correct.
Signed-off-by: James Almer <jamrial@gmail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
The element was only being written when the value == 1. But the default
value of this element is 1, so this has no useful effect. This element
needs to be written when the value == 0.
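Schematically (the helper name and element ID below are hypothetical), the
fix amounts to flipping the condition so that only the non-default value is
written:

    /* The spec default is 1, so writing the element for value == 1 carries
     * no information; only value == 0 needs to be written. */
    if (value == 0)
        put_ebml_uint(pb, ELEMENT_ID, 0);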
Signed-off-by: Anton Khirnov <anton@khirnov.net>
The fate tests change as they used 1.2 previously
The increased size is due to:
- 32-bit CRCs per slice by default (can be disabled),
- slice headers that allow decoding one slice without the others,
- an additional slice size field that makes it possible to find slices
  within corrupted surroundings.
These add up to about 57 bits more per slice; at 50 frames and 4 slices
that's 1425 bytes.
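Spelled out, that estimate is:

    57 bits/slice * 4 slices/frame * 50 frames = 11400 bits = 1425 bytes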
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
QuickTime will play multiple audio tracks concurrently if this flag is
set for multiple audio tracks. And if no subtitle track has this flag
set, QuickTime will show no subtitles in the subtitle menu.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Update the fate reference since the last broken frame is not decoded
anymore.
Reported-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
CC: libav-stable@libav.org
The bug it was working around seems to have been fixed.
This change causes ffmpeg to use the trim filter to implement
the -t option.
FATE tests are updated due to the more accurate handling of
the last packets.
For the n0=0 case there are multiple solutions and different
platforms pick different ones.
This should reduce the issues with FATE and the timefilter test.
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
The option is used to sort the streams by program.
Signed-off-by: Florent Tribouilloy <florent.tribouilloy@smartjog.com>
Signed-off-by: Stefano Sabatini <stefasab@gmail.com>
This is a minimal change to matroskaenc that implements CueRelativePosition in the output.
Most players will probably ignore this additional information, but it is in the
matroska spec, and it'd be nice to be able to make use of it.
Signed-off-by: Bernt Habermeier <bernt@wulfram.com>
Tested-by: wm4 <nfxjfg@googlemail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
Tags must have at least one SimpleTag element to be spec conformant.
Updated lavf-mkv and seek-lavf-mkv FATE references as the tests were affected by
this.
Fixes ticket #2785
Signed-off-by: James Almer <jamrial@gmail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
When operating on subsampled chroma planes, some rounding is taking
place. The left and top borders are rounded down while the width and
height are rounded up, so all rounding is done outward to guarantee the
logo area is fully covered.
The problem is that the width and height are counted from the
unrounded left and top borders, respectively. So if the left or top
border position has indeed been rounded down, and the width or height
needs no rounding (up), the position of the right or bottom border
will be effectively rounded down, i.e. inward.
The issue can easily be seen with a yuv420p input and
-vf delogo=45:45:60:40:show=1 -vframes 1 delogo-bug.png
(or virtually any logo area with odd x and y and even width and
height.) The right and bottom chroma borders (in green) are clearly
off.
In order to fix this, the width and height must be adjusted to include
the bits lost in the rounding of the left and top border positions,
respectively, prior to being themselves rounded up.
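A minimal sketch of that adjustment (illustrative names, not the filter's
actual variables), where hsub/vsub are the chroma subsampling shifts of the
plane being processed:

    /* Compute the chroma-plane rectangle covering the full-resolution logo
     * area, rounding every border outward. */
    static void chroma_logo_rect(int logo_x, int logo_y, int logo_w, int logo_h,
                                 int hsub, int vsub,
                                 int *x, int *y, int *w, int *h)
    {
        *x = logo_x >> hsub;                              /* round left/top down */
        *y = logo_y >> vsub;
        /* Round right/bottom up by working from the unrounded far edges, so
         * the width/height absorb whatever was lost when rounding the left
         * and top positions down. */
        *w = ((logo_x + logo_w + (1 << hsub) - 1) >> hsub) - *x;
        *h = ((logo_y + logo_h + (1 << vsub) - 1) >> vsub) - *y;
    }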
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
The original delogo algorithm interpolates both horizontally and
vertically and uses the average to compute the resulting sample. This
works reasonably well when the logo area is almost square. However,
when the logo area is significantly wider than it is high, or higher than
it is wide, the result is largely suboptimal.
The issue can be clearly seen by testing the delogo filter with a fake
logo area that is 200 pixels wide and 2 pixels high. Vertical
interpolation gives a very good result in that case, horizontal
interpolation gives a very bad result, and the overall result is poor,
because both are given the same weight.
Even when the logo is roughly square, the current algorithm gives poor
results on the borders of the logo area, because it always gives
horizontal and vertical interpolations an equal weight, and this is
suboptimal on borders. For example, in the middle of the left hand
side border of the logo, you want to trust the left known point much
more than the right known point (which the current algorithm already
does) but also much more than the top and bottom known points (which
the current algorithm doesn't do.)
By properly weighting each known point when computing the value of
each interpolated pixel, the visual result is much better, especially
on borders and/or for high or large logo areas.
The algorithm I implemented guarantees that the weight of each of the
4 known points directly depends on its distance to the interpolated
point. It is largely inspired by the original algorithm, the key
difference being that it computes the relative weights globally
instead of separating the vertical and horizontal interpolations and
combining them afterward.
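As a rough illustration of the idea (a simple inverse-distance weighting,
not the filter's actual formula), with left/right/top/bottom being the known
border samples on the same row and column as the interpolated pixel and
dl/dr/dt/db its distances to them, all assumed to be >= 1:

    static int interpolate_weighted(int left, int right, int top, int bottom,
                                    int dl, int dr, int dt, int db)
    {
        /* Weight each known point by the inverse of its distance, so that
         * near a border the closest known sample dominates the result. */
        double wl = 1.0 / dl, wr = 1.0 / dr, wt = 1.0 / dt, wb = 1.0 / db;

        return (int)((left * wl + right * wr + top * wt + bottom * wb) /
                     (wl + wr + wt + wb) + 0.5);
    }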
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Stefano Sabatini <stefasab@gmail.com>
Also replace custom tests for MD5 with those published in RFC 2202
Signed-off-by: James Almer <jamrial@gmail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
The FATE tests change because the edge mirroring was wrong before this commit
Reviewed-by: Nicolas BERTRAND <nicoinattendu@gmail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
Adding an arbitrary amount of padding bytes at the end of the
ID3 metadata fixes cover art display for some software (iTunes,
Traktor, Serato, Torq).
For reference (ID3 metadata):
[ Apic frames ] -> cover doesn't show up
[ Apic frames, Padding ] -> ok
[ Apic frames, ID3 frames ] -> ok
[ ID3 frames, Apic frames ] -> cover doesn't show up
[ ID3 frames, Apic frames, Padding ] -> ok
The quantization code needs more work: not so much merging work,
but more work investigating what is correct.
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
This more evenly distributes the load between threads
This also fixes the chroma filtering where the filter was applied twice
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
Fixes out of array writes
No FFmpeg release is affected by this
This also fixes some artifacts
Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
* commit '8e673efc6f5b7a095557664660305148f2788d30':
prores: update FATE test to account for alpha plane present in the test sample
configure: Add basic valgrind-massif support
Conflicts:
tests/fate/prores.mak
tests/ref/fate/prores-alpha
Merged-by: Michael Niedermayer <michaelni@gmx.at>
According to the PIFF specification[1] the base_data_offset field MUST be
omitted; see section 5.2.17. Since the ISMV files created by ffmpeg state
that they are 'piff' compatible via the 'ftyp' box, this needs to be corrected.
[1] http://www.iis.net/learn/media/smooth-streaming/protected-interoperable-file-format
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
This replaces a large number of checks for the second field by
fixing the pointers when they are setup.
This should also fix I/BI field pictures.
Changes checksums for vc1_sa10143; the output becomes slightly closer
to what the reference decoder produces.
Based on "vc1dec: the second field is written wrong to the picture"
by Sebastian Sandberg <sebastiand.sandberg@gmail.com>.
Signed-off-by: Martin Storsjö <martin@martin.st>
This is the first 2 MB of the official test7.mkv.
That length seems to be enough to detect the bugs
we had in our code so far.
Signed-off-by: Reimar Döffinger <Reimar.Doeffinger@gmx.de>
* commit 'e036bb7899d0faca9159206be9bf5552e76e7633':
lavc: clear AVBuffers on decoded frames if refcounted_frames is not set
FATE: add an additional indeo3 test
Merged-by: Michael Niedermayer <michaelni@gmx.at>
This makes -t sample-accurate for audio and will allow further
simplification in the future.
Most of the FATE changes are due to audio now being sample accurate. In
some cases a video frame was incorrectly passed with the old code, while
it was over the limit.