Fixes division by 0 in fate-acodec-ra144
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 635b2ec5f2)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Avoids unexpected occurrences of, and dependency on, NaN behavior and divisions by 0
Testcase: fate-lavf-fate-avi_cram
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 6085d6b2ae)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit feeb3a9261)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 2f76157eb0)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This fixes the integer coefficients summing to a value larger than the
value representing unity.
This issue occurs with qN0.dts when converting to stereo
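As a rough, hypothetical sketch of the normalization idea (COEFF_UNITY, coefs
and n are made-up names, not the actual decoder code): if the rounded integer
coefficients sum to more than the value representing unity, rescale them so
the sum stays within it.

#include <stdint.h>

#define COEFF_UNITY (1 << 13)   /* hypothetical fixed-point value representing 1.0 */

/* Rescale rounded integer coefficients so their sum cannot exceed unity. */
static void normalize_coeffs(int32_t *coefs, int n)
{
    int64_t sum = 0;
    int i;
    for (i = 0; i < n; i++)
        sum += coefs[i];
    if (sum > COEFF_UNITY)
        for (i = 0; i < n; i++)
            coefs[i] = (int32_t)((int64_t)coefs[i] * COEFF_UNITY / sum);
}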
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 7fe81bc4f8)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Will Kelleher <wkelleher@gogoair.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 964f07f68e)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Leaking this private structure opens up the possibility that it may
be re-used when parsing later packets in the stream. This is
problematic if the later packets are not the same codec type (e.g.
private allocated during Vorbis parsing, but later packets are Opus
and the private is assumed to be the oggopus_private type in
opus_header()).
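As a minimal sketch of the fix pattern (struct and field names here are
hypothetical, not the oggdec internals): free the per-stream private data when
its header parser fails, so a later packet of a different codec never sees a
stale allocation.

#include <stdlib.h>

/* Hypothetical per-stream state holding codec-private data. */
struct stream_state {
    void *codec_private;
};

/* Drop the codec-private data on header-parse failure instead of leaking it. */
static int parse_header(struct stream_state *st,
                        int (*header_fn)(struct stream_state *))
{
    int ret = header_fn(st);
    if (ret < 0) {
        free(st->codec_private);
        st->codec_private = NULL;
    }
    return ret;
}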
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 542f725964)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Larger values would imply file durations of astronomical proportions and cause
overflows
Fixes integer overflow
Fixes: usan_int64_overflow
Found-by: Thomas Guilbert <tguilbert@google.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 8efaee3710)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: IMG-20160418-WA0002.jpg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit deaf58abf2)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes Ticket5443
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 11db7eee9b)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes overreads.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit b2244fa0a6)
This didn't actually check whether the sync word was found, and always errored
out when the "-err_detect explode" option was enabled.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit ce2f9fdb0a)
Note:- backporting commit 15ef98afd1 from head
Signed-off-by: Shivraj Patil <shivraj.patil@imgtec.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Note:- backporting commit ad16eff64b from head
Understanding the mips32r6 and mips64r6 ISAs in the configure script is
not enough. In order to have full support for MIPS R6 in FFmpeg we need
to be able to build it, and for that we need to make sure we don't use
incompatible assembler code which makes the build fail. Ifdefing the
offending code is sufficient to fix the problem.
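As an illustrative sketch (not the actual FFmpeg code), pre-R6-only assembler
can be guarded with the compiler-defined __mips_isa_rev macro, with a plain C
fallback used on R6 builds:

#include <stdint.h>

static inline int32_t mul32(int32_t a, int32_t b)
{
#if defined(__mips__) && (!defined(__mips_isa_rev) || __mips_isa_rev < 6)
    int32_t lo;
    /* mult/mflo were removed in MIPS R6, so keep this path pre-R6 only. */
    __asm__ ("mult %1, %2 \n\t"
             "mflo %0     \n\t"
             : "=r"(lo)
             : "r"(a), "r"(b)
             : "hi", "lo");
    return lo;
#else
    /* Plain C fallback, used on MIPS R6 (and non-MIPS) builds. */
    return (int32_t)((int64_t)a * b);
#endif
}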
Signed-off-by: Vicente Olivert Riera <Vincent.Riera@imgtec.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The functionality used before didn't widen the values from limited to
full range. Additionally, the decoder now uses BT.709 where it should be
used according to the video resolution.
The default for not-yet-set colorimetry is BT.709, since most observed
HDMV content is HD.
BT.709 coefficients were gathered from the first two parts of BT.709
to BT.2020 conversion guide in ARIB STD-B62 (Pt. 1, Chapter 6.2.2).
They were additionally confirmed by manually calculating values.
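For illustration only (the decoder's fixed-point form may differ), the widening
plus BT.709 matrixing amounts to the following, using the standard
limited-range offsets (16/128) and excursions (219/224) and the textbook
BT.709 coefficients:

#include <stdint.h>

static uint8_t clip_u8(int v)
{
    return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v;
}

/* Widen limited-range (16..235 / 16..240) YCbCr to full range and apply
 * the BT.709 matrix. */
static void ycbcr709_limited_to_rgb(uint8_t y, uint8_t cb, uint8_t cr,
                                    uint8_t *r, uint8_t *g, uint8_t *b)
{
    float yf  = (y  -  16) * (255.0f / 219.0f);
    float cbf = (cb - 128) * (255.0f / 224.0f);
    float crf = (cr - 128) * (255.0f / 224.0f);

    *r = clip_u8((int)(yf + 1.5748f * crf + 0.5f));
    *g = clip_u8((int)(yf - 0.1873f * cbf - 0.4681f * crf + 0.5f));
    *b = clip_u8((int)(yf + 1.8556f * cbf + 0.5f));
}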
Fixes #4637
(cherry picked from commit 9779b62624)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes Ticket5319
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 9ac154d1fa)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes out of array read
Fixes: mozilla bug 1266129
Found-by: Tyson Smith
Tested-by: Tyson Smith
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 9f36ea57ae)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Sometimes video fails to decode if the H.264 configuration changes mid-stream.
The reason is that the configuration parser assumes that nal_ref_idc is equal to 11b,
while some encoders actually put 01b there. The H.264 spec is somewhat
vague about this, but it appears to allow any non-zero nal_ref_idc for SPS/PPS.
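A sketch of the relaxed check (buffer handling omitted): accept an SPS/PPS NAL
whenever nal_ref_idc is non-zero, rather than requiring it to be exactly 3 (11b).

#include <stdint.h>

enum { H264_NAL_SPS = 7, H264_NAL_PPS = 8 };

/* Return 1 if the NAL header byte looks like an SPS or PPS.
 * nal_ref_idc (bits 5-6) must be non-zero for SPS/PPS, but it does not
 * have to be 3, so only test it for non-zero. */
static int is_sps_or_pps(uint8_t nal_header)
{
    int nal_ref_idc   = (nal_header >> 5) & 0x3;
    int nal_unit_type =  nal_header       & 0x1f;

    return nal_ref_idc != 0 &&
           (nal_unit_type == H264_NAL_SPS || nal_unit_type == H264_NAL_PPS);
}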
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 3a727606c4)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes Ticket4816
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit d433623fba)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Currently, if the movie source filter is used and a seek_point is
specified on a file that has a negative start time, ffmpeg will fail.
An easy way to reproduce this is as follows:
$ ffmpeg -vsync passthrough -filter_complex 'color=d=10,setpts=PTS-1/TB' test.mp4
$ ffmpeg -filter_complex 'movie=filename=test.mp4:seek_point=2' -f null -
The problem is caused by checking for int64_t overflow the wrong way.
In general, to check whether a + b overflows, it is not enough to do:
a > INT64_MAX - b
because b might be negative; the correct way is:
b > 0 && a > INT64_MAX - b
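In code form (the helper name here is only illustrative):

#include <stdint.h>

/* Overflow-safe check before computing a + b on int64_t operands:
 * a + b can only exceed INT64_MAX when b is positive, and can only go
 * below INT64_MIN when b is negative. */
static int add_would_overflow(int64_t a, int64_t b)
{
    return (b > 0 && a > INT64_MAX - b) ||
           (b < 0 && a < INT64_MIN - b);
}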
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit c1f9734f97)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes segfault
Fixes Ticket5333
Regression since bfc8a4dabe
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 8f2a1990c0)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This is ~2x faster for non-integer y on Haswell+GCC, and should generally
be faster because powf essentially does this under the hood anyway. An
inline function was added to lavu/internal.h for this purpose.
Note that there are some accuracy differences, but they should generally be
negligible. In particular, FATE still passes on this platform.
Results in ~ 7% speedup in aac encoding with -march=native, Haswell+GCC.
before:
ffmpeg -i sin.flac -acodec aac -y sin_new.aac 6.05s user 0.06s system 104% cpu 5.821 total
after:
ffmpeg -i sin.flac -acodec aac -y sin_new.aac 5.67s user 0.03s system 105% cpu 5.416 total
This is also faster than an alternative approach that pulls in powf, gets rid of
the crufty NaN checks and other special cases, exploits knowledge about the intervals, etc.
This of course does not exclude smarter approaches; it just suggests that
significant work would be needed on this front, with lower utility than
searching for hotspots elsewhere.
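The helper presumably boils down to the identity powf(x, y) == expf(logf(x) * y)
for x > 0; a minimal sketch (the function name here is illustrative):

#include <math.h>

/* For x > 0 this equals powf(x, y) mathematically, but it skips powf's
 * special-case handling (NaNs, negative bases, integer exponents), which
 * is where the speedup comes from. */
static inline float fast_powf(float x, float y)
{
    return expf(logf(x) * y);
}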
Reviewed-by: Reimar Döffinger <Reimar.Doeffinger@gmx.de>
Reviewed-by: Ronald S. Bultje <rsbultje@gmail.com>
Signed-off-by: Ganesh Ajjanagadde <gajjanag@gmail.com>
(cherry picked from commit bccc81dfa0)
This ensures gcc does not create unnecessary
loads or stores and possibly even does not vectorize
the negation.
Speeds up mp3 to aac transcoding with default settings
by 10% when using "gcc (Debian 5.3.1-10) 5.3.1 20160224".
Signed-off-by: Reimar Döffinger <Reimar.Doeffinger@gmx.de>
(cherry picked from commit b60dfae7af)
I cannot see any point whatsoever in using double here instead of float;
the results are likely identical in all cases.
Using float allows for much more
efficient use of SIMD.
Signed-off-by: Reimar Döffinger <Reimar.Doeffinger@gmx.de>
(cherry picked from commit 0a04c2885f)
It makes no sense whatsoever to do this at each function call; we
already have a table for this.
Yields a 2x improvement in find_min_book (x86-64, Haswell+GCC):
ffmpeg -i sin.flac -acodec aac -y sin.aac
find_min_book
old
605 decicycles in find_min_book,   8388453 runs,  155 skips
606 decicycles in find_min_book,  16776912 runs,  304 skips
607 decicycles in find_min_book,  33553819 runs,  613 skips
607 decicycles in find_min_book,  67107668 runs, 1196 skips
607 decicycles in find_min_book, 134215360 runs, 2368 skips
new
359 decicycles in find_min_book,   8388552 runs,   56 skips
360 decicycles in find_min_book,  16777112 runs,  104 skips
361 decicycles in find_min_book,  33554218 runs,  214 skips
361 decicycles in find_min_book,  67108381 runs,  483 skips
361 decicycles in find_min_book, 134216725 runs, 1003 skips
and more importantly a non-negligible speedup (~ 8%) to overall AAC encoding:
old:
ffmpeg -i sin.flac -acodec aac -strict -2 -y sin_new.aac 6.82s user 0.03s system 104% cpu 6.565 total
new:
ffmpeg -i sin.flac -acodec aac -strict -2 -y sin_old.aac 6.24s user 0.03s system 104% cpu 5.993 total
This also improves accuracy of the expression by ~ 2 ulp in some cases.
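Generic sketch of the pattern only (table name, size and formula below are
placeholders, not the actual aacenc tables): compute the per-scalefactor value
once at init and do a lookup in the hot path.

#include <math.h>

#define SF_COUNT 256                  /* placeholder table size */
static float scale_tab[SF_COUNT];     /* hypothetical precomputed table */

/* Fill the table once at init time; the formula is only illustrative. */
static void scale_tab_init(void)
{
    int i;
    for (i = 0; i < SF_COUNT; i++)
        scale_tab[i] = powf(2.0f, (i - 100) * 0.25f);
}

/* Hot path: a single load replaces the per-call powf; it is also slightly
 * more accurate, since each table entry is computed in one rounding step. */
static inline float scale_for_sf(int sf)
{
    return scale_tab[sf];
}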
Reviewed-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Reviewed-by: Rostislav Pehlivanov <atomnuker@gmail.com>
Signed-off-by: Ganesh Ajjanagadde <gajjanag@gmail.com>
(cherry picked from commit bd9c58756a)
Reviewed-by: maintainer
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 0cd9ff4e3a)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Original mail and my own followup on ffmpeg-user earlier today:
I have a device sending out a MJPEG/RTP stream on a low quality setting.
Decoding and displaying the video with libavformat results in a washed
out, low contrast, greyish image. Playing the same stream with VLC results
in proper color representation.
Screenshots for comparison:
http://zevv.nl/div/libav/shot-ffplay.jpg
http://zevv.nl/div/libav/shot-vlc.jpg
A pcap capture of a few seconds of video and SDP file for playing the
stream are available at
http://zevv.nl/div/libav/mjpeg.pcap
http://zevv.nl/div/libav/mjpeg.sdp
I believe the problem might be in the calculation of the quantization
tables in the function create_default_qtables(), the attached patch
solves the issue for me.
The problem is that the argument 'q' is of type uint8_t. According to the
JPEG standard, if 1 <= Q <= 50, the scale factor 'S' should be 5000 / Q.
Because create_default_qtables() reuses the variable 'q' to store the
result of this calculation, for small values of q (q < 19) 'q' will subsequently
overflow and give wrong results in the calculated quantization tables. The
patch below uses a new variable 'S' (same name as in RFC 2435) with the proper
range to store the result of the division.
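A sketch of the corrected computation for one table entry (simplified from
RFC 2435 Appendix A; the real code scales whole tables): keeping the scale
factor in an int avoids squeezing 5000 / Q back into an 8-bit variable.

#include <stdint.h>

/* Scale one default quantization table entry by quality q (1..99),
 * as described in RFC 2435. 'S' must not be uint8_t: for q < 19,
 * 5000 / q exceeds 255 and would wrap. */
static uint8_t scale_qtable_entry(uint8_t q, uint8_t default_entry)
{
    int S, val;

    if (q < 1)  q = 1;
    if (q > 99) q = 99;

    if (q < 50)
        S = 5000 / q;       /* up to 5000, far beyond uint8_t range */
    else
        S = 200 - 2 * q;

    val = (default_entry * S + 50) / 100;
    if (val < 1)   val = 1;
    if (val > 255) val = 255;
    return (uint8_t)val;
}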
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit e3e6a2cff4)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>