Dirac currently allocates 5 images per plane and frame internally: one
is the actual image, the other 4 are filtered versions used for motion
compensation.
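As a rough illustration of how quickly this adds up for large coded
dimensions (a sketch only; the function and constant names below are
illustrative, not the decoder's own):

    #include <stdint.h>

    /* Worst-case picture memory as described above: per frame and plane,
     * 1 actual image plus 4 motion-compensation-filtered copies. */
    static uint64_t dirac_pic_mem_estimate(uint64_t width, uint64_t height,
                                           int nb_planes, int bytes_per_sample)
    {
        const int images_per_plane = 5; /* 1 real + 4 filtered */
        return width * height * (uint64_t)bytes_per_sample *
               nb_planes * images_per_plane;
    }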
Fixes: Out of memory
Fixes: 12870/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_DIRAC_fuzzer-5684825871089664
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
autorotate is enabled by default in ffmpeg, so the rotation filters are
required and their insertion will be attempted without the user's
knowledge if an input stream has rotation side data.
Fixes: runtime error: signed integer overflow: 2147483598 + 128 cannot be represented in type 'int'
Fixes: 12926/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_JPEG2000_fuzzer-5705100733972480
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
After this change we always parse the full specifier, even if the
result is already known in the middle of the parsing. Slightly slower,
but this is needed to consistently reject incorrect specifiers in both
the matching and non-matching cases.
Signed-off-by: Marton Balint <cus@passwd.hu>
This reworks the code to be more strict about accepting stream
specifiers. From now on we strictly enforce the syntax in the
documentation up until the decisive part of the stream specifier.
Therefore matching stream specifiers always need to be fully correct,
while non-matching specifiers only need to be correct up to the
decisive part.
Also, the recursion is changed to a simple loop.
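Roughly, the loop-based shape looks like this (an illustrative sketch
only, not the actual libavformat code; the part-matching helper is a
hypothetical stand-in):

    #include <string.h>

    /* Hypothetical stand-in for matching one specifier part against a
     * stream; a real matcher would inspect the stream's properties. */
    static int part_matches(const char *part, size_t len)
    {
        return part != NULL && len > 0;
    }

    /* Walk every ':'-separated part of the specifier so that malformed
     * trailing parts are still rejected even when the match result is
     * already decided. */
    static int match_specifier(const char *spec)
    {
        int match = 1;

        while (*spec) {
            const char *end = strchr(spec, ':');
            size_t len = end ? (size_t)(end - spec) : strlen(spec);

            if (len == 0)
                return -1;                    /* invalid specifier */
            match &= part_matches(spec, len); /* keep parsing on non-match */
            spec = end ? end + 1 : spec + len;
        }
        return match;
    }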
Signed-off-by: Marton Balint <cus@passwd.hu>
This improves compatibility with some consumer TVs (LG WebOS), which
apparently look for an HEVC descriptor (which our mpegts muxer can't
generate) or a format identifier.
Since the HEVC format identifier is not registered (but used in the wild), it is
not written if strict_std_compliance is higher than normal.
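For reference, the identifier only takes a registration descriptor of a
few bytes; a minimal sketch of emitting one (hand-rolled here, not the
muxer's own helper, with the "normal" compliance level assumed to be 0):

    #include <stdint.h>

    /* MPEG-TS registration_descriptor (tag 0x05) whose 4-byte
     * format_identifier is 'HEVC'. Buffer handling is simplified. */
    static int put_hevc_registration_descriptor(uint8_t *buf,
                                                int strict_std_compliance)
    {
        if (strict_std_compliance > 0) /* stricter than "normal": skip */
            return 0;

        buf[0] = 0x05;   /* registration_descriptor tag */
        buf[1] = 4;      /* descriptor_length           */
        buf[2] = 'H';
        buf[3] = 'E';
        buf[4] = 'V';
        buf[5] = 'C';
        return 6;        /* bytes written               */
    }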
This fixes the issue in ticket #7744.
Signed-off-by: Marton Balint <cus@passwd.hu>
With all of our existing users of cuda_sdk switched over to ffnvcodec,
we can remove cuda_sdk completely and establish that we will no longer
add code that requires the full SDK, instead insisting that such code
only use ffnvcodec.
As discussed previously, the use of nvcc from the sdk is still
supported with a distinct option.
Signed-off-by: Philip Langdale <philipl@overt.org>
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
This change switches the vf_thumbnail_cuda filter from using the
full cuda sdk to using the ffnvcodec headers and loader.
Most of the change is a direct mapping, but I also switched from
using texture references to using texture objects. This is supposed
to be the preferred way of using textures, and the texture object API
is the one I added to ffnvcodec.
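As a rough illustration of the texture object API involved (a sketch
against the CUDA driver API; in the filter the calls go through the
ffnvcodec loader's function table, and error handling plus the
surrounding filter context are omitted):

    #include <stddef.h>
    #include <cuda.h>

    /* Create a texture object over a pitched 2D 8-bit plane, roughly the
     * pattern used instead of the legacy texture references. */
    static CUresult make_plane_tex(CUtexObject *tex, CUdeviceptr data,
                                   int width, int height, size_t pitch)
    {
        CUDA_RESOURCE_DESC res_desc = { 0 };
        CUDA_TEXTURE_DESC  tex_desc = { 0 };

        res_desc.resType                  = CU_RESOURCE_TYPE_PITCH2D;
        res_desc.res.pitch2D.devPtr       = data;
        res_desc.res.pitch2D.format       = CU_AD_FORMAT_UNSIGNED_INT8;
        res_desc.res.pitch2D.numChannels  = 1;
        res_desc.res.pitch2D.width        = width;
        res_desc.res.pitch2D.height       = height;
        res_desc.res.pitch2D.pitchInBytes = pitch;

        tex_desc.filterMode = CU_TR_FILTER_MODE_POINT;
        tex_desc.flags      = CU_TRSF_READ_AS_INTEGER;

        return cuTexObjectCreate(tex, &res_desc, &tex_desc, NULL);
    }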
Signed-off-by: Philip Langdale <philipl@overt.org>
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
This change switches the vf_scale_cuda filter from using the
full cuda sdk to using the ffnvcodec headers and loader.
Most of the change is a direct mapping, but I also switched from
using texture references to using texture objects. This is supposed
to be the preferred way of using textures, and the texture object API
is the one I added to ffnvcodec.
Signed-off-by: Philip Langdale <philipl@overt.org>
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
This change switches the vf_thumbnail_cuda filter from using the
full cuda sdk to using the ffnvcodec headers and loader.
Signed-off-by: Philip Langdale <philipl@overt.org>
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
The use of nvcc to compile cuda kernels is distinct from the use of
cuda sdk libraries and linking against those libraries. We have
previously not bothered to distinguish these two cases because all
the filters that used cuda kernels also used the sdk. In the following
changes, I'm going to remove the sdk dependency from those filters,
but we need a way to ensure that nvcc is present and functioning, and
also a way to explicitly disable its use so that the filters are not
built.
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
Empty edits can occur at any position within the edit list except at
the end. Empty edits in the middle should not impact the reported stream
start_time or the video PTS adjustment, so only include empty edits at
the start of the list in empty_edits_sum_duration.
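A minimal sketch of the intended accumulation (hypothetical structures,
not the demuxer's own types):

    #include <stdint.h>

    /* Hypothetical edit entry: media_time == -1 marks an empty edit. */
    typedef struct EditEntry {
        int64_t duration;
        int64_t media_time;
    } EditEntry;

    /* Sum only the empty edits at the head of the list; empty edits in
     * the middle must not shift the reported start_time / PTS
     * adjustment. */
    static int64_t leading_empty_edits_duration(const EditEntry *edits,
                                                int count)
    {
        int64_t sum = 0;
        for (int i = 0; i < count && edits[i].media_time == -1; i++)
            sum += edits[i].duration;
        return sum;
    }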
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
This avoids making invalid HTTP Range requests for a byte range past
the known end of the file during a seek. Such requests generally get an
HTTP 416 Range Not Satisfiable response, which then causes an error.
Reference: https://tools.ietf.org/html/rfc7233
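A minimal sketch of the guard (hypothetical helper, not the actual
http protocol code):

    #include <stdint.h>
    #include <errno.h>

    /* Refuse to issue a Range request whose start offset is at or past
     * the known end of the file; the server would only answer it with
     * 416 Range Not Satisfiable. */
    static int range_request_allowed(int64_t offset, int64_t filesize)
    {
        if (filesize >= 0 && offset >= filesize)
            return -EINVAL; /* treat as an out-of-range seek instead */
        return 0;
    }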
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
Doesn't change anything, but makes the behaviour better match that of the
other codecs (the CONSTANT_QUALITY_ONLY flag already ensures that CQP is
the only RC mode selectable for MJPEG).
Following b8c45bbcbc they contain allocated
unit arrays which will get leaked. These operations were inconsistently
applied and never actually needed (the old uninit left them in the correct
state), so just drop them entirely.
Currently, a fragment's unit array is constantly reallocated while a
packet is being split. This commit changes this: the units array can
now be kept by distinguishing between the number of allocated units and
the number of valid units in the array.
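A minimal sketch of the idea (the field and function names are
illustrative, not necessarily those of the actual CBS structures):

    #include <stdlib.h>
    #include <string.h>

    typedef struct Unit { void *content; } Unit;

    /* nb_units counts the valid entries, nb_units_allocated the
     * capacity, so repeated insertions can reuse the same allocation. */
    typedef struct Fragment {
        Unit *units;
        int   nb_units;
        int   nb_units_allocated;
    } Fragment;

    static int insert_unit(Fragment *frag, int position, Unit unit)
    {
        if (frag->nb_units == frag->nb_units_allocated) {
            int new_cap = frag->nb_units_allocated ?
                          2 * frag->nb_units_allocated : 16;
            Unit *tmp = realloc(frag->units, new_cap * sizeof(*tmp));
            if (!tmp)
                return -1;
            frag->units = tmp;
            frag->nb_units_allocated = new_cap;
        }
        memmove(frag->units + position + 1, frag->units + position,
                (frag->nb_units - position) * sizeof(*frag->units));
        frag->units[position] = unit;
        frag->nb_units++;
        return 0;
    }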
The more units a packet is split into, the bigger the benefit.
So MPEG-2 benefits the most; for a video coming from an NTSC-DVD
(usually 32 units per frame) the average cost of cbs_insert_unit (for a
single unit) went down from 6717 decicycles to 450 decicycles (based
upon 10 runs with 4194304 runs each); if each packet consists of only
one unit, it went down from 2425 to 448; for an H.264 video where most
packets contain nine units, it went from 4431 to 450.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@googlemail.com>
This is in preparation for another patch that will stop needless
reallocations of the unit array.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@googlemail.com>
Improves speed of the testcase by about a factor of 10
Fixes: Timeout
Fixes: 13132/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_WCMV_fuzzer-5664190616829952
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Speeds up error cases
Fixes: 13132/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_WCMV_fuzzer-5664190616829952
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This speeds up the testcase by a factor of 4
Fixes: Timeout
Fixes: 13100/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_WMV2_fuzzer-5767533905313792
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This unifies the way the EBML unknown length is signaled, rather than
using two incompatible values. UINT64_MAX cannot be read as a valid
EBML length with the current code, which makes it a safe single signal
value.
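A minimal sketch of the single-sentinel approach (the constant name is
assumed; the longest encodable EBML length is well below UINT64_MAX, so
the sentinel can never collide with a real length):

    #include <stdint.h>

    #define EBML_UNKNOWN_LENGTH UINT64_MAX /* never a valid EBML length */

    static int ebml_length_is_unknown(uint64_t length)
    {
        return length == EBML_UNKNOWN_LENGTH;
    }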
Co-authored-by: Steve Lhomme <robux4@ycbcr.xyz>
Co-authored-by: Dale Curtis <dalecurtis@chromium.org>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The function typedefs we were using are only present when using the
dynamic loader, which means compilation breaks for code directly
using the cuda SDK.
To fix this, let's just duplicate the function typedefs locally.
These are not going to change.
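For illustration, the kind of typedef being duplicated (the names
follow the dynamic loader's convention; the exact set depends on the
functions actually used):

    #include <cuda.h>

    /* Local copies of the loader-style function typedefs, so the same
     * code builds whether cuda.h comes from ffnvcodec's dynamic loader
     * or from the full CUDA SDK (which only declares the functions
     * themselves). */
    typedef CUresult CUDAAPI tcuInit(unsigned int flags);
    typedef CUresult CUDAAPI tcuDeviceGetCount(int *count);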