Fixes: signed integer overflow: 256 * 668003712 cannot be represented in type 'int'
Fixes: 59819/clusterfuzz-testcase-minimized-ffmpeg_dem_MATROSKA_fuzzer-4674636538052608
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Parse through all NALUs in a packet; pull new packets when a complete AU could
not be assembled, or keep the current one around if an AU was assembled while
data remained in it.
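Roughly, the resulting receive-frame loop looks like the following sketch.
pkt_has_data(), pull_new_packet(), parse_next_nalu() and AU_COMPLETE are
illustrative stand-ins, not the decoder's actual helpers:

    int ret, frame_output = 0;

    while (!frame_output) {
        if (!pkt_has_data(&s->pkt)) {
            ret = pull_new_packet(s, &s->pkt);   /* AU incomplete: need more input */
            if (ret < 0)
                return ret;
        }
        while (pkt_has_data(&s->pkt)) {
            ret = parse_next_nalu(s, &s->pkt);   /* consumes one NALU */
            if (ret < 0)
                return ret;
            if (ret == AU_COMPLETE) {
                /* keep any remaining bytes in s->pkt for the next call */
                frame_output = 1;
                break;
            }
        }
    }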
Signed-off-by: James Almer <jamrial@gmail.com>
The default GOP size is now set by the preset and tuning info. This GOP size is only overwritten if a -g value is provided.
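A minimal sketch of the intended behaviour, assuming a non-positive gop_size
means "-g was not given" (the nvenc_cfg name is illustrative; gopLength is the
NV_ENC_CONFIG field):

    if (avctx->gop_size > 0)
        nvenc_cfg->gopLength = avctx->gop_size;   /* user override via -g */
    /* otherwise keep the gopLength already filled in by the preset/tuning */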
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
As per the spec:
VUID-VkVideoDecodeH264PictureInfoKHR-sliceCount-arraylength
sliceCount must be greater than 0
VUID-VkVideoDecodeH265PictureInfoKHR-sliceSegmentCount-arraylength
sliceSegmentCount must be greater than 0
This particularly happens with seeking in field-coded H264.
This commit scraps the bool used to signal that the session parameters should
be recreated, and instead destroys them outright, forcing them to be recreated.
As this can happen between start_frame and end_frame, do this
at both places.
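A hedged sketch of the destruction step, written against the raw Vulkan entry
point rather than FFmpeg's internal function-pointer table; dec->session_params
and dec->device are illustrative field names:

    /* instead of setting a "recreate" flag, destroy the parameters so the
     * next frame has no choice but to recreate them */
    if (dec->session_params != VK_NULL_HANDLE) {
        vkDestroyVideoSessionParametersKHR(dec->device, dec->session_params, NULL);
        dec->session_params = VK_NULL_HANDLE;
    }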
Move the slice offsets buffer to the thread decode context.
It isn't part of the resources needed for frame decoding; the driver has to
process and finish with it at submission time.
That way, it doesn't need to be alloc'd + freed on every frame.
This requires using the new AVHWFramesContext.opaque field, as
otherwise, the profile attached to the decoder will be freed
before the frames context, rendering the frames context useless.
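In rough terms, the slice offsets buffer becomes a grow-on-demand array owned
by the per-thread context instead of a per-frame allocation. The sketch below
uses illustrative names (ThreadDecodeCtx, slice_off), not the actual struct
members:

    static int reserve_slice_offsets(ThreadDecodeCtx *tc, int nb_slices)
    {
        uint32_t *tmp = av_realloc_array(tc->slice_off, nb_slices,
                                         sizeof(*tc->slice_off));
        if (!tmp)
            return AVERROR(ENOMEM);
        tc->slice_off    = tmp;
        tc->nb_slice_off = nb_slices;
        return 0;   /* reused across frames, freed once with the context */
    }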
Ensure that the value returned by evc_read_nal_unit_length() fits in an int.
This should prevent integer overflows later in the code.
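Something along the lines of the following check; the argument list of
evc_read_nal_unit_length() and the surrounding variable names are assumptions:

    uint32_t nalu_size = evc_read_nal_unit_length(buf, buf_size, logctx);
    if (!nalu_size || nalu_size > INT_MAX)
        return AVERROR_INVALIDDATA;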
Signed-off-by: James Almer <jamrial@gmail.com>
The fixed-point AAC decoder currently does not produce the same output on
all platforms. Until that is fixed, silence the audio stream using the
volume filter.
Also, actually use the aac_fixed decoder as was the original intent.
Before the introduction of AV_CODEC_ID_TIMED_ID3 for timed_id3 metadata streams
in mpegts (commit 4a4437c0fb), AV_CODEC_ID_SMPTE_KLV
was the only existing codec for metadata.
It seems that this codec has a 5-byte metadata header[1] that, for some reason,
was always skipped when decoding data packets.
However, when working with AV_CODEC_ID_TIMED_ID3 streams, this results in the
first 5 bytes of the payload being cut off, which include essential information
such as the ID3 tag version.
This patch fixes the issue by keeping the 5-byte skip only for AV_CODEC_ID_SMPTE_KLV
streams.
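A sketch of the resulting demuxer logic; the surrounding variable names (st,
buf, buf_size) are assumptions:

    /* only SMPTE KLV carries the 5-byte metadata access unit header;
     * leave timed ID3 payloads untouched */
    if (st->codecpar->codec_id == AV_CODEC_ID_SMPTE_KLV && buf_size >= 5) {
        buf      += 5;
        buf_size -= 5;
    }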
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Fixes: out of array read
Fixes: 59828/clusterfuzz-testcase-minimized-ffmpeg_dem_JPEGXL_ANIM_fuzzer-5029813220671488
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Use the gcd of all input timebases to ensure PTS accuracy. For the
framerate, just pick the highest of all the inputs, under the assumption
that we will render frames with approximately this frequency. Of course,
this is not 100% accurate, in particular if the input frames are badly
misaligned. But this field is informational to begin with.
Importantly, it covers the "common" case of combining high FPS and low
FPS streams with aligned frames.
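In terms of the public utilities, the merge could look roughly like the sketch
below; the rounding limit and default passed to av_gcd_q() are assumptions:

    AVRational tb = ctx->inputs[0]->time_base;
    AVRational fr = ctx->inputs[0]->frame_rate;
    for (int i = 1; i < ctx->nb_inputs; i++) {
        tb = av_gcd_q(tb, ctx->inputs[i]->time_base, AV_TIME_BASE, AV_TIME_BASE_Q);
        if (av_cmp_q(ctx->inputs[i]->frame_rate, fr) > 0)
            fr = ctx->inputs[i]->frame_rate;   /* highest input FPS */
    }
    outlink->time_base  = tb;
    outlink->frame_rate = fr;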
In the event that some frame mixes are OK while others are not, the
priority goes:
1. Errors in updating any frame -> return error
2. Any input incomplete -> request frames and return
3. Any inputs OK -> ignore EOF streams and render remaining inputs
4. No inputs OK -> set output to most recent status
This logic ensures that we can continue rendering the remaining streams,
no matter which streams reach their end of life, until we have no
streams left, at which point we forward the last EOF.
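In pseudo-C, the per-input handling amounts to something like the following;
all helper names and return codes are illustrative, not the filter's actual
functions:

    int incomplete = 0, nb_ok = 0;
    for (int i = 0; i < ctx->nb_inputs; i++) {
        int ret = update_input_frame(ctx, i);      /* hypothetical helper  */
        if (ret < 0)
            return ret;                            /* 1. hard error        */
        incomplete += (ret == INPUT_INCOMPLETE);   /* 2. still needs data  */
        nb_ok      += (ret == INPUT_OK);           /* 3. ready to render   */
    }
    if (incomplete) {
        request_missing_frames(ctx);
        return 0;
    }
    if (nb_ok)
        return render_ready_inputs(ctx);           /* EOF inputs ignored   */
    return forward_latest_status(ctx);             /* 4. everything is EOF */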
When combining multiple inputs, the output PTS may be less than the PTS
of the input. In this case, the current code's assumption of always
draining one value from the FIFO is incorrect. Replace it with a smarter
function which drains only those PTS values that were actually consumed.
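Assuming the queued timestamps live in an AVFifo of int64_t, the replacement
could look like this sketch:

    /* drain only the queued PTS values at or below the emitted output PTS */
    static void drain_consumed_pts(AVFifo *fifo, int64_t out_pts)
    {
        int64_t pts;
        while (av_fifo_peek(fifo, &pts, 1, 0) >= 0 && pts <= out_pts)
            av_fifo_drain2(fifo, 1);
    }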
When combining multiple inputs with different PTS and durations, in
input-timed mode, we emit one output frame for every input frame PTS,
from *any* input. So when combining a low FPS stream with a high FPS
stream, the output framerate would match the higher FPS, independent of
which order they are specified in.