Fixes: signed integer overflow: -1948269928 * 10 cannot be represented in type 'int'
Fixes: 49451/clusterfuzz-testcase-minimized-ffmpeg_dem_SUBVIEWER_fuzzer-6344614822412288
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
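For context, a minimal sketch of the kind of guard that avoids this class of overflow when a parser accumulates decimal digits (illustrative only, not the actual patch):

    #include <limits.h>

    /* Refuse the digit instead of letting v * 10 overflow (undefined behavior). */
    static int append_digit_checked(int v, int d, int *out)
    {
        if (v > (INT_MAX - d) / 10)
            return -1;
        *out = v * 10 + d;
        return 0;
    }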
Fixes: out of array read
Fixes: 49434/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_TIFF_fuzzer-5208501080686592
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Don't silently replace it with the default layout for the number of channels
in the requested layout.
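A hedged illustration of the difference (not the patched code): for the same channel count, the default layout need not match the requested one.

    #include <stdio.h>
    #include <libavutil/channel_layout.h>

    int main(void)
    {
        AVChannelLayout req = AV_CHANNEL_LAYOUT_STEREO_DOWNMIX; /* DL+DR, 2 channels */
        AVChannelLayout def = { 0 };
        char a[64], b[64];

        av_channel_layout_default(&def, req.nb_channels);       /* stereo: FL+FR */
        av_channel_layout_describe(&req, a, sizeof(a));
        av_channel_layout_describe(&def, b, sizeof(b));
        /* Same channel count, but av_channel_layout_compare(&req, &def) is nonzero */
        printf("requested: %s, default: %s\n", a, b);
        return 0;
    }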
Should fix ticket #9869
Signed-off-by: James Almer <jamrial@gmail.com>
Fixes FATE failures with the filter-2xbr filter-3xbr filter-4xbr
filter-ep2x filter-ep3x filter-hq2x filter-hq3x filter-hq4x
filter-paletteuse-bayer filter-paletteuse-bayer0
filter-paletteuse-nodither and filter-paletteuse-sierra2_4a tests
when using 32bit x86 with CPUFLAGS ranging from "mmx+mmxext" to
"mmx+mmxext+sse+sse2+sse3" (the relevant function is only overwritten
when using SSSE3).
Reviewed-by: Lynne <dev@lynne.ee>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
By checking immediately whether the first allocation was successful,
one can simplify the cleanup code in case of errors.
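A generic illustration of the pattern (hypothetical names, plain libc allocation for the sake of a standalone example):

    #include <stdlib.h>

    typedef struct Ctx { int *table; float *weights; } Ctx;

    static Ctx *ctx_alloc(size_t n)
    {
        Ctx *c = calloc(1, sizeof(*c));
        if (!c)                         /* checked right away: nothing to clean up yet */
            return NULL;

        c->table   = calloc(n, sizeof(*c->table));
        c->weights = calloc(n, sizeof(*c->weights));
        if (!c->table || !c->weights) { /* free(NULL) is a no-op, so one error path suffices */
            free(c->table);
            free(c->weights);
            free(c);
            return NULL;
        }
        return c;
    }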
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
APNG works with a single reference frame and an output frame.
According to the spec, decoding APNG works by decoding
the current IDAT/fdAT chunks (which decodes to a rectangular
subregion of the whole image region), followed by either
overwriting the region of the output frame with the newly
decoded data or by blending the newly decoded data with
the data from the reference frame onto the current subregion
of the output frame. The remainder of the output frame
is just copied from the reference frame.
Then the reference frame might be left untouched
(APNG_DISPOSE_OP_PREVIOUS), it might be replaced by the output
frame (APNG_DISPOSE_OP_NONE) or the rectangular subregion
corresponding to the just decoded frame has to be reset
to black (APNG_DISPOSE_OP_BACKGROUND).
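A rough sketch of that per-frame flow on packed RGBA buffers (illustrative names and simplifications; APNG_BLEND_OP_OVER blending omitted):

    #include <stdint.h>
    #include <string.h>

    enum { OP_NONE, OP_BACKGROUND, OP_PREVIOUS };

    static void compose_frame(uint8_t *out, uint8_t *ref, const uint8_t *sub,
                              int frame_w, int frame_h,
                              int x, int y, int w, int h, int dispose_op)
    {
        /* Start from the reference frame, then overwrite the decoded subregion. */
        memcpy(out, ref, (size_t)frame_w * frame_h * 4);
        for (int j = 0; j < h; j++)
            memcpy(out + ((size_t)(y + j) * frame_w + x) * 4,
                   sub + (size_t)j * w * 4, (size_t)w * 4);

        switch (dispose_op) {
        case OP_PREVIOUS:   /* reference stays untouched */
            break;
        case OP_NONE:       /* output becomes the new reference */
            memcpy(ref, out, (size_t)frame_w * frame_h * 4);
            break;
        case OP_BACKGROUND: /* new reference, with the subregion reset to black right away */
            memcpy(ref, out, (size_t)frame_w * frame_h * 4);
            for (int j = 0; j < h; j++)
                memset(ref + ((size_t)(y + j) * frame_w + x) * 4, 0, (size_t)w * 4);
            break;
        }
    }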
The latter case is not handled correctly by our decoder:
It only resets the rectangle in the reference frame
when decoding the next frame; and since commit
b593abda6c it does not reset
the reference frame permanently, but only temporarily (i.e.
it only affects decoding the frame after the frame with
APNG_DISPOSE_OP_BACKGROUND). This is a problem if the
frame after the APNG_DISPOSE_OP_BACKGROUND frame uses
APNG_DISPOSE_OP_PREVIOUS, because then the frame after
the APNG_DISPOSE_OP_PREVIOUS frame has an incorrect reference
frame. (If it is not followed by an APNG_DISPOSE_OP_PREVIOUS
frame, the decoder only keeps a reference to the output frame,
which is ok.)
This commit fixes this by being much closer to the spec
than the earlier code: Resetting the background is no longer
postponed until the next frame; instead it is applied to
the reference frame.
Fixes ticket #9602.
(For multithreaded decoding it was actually already broken
since commit 5663301560d77486c7f7c03c1aa5f542fab23c24.)
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Some code in alimiter assumes that there are 2 channels, resulting in
clipping if the loudest channel is channel 3 or above and an out-of-bounds
read if the input is monophonic. Fix that in two places.
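Roughly, the channel-count-agnostic shape looks like this (illustrative only, names are not the filter's own):

    #include <math.h>

    /* Peak across all channels of one interleaved sample frame, instead of
     * looking at two hard-coded channels. */
    static double frame_peak(const double *samples, int nb_channels)
    {
        double peak = 0.0;
        for (int ch = 0; ch < nb_channels; ch++)
            peak = fmax(peak, fabs(samples[ch]));
        return peak;
    }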
Signed-off-by: David Flater <dave@flaterco.com>
Add the adaptive_i/b feature to hevc_qsv. Adaptive_i allows changing the
frame type from P or B to I. Adaptive_b allows changing the frame type
from B to P.
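A hedged usage sketch, assuming the options are exposed under these names as the patch suggests: enabling them on a hevc_qsv encoder context via AVOptions.

    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>

    static int enable_adaptive_gop(AVCodecContext *enc)
    {
        int ret;
        /* private encoder options live behind priv_data */
        if ((ret = av_opt_set(enc->priv_data, "adaptive_i", "1", 0)) < 0)
            return ret;
        return av_opt_set(enc->priv_data, "adaptive_b", "1", 0);
    }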
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
According to its documentation it returns "pts of the last muxed packet
+ its duration", but the value it actually returns right now is
(possibly guessed) dts after muxer-internal bitstream filtering (if
any).
This function was added for ffmpeg.c, but it is not used there anymore.
Since the value it returns is ill-defined and so inappropriate for any
serious use, deprecate it.
Just return AVERROR_EXTERNAL immediately on encode error.
The other logic should keep the old behavior from before commit 7c05b7951.
Suggested-By: Zhao Zhili <zhilizhao@tencent.com>
Signed-off-by: Steven Liu <lq@chinaffmpeg.org>
This commit moves the encoder-only allocations of the tables
owned solely by the main encoder context to mpegvideo_enc.c.
This avoids checks in mpegvideo.c for whether we are dealing
with an encoder; it also improves modularity (if encoders are
disabled, this code will no longer be included in the binary).
And it also avoids having to reset these pointers at the beginning
of ff_mpv_common_init() (in case the dst context is uninitialized,
ff_mpeg_update_thread_context() simply copies the src context
into dst which therefore may contain pointers not owned by it,
but this does not happen for encoders at all).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Fixes frame-threaded decoding of samples created by concatenating
a video with data partitioning and a video not using it.
(Only the MPEG-4 decoder sets this, so it is synced in
mpeg4_update_thread_context() despite being a MpegEncContext-field.)
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This is by no means perfect, since at least ddagrab will return scRGB
data with values outside of 0.0f to 1.0f for HDR content.
Its primary purpose is to be able to work with the format at all.
_Float16 support has been available on arm/aarch64 for a while, and with
gcc 12 it was enabled on x86 as long as SSE2 is supported.
If the target arch supports f16c, gcc emits fairly efficient assembly
taking advantage of it. This is the case on x86-64-v3 or higher.
The same goes for arm, which has native float16 support.
On x86 without f16c, gcc emulates it in software using sse2 instructions.
This has been shown to perform rather poorly:
_Float16 full SSE2 emulation:
frame=50074 fps=848 q=-0.0 size=N/A time=00:33:22.96 bitrate=N/A speed=33.9x
_Float16 f16c accelerated (Zen2, --cpu=znver2):
frame=50636 fps=1965 q=-0.0 Lsize=N/A time=00:33:45.40 bitrate=N/A speed=78.6x
classic half2float full software implementation:
frame=49926 fps=1605 q=-0.0 Lsize=N/A time=00:33:17.00 bitrate=N/A speed=64.2x
Hence an additional check was introduced that only enables the use of
_Float16 on x86 if f16c is being utilized.
On aarch64, a similar uplift in performance is seen:
RPi4 half2float full software implementation:
frame= 6088 fps=126 q=-0.0 Lsize=N/A time=00:04:03.48 bitrate=N/A speed=5.06x
RPi4 _Float16:
frame= 6103 fps=158 q=-0.0 Lsize=N/A time=00:04:04.08 bitrate=N/A speed=6.32x
Since arm/aarch64 always natively support 16 bit floats, it can always
be considered fast there.
I'm not aware of any additional platforms that currently support
_Float16. And if there are, they should be considered non-fast until
proven fast.
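A sketch of the idea only (not the actual configure/CPU-flag plumbing): treat _Float16 as fast where the conversion happens in hardware, i.e. on aarch64 generally and on x86 only when the compiler may emit f16c.

    #include <stdint.h>
    #include <string.h>

    #if defined(__aarch64__) || (defined(__x86_64__) && defined(__F16C__))
    static inline float load_half(uint16_t h)
    {
        _Float16 f;
        memcpy(&f, &h, sizeof(f));  /* compiles to a single conversion instruction here */
        return (float)f;
    }
    #endif
    /* elsewhere, keep using the classic software half2float implementation */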
IEEE-754 differentiates between two kinds of NaNs: quiet and signaling
ones. They are distinguished by the MSB of the mantissa.
For whatever reason, actual hardware conversion of half to single always
sets that bit to 1 if the mantissa is != 0, and to 0 if it's 0.
So our code has to follow suit or fate-testing hardware float16 will be
impossible.
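A bit-level sketch of a software half-to-single conversion that mirrors that hardware behaviour (illustrative, not the exact lavu code):

    #include <stdint.h>

    static uint32_t half2float_bits(uint16_t h)
    {
        uint32_t sign = (uint32_t)(h & 0x8000) << 16;
        uint32_t mant =  h & 0x3ff;
        int      exp  = (h >> 10) & 0x1f;

        if (exp == 0x1f) {               /* Inf or NaN */
            if (mant)
                mant |= 0x200;           /* force the mantissa MSB, matching hardware */
            return sign | 0x7f800000u | (mant << 13);
        }
        if (exp == 0) {                  /* zero or subnormal */
            if (!mant)
                return sign;
            while (!(mant & 0x400)) {    /* normalize the subnormal */
                mant <<= 1;
                exp--;
            }
            mant &= 0x3ff;
            exp++;
        }
        return sign | ((uint32_t)(exp + 112) << 23) | (mant << 13);
    }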
This avoids triggering overflows in the filters, and avoids stray
test failures in the approximate functions on x86; due to rounding
differences, one implementation might overflow while another one
doesn't.
Signed-off-by: Martin Storsjö <martin@martin.st>
Some muxers, such as GPAC, create files with only one sidx, but two streams
muxed into the same fragments pointed to by this sidx.
Previously, when seeking in such files, we fell back to, for example, using
the sidx associated with the video stream to seek the audio stream, leaving
the seek position in the wrong place.
We can still do this, but we need to take care to compare timestamps
in the same time base.
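In practice that boils down to something like the following (illustrative only): rescale, or let av_compare_ts() do it, before comparing against the other stream's sidx entries.

    #include <stdint.h>
    #include <libavutil/mathematics.h>
    #include <libavutil/rational.h>

    /* Compare a seek target against a sidx timestamp stored in another
     * stream's time base, without losing precision. */
    static int seek_target_is_before(int64_t seek_ts, AVRational seek_tb,
                                     int64_t sidx_ts, AVRational sidx_tb)
    {
        return av_compare_ts(seek_ts, seek_tb, sidx_ts, sidx_tb) <= 0;
    }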
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
They are already synced generically in update_context_from_thread()
in pthread_frame.c.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It is unused since 3575a495f6
(and the error message is dangerous: av_get_pix_fmt_name(format)
returns NULL iff av_pix_fmt_desc_get(format) returns NULL,
and using a NULL string for %s would be UB).
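For reference, the null-safe way to log such a name (illustrative, not code from the tree):

    #include <libavutil/log.h>
    #include <libavutil/pixdesc.h>

    static void log_bad_pix_fmt(void *log_ctx, enum AVPixelFormat fmt)
    {
        const char *name = av_get_pix_fmt_name(fmt);
        /* never hand a possibly-NULL pointer to %s */
        av_log(log_ctx, AV_LOG_ERROR, "Unsupported pixel format %s\n",
               name ? name : "(unknown)");
    }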
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Improves the grepability of the code.
(Furthermore, I hope that no compiler will really call memset
for 28 bytes.)
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It is done later in ff_mpv_frame_start() (and nothing uses
current_picture_ptr between here and where ff_mpv_frame_start() sets it).
(The reason the vsynth*-h263-obmc ref files change is because
the call to ff_find_unused_picture() now happens after the older
pictures have been unreferenced in ff_mpv_frame_start(),
so that their slots in the picture array can be immediately
reused; the obmc code is somehow buggy and changes its output
depending on the earlier contents of the motion_val buffer.)
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>