fc->ref points to an old VVCFrame which is no longer valid after frame_context_setup.
This prevents crashes in decode_nal_units -> ff_vvc_report_frame_finished.
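A minimal sketch of the idea, with stub types standing in for the decoder's
internal structs and a hypothetical function body:

    /* Stub types standing in for the VVC decoder's internals. */
    typedef struct VVCFrame { int poc; } VVCFrame;
    typedef struct VVCFrameContext { VVCFrame *ref; } VVCFrameContext;

    /* Hypothetical sketch: clear the stale pointer during setup so that
     * ff_vvc_report_frame_finished() cannot dereference a VVCFrame that
     * belonged to the previous frame context. */
    static void frame_context_setup(VVCFrameContext *fc)
    {
        fc->ref = NULL;
    }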
Signed-off-by: Frank Plowman <post@frankplowman.com>
The avctx passed to avcodec_string() may have unset coded dimensions, as is the
case when called by av_dump_format() where the streams had all the needed
information at the container level, and as such no frames were decoded
internally.
Signed-off-by: James Almer <jamrial@gmail.com>
Besides simplifying address computations (it saves 432B of .text
in hevcdsp.o alone here), it also fixes undefined behaviour that
occurs if mx or my is 0 (which happens when the filters are unused),
because these led to an array index of -1 in the old code.
This happens in the checkasm-hevc_pel FATE-test.
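A sketch of the pattern being fixed; table contents and names are
illustrative, not the exact hevcdsp code:

    static const int8_t filters[9][4] = {
        {  0,  0,  0,  0 },   /* dummy row so mx/my == 0 is a valid index */
        { -2, 58, 10, -2 },   /* abridged: the real tables have more rows */
    };

    static const int8_t *select_filter(int mx)
    {
        /* The old code indexed filters[mx - 1], which forms the
         * out-of-bounds index -1 when mx == 0 (filter unused) -
         * undefined behaviour even if the pointer is never used. */
        return filters[mx];   /* new code: direct, safe indexing */
    }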
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Reviewed-by: Nuo Mi <nuomi2021@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Otherwise, passing an UNSPECIFIED frame to an MPEG-only filter graph
would trigger insertion of an unnecessary vf_scale filter, which would
perform a memcpy to convert between the two.
This is safe to do because unspecified YUV frames are already
universally assumed to be MPEG range, in particular by swscale.
Currently, this only affects untagged RGB/XYZ/Gray, which get forced to
their corresponding metadata before entering the filter graph. The main
justification for this change, however, is the planned ability to add
automatic promotion of unspecified YUV to MPEG-range YUV.
Notably, this change will never allow accidentally cross-promoting
unspecified to jpeg or to a specific YUV matrix, since that is still
bound by the constraints of YUV range negotiation as set up by
query_formats.
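A hypothetical sketch of the promotion rule (the enum values are the real
libavutil ones; the helper itself is illustrative):

    #include <libavutil/pixfmt.h>

    /* Untagged YUV is universally treated as limited (MPEG) range, so
     * defaulting it during negotiation avoids a needless vf_scale. */
    static enum AVColorRange normalize_yuv_range(enum AVColorRange range)
    {
        return range == AVCOL_RANGE_UNSPECIFIED ? AVCOL_RANGE_MPEG : range;
    }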
Since 7bf1b9b357,
the test produces ordinary \n, yet this is not what the reference
file has used for most of its history, leading to test failures.
Reviewed-by: Martin Storsjö <martin@martin.st>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
When this filter overrides frame properties, the outgoing frames have
a different YUV colorspace than the incoming ones. This requires
signalling the new colorspace on the outlink, and in particular, making
sure it's *not* set to a common ref with the input - otherwise the whole
point of this filter would be defeated.
Untouched fields will continue being passed through, so we don't need to
do anything there.
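A rough excerpt of what that signalling could look like during format
negotiation, assuming the colorspace negotiation API from this series;
treat the exact calls as an approximation:

    /* Hypothetical query_formats excerpt: give the output its own
     * singleton list instead of a common ref with the input, so
     * negotiation cannot collapse both sides to the same value. */
    if (s->colorspace != AVCOL_SPC_UNSPECIFIED) {
        ret = ff_formats_ref(ff_make_formats_list_singleton(s->colorspace),
                             &ctx->outputs[0]->incfg.color_spaces);
        if (ret < 0)
            return ret;
    }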
The pointer currently used when unmapping DXVA2 and D3D11 frames
actually points to an OpenCLDeviceContext.
Reviewed-by: Mark Thompson <sw@jkqxz.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The existing (and upcoming) group types are meant to combine several
streams for presentation, with the result being treated as if it were a
stream itself.
For example, a file could export two stream groups of the same type with one of
them as the "default".
Signed-off-by: James Almer <jamrial@gmail.com>
Previously, we produced output with either \r\n or mixed line endings.
This was undesirable in itself, but also made working with patches affecting
FATE output particularly challenging, especially via the mailing list.
Everything that consumes the SSA/ASS format is line-ending-agnostic,
so \n is selected to simplify git/ML usage in FATE.
Extra \r characters at the end of a packet are dropped. These are always
ignored by the renderer anyway.
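The trimming itself is trivial; a minimal sketch, assuming the payload is a
byte buffer with an explicit size:

    #include <stddef.h>

    /* Drop trailing '\r' bytes from an ASS packet payload; renderers
     * ignore them, so only the reported size needs to shrink. */
    static size_t trim_trailing_cr(const char *data, size_t size)
    {
        while (size > 0 && data[size - 1] == '\r')
            size--;
        return size;
    }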
fbw_channels must be > 0, as the code only runs if cpl_enabled is set, which
requires mode >= AC3_CHMODE_STEREO.
CID 718138 (Uninitialized scalar variable) assumes this assert can be false.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
For a corrupted stream, the value of nalu_len read from the extradata is not
reliable, so we need to perform additional checks.
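The checks might look roughly like this excerpt (bytestream2_get_bytes_left()
is the real helper; the surrounding variables are assumed from context):

    /* Illustrative validation: a NAL unit length parsed from possibly
     * corrupt extradata must be non-zero and must fit in the remaining
     * input, otherwise the parsing loop can stall without progress. */
    if (nalu_len == 0 || nalu_len > bytestream2_get_bytes_left(&gb))
        return AVERROR_INVALIDDATA;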
Fixes: fuzzer timeout
Fixes: 65253/clusterfuzz-testcase-minimized-ffmpeg_BSF_VVC_MP4TOANNEXB_fuzzer-4972412487467008
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
These chunks contain the Content Light Level Information and the
Mastering Display Color Volume information that FFmpeg already supports
as AVFrameSideData. This patch adds support for the png encoder to save
this metadata as the corresponding chunks in the PNG stream.
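On the encoder side the flow is roughly: fetch the side data off the frame
and serialize its two fields into the chunk payload. A sketch using the
public side-data API (the chunk writing itself is elided):

    #include <libavutil/frame.h>
    #include <libavutil/mastering_display_metadata.h>

    const AVFrameSideData *sd =
        av_frame_get_side_data(frame, AV_FRAME_DATA_CONTENT_LIGHT_LEVEL);
    if (sd) {
        const AVContentLightMetadata *clli =
            (const AVContentLightMetadata *)sd->data;
        /* cLLi carries two 32-bit values in units of 0.0001 cd/m^2 */
        uint32_t max_cll  = clli->MaxCLL  * 10000;
        uint32_t max_fall = clli->MaxFALL * 10000;
        /* ...write the cLLi chunk with these two values... */
    }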
Signed-off-by: Leo Izen <leo.izen@gmail.com>
These chunks contain the Content Light Level Information and the
Mastering Display Color Volume information that FFmpeg already supports
as AVFrameSideData. This patch adds support for the png decoder to read
these chunks if present and attach the corresponding side data to the
decoded frame.
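The decoder side is the mirror image; a sketch using the real side-data
allocator, with the parsed chunk values (max_cll, max_fall) assumed:

    #include <libavutil/mastering_display_metadata.h>

    /* Attach CLL side data parsed from a cLLi chunk; the chunk stores
     * values in 0.0001 cd/m^2 units, the side data in cd/m^2. */
    AVContentLightMetadata *clli =
        av_content_light_metadata_create_side_data(frame);
    if (!clli)
        return AVERROR(ENOMEM);
    clli->MaxCLL  = max_cll  / 10000;
    clli->MaxFALL = max_fall / 10000;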
Signed-off-by: Leo Izen <leo.izen@gmail.com>
The function signature for bytestream2_seek is (gb, offset, whence).
Before this patch, the code passed (gb, SEEK_SET, offset), which is
incorrect.
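For reference, the correct call shape (bytestream2_seek() is declared in
libavcodec/bytestream.h):

    /* Wrong: passes SEEK_SET as the offset and the offset as whence. */
    bytestream2_seek(&gb, SEEK_SET, offset);

    /* Right: offset first, then whence, mirroring fseek(). */
    bytestream2_seek(&gb, offset, SEEK_SET);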
Signed-off-by: Leo Izen <leo.izen@gmail.com>
vc1_hwaccel_pixfmt_list_420 is referenced even if
!(CONFIG_WMV3IMAGE_DECODER || CONFIG_VC1IMAGE_DECODER) so move it out
of the #if block.
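The shape of the fix, abridged:

    /* Moved above the #if so it is defined whenever it is referenced. */
    static const enum AVPixelFormat vc1_hwaccel_pixfmt_list_420[] = {
        AV_PIX_FMT_YUV420P,  /* abridged; the real list also has hwaccel formats */
        AV_PIX_FMT_NONE,
    };

    #if CONFIG_WMV3IMAGE_DECODER || CONFIG_VC1IMAGE_DECODER
    /* image-decoder-only code stays guarded */
    #endif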
Signed-off-by: Akihiko Odaki <akihiko.odaki@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The channel designation metadata should not override the number of channels.
Let's warn the user if it is inconsistent, and keep the channel layout
unspecified.
Before the conversion to the channel layout API the code only set the mask, but
never overrode the channel count, so this restores the old behaviour.
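A sketch of the rule using the real channel-layout API, with mask assumed to
be accumulated from the designation metadata:

    #include <libavutil/channel_layout.h>

    /* Only adopt the layout when the designations cover exactly the
     * known channel count; otherwise warn and leave it unspecified. */
    if (av_popcount64(mask) == par->ch_layout.nb_channels) {
        av_channel_layout_from_mask(&par->ch_layout, mask);
    } else {
        av_log(ctx, AV_LOG_WARNING,
               "Channel designations are inconsistent with the channel count\n");
    }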
Signed-off-by: Marton Balint <cus@passwd.hu>
Existing code could have caused wrong channel order signalling or reduced
channel count if a channel designation appeared multiple times. This is
actually an old bug, but the conversion to the new channel layout API made it
visible, because now the code overrides the proper channel count with the one
calculated from the mask.
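A sketch of how a repeated designation collapses the mask, and with it the
popcount-derived channel count (loop and names are hypothetical):

    uint64_t mask = 0;
    for (int i = 0; i < nb_channels; i++) {
        uint64_t bit = 1ULL << designations[i];
        if (mask & bit) {  /* duplicate: OR-ing again would undercount */
            mask = 0;      /* fall back to an unspecified layout */
            break;
        }
        mask |= bit;
    }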
Signed-off-by: Marton Balint <cus@passwd.hu>
The new API requires an extra array member at the very end,
which users of the old API did not allocate.
This disables in-place RDFT transforms and instead
does the transform out of place, copying once. There shouldn't
be a significant loss of speed, as our in-place FFT requires a reorder
which is likely more expensive than the copy in the majority of cases.
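Assuming the new API referred to is av_tx, a sketch of the out-of-place
pattern: the forward RDFT of n real samples writes n/2 + 1 complex values,
i.e. one trailing member more than in-place callers of the old API ever
allocated:

    #include <libavutil/error.h>
    #include <libavutil/mem.h>
    #include <libavutil/tx.h>

    static int rdft_once(const float *src, int n)
    {
        AVTXContext *tx = NULL;
        av_tx_fn tx_fn;
        float scale = 1.0f;
        int ret = av_tx_init(&tx, &tx_fn, AV_TX_FLOAT_RDFT, 0, n, &scale, 0);
        if (ret < 0)
            return ret;

        float          *in  = av_memdup(src, n * sizeof(*src)); /* the one copy */
        AVComplexFloat *out = av_malloc((n / 2 + 1) * sizeof(*out));
        ret = (in && out) ? 0 : AVERROR(ENOMEM);
        if (!ret)
            tx_fn(tx, out, in, sizeof(float));

        av_freep(&in);
        av_freep(&out);
        av_tx_uninit(&tx);
        return ret;
    }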
These inline implementations of AV_COPY64, AV_SWAP64 and AV_ZERO64
are known to clobber the FPU state - which has to be restored
with the 'emms' instruction afterwards.
This was known and signaled with the FF_COPY_SWAP_ZERO_USES_MMX
define, which calling code was apparently supposed to check
in order to call emms_c() after using them. See
0b1972d409,
29c4c0886d and
df215e5758 for history on earlier
fixes in the same area.
However, new code can use these AV_*64() macros without knowing
about the need to call emms_c().
Just get rid of these dangerous inline assembly snippets; this
doesn't make any difference for 64 bit architectures anyway.
Signed-off-by: Martin Storsjö <martin@martin.st>
The previous assumption that DXV needs to be aligned to 16x16 was
erroneous. 4x4 works just as well, and FATE decoder tests pass for all
texture formats.
On the encoder side, we should reject input that isn't 4x4 aligned,
like the HAP encoder does, and stop aligning to 16x16. This both solves
the uninitialized reads causing current FATE tests to fail and produces
smaller encoded outputs.
With regard to correctness, I've checked the decoding path by encoding a
real-world sample with git master, and decoding it with
ffmpeg -i dxt1-master.mov -c:v rawvideo -f framecrc -
The results are exactly the same between master and this patch.
On the encoding side, I've encoded a real-world sample with both master
and this patch, and decoded both versions with
ffmpeg -i dxt1-{master,patch}.mov -c:v rawvideo -f framecrc -
Under this patch, results for both inputs are exactly the same.
In other words, the extra padding gained by 16x16 alignment over 4x4
alignment has no impact on decoded video.
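The encoder-side rejection could be as simple as this sketch, mirroring the
HAP encoder's check:

    /* Reject input that is not 4x4-aligned instead of padding to 16x16. */
    if (avctx->width % 4 || avctx->height % 4) {
        av_log(avctx, AV_LOG_ERROR,
               "Video dimensions %dx%d are not multiples of 4\n",
               avctx->width, avctx->height);
        return AVERROR(EINVAL);
    }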
Signed-off-by: Connor Worley <connorbworley@gmail.com>
Signed-off-by: Martin Storsjö <martin@martin.st>
SDL supports only these three matrices. Actually, it only supports these
three combinations: BT.601+JPEG, BT.601+MPEG, BT.709+MPEG, but we have
no way to restrict the specific *combination* of YUV range and YUV
colorspace with the current filter design.
See-Also: https://trac.ffmpeg.org/ticket/10839
Instead of producing an incorrect conversion result, trying to play a YCgCo
file with ffplay will now simply error out with a "No conversion possible"
error.
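In ffplay terms, the restriction amounts to advertising only the matrices
SDL can take, roughly like this sketch (how the list is handed to buffersink
is left out, as that plumbing is part of this series):

    /* The only matrices SDL can render; everything else must error out
     * rather than be converted incorrectly. */
    static const enum AVColorSpace sdl_supported_color_spaces[] = {
        AVCOL_SPC_BT709,
        AVCOL_SPC_BT470BG,
        AVCOL_SPC_SMPTE170M,
    };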
An oversight in my previous series. This omission slipped under the
radar because fftools/ffmpeg_filter.c did not use these options, instead
preferring to insert an explicit format filter.