For the purpose of merging streams in a stream group, a filtergraph may have to
be created before we know whether it will actually be used. Therefore, allow
filtergraphs to be removed from the scheduler after they have been added.
Signed-off-by: James Almer <jamrial@gmail.com>
Several formats are being designed where more than one independent video or
audio stream within a container is part of what should be a single combined
output. This is the case for HEIF (which defines a grid in which the decoded
output of several separate video streams is placed to generate a single output
image) and IAMF (which defines audio streams whose channels are carried in
separate coded streams).
AVStreamGroup was designed and implemented in libavformat to convey this
information, but the actual merging is left to the caller.
This change allows the FFmpeg CLI to take said information, parse it, and
create filtergraphs to merge the streams, making the combined output usable
automatically further down the pipeline.
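
As a rough illustration (not the code added by this change), the information
libavformat exposes for this can be inspected as follows; only public
AVStreamGroup fields are used, and the comments describe what the CLI builds
from them:

#include <libavformat/avformat.h>
#include <libavutil/log.h>

static void report_stream_groups(const AVFormatContext *s)
{
    for (unsigned i = 0; i < s->nb_stream_groups; i++) {
        const AVStreamGroup *stg = s->stream_groups[i];

        switch (stg->type) {
        case AV_STREAM_GROUP_PARAMS_TILE_GRID:
            /* HEIF-style grid: each member stream is one tile of a single
             * output image, so a video filtergraph is built that places the
             * decoded tiles at their offsets and emits one combined frame */
            av_log(NULL, AV_LOG_INFO, "tile grid, %u member streams\n",
                   stg->nb_streams);
            break;
        case AV_STREAM_GROUP_PARAMS_IAMF_AUDIO_ELEMENT:
            /* IAMF: the channels of one audio element are spread over
             * several coded streams, so an audio filtergraph merges them
             * into a single output stream */
            av_log(NULL, AV_LOG_INFO, "IAMF audio element, %u member streams\n",
                   stg->nb_streams);
            break;
        default:
            break;
        }
    }
}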
Signed-off-by: James Almer <jamrial@gmail.com>
This keeps global and per-frame side data clearly separated, and actually
propagates the former as it comes out of the buffersink filter.
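
As a hedged sketch of the separation meant here (not the actual fftools code),
libavutil's AVFrameSideData helpers can keep a global side-data array that is
independent of any individual frame's frame->side_data:

#include <libavutil/frame.h>

/* Store one entry of stream-global side data in its own array, kept apart
 * from per-frame side data; AV_FRAME_SIDE_DATA_FLAG_UNIQUE replaces any
 * existing entry of the same type instead of accumulating duplicates. */
static int store_global_side_data(AVFrameSideData ***global_sd, int *nb_global_sd,
                                  const AVFrameSideData *src)
{
    return av_frame_side_data_clone(global_sd, nb_global_sd, src,
                                    AV_FRAME_SIDE_DATA_FLAG_UNIQUE);
}

/* The array is later released with av_frame_side_data_free() when the
 * stream is torn down. */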
Signed-off-by: James Almer <jamrial@gmail.com>
Certain parameters, like calculated framerate, are unavailable when connecting
the output of a filtergraph to the input of another.
This fixes command lines like
ffmpeg -lavfi "testsrc=rate=1:duration=1[out0]" -filter_complex "[out0]null[out1]" -map [out1] -y out.png
Signed-off-by: James Almer <jamrial@gmail.com>
Fixes: use of uninitialized memory
Fixes: 427814450/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_MAGICYUV_DEC_fuzzer-646512196065689
Fixes: 445961558/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_UTVIDEO_DEC_fuzzer-5515158672965632
The multi-VLC code will otherwise return uninitialized data. One can argue that
this data should not be used, but on errors it can remain ...
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This would happen if one of the extended transfer characteristics is in
use (currently only AVCOL_TRC_V_LOG).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Panasonic cameras record ProRes RAW in V-Log/V-Gamut internally.
Atomos is a brand of recorders which can record uncompressed
non-debayered RAW over HDMI.
All known cameras output linear RAW over HDMI, so mark the transfer function
as linear in that case.
The parser API doesn't work with packets, only raw data, so it cannot see new
extradata propagated through packet side data. Pass it in another form instead:
replace the main extradata and restart the parser so that the new extradata is
actually parsed.
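
A hedged sketch of the pattern described above, using only public
libavcodec/libavutil calls (the actual change differs in detail and location):

#include <string.h>
#include <libavcodec/avcodec.h>
#include <libavutil/error.h>
#include <libavutil/mem.h>

/* If the packet carries new extradata, install it as the main extradata and
 * restart the parser so the new extradata is actually parsed. */
static int handle_new_extradata(AVCodecParserContext **pc, AVCodecContext *avctx,
                                const AVPacket *pkt)
{
    size_t size;
    const uint8_t *side = av_packet_get_side_data(pkt, AV_PKT_DATA_NEW_EXTRADATA,
                                                  &size);
    uint8_t *extradata;

    if (!side)
        return 0;

    extradata = av_mallocz(size + AV_INPUT_BUFFER_PADDING_SIZE);
    if (!extradata)
        return AVERROR(ENOMEM);
    memcpy(extradata, side, size);

    av_freep(&avctx->extradata);
    avctx->extradata      = extradata;
    avctx->extradata_size = size;

    /* restarting the parser forces it to pick up the new extradata */
    av_parser_close(*pc);
    *pc = av_parser_init(avctx->codec_id);
    return *pc ? 0 : AVERROR(ENOMEM);
}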
Signed-off-by: James Almer <jamrial@gmail.com>
If avctx->ch_layout is unset (as is allowed, and even expected, by the
AV_CODEC_CAP_CHANNEL_CONF flag), the code setting substream masks will fail for
stereo and mono layouts unless a downmix channel layout was requested.
Fix this by deriving the mask from coded values only.
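
For illustration only (this is not the decoder change itself), this is the
situation the fix has to handle: with AV_CODEC_CAP_CHANNEL_CONF the decoder can
be opened without any channel layout, so internal decisions such as the
substream mask cannot rely on avctx->ch_layout. AV_CODEC_ID_TRUEHD is used here
purely as an example codec ID:

#include <libavcodec/avcodec.h>
#include <libavutil/error.h>

static int open_without_layout(AVCodecContext **pctx)
{
    const AVCodec *dec = avcodec_find_decoder(AV_CODEC_ID_TRUEHD);
    AVCodecContext *avctx;
    int ret;

    if (!dec || !(dec->capabilities & AV_CODEC_CAP_CHANNEL_CONF))
        return AVERROR(EINVAL);

    avctx = avcodec_alloc_context3(dec);
    if (!avctx)
        return AVERROR(ENOMEM);

    /* avctx->ch_layout deliberately left unset: with CHANNEL_CONF the
     * decoder is expected to derive the layout from the bitstream, so
     * anything computed at open time must use coded values only */
    ret = avcodec_open2(avctx, dec, NULL);
    if (ret < 0) {
        avcodec_free_context(&avctx);
        return ret;
    }

    *pctx = avctx;
    return 0;
}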
Fixes issue #20764.
Signed-off-by: James Almer <jamrial@gmail.com>
The function name 'file_read' is too generic and conflicts with a function
of the same name in the NuttX kernel. Since NuttX links kernel and userspace
into a single binary, this causes a symbol collision when building FFmpeg tools.
Signed-off-by: caifan3 <caifan3@xiaomi.com>
We have no use for 14-bit pixel formats for now, so remove support for gray14,
which was broken due to the LSB padding issue.
Similarly, YUVA at 10/12 bit was broken for the same reason.