Every plane of each slice has to contain at least two bytes for flags
and the type of prediction used.
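A minimal sketch of the bound check this implies (plain C; names are
illustrative, not the exact decoder code):

    /* Illustrative only: a slice plane that cannot hold its 2-byte
     * (flags + prediction type) header is invalid. */
    static int check_plane_header(unsigned bytes_left_in_plane)
    {
        if (bytes_left_in_plane < 2)
            return -1; /* AVERROR_INVALIDDATA in the real decoder */
        return 0;
    }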
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Fixes: left shift of negative value -768
Fixes: 25574/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_DXTORY_fuzzer-6012596027916288
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: left shift of negative value -256
Fixes: 25460/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_DXTORY_fuzzer-5073252341514240
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: out of array read
Fixes: 25455/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_DXTORY_fuzzer-6327985731534848
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The AV1 decoder is supported on Tiger Lake+ platforms since libmfx 1.34
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
Signed-off-by: Zhong Li <zhongli_dev@126.com>
Fixes: shift exponent -2 is negative
Fixes: 25683/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_MOBICLIP_fuzzer-6434808492982272
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Since release 4.2, FFmpeg fails to detect the correct streams in an RTMP
stream that contains a |RtmpSampleAccess AMF object prior to the
onMetaData AMF object. In the debug log it would show "[flv] Unknown
type |RtmpSampleAccess".
This functionality broke in commit d7638d8dfc, as unknown metadata
packets now result in an opaque data stream, and the
|RtmpSampleAccess packet was an "unknown" metadata packet type.
With this change, RTMP streams are correctly detected when a
|RtmpSampleAccess object precedes the onMetaData object.
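A hedged sketch of the kind of check this implies in the FLV metadata
handler (the helper name is an assumption, not the exact patch):

    #include <string.h>

    /* Recognize the |RtmpSampleAccess AMF object so it can be skipped
     * like other non-stream metadata instead of being turned into an
     * opaque data stream, letting the later onMetaData object drive
     * stream detection as before. */
    static int is_rtmp_sample_access(const char *amf_object_name)
    {
        return !strcmp(amf_object_name, "|RtmpSampleAccess");
    }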
Signed-off-by: Peter van der Spek <p.vanderspek@bluebillywig.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fix the issue: https://github.com/intel/media-driver/issues/317
The root cause is that update_dimensions() is called multiple times
when more than one decoder thread is used, and calling
get_pixel_format() from every decode thread triggers
hwaccel_uninit/hwaccel_init more than once, although only one hwaccel
should be shared by all decode threads.
In the current code there are 3 situations in update_dimensions():
1. First call. No matter whether single-threaded or multi-threaded,
get_pixel_format() should be called after the dimensions have been
set;
2. Dimension change at runtime. The dimensions need to be updated
when macroblocks_base is already allocated, and get_pixel_format()
should be called to recreate the frames according to the updated
dimensions;
3. First call in the other threads of a multi-threaded decode. After
decoder init, the other threads call update_dimensions() for the
first time to allocate macroblocks_base and set the dimensions, but
get_pixel_format() should not be called because the low-level frames
and context have already been created.
With this fix, update_dimensions() is only called as needed.
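A hedged sketch of the idea (field and helper names follow vp8.c but
are assumptions here, not the exact patch):

    /* Re-run update_dimensions()/get_pixel_format() only when the
     * macroblock storage is missing or the dimensions really changed,
     * so the shared hwaccel is not torn down and re-created by every
     * decode thread. */
    static int maybe_update_dimensions(VP8Context *s, int width, int height)
    {
        if (s->macroblocks_base &&
            width  == s->avctx->width &&
            height == s->avctx->height)
            return 0; /* nothing to do for this thread */

        return update_dimensions(s, width, height); /* may call get_pixel_format() */
    }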
Signed-off-by: Wang, Shaofei <shaofei.wang@intel.com>
Reviewed-by: Jun, Zhao <jun.zhao@intel.com>
Reviewed-by: Haihao Xiang <haihao.xiang@intel.com>
Signed-off-by: Ronald S. Bultje <rsbultje@gmail.com>
As it was brought up that the current documentation leaves things
specific to YCbCr only, ICtCp and RGB are now mentioned.
Additionally, the specifications in which these definitions of
narrow and full range are given are now referenced.
This way, the documentation of AVColorRange should match how most
people seem to interpret it at this point, and thus flagging RGB
AVFrames as full range is valid not only according to common sense
but also according to the enum definition.
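For illustration, what the clarified documentation explicitly permits
(a minimal example using the public API):

    #include <libavutil/frame.h>
    #include <libavutil/pixfmt.h>

    /* Tagging an RGB AVFrame as full range is valid per the clarified
     * AVColorRange documentation. */
    static void tag_rgb_full_range(AVFrame *frame)
    {
        frame->format      = AV_PIX_FMT_RGB24;
        frame->color_range = AVCOL_RANGE_JPEG; /* full ("JPEG") range */
    }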
The earlier code would first attempt to allocate two buffers, then
attempt to allocate an AVIOContext using one of the new buffers as
its I/O buffer, and only then check the allocations. On success, a
z_stream that is used in the AVIOContext's read_packet callback is
initialized afterwards.
There are two problems with this: If the allocation of the I/O
buffer fails, avio_alloc_context() will be given a NULL read buffer
with a size > 0. This works right now, but it is fragile. The second
problem is that the z_stream used in the read_packet callback is not
yet functional when avio_alloc_context() is called (and
avio_alloc_context() might already fill the buffer in the future).
This commit fixes both of these problems by reordering the operations.
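A hedged sketch of the reordered setup (the context struct, buffer size
and callback are assumptions for illustration; only the ordering and
the error handling are the point):

    #include <stdint.h>
    #include <string.h>
    #include <zlib.h>
    #include <libavformat/avio.h>
    #include <libavutil/error.h>
    #include <libavutil/mem.h>

    struct gz_ctx { z_stream zstream; }; /* hypothetical opaque for the callback */

    static int open_gzip_io(struct gz_ctx *gz, AVIOContext **pb,
                            int (*read_gzip_packet)(void *opaque,
                                                    uint8_t *buf, int buf_size))
    {
        uint8_t *io_buf;

        /* 1. Make the z_stream usable before anything can call back into it. */
        memset(&gz->zstream, 0, sizeof(gz->zstream));
        if (inflateInit2(&gz->zstream, 15 + 32) != Z_OK)
            return AVERROR_EXTERNAL;

        /* 2. Allocate the I/O buffer and check it before handing it on. */
        io_buf = av_malloc(4096);
        if (!io_buf) {
            inflateEnd(&gz->zstream);
            return AVERROR(ENOMEM);
        }

        /* 3. Only now create the AVIOContext; it never sees a NULL buffer. */
        *pb = avio_alloc_context(io_buf, 4096, 0, gz,
                                 read_gzip_packet, NULL, NULL);
        if (!*pb) {
            av_free(io_buf);
            inflateEnd(&gz->zstream);
            return AVERROR(ENOMEM);
        }
        return 0;
    }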
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
This causes some error because the ADPCM predictors aren't known,
but the difference is negligible and not audible.
Signed-off-by: Zane van Iperen <zane@zanevaniperen.com>
If the average bit rate cannot be calculated, such as in the case
of streamed fragmented mp4, utilize various available parameters
in priority order.
Tests are updated where the output of the esds, btrt or ISML manifest
boxes changes.
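A hedged sketch of the selection logic (the candidate sources and their
order are illustrative, not the exact patch):

    #include <stdint.h>

    /* Take the first usable figure in priority order when the average
     * cannot be computed from the written samples. */
    static int64_t pick_bit_rate(int64_t computed_avg,
                                 int64_t codecpar_bit_rate,
                                 int64_t container_hint)
    {
        if (computed_avg > 0)
            return computed_avg;      /* derived from the actual track data */
        if (codecpar_bit_rate > 0)
            return codecpar_bit_rate; /* value signalled by the encoder */
        return container_hint;        /* last resort; may be 0 (unknown) */
    }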
This is utilized by various media ingests to figure out the bit
rate of the content being pushed towards them, so write it for
video, audio and subtitle tracks in case at least one nonzero value
is available. It is only mentioned for timed metadata sample
descriptions in QTFF, so limit it to ISOBMFF (MODE_MP4) mode.
Updates the FATE tests whose results change due to the 20 extra
bytes being written per track.
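A hedged sketch of such a 20-byte box writer (the real muxer derives
the three fields itself; ffio_wfourcc() is an internal libavformat
helper):

    #include <stdint.h>
    #include <libavformat/avio.h>
    #include "avio_internal.h" /* ffio_wfourcc(), internal to libavformat */

    /* The 'btrt' box is 20 bytes per track:
     * 8-byte box header + bufferSizeDB + maxBitrate + avgBitrate. */
    static void write_btrt_box(AVIOContext *pb, uint32_t buffer_size_db,
                               uint32_t max_bitrate, uint32_t avg_bitrate)
    {
        avio_wb32(pb, 20);             /* box size */
        ffio_wfourcc(pb, "btrt");
        avio_wb32(pb, buffer_size_db); /* decoding buffer size, bytes */
        avio_wb32(pb, max_bitrate);    /* bits per second */
        avio_wb32(pb, avg_bitrate);    /* bits per second */
    }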
In some cases (for example, super resolution), the DNN model changes
the frame size, which impacts the filter behavior, so the filter needs
to know the output frame size at the very beginning.
Currently, the filter reuses DNNModule.execute_model to query the
output frame size. That is not clear from an interface perspective, so
add a new explicit interface DNNModel.get_output for such queries.
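A hedged sketch of what such a query hook can look like (the exact
members and return type in dnn_interface.h may differ):

    /* Sketch only, not the exact dnn_interface.h declaration. */
    typedef struct DNNModel {
        void *model; /* backend-specific handle */
        /* query the output size for a given input size without running
         * a frame, so the filter knows the out frame size up front */
        int (*get_output)(void *model, const char *input_name,
                          int input_width, int input_height,
                          const char *output_name,
                          int *output_width, int *output_height);
        /* ... other members ... */
    } DNNModel;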
Suppose we have a detect filter and a classify filter in the future:
the detect filter generates some bounding boxes (BBox) as AVFrame side
data, and the classify filter executes the DNN model for each BBox. For
each BBox, we need to crop the AVFrame, copy data to the DNN model
input and run the model. With the current interface we would have to
save the in_frame at DNNModel.set_input and use it at
DNNModule.execute_model; such saving is not feasible once we support
async execute_model.
This patch passes the in_frame as an execute_model parameter, so all
the information for one inference is put together within the same
function call. It also makes it easy to support async BBox inference.
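An illustrative contrast of the two interface shapes (all names here
are hypothetical, not the real dnn_interface.h):

    #include <stdint.h>

    typedef struct AVFrame  AVFrame;  /* libavutil frame */
    typedef struct DNNData  DNNData;  /* DNN module's tensor description */
    typedef struct DNNModel DNNModel;

    /* Before: the module has to remember the frame between two calls,
     * which breaks down once execute_model becomes asynchronous. */
    int dnn_set_input(DNNModel *model, const AVFrame *in_frame,
                      const char *input_name);
    int dnn_execute(DNNModel *model, DNNData *outputs, uint32_t nb_outputs);

    /* After: everything one inference needs travels in a single call,
     * which also suits per-BBox async execution. */
    int dnn_execute_frame(DNNModel *model, const AVFrame *in_frame,
                          DNNData *outputs, uint32_t nb_outputs);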
Currently, every filter needs to provide code to transfer data from
AVFrame* to the model input (DNNData*), and also from the model output
(DNNData*) to AVFrame*. Such transfers can instead be implemented
within the DNN module, so the filter can focus on its own business
logic.
The DNN module also exports the function pointers pre_proc and
post_proc in struct DNNModel, in case a filter has special logic for
transferring data between AVFrame* and DNNData*. The default
implementation within the DNN module is used if the filter does not
set pre_proc/post_proc.
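A hedged sketch of the two hooks (signatures are assumptions; the real
declarations live in dnn_interface.h):

    typedef struct AVFrame         AVFrame;
    typedef struct AVFilterContext AVFilterContext;
    typedef struct DNNData         DNNData;

    /* A filter may install its own conversions; leaving them NULL
     * selects the DNN module's generic AVFrame <-> DNNData transfer. */
    typedef int (*DNNPreProc)(AVFrame *frame_in, DNNData *model_input,
                              AVFilterContext *filter_ctx);
    typedef int (*DNNPostProc)(DNNData *model_output, AVFrame *frame_out,
                               AVFilterContext *filter_ctx);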
Fixes: signed integer overflow: 2 * 2132811776 cannot be represented in type 'int'
Fixes: 25722/clusterfuzz-testcase-minimized-ffmpeg_IO_DEMUXER_fuzzer-6221704077246464
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: timeout (243sec -> a few ms)
Fixes: 25716/clusterfuzz-testcase-minimized-ffmpeg_IO_DEMUXER_fuzzer-5764093666131968
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: signed integer overflow: 6000 * -2147483648 cannot be represented in type 'int'
Fixes: 25700/clusterfuzz-testcase-minimized-ffmpeg_DEMUXER_fuzzer-6578316302352384
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
get_content_url() allocates two buffers for temporary strings and when
one of them couldn't be allocated, it simply returns, although one of
the two allocations could have succeeded and would leak in this
scenario. This can be fixed by avoiding one of the temporary buffers.
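A hedged sketch of the leak pattern being removed (buffer names and
sizes are illustrative):

    #include <libavutil/error.h>
    #include <libavutil/mem.h>

    /* Leaky pattern: if only one of the two allocations fails, the
     * other one is never freed. */
    static int leaky(void)
    {
        char *tmp_str   = av_mallocz(4096);
        char *tmp_str_2 = av_mallocz(4096);
        if (!tmp_str || !tmp_str_2)
            return AVERROR(ENOMEM); /* the successful allocation leaks */
        /* ... */
        av_free(tmp_str);
        av_free(tmp_str_2);
        return 0;
    }

Dropping one of the temporary buffers removes the problematic branch
entirely.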
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
1. Perform the necessary reindentations after the last few commits.
2. Adapt switches to the ordinary indentation style.
3. Now that the effective lifetimes of the variables containing
the freshly allocated strings used when parsing the representation
are disjoint, the variables can be replaced by a single variable.
Doing so makes it clearer that these are throwaway variables, hence
it has been done.
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
This makes it possible to reduce the level of indentation for parsing
the supported representations (audio, video and subtitles). It also
avoids some allocations and frees for unsupported representations.
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
This commit removes two always-true checks as well as a dead default
case of a switch. The check when parsing manifests is always true,
because we now jump to the cleanup code in case the format of the
representation is unknown. The default case of the switch is dead,
because the type of the representation is already checked at the
beginning of parse_manifest_representation(). The check when reading
the header is always true, because we have already errored out if an
error happened before.
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Up until now, the DASH demuxer used av_dynarray_add() to add
audio/video/subtitles representations to arrays. Yet av_dynarray_add()
frees the array upon failure, leading to leaks of its elements;
furthermore, the element to be added leaks, too.
This has been fixed by using av_dynarray_add_nofree() instead and by
freeing the elements that could not be added to the list. Furthermore,
errors from this are now checked and returned.
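A hedged sketch of the new pattern (the representation type and the
cleanup helper belong to the demuxer and are named here as
assumptions):

    #include <libavutil/mem.h>

    struct representation;                                 /* demuxer's own type */
    void free_representation(struct representation *rep);  /* assumed cleanup helper */

    /* On failure the array itself survives (unlike with av_dynarray_add())
     * and only the element that could not be appended has to be freed;
     * the error is propagated to the caller. */
    static int add_rep(struct representation ***reps, int *n_reps,
                       struct representation *rep)
    {
        int ret = av_dynarray_add_nofree(reps, n_reps, rep);
        if (ret < 0)
            free_representation(rep);
        return ret;
    }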
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
These languages are normally freed after having been added as metadata
to their respective AVStreams. Yet if one never reaches said point, they
leak. This can happen as a result of an error when reading the header or
as a result of refreshing the manifests.
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The DASH demuxer currently extracts several strings at once from an xml
document before processing them one by one; these strings are allocated,
stored in local variables and need to be freed by the demuxer itself.
So if an error happens when processing one of them, all strings need to
be freed before returning. This has simply not been done, leading to
leaks.
A simple fix would be to add the necessary code for freeing; yet there is
a better solution: Avoid having several strings at the same time by
extracting a string, processing it and immediately freeing it. That way
one only has to free at most one string on error.
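A hedged sketch of the new shape using the libxml2 calls the demuxer
relies on (the attribute name is only an example):

    #include <libxml/tree.h>
    #include <libxml/xmlmemory.h>

    /* Keep at most one extracted string alive at a time, so any error
     * path has at most one string to free. */
    static int parse_one_attribute(xmlNodePtr node)
    {
        xmlChar *val = xmlGetProp(node, (const xmlChar *)"bandwidth");
        int ret = 0;

        if (val) {
            /* ... process val, setting ret on failure ... */
            xmlFree(val); /* freed immediately after use */
        }
        return ret;
    }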
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
If parsing a representation fails, it is not added to the list of
representations and is therefore not freed in dash_close(); as a
result, it leaked in most error paths in
parse_manifest_representation() (some error paths had (incomplete)
code for freeing). This commit fixes this by freeing the
representation in that case.
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>