Do not return the return value of the last enc_send_to_dst() call, as
this would treat the last call differently from the earlier calls;
furthermore, sch_enc_send() is explicitly documented to always return 0
on success.
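A minimal sketch of the intended pattern (hypothetical names, not the
actual ffmpeg_sched.c code): every destination gets identical treatment
and success is always reported as 0.
static int send_to_all_dst(int (*send_one)(void *ctx, unsigned dst),
                           void *ctx, unsigned nb_dst)
{
    for (unsigned i = 0; i < nb_dst; i++) {
        int ret = send_one(ctx, i);
        if (ret < 0)
            return ret; /* identical error handling for every destination */
    }
    return 0;           /* do not propagate the last call's return value */
}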
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
There is no need to free the already-added items; they will be freed
alongside the codec context. There is also little point in an error
message, as the only reason this can fail is malloc failure.
MATCH_PER_STREAM_OPT iterates over all options of a given OptionDef
and tests whether they apply to the current stream; if so, ost->apad is
set to the matching value, otherwise, the code errors out. If no error
happens, ost->apad is av_strdup'ed in order to take ownership of this
pointer.
But this means that setting ost->apad directly was premature, as it
leads to double-frees when an error happens later on.
This can simply be reproduced with
ffmpeg -filter_complex anullsrc -apad bar -apad:n baz -f null -
This is a regression since 83ace80bfd.
Fix this by using a temporary variable instead of directly setting
ost->apad. Also, only strdup the string if it is actually non-NULL.
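Roughly, the fix follows this pattern (a hedged sketch assuming the
surrounding fftools context, not the verbatim change):
const char *apad = NULL;
MATCH_PER_STREAM_OPT(apad, str, apad, oc, st); /* may error out itself */
if (apad) {
    ost->apad = av_strdup(apad);
    if (!ost->apad)
        return AVERROR(ENOMEM);
}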
Reviewed-by: Marth64 <marth64@proxyid.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Since e0da916b8f, the ffmpeg utility has
held multiple frames output by the decoder in internal queues without
telling the decoder that it is going to do so. When the decoder has a
fixed-size pool of frames (common in some hardware APIs where the output
frames must be stored as an array texture) this could lead to the pool
being exhausted and the decoder getting stuck. Fix this by telling the
decoder to allocate additional frames according to the queue size.
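As a hedged sketch (variable names are illustrative; queue_size stands
in for however many frames ffmpeg's queues may hold outside the
decoder):
/* before avcodec_open2(): ask the decoder to allocate extra entries in
 * its fixed-size hardware frame pool to cover externally queued frames */
dec_ctx->extra_hw_frames = queue_size;
ret = avcodec_open2(dec_ctx, codec, &opts);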
OptionDef.u is an offset (i.e. its off member) only if OPT_FLAG_OFFSET
is set. Otherwise, the pointer arithmetic can be undefined behaviour.
UBSan warns about this (on 32bit arches):
src/fftools/ffmpeg_opt.c:102:15: runtime error: pointer index expression with base 0xffa4db10 overflowed to 0x56059a50
This commit fixes this by checking for OPT_FLAG_OFFSET first.
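A hedged sketch of the safe pattern (assuming an OptionsContext loop of
the kind used in fftools, not the verbatim change):
for (const OptionDef *po = options; po->name; po++) {
    void *dst;
    if (!(po->flags & OPT_FLAG_OFFSET))
        continue;                        /* u.off is only meaningful here */
    dst = (uint8_t *)optctx + po->u.off; /* arithmetic is now well-defined */
    /* ... */
}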
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This allows using WRAPPED_AVFRAME encoders with loopback decoders in
order to connect multiple filtergraphs together.
Clear the flag in muxers, since lavf does not need it for anything and
it would change the results of framecrc FATE tests.
Encoder timebase is equal to the frame timebase, so it does not need
to be passed separately.
Also, rename in_picture to frame, which is shorter and more accurate:
it always contains a frame, never a field.
These functions used to be passed directly to pthread_create(), which
required them to return void*. This is no longer the case, so they can
return a plain int.
The current call stack looks like this:
* ifilter_bind_ist() (filter) calls ist_filter_add() (demuxer);
* ist_filter_add() opens the decoder, and then calls
dec_add_filter() (decoder);
* dec_add_filter() calls ifilter_parameters_from_dec() (i.e. back into
the filtering code) in order to give post-avcodec_open2() parameters
to the filter.
This is unnecessarily complicated. Pass the parameters as follows
instead:
* dec_init() (which opens the decoder) returns post-avcodec_open2()
parameters to its caller (i.e. the demuxer) in a parameter-only
AVFrame (sketched below);
* the demuxer passes these parameters to the filter in
InputFilterOptions, together with other filter options.
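A hedged sketch of what such a parameter-only AVFrame might carry
(illustrative subset of fields; no data buffers are attached):
AVFrame *par = av_frame_alloc();
if (!par)
    return AVERROR(ENOMEM);
par->format              = dec_ctx->pix_fmt;
par->width               = dec_ctx->width;
par->height              = dec_ctx->height;
par->sample_aspect_ratio = dec_ctx->sample_aspect_ratio;
par->colorspace          = dec_ctx->colorspace;
par->color_range         = dec_ctx->color_range;
par->time_base           = dec_ctx->pkt_timebase;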
The first of these binds inputs of complex filtergraphs to demuxer
streams (with a misleading comment claiming it *creates* complex
filtergraphs).
The second ensures that all filtergraph outputs are connected to an
encoder.
Merge them into a single function, which simplifies the ffmpeg_filter
API, is shorter, and will also be useful in following commits.
Also, rename misleadingly-named init_input_filter() to
fg_complex_bind_input().
Rename dec_open() to dec_init(), as it is more descriptive of its new
purpose.
Will be useful in following commits, which will add a new path for
opening decoders.
Treat it analogously to stream parameters like format/dimensions/etc.
This is functionally different from the previous code in two ways:
* for non-CFR video, the frame timebase (set by the decoder) is used
rather than the demuxer timebase
* for sub2video, AV_TIME_BASE_Q is used, which is hardcoded by the
subtitle decoding API
These changes should avoid unnecessary and potentially lossy timestamp
conversions from decoder timebase into the demuxer one.
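Conceptually, the timebase now comes from (a hedged sketch; CFR
handling and other details omitted):
AVRational tb;
if (dec_ctx->codec_type == AVMEDIA_TYPE_SUBTITLE)
    tb = AV_TIME_BASE_Q;   /* sub2video: hardcoded by the subtitle API */
else
    tb = frame->time_base; /* the timebase set by the decoder */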
This changes the timebases used in sub2video tests.
Apparently it can happen that avfilter_graph_request_oldest() returns
EAGAIN, yet av_buffersrc_get_nb_failed_requests() returns 0 for every
input.
This works around #10795, though the root issue is most likely in the
scale2ref filter.
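One possible shape of such a workaround, as a hedged sketch
(hypothetical helper, to be called after avfilter_graph_request_oldest()
returned EAGAIN; not the actual ffmpeg_filter.c change):
#include <libavfilter/buffersrc.h>
/* Returns the index of the buffersrc input to feed next, or a negative
 * AVERROR code. */
static int choose_input_to_feed(AVFilterContext **srcs, unsigned nb_srcs)
{
    for (unsigned i = 0; i < nb_srcs; i++)
        if (av_buffersrc_get_nb_failed_requests(srcs[i]) > 0)
            return (int)i; /* the usual case: feed the requested input */
    /* the case described above: EAGAIN, yet no input reports a failed
     * request; fall back to the first input so the graph does not stall */
    return nb_srcs ? 0 : AVERROR(EAGAIN);
}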
Components and pieces are side-data-specific fields, and there is a
variable number of them.
They also need to be identified in some form, so print a type too.
Reviewed-by: Stefano Sabatini <stefasab@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
It is only used by ffprobe (once) and ffplay (twice);
inlining it avoids unnecessarily including it in ffmpeg.
Reviewed-by: Stefano Sabatini <stefasab@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Do not construct the name manually from input file/stream indices.
This is a step towards avoiding the assumption that filtergraph inputs
are always fed by demuxers.
The computation is based on demuxer properties, so that is the more
appropriate place for it. Filter code just receives the desired
start time/duration.
The filename is freed with the OptionsContext, and therefore there
will be a use-after-free when reporting the filename in
print_stream_maps(). So create a copy of the string.
This is a regression since 8aed3911fc.
fate-lavf-mkv_attachment exhibits it (and reports a random nonsense
filename here), but this does not make the test fail (not even with
valgrind; only with ASAN, as it aborts on use-after-free).
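A hedged sketch of the fix pattern (field and variable names are
illustrative, not the actual ones):
/* o->filename is owned by the OptionsContext and freed with it, so keep
 * an independent copy for later use in print_stream_maps() */
mux->filename = av_strdup(o->filename);
if (!mux->filename)
    return AVERROR(ENOMEM);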
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This makes the option definitions robust against new fields being
added before AVOption.unit, which will be useful in following commits.
The majority of the patch was generated by the following Coccinelle
script:
@@
typedef AVOption;
identifier arr_name;
initializer list il;
initializer list[8] il1;
expression tail;
@@
AVOption arr_name[] = { il, { il1,
- tail
+ .unit = tail
}, ... };
with some manual changes, as the script:
* has trouble with options defined inside macros
* sometimes does not handle options under an #else branch
* sometimes swallows whitespace
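For illustration, a typical (hypothetical) option definition changes
along these lines:
/* before: unit passed positionally as the trailing field */
{ "mode", "set mode", OFFSET(mode), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, FLAGS, "mode" },
/* after: the same value assigned via a designated initializer */
{ "mode", "set mode", OFFSET(mode), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, FLAGS, .unit = "mode" },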
SDL supports only these three matrices. Actually, it only supports these
three combinations: BT.601+JPEG, BT.601+MPEG, BT.709+MPEG, but we have
no way to restrict the specific *combination* of YUV range and YUV
colorspace with the current filter design.
See-Also: https://trac.ffmpeg.org/ticket/10839
Instead of an incorrect conversion result, trying to play a YCgCo file
with ffplay will simply error out with a "No conversion possible" error.
This commit lets ffplay properly propagate YUV metadata into the
filter graph, avoiding issues such as accidentally passing YCgCo into a
filter that cannot support it. It also fixes an error from buffersrc
related to this missing metadata (present since commit 2d555dc82d).
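A hedged sketch of propagating that metadata to the buffer source
(simplified, using the buffersrc parameters API; not the verbatim
ffplay change):
AVBufferSrcParameters *par = av_buffersrc_parameters_alloc();
if (!par)
    return AVERROR(ENOMEM);
par->format      = frame->format;
par->width       = frame->width;
par->height      = frame->height;
par->color_space = frame->colorspace;  /* previously not passed along */
par->color_range = frame->color_range;
ret = av_buffersrc_parameters_set(filt_src, par);
av_freep(&par);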
See-Also: https://trac.ffmpeg.org/ticket/10839
Previously, the demuxer would register the decoder with the scheduler,
using InputStream as the opaque pointer, and pass the scheduling index
to the decoder.
Now the registration is done by the decoder itself, using DecoderPriv
as the opaque pointer, and the scheduling index is returned to the
demuxer from dec_open().
decoder_thread() then no longer needs to be accessed from outside of
ffmpeg_dec and can be made static.
It is done based on demuxer information, so that is the more appropriate
place for this code.
This is a step towards decoupling Decoder and InputStream.
* as this decision is based on demuxing information, move it from the
decoder to the demuxer
* as the issue being addressed is latency added by frame threading, we
only need to disable frame threading, not all threading (see the
sketch below)
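A hedged sketch of the decoder-side effect (simplified; low_latency is
an illustrative name for whatever the demuxer passes to decoder setup):
/* low-latency path: keep slice threading, drop only frame threading,
 * which adds a delay of one frame per thread */
if (low_latency)
    dec_ctx->thread_type &= ~FF_THREAD_FRAME;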