ffmpeg CLI pixel format selection for filtering currently special-cases
MJPEG encoding, restricting the list of supported pixel formats
depending on the value of the -strict option. To obtain that value, it
applies the option from the options dict onto the encoder context,
which is a highly invasive action even now and would become a race once
encoding is moved to its own thread.
The ugliness of this code can be much reduced by moving the special
handling of MJPEG into ofilter_bind_ost(), which is called from encoder
init and is thus synchronized with it. There is also no need to write
anything to the encoder context, we can evaluate the option into our
stack variable.
There is also no need to access AVCodec at all during pixel format
selection, as the pixel formats array is already stored in
OutputFilterPriv.
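A minimal sketch of evaluating the option without writing to the
encoder context (enc_ctx and strict_str are illustrative names, not the
exact fftools code):

    const AVOption *o = av_opt_find(enc_ctx, "strict", NULL, 0, 0);
    int strict = 0;
    /* evaluate the user-supplied string into a local variable;
       nothing is written into enc_ctx itself */
    if (o && strict_str)
        av_opt_eval_int(enc_ctx, o, strict_str, &strict);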
When -pix_fmt designates a BE/LE pixel format, it gets translated into
the native one by av_get_pix_fmt(). This may not always be the best
choice, as the encoder might only support one endianness. In such a
case, explicitly choose the endianness supported by the encoder.
While this is currently redundant with choose_pixel_fmt() in
ffmpeg_filter.c, the latter code will be deprecated in following commits.
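A sketch of the endianness fallback, assuming the encoder's supported
formats are available as the usual AV_PIX_FMT_NONE-terminated list
(pix_fmt_in_list() is an illustrative helper):

    static int pix_fmt_in_list(const enum AVPixelFormat *list,
                               enum AVPixelFormat fmt)
    {
        for (; list && *list != AV_PIX_FMT_NONE; list++)
            if (*list == fmt)
                return 1;
        return 0;
    }

    enum AVPixelFormat fmt = av_get_pix_fmt(name); /* native endianness */
    if (!pix_fmt_in_list(pix_fmts, fmt)) {
        enum AVPixelFormat swapped = av_pix_fmt_swap_endianness(fmt);
        if (pix_fmt_in_list(pix_fmts, swapped))
            fmt = swapped; /* encoder only supports the other endianness */
    }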
frame is always != NULL for audio and video here
(this is checked by an assert, and the frame is already dereferenced
before it reaches this code).
Fixes Coverity issue #1538858.
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
map_func is supposed to be an array of const pointer to function
returning int, not an array of pointer to function returning const int.
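In C declarator terms (the function signature is illustrative):

    /* intended: array of const pointers to function returning int */
    int (*const map_func[2])(void *ctx);

    /* the erroneous declaration: array of pointers to function
       returning const int */
    /* const int (*map_func[2])(void *ctx); */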
Reported-by: Martin Storsjö
Read the timebase from FrameData rather than the input stream. This
should fix #10393 and generally be more reliable.
Replace the use of '-1' to indicate demuxing timebase with the string
'demux'. Also allow requesting the filter timebase with
'-enc_time_base filter'.
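Usage of the new values (illustrative):

    -enc_time_base demux    use the demuxing timebase (replaces '-1')
    -enc_time_base filter   use the filter output timebase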
It now contains data from multiple sources, so group those items that
always come from the decoder. Also, initialize them to invalid values,
so that frames that did not originate from a decoder can be
distinguished.
This is possible now that enc_open() is always called with a non-NULL
frame for audio/video.
Previously the code would directly reach into the buffersink, which is a
layering violation.
When no frames were passed from a filtergraph to an encoder, but the
filtergraph is configured (i.e. has output parameters), encoder flush
code will use those parameters to initialize the encoder in a last-ditch
effort to produce some useful output.
Rework this process so that it is triggered by the filtergraph, which
now sends a dummy frame with parameters, but no data, to the encoder,
rather than the encoder reaching backwards into the filter.
This approach is more in line with the natural data flow from filters to
encoders and will allow reducing encoder-filter interactions in
following commits.
This code is tested by fate-adpcm-ima-cunning-trunc-t2-track1, which (as
confirmed by Zane) is supposed to produce empty output.
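A sketch of constructing such a parameters-only frame from the
buffersink (field selection is illustrative, not the exact fftools
code):

    AVFrame *frame = av_frame_alloc();
    if (!frame)
        return AVERROR(ENOMEM);
    /* carry stream parameters only - no data buffers are attached,
       which is what marks the frame as a dummy for the encoder */
    frame->format              = av_buffersink_get_format(sink);
    frame->width               = av_buffersink_get_w(sink);
    frame->height              = av_buffersink_get_h(sink);
    frame->sample_aspect_ratio = av_buffersink_get_sample_aspect_ratio(sink);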
This line was added in c30a4489b4 along with
AVStream.sample_aspect_ratio. However, configuring SAR for video
encoding is now done after this code (specifically in enc_open(), which
is called when the first video frame to be encoded is obtained), so this
line cannot have any meaningful effect.
Replace duplicated(!) and broken* custom string parsing with
av_dict_parse_string(). Return error codes instead of aborting.
* e.g. it treats NULL returned from av_get_token() as "separator not
found", when in fact av_get_token() only returns NULL on memory
allocation failure
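A sketch of the replacement (key/value and pair separators are
illustrative):

    AVDictionary *dict = NULL;
    int ret = av_dict_parse_string(&dict, str, "=", ":", 0);
    if (ret < 0) {
        /* return the error code instead of aborting */
        av_dict_free(&dict);
        return ret;
    }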
EOF from sq_receive() means no packets will ever be output by the sync
queue. Since the muxing sync queue is always used by all interleaved
(i.e. non-attachment) streams, this means no further packets can make
it to the muxer and we can terminate muxing now.
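A sketch of the termination path, assuming the fftools sync-queue
helpers (SQPKT() wraps an AVPacket for the queue; exact names are from
memory and may differ):

    ret = sq_receive(sq, -1, SQPKT(pkt));
    if (ret == AVERROR_EOF) {
        /* the sync queue will never output another packet; since all
           interleaved streams pass through it, terminate muxing now */
        return AVERROR_EOF;
    }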
Also fix a couple of possible overflows while at it.
Fixes the negative initial timestamps in ticket #10358.
Signed-off-by: Marton Balint <cus@passwd.hu>
The function name frame_queue_destory appears to be unintended, as
"destory" is not really a word. Changing it to frame_queue_destroy
makes more sense.
Signed-off-by: QiTong Li <liqitong@163.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
This is only a preparatory step to a fully threaded architecture and
does not yet make decoding truly parallel - the main thread will
currently submit a packet and wait until it has been fully processed by
the decoding thread before moving on. Decoder behavior as observed by
the rest of the program should remain unchanged. That will change in
future commits after encoders and filters are moved to threads and a
thread-aware scheduler is added.
It is only used for flushing the subtitle decoder, so allocate a
dedicated packet for that.
Keep Decoder.pkt unused for now, it will be repurposed in future
commits.
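A minimal sketch of the dedicated flush packet (variable names are
illustrative):

    AVPacket *flush_pkt = av_packet_alloc();
    if (!flush_pkt)
        return AVERROR(ENOMEM);
    /* an empty packet (data == NULL, size == 0) flushes the
       subtitle decoder */
    ret = avcodec_decode_subtitle2(dec_ctx, &subtitle, &got_output,
                                   flush_pkt);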
Make the function process just one input stream at a time and save an
indentation level. Also rename it to ist_add() to be consistent with an
analogous function in ffmpeg_mux_init.
Currently of_output_packet() reuses the input packet, which requires
its callers to submit blank packets even on EOF; this makes the code
more complex.
It is set by the muxing code, which will not be synchronized with
encoding code after upcoming threading changes. Use an encoder-private
variable instead.
Packets submitted to the muxer now have their timebase attached to them,
so the muxer can do conversion to muxing timebase and avoid exposing it
to callers.
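With AVPacket.time_base attached, the muxer-side conversion reduces to
a sketch like the following (stream/timebase names are illustrative):

    /* rescale from the timebase the packet was produced in to the
       muxing timebase, which the sender never needs to know */
    av_packet_rescale_ts(pkt, pkt->time_base, st->time_base);
    pkt->time_base = st->time_base;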
There is no reason to postpone it until opening the encoder. Also, abort
when the input stream is unknown, rather than disregard an explicit
request from the user.
The code will currently add a small offset to avoid exact midpoints, but
this can cause inexact results when a float timestamp is exactly
representable as an integer.
Fixes an off-by-one error in the first frame duration in multiple FATE
tests.
Current code marks the output stream as finished and waits for a flush
packet, but that is both unnecessary and suspect, as in theory nothing
should be sent to a finished stream - not even flush packets.
Make all relevant state per-filtergraph input, rather than per-input
stream. Refactor the code to make it work and avoid leaking memory when
a single subtitle stream is sent to multiple filters.
Set them in ifilter_parameters_from_dec(), similarly to audio/video
streams. This reduces the extent to which sub2video filters need to be
treated specially.
This function should not take an InputStream, as it only uses it to get
the InputFile and the timebase. Pass those directly instead and avoid
confusion over dealing with multiple InputStreams.
This queue should be associated with a specific filtergraph input - if
a subtitle stream is sent to multiple filters then each should have its
own queue.
This code is a sub2video analogue of ifilter_send_frame(), so it
properly belongs to the filtering code.
Note that using sub2video with more than one target for a given input
subtitle stream is currently broken and this commit does not change
that. It will be addressed in following commits.
When the filtergraph has no inputs, it can be configured immediately
when all its outputs are bound to output streams. This will simplify
treating some corner cases.
This way the list of filtergraph inputs/outputs is always known after
FilterGraph creation. This will allow treating simple and complex
filtergraphs in a more uniform manner.
Currently NULL would be passed for simple filtergraphs, which would
make the filter code extract the graph description from the output
stream when needed. This is unnecessarily convoluted.
A non-limiting stream could mistakenly end up being the queue head,
which would then produce incorrect synchronization, seen e.g. in
fate-matroska-flac-extradata-update for certain numbers of frame
threads (e.g. 5).
Found-By: James Almer
Checking whether the user requested an unsupported conversion between
text and bitmap subtitles can be done immediately when creating the
output stream.
This function is entangled with encoder setup, so it is more encoding
code rather than ffmpeg_hw code. This will allow making more encoder
state private in the future.
This function is entangled with decoder setup, so it is more decoding
code rather than ffmpeg_hw code. This will allow making more decoder
state private in the future.
As the comment in the code mentions, EAGAIN is not an expected value here
because we call avcodec_receive_frame() until all frames have been returned.
avcodec_send_packet() returning EAGAIN means a packet is still buffered, which
hints that the underlying decoder is buggy and not fetching packets as it
should.
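In code terms, the check amounts to (a simplified sketch):

    ret = avcodec_send_packet(dec_ctx, pkt);
    if (ret == AVERROR(EAGAIN)) {
        /* should be unreachable: avcodec_receive_frame() was called
           until it had no more frames to return, so a decoder that
           still refuses input is buggy */
        ret = AVERROR_BUG;
    }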
An example of this behavior was in the libdav1d wrapper before f209614290,
where feeding it split frames (or individual OBUs) would result in the CLI
eventually printing the confusing "Error submitting packet to decoder: Resource
temporarily unavailable" error message and then continuing until EOF
without returning any new frames.
Signed-off-by: James Almer <jamrial@gmail.com>
It currently emulates the long-removed
avcodec_decode_audio4/avcodec_decode_video2 APIs, which obfuscates the
actual decoding flow. Restructure the decoding calls so that they
naturally follow the new avcodec_send_packet()/avcodec_receive_frame()
design.
This is not only significantly easier to read, but also shorter.
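The restructured flow follows the canonical send/receive pattern (a
generic sketch of the API usage, not the exact fftools code):

    int ret = avcodec_send_packet(dec_ctx, pkt); /* pkt == NULL flushes */
    if (ret < 0)
        return ret;

    while (1) {
        ret = avcodec_receive_frame(dec_ctx, frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            break;          /* needs more input / fully drained */
        if (ret < 0)
            return ret;     /* a real decoding error */

        /* ... process the decoded frame ... */
        av_frame_unref(frame);
    }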
It is currently handled in the same loop as audio and video, but this
obscures the actual flow, because only one iteration is ever performed
for subtitles.
Also, avoid a pointless packet reference.
process_input_packet() contains two non-interacting pieces of nontrivial
size and complexity - decoding and streamcopy. Separating them makes the
code easier to read.
New placement requires fewer explicit conditions and is easier to
understand.
The logic should be exactly equivalent, since this is the only place
where eof_reached is set for decoding.
Passing ist=NULL is currently used to identify stream types that do not
decode into AVFrames, i.e. subtitles. That is highly non-obvious -
always pass a non-NULL InputStream and just check the type explicitly.
It tracks whether the decoder for this stream ever produced any frames
and its only use is for checking whether a filter input ever received a
frame - those that did not are prioritized by the scheduler.
This is awkward and unnecessarily complicated - checking whether the
filtergraph input format is valid works just as well and does not
require maintaining an extra variable.
In case no decoder is available, dec_open() called from ist_use() will
fail with 'Decoding requested, but no decoder found', so this check is
redundant.
When a filtergraph input receives EOF but never saw any input frames, we
use the fallback parameters. Currently an attempt to actually configure
the filtergraph will happen elsewhere, but there is no reason to
postpone this.
With complex filtergraphs it can happen that the filtergraph is
unconfigured because a filter other than the one we just got EOF on is
missing parameters.
Make sure that the fallback parameters for a given input are only used
when that input is unconfigured.
It does no initialization anymore, except for setting
transcode_init_done - the bulk of the function is printing the
input/output maps. It also cannot fail anymore, so remove the useless
return value.
Export the corresponding flag in InputFile instead. This will allow
making the demuxer AVFormatContext private in future commits, similarly
to what was previously done for muxers.
There is no point in having a per-stream wallclock start time, since
they are all computed at the same instant. Keep a per-file start time
instead, initialized when the demuxer thread starts.
That is a more appropriate place for this code and will allow hiding
more of InputStream.
The value of repeat_pict extracted from the libavformat internal parser
no longer needs to be transmitted outside of the demuxing thread.
Move readrate handling to the demuxer thread. This has to be done in
the same commit, since it reads InputStream.dts and
InputStream.nb_packets, which are now set in the demuxer thread.
This way, computing it and using it for streamcopy do not need to
happen in sync. This will be useful in following commits, where
updating InputStream.dts will be moved to the demuxing thread.
This code runs post-demuxing and is not synchronized with the decoder
output (which may be delayed with respect to its input by arbitrary and
unknowable amounts), so accessing any decoder properties is incorrect.
Move them to a separate function called right after timestamp
discontinuity processing. This is now possible, since these values have
no interaction with decoding anymore.
The two checks using eof_reached are testing whether more input can
possibly appear on this filtergraph input. InputFilterPriv.eof is the
more authoritative source for this information.