Current code marks the output stream as finished and waits for a flush
packet, but that is both unnecessary and suspect, as in theory nothing
should be sent to a finished stream - not even flush packets.
Make all relevant state per-filtergraph input, rather than per-input
stream. Refactor the code so that it works correctly and does not leak
memory when a single subtitle stream is sent to multiple filters.
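
A minimal sketch of the ownership change, using invented struct names
rather than the actual fftools types; the point is that the sub2video
state moves from the (shared) input stream into each filtergraph input:

    #include <stdint.h>

    /* Before: one copy of the sub2video state per input stream, implicitly
     * shared by every filter fed from that stream. */
    typedef struct InputStreamSketch {
        struct {
            int64_t last_pts;
            int64_t end_pts;
            int     w, h;
        } sub2video;
    } InputStreamSketch;

    /* After: each filtergraph input owns its own state, so a subtitle
     * stream feeding several filters keeps independent, separately freed
     * state. */
    typedef struct InputFilterSketch {
        struct {
            int64_t last_pts;
            int64_t end_pts;
            int     w, h;
        } sub2video;
    } InputFilterSketch;
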
Set them in ifilter_parameters_from_dec(), similarly to audio/video
streams. This reduces the extent to which sub2video filters need to be
treated specially.
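
A rough sketch of what that could look like, assuming the parameters in
question are the sub2video frame size and pixel format; everything except
the libavcodec names is illustrative:

    #include <libavcodec/avcodec.h>
    #include <libavutil/pixfmt.h>

    typedef struct IFilterParamsSketch {
        int                width, height;
        enum AVPixelFormat format;
        int                sample_rate;        /* audio */
    } IFilterParamsSketch;

    static void params_from_dec_sketch(IFilterParamsSketch *p,
                                       const AVCodecContext *dec)
    {
        switch (dec->codec_type) {
        case AVMEDIA_TYPE_VIDEO:
            p->width  = dec->width;
            p->height = dec->height;
            p->format = dec->pix_fmt;
            break;
        case AVMEDIA_TYPE_AUDIO:
            p->sample_rate = dec->sample_rate;
            break;
        case AVMEDIA_TYPE_SUBTITLE:
            /* sub2video: the canvas the subtitles will be rendered onto */
            p->width  = dec->width;
            p->height = dec->height;
            p->format = AV_PIX_FMT_RGB32;
            break;
        default:
            break;
        }
    }
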
This function should not take an InputStream, as it only uses it to get
the InputFile and the timebase. Pass those directly instead and avoid
confusion over dealing with multiple InputStreams.
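
In terms of signatures, the change looks roughly like the following,
assuming the function in question is the sub2video heartbeat helper; the
names below are illustrative declarations, not the actual fftools code:

    #include <stdint.h>
    #include <libavutil/rational.h>

    typedef struct InputFile   InputFile;
    typedef struct InputStream InputStream;

    /* Before: takes an InputStream, but only uses it to reach the
     * InputFile and the stream's timebase. */
    void sub2video_heartbeat_old(InputStream *ist, int64_t pts);

    /* After: the caller passes exactly what is needed. */
    void sub2video_heartbeat_new(InputFile *ifile, int64_t pts, AVRational tb);
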
This queue should be associated with a specific filtergraph input - if
a subtitle stream is sent to multiple filters then each should have its
own queue.
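
A sketch of per-input queueing using the public AVFifo API; the struct and
function names are invented and only illustrate the ownership:

    #include <libavutil/error.h>
    #include <libavutil/fifo.h>
    #include <libavutil/frame.h>

    typedef struct IFilterQueueSketch {
        AVFifo *frame_queue;           /* owned by this filtergraph input */
    } IFilterQueueSketch;

    static int queue_frame_sketch(IFilterQueueSketch *ifp, AVFrame *frame)
    {
        if (!ifp->frame_queue) {
            ifp->frame_queue = av_fifo_alloc2(8, sizeof(frame),
                                              AV_FIFO_FLAG_AUTO_GROW);
            if (!ifp->frame_queue)
                return AVERROR(ENOMEM);
        }
        /* each input holds its own reference, so several filters fed from
         * the same subtitle stream neither share nor leak queued frames */
        return av_fifo_write(ifp->frame_queue, &frame, 1);
    }
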
This code is a sub2video analogue of ifilter_send_frame(), so it
properly belongs to the filtering code.
Note that using sub2video with more than one target for a given input
subtitle stream is currently broken and this commit does not change
that. It will be addressed in following commits.
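
The rough shape of the split, as declarations with invented names; only
ifilter_send_frame() mentioned above is a real fftools function:

    #include <stdint.h>
    #include <libavcodec/avcodec.h>        /* AVSubtitle */
    #include <libavutil/frame.h>

    typedef struct InputFilterStub InputFilterStub;

    /* audio/video already enter the graph through the filtering code */
    int send_frame_stub(InputFilterStub *ifp, AVFrame *frame);

    /* sub2video analogue: render the AVSubtitle onto a video frame and
     * push it into this particular filtergraph input */
    int send_sub2video_stub(InputFilterStub *ifp, const AVSubtitle *sub,
                            int64_t pts);
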
When the filtergraph has no inputs, it can be configured immediately
when all its outputs are bound to output streams. This will simplify
handling some corner cases.
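
Schematically, with made-up names (configure_graph_stub() stands in for
whatever actually configures the graph):

    typedef struct FilterGraphStub {
        int nb_inputs;
        int nb_outputs;
        int nb_outputs_bound;
    } FilterGraphStub;

    int configure_graph_stub(FilterGraphStub *fg);   /* hypothetical */

    static int bind_output_stub(FilterGraphStub *fg)
    {
        fg->nb_outputs_bound++;
        /* a graph with no inputs (e.g. a pure source graph) has no frames
         * to wait for, so configure it as soon as its last output is bound */
        if (fg->nb_inputs == 0 && fg->nb_outputs_bound == fg->nb_outputs)
            return configure_graph_stub(fg);
        return 0;
    }
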
This way the list of filtergraph inputs/outputs is always known after
FilterGraph creation. This will allow treating simple and complex
filtergraphs in a more uniform manner.
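
The inputs and outputs of a graph can be enumerated at creation time from
the description alone; a self-contained sketch using the public
avfilter_graph_parse2() API (the helper name is invented):

    #include <libavfilter/avfilter.h>
    #include <libavutil/error.h>

    static int count_graph_io_sketch(const char *graph_desc,
                                     int *nb_inputs, int *nb_outputs)
    {
        AVFilterGraph *graph  = avfilter_graph_alloc();
        AVFilterInOut *inputs = NULL, *outputs = NULL;
        int ret;

        if (!graph)
            return AVERROR(ENOMEM);

        ret = avfilter_graph_parse2(graph, graph_desc, &inputs, &outputs);
        if (ret >= 0) {
            *nb_inputs = *nb_outputs = 0;
            for (AVFilterInOut *c = inputs;  c; c = c->next) (*nb_inputs)++;
            for (AVFilterInOut *c = outputs; c; c = c->next) (*nb_outputs)++;
        }

        avfilter_inout_free(&inputs);
        avfilter_inout_free(&outputs);
        avfilter_graph_free(&graph);
        return ret;
    }
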
Currently NULL would be passed for simple filtergraphs, which would
make the filter code extract the graph description from the output
stream when needed. This is unnecessarily convoluted.
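
In other words, the constructor always receives the description up front;
a hedged sketch of the interface, with invented names:

    typedef struct FilterGraphStub2 FilterGraphStub2;

    /* hypothetical constructor: graph_desc is never NULL, not even for
     * simple (-vf/-af style) filtergraphs */
    FilterGraphStub2 *fg_create_stub(const char *graph_desc);

    /* complex graph:  fg_create_stub("[0:v][1:v]overlay[out]");
     * simple graph:   fg_create_stub(vf_args ? vf_args : "null");  */
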
A non-limiting stream could mistakenly end up being the queue head,
which would then produce incorrect synchronization, seen e.g. in
fate-matroska-flac-extradata-update for a certain number of frame threads
(e.g. 5).
Found-By: James Almer
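
A sketch of the head-selection rule this implies, with invented names; the
actual sync queue in fftools tracks considerably more state:

    #include <stdint.h>

    typedef struct SyncStreamStub {
        int     limiting;          /* may this stream constrain the others? */
        int64_t head_ts;           /* timestamp of its oldest queued frame */
    } SyncStreamStub;

    static int find_queue_head_stub(const SyncStreamStub *st, int nb_streams)
    {
        int head = -1;
        for (int i = 0; i < nb_streams; i++) {
            if (!st[i].limiting)
                continue;          /* never let a non-limiting stream lead */
            if (head < 0 || st[i].head_ts < st[head].head_ts)
                head = i;
        }
        return head;
    }
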
Checking whether the user requested an unsupported conversion between
text and bitmap subtitles can be done immediately when creating the
output stream.
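
The check itself only needs the codec descriptors, so it can run as soon
as both codec IDs are known; a sketch using the public
avcodec_descriptor_get() API (the helper name is invented):

    #include <libavcodec/codec_desc.h>

    static int sub_conversion_supported(enum AVCodecID dec_id,
                                        enum AVCodecID enc_id)
    {
        const AVCodecDescriptor *d = avcodec_descriptor_get(dec_id);
        const AVCodecDescriptor *e = avcodec_descriptor_get(enc_id);

        if (!d || !e)
            return 0;

        /* converting between text-based and bitmap-based subtitles is not
         * supported, so reject it when the output stream is created */
        if ((d->props & AV_CODEC_PROP_TEXT_SUB) &&
            (e->props & AV_CODEC_PROP_BITMAP_SUB))
            return 0;
        if ((d->props & AV_CODEC_PROP_BITMAP_SUB) &&
            (e->props & AV_CODEC_PROP_TEXT_SUB))
            return 0;

        return 1;
    }
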
This function is entangled with encoder setup, so it is more encoding code
than ffmpeg_hw code. This will allow making more encoder
state private in the future.
This function is entangled with decoder setup, so it is more decoding code
than ffmpeg_hw code. This will allow making more decoder
state private in the future.
As the comment in the code mentions, EAGAIN is not an expected value here
because we call avcodec_receive_frame() until all frames have been returned.
avcodec_send_packet() returning EAGAIN means a packet is still buffered, which
hints that the underlying decoder is buggy and not fetching packets as it
should.
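
For reference, the contract being relied on, expressed with the public
libavcodec API (a sketch with error handling trimmed, not the actual
fftools code):

    #include <libavcodec/avcodec.h>
    #include <libavutil/error.h>

    static int send_one_packet(AVCodecContext *dec, const AVPacket *pkt)
    {
        int ret = avcodec_send_packet(dec, pkt);
        if (ret == AVERROR(EAGAIN)) {
            /* all previously decoded frames were already drained with
             * avcodec_receive_frame(), so a well-behaved decoder must
             * accept this packet; treat EAGAIN as a decoder bug */
            return AVERROR_BUG;
        }
        return ret;
    }
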
An example of this behavior was in the libdav1d wrapper before f209614290,
where feeding it split frames (or individual OBUs) would make the CLI
eventually print the confusing "Error submitting packet to decoder: Resource
temporarily unavailable" error message and then just keep going until EOF
without returning any new frames.
Signed-off-by: James Almer <jamrial@gmail.com>
It currently emulates the long-removed
avcodec_decode_audio4/avcodec_decode_video2 APIs, which obfuscates the
actual decoding flow. Restructure the decoding calls so that they
naturally follow the new avcodec_send_packet()/avcodec_receive_frame()
design.
This is not only significantly easier to read, but also shorter.
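
A stripped-down version of the flow the code is restructured around, using
only the public API (a sketch, not the actual ffmpeg.c code):

    #include <libavcodec/avcodec.h>
    #include <libavutil/frame.h>

    static int decode_packet_sketch(AVCodecContext *dec, const AVPacket *pkt,
                                    AVFrame *frame)
    {
        /* pkt == NULL flushes the decoder at EOF */
        int ret = avcodec_send_packet(dec, pkt);
        if (ret < 0 && ret != AVERROR_EOF)
            return ret;

        while (1) {
            ret = avcodec_receive_frame(dec, frame);
            if (ret == AVERROR(EAGAIN))    /* decoder wants the next packet */
                return 0;
            if (ret < 0)                   /* AVERROR_EOF or a real error */
                return ret;

            /* ... hand the decoded frame to the filtering code ... */
            av_frame_unref(frame);
        }
    }
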
It is currently handled in the same loop as audio and video, but this
obscures the actual flow, because only one iteration is ever performed
for subtitles.
Also, avoid a pointless packet reference.
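
Subtitles keep using the one-shot avcodec_decode_subtitle2() call, so there
is nothing to loop over; a minimal sketch of that single step:

    #include <libavcodec/avcodec.h>

    static int decode_subtitle_sketch(AVCodecContext *dec, AVPacket *pkt,
                                      AVSubtitle *sub)
    {
        int got_output = 0;
        /* the packet is used directly; no extra reference is taken */
        int ret = avcodec_decode_subtitle2(dec, sub, &got_output, pkt);
        if (ret < 0)
            return ret;
        /* on success with got_output set, the caller owns *sub and must
         * release it with avsubtitle_free() */
        return got_output;
    }
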