Change the main loop and every component (demuxers, decoders, filters,
encoders, muxers) to use the previously added transcode scheduler. Every
instance of every such component was already running in a separate
thread, but now they can actually run in parallel.
Changes the results of the ffmpeg-fix_sub_duration_heartbeat test -
tested by JEEB to be more correct and deterministic.
See the comment block at the top of fftools/ffmpeg_sched.h for more
details on what this scheduler is for.
This commit adds the scheduling code itself, along with minimal
integration with the rest of the program:
* allocating and freeing the scheduler
* passing it throughout the call stack in order to register the
individual components (demuxers/decoders/filtergraphs/encoders/muxers)
with the scheduler
The scheduler is not actually used as of this commit, so it should not
result in any change in behavior. That will change in future commits.
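A minimal sketch of how this integration looks from the caller's side,
using illustrative names (the sch_* helpers and the thread entry points
shown here are placeholders, not the exact interface; the authoritative
one is declared in fftools/ffmpeg_sched.h):

    // Allocate the scheduler and pass it down the call stack so every
    // component can register itself with it.
    Scheduler *sch = sch_alloc();
    if (!sch)
        exit_program(1);

    // A demuxer registers its thread entry point and opaque context,
    // and receives an index used for all further interaction; decoders,
    // filtergraphs, encoders and muxers register analogously.
    int demux_idx = sch_add_demux(sch, demuxer_thread, demuxer);

    /* ... run the transcode ... */

    sch_free(&sch);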
Compared to the previous demuxing code, this:
* makes the code shorter and simpler
* avoids constantly allocating and freeing AVPackets, thanks to
  ThreadQueue integration with ObjPool (see the sketch below)
* is consistent with decoding/filtering/muxing
* reduces the diff in the future switch to thread-aware scheduling
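To illustrate the ObjPool point, here is a rough sketch of the producer
side of such a pool-backed queue (the objpool_*/tq_* names follow
fftools/objpool.h and fftools/thread_queue.h, but the exact usage shown
here is an assumption):

    // Fetch a recycled AVPacket from the pool instead of allocating a
    // fresh one for every packet read.
    AVPacket *pkt = objpool_get(pool);
    if (!pkt)
        return AVERROR(ENOMEM);

    ret = av_read_frame(ic, pkt);
    if (ret < 0) {
        // Return the packet to the pool rather than freeing it.
        objpool_release(pool, (void**)&pkt);
        return ret;
    }

    // Ownership passes to the queue; the consumer blocks in
    // tq_receive() and eventually releases the packet back to the pool.
    ret = tq_send(tq, pkt->stream_index, pkt);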
This makes ifile_get_packet() always block. Any potential issues caused
by this will be resolved by the switch to thread-aware scheduling in
future commits.
The current code tracks min/max pts for each stream separately; when
the file ends, it combines them with the last frame's duration to
compute the total duration of each stream, and finally selects the
longest of those durations as the file duration.
This is incorrect - the total file duration is the largest timestamp
difference between any frames, regardless of the stream.
Also change the way the last frame information is reported from
decoders to the muxer - previously it was just the last frame's
duration; now the end timestamp is sent, which is simpler.
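A sketch of the corrected computation (type and field names here are
illustrative): the file duration is the span between the earliest
timestamp and the latest end timestamp seen in any stream, with all
values rescaled to a common time base first:

    static int64_t file_duration(const StreamTimes *st, int nb_streams)
    {
        int64_t start = INT64_MAX, end = INT64_MIN;

        for (int i = 0; i < nb_streams; i++) {
            start = FFMIN(start, st[i].min_pts); // earliest pts seen
            end   = FFMAX(end,   st[i].end_ts);  // last pts + duration
        }
        // the largest timestamp difference between any frames,
        // regardless of which streams they belong to
        return end - start;
    }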
Changes the result of the fate-ffmpeg-streamloop-transcode-av test,
where the timestamps are shifted slightly forward. Note that the
matroska demuxer does not return the first audio packet after seeking
(due to a buggy interaction between the generic code and the demuxer), so
there is a gap in audio.
An AVFormatContext leaks on errors that happen before it is attached
to its permanent place (an InputFile). Fix this by attaching
it earlier.
Given that it is not documented that avformat_close_input() is usable
with an AVFormatContext that has only been allocated with
avformat_alloc_context() and not opened with avformat_open_input(),
one error path before avformat_open_input() had to be treated
specially: it uses avformat_free_context().
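In simplified form the resulting logic looks roughly like this
(setup_before_open() is a hypothetical stand-in for whatever runs
between allocation and opening):

    AVFormatContext *ic = avformat_alloc_context();
    if (!ic)
        return AVERROR(ENOMEM);
    // Attach early, so later failures are cleaned up via the InputFile.
    f->ctx = ic;

    ret = setup_before_open(ic);
    if (ret < 0) {
        // The special case: the context has only been allocated, so it
        // must be freed directly with avformat_free_context().
        avformat_free_context(ic);
        f->ctx = NULL;
        return ret;
    }

    ret = avformat_open_input(&f->ctx, filename, file_iformat, &opts);
    // On failure avformat_open_input() frees the context and resets
    // f->ctx to NULL; from here on, all error paths go through the
    // common cleanup that calls avformat_close_input(&f->ctx).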
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The av_opt_eval family of functions emits error messages on error
and can therefore not be used with fake objects when the AVClass
has a custom item_name callback. The AVClass for AVCodecContext
has such a custom callback (it checks whether an AVCodec is set
and, if so, uses its name). In practice this means that whatever is
directly after the "cc" pointer to the AVClass for AVCodecContext
in the stack frame of ist_add() will be treated as a pointer to an
AVCodec, with unpredictable consequences.
Fix this by using an actual AVCodecContext instead of a fake object.
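A sketch of the pattern, using skip_frame as the example option (error
handling omitted for brevity):

    AVCodecContext *ctx = avcodec_alloc_context3(NULL);
    const AVOption *o   = av_opt_find(ctx, "skip_frame", NULL, 0, 0);
    int val;

    // ctx is a real AVCodecContext, so the item_name callback invoked
    // while logging errors reads valid memory instead of whatever
    // happens to follow a fake object on the stack.
    av_opt_eval_int(ctx, o, "noref", &val);

    avcodec_free_context(&ctx);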
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This is no longer needed as the side data is available for decoders in the
AVCodecContext.
The tests affected reflect the removal of useless CPB and Stereo 3D side
data in packets.
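For reference, this is the kind of lookup that replaces the injection:
a decoder queries the stream-level side data stored on its context
(sketch; apply_stereo3d() is a hypothetical consumer):

    const AVPacketSideData *sd =
        av_packet_side_data_get(avctx->coded_side_data,
                                avctx->nb_coded_side_data,
                                AV_PKT_DATA_STEREO3D);
    if (sd)
        apply_stereo3d((const AVStereo3D *)sd->data);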
Signed-off-by: James Almer <jamrial@gmail.com>
It is badly named (should have been -top_field_first, or at least -tff),
underdocumented and underspecified, and (most importantly) entirely
redundant with the setfield filter.
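Anything the option expressed can be written with the filter instead;
for example (an illustrative command line, not taken from this commit),

    ffmpeg -i in.mkv -vf setfield=tff -c:v libx264 out.mkv

marks every frame as top-field-first before encoding.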
Make the function process just one input stream at a time and save an
indentation level. Also rename it to ist_add() to be consistent with an
analogous function in ffmpeg_mux_init.
Make all relevant state per-filtergraph input, rather than per-input
stream. Refactor the code to make it work and avoid leaking memory when
a single subtitle stream is sent to multiple filters.
Set them in ifilter_parameters_from_dec(), similarly to audio/video
streams. This reduces the extent to which sub2video filters need to be
treated specially.
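A rough sketch of what this amounts to (field names illustrative):

    // In ifilter_parameters_from_dec(): take the sub2video canvas size
    // from the subtitle decoder context, just as audio/video parameters
    // are taken from theirs.
    ifilter->width  = dec_ctx->width;
    ifilter->height = dec_ctx->height;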