The decision of whether -apad actually does anything is based on muxer
properties, and so more properly belongs there. Filtering code only
receives the result.
This allows using WRAPPED_AVFRAME encoders with loopback decoders in
order to connect multiple filtergraphs together.
Clear the flag in muxers, since lavf does not need it for anything and
it would change the results of framecrc FATE tests.
These functions used to be passed directly to pthread_create(), which
required them to return void*. This is no longer the case, so they can
return a plain int.
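For illustration, the change amounts to the following (function names
here are hypothetical, not the actual ones):

    #include <pthread.h>
    #include <stdint.h>

    /* Before: passed to pthread_create() directly, so the thread
     * function had to match pthread's start-routine signature. */
    static void *muxer_thread_old(void *arg)
    {
        /* ... do the work ... */
        return (void *)(intptr_t)0;
    }

    /* After: a small wrapper owns the pthread entry point, so the
     * worker itself can return a plain int error code. */
    static int muxer_thread(void *arg)
    {
        /* ... do the work ... */
        return 0;
    }

    static void *thread_wrapper(void *arg)
    {
        return (void *)(intptr_t)muxer_thread(arg);
    }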
The filename is freed with the OptionsContext and therefore
there will be a use-after-free when reporting the filename
in print_stream_maps(). So create a copy of the string.
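A minimal sketch of the fix, with hypothetical struct and field names
standing in for the real ones:

    #include <libavutil/error.h>
    #include <libavutil/mem.h>

    typedef struct Muxer { const char *filename; } Muxer;

    static int mux_set_filename(Muxer *mux, const char *opt_filename)
    {
        /* opt_filename is owned by the OptionsContext and freed with
         * it; keep our own copy so print_stream_maps() never reads
         * freed memory. */
        mux->filename = av_strdup(opt_filename);
        return mux->filename ? 0 : AVERROR(ENOMEM);
    }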
This is a regression since 8aed3911fc.
fate-lavf-mkv_attachment exhibits it (reporting a random nonsense
filename), but this does not make the test fail - not even with
Valgrind; only ASAN catches it, as it aborts on use-after-free.
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Change the main loop and every component (demuxers, decoders, filters,
encoders, muxers) to use the previously added transcode scheduler. Every
instance of every such component was already running in a separate
thread, but now they can actually run in parallel.
Changes the results of the ffmpeg-fix_sub_duration_heartbeat test;
JEEB verified the new results to be more correct and deterministic.
See the comment block at the top of fftools/ffmpeg_sched.h for more
details on what this scheduler is for.
This commit adds the scheduling code itself, along with minimal
integration with the rest of the program:
* allocating and freeing the scheduler
* passing it throughout the call stack in order to register the
individual components (demuxers/decoders/filtergraphs/encoders/muxers)
with the scheduler
The scheduler is not actually used as of this commit, so it should not
result in any change in behavior. That will change in future commits.
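Schematically, the integration looks like this (a simplified sketch
with assumed signatures; the real declarations are in
fftools/ffmpeg_sched.h):

    #include <libavutil/error.h>
    #include "ffmpeg_sched.h"

    /* Sketch of what this commit adds: one scheduler is allocated up
     * front, every component registers itself with it during
     * creation, and it is freed on exit.  It does not yet drive any
     * data flow at this point. */
    static int transcode_setup_sketch(void)
    {
        Scheduler *sch = sch_alloc();
        if (!sch)
            return AVERROR(ENOMEM);

        /* sch is passed down the call stack; e.g. demuxer/muxer
         * creation would call registration functions along the lines
         * of sch_add_demux() / sch_add_mux() (arguments omitted in
         * this sketch). */

        sch_free(&sch);
        return 0;
    }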
The word "monotonous" means "spoken in a monotone" which is not what we
mean here. We mean "monotonic" i.e. nondecreasing.
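For clarity, "nondecreasing" allows equal consecutive values; a
timestamp sequence is monotonic in this sense iff a check like the
following passes (illustrative only, not the actual code):

    #include <stddef.h>
    #include <stdint.h>

    /* Monotonic (nondecreasing): each value is >= its predecessor;
     * repeats are fine, decreases are not. */
    static int ts_monotonic(const int64_t *ts, size_t n)
    {
        for (size_t i = 1; i < n; i++)
            if (ts[i] < ts[i - 1])
                return 0;
        return 1;
    }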
Signed-off-by: Leo Izen <leo.izen@gmail.com>
This function converts packet timestamps from the input stream timebase
to OutputStream.mux_timebase, which may or may not be equal to the
actual output AVStream timebase (and even when it is, this may not
always be the optimal choice due to bitstream filtering).
Just keep the timestamps in the input stream timebase; they will be rescaled
as needed before bitstream filtering and/or sending the packet to the
muxer.
Move the av_rescale_delta() call for audio (needed to preserve accuracy
with coarse demuxer timebases) to write_packet.
Drop now-unused OutputStream.mux_timebase.
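A sketch of the audio case after the move (variable names are
illustrative; av_rescale_delta() is the real libavutil helper):

    #include <libavformat/avformat.h>
    #include <libavutil/mathematics.h>

    /* In write_packet(): rescale the audio DTS from the packet's
     * (input stream) timebase to the muxer timebase.
     * av_rescale_delta() carries the rounding remainder across calls
     * in *last, so accuracy is preserved even when the demuxer
     * timebase is too coarse to represent every sample boundary. */
    static void rescale_audio_dts(AVPacket *pkt, AVStream *st,
                                  int sample_rate, int64_t *last)
    {
        pkt->dts = av_rescale_delta(pkt->time_base, pkt->dts,
                                    (AVRational){ 1, sample_rate },
                                    (int)pkt->duration, last,
                                    st->time_base);
    }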
The bitstream filtering input timebase is not necessarily equal to
OutputStream.mux_timebase. Also set AVPacket.time_base correctly for
packets output by bitstream filters.
Do not rescale at all in of_output_packet() when not doing bitstream
filtering, as it's unnecessary - write_packet() will rescale to the
actual muxer timebase.
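In code terms the intent is roughly the following (a sketch using the
public BSF API; error handling trimmed):

    #include <libavcodec/bsf.h>

    /* After pulling a packet from a bitstream filter, stamp it with
     * the filter's output timebase so downstream code knows which
     * timebase its timestamps are in (and can rescale them later). */
    static int bsf_receive_stamped(AVBSFContext *bsf, AVPacket *pkt)
    {
        int ret = av_bsf_receive_packet(bsf, pkt);
        if (ret < 0)
            return ret;  /* AVERROR(EAGAIN), AVERROR_EOF or error */
        pkt->time_base = bsf->time_base_out;
        return 0;
    }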
EOF from sq_receive() means no packets will ever be output by the sync
queue. Since the muxing sync queue is always used by all interleaved
(i.e. non-attachment) streams, this means no further packets can make
it to the muxer and we can terminate muxing now.
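Schematically, the muxer side becomes (an abridged sketch with assumed
call details; see fftools/sync_queue.h for the actual API):

    #include <libavutil/error.h>
    #include "sync_queue.h"

    /* Drain the muxing sync queue until it reports EOF.  EOF means no
     * stream in the queue can ever produce another packet, so muxing
     * can finish right away instead of flushing stream by stream. */
    static int mux_drain_sketch(SyncQueue *sq, AVPacket *pkt)
    {
        for (;;) {
            int ret = sq_receive(sq, -1 /* any stream */, SQPKT(pkt));
            if (ret == AVERROR_EOF)
                return 0;      /* terminate muxing now */
            if (ret < 0)
                return ret;
            /* ... interleave and write pkt ... */
        }
    }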
This is only a preparatory step toward a fully threaded architecture and
does not yet make decoding truly parallel - the main thread will
currently submit a packet and wait until it has been fully processed by
the decoding thread before moving on. Decoder behavior as observed by
the rest of the program should remain unchanged. That will change in
future commits after encoders and filters are moved to threads and a
thread-aware scheduler is added.
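Conceptually, the current lockstep handshake is the classic
submit-and-wait pattern (a generic pthreads sketch, not the actual
ffmpeg code):

    #include <pthread.h>
    #include <stdbool.h>

    typedef struct Handshake {
        pthread_mutex_t lock;
        pthread_cond_t  cond;
        void           *pkt;   /* packet awaiting decode, NULL if none */
        bool            done;  /* set by decoder thread when processed */
    } Handshake;

    /* Main thread: hand one packet to the decoder thread, then block
     * until that thread reports it has fully processed it.  True
     * parallelism arrives only once the scheduler decouples
     * submission from completion. */
    static void submit_and_wait(Handshake *h, void *pkt)
    {
        pthread_mutex_lock(&h->lock);
        h->pkt  = pkt;
        h->done = false;
        pthread_cond_broadcast(&h->cond);  /* wake the decoder thread */
        while (!h->done)
            pthread_cond_wait(&h->cond, &h->lock);
        pthread_mutex_unlock(&h->lock);
    }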
Currently of_output_packet() reuses the input packet, which requires
its callers to submit blank packets even on EOF; this makes the code
more complex.
Packets submitted to the muxer now have their timebase attached, so
the muxer can convert to the muxing timebase internally and avoid
exposing it to callers.
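Together, the two changes make the muxer entry point look roughly like
this (names and signature are illustrative):

    #include <libavformat/avformat.h>

    /* of_output_packet()-style sketch: a NULL packet now signals EOF
     * directly, so callers no longer have to craft blank packets; a
     * real packet carries its own pkt->time_base, which the muxer
     * rescales to the output stream timebase internally. */
    static int output_packet_sketch(AVStream *st, AVPacket *pkt)
    {
        if (!pkt) {
            /* EOF for this stream; mark it finished (omitted) */
            return 0;
        }
        av_packet_rescale_ts(pkt, pkt->time_base, st->time_base);
        pkt->time_base = st->time_base;
        /* ... hand the packet to the interleaving/writing code ... */
        return 0;
    }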
The current code marks the output stream as finished and waits for a
flush packet, but that is both unnecessary and suspect, as in theory
nothing should be sent to a finished stream - not even flush packets.
Currently NULL is passed for simple filtergraphs, which makes the
filter code extract the graph description from the output stream when
needed. This is unnecessarily convoluted.
Remove now-obsolete code setting packet durations pre-muxing for CFR
encoded video.
Changes output in the following FATE tests:
* numerous adpcm tests
* ffmpeg-filter_complex_audio
* lavf-asf
* lavf-mkv
* lavf-mkv_attachment
* matroska-encoding-delay
All of these change because the output duration is now the actual
input data duration and no longer includes padding added by the
encoder.
* apng-osample: the packet durations passed to the muxer are now less
  wrong.
They are not entirely correct, because the first frame duration should
be 3 rather than 2. This is caused by the vsync code and should be
addressed later, but this change is a step in the right direction.
* tscc2-mov: last output frame has a duration of 11 rather than 1 - this
corresponds to the duration actually returned by the demuxer.
* film-cvid: video frame durations are now 2 rather than 1 - this
corresponds to durations actually returned by the demuxer and matches
the timestamps.
* mpeg2-ticket6677: durations of some video frames are now 2 rather than
1 - this matches the timestamps.