Analogous to -enc_stats*, but happens right before muxing. Useful
because bitstream filters and the sync queue can modify packets after
encoding and before muxing. Also has access to the muxing timebase.
There are 8 of them and they are typically used together. Allows passing
just this struct to forced_kf_apply(), which makes it clear that the
rest of the OutputStream is not accessed there.
Do it in set_dispositions() rather than during stream creation.
Since at this point all other stream information is known, this allows
setting disposition based on metadata, which implements #10015. This
also avoids an extra allocated string in OutputStream that was unused
after of_open().
This is now possible since OutputStream is a child of OutputFile and the
code allocating it can access MuxStream. Avoids the overhead and extra
complexity of allocating two objects instead of one.
Similar to what was previously done for OutputFile/Muxer.
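As an illustration, here is a minimal sketch of the embedding pattern;
the field lists and helper names are simplified stand-ins, not the
actual ffmpeg structs:

    #include <stdlib.h>

    typedef struct OutputStream {
        int index;
        /* ... encoding-side state ... */
    } OutputStream;

    typedef struct MuxStream {
        OutputStream ost;   /* must be the first member */
        /* ... muxing-only state ... */
    } MuxStream;

    /* valid because ost is the first member of MuxStream */
    static MuxStream *ms_from_ost(OutputStream *ost)
    {
        return (MuxStream*)ost;
    }

    /* one allocation now covers both objects */
    static OutputStream *ost_alloc(void)
    {
        MuxStream *ms = calloc(1, sizeof(*ms));
        return ms ? &ms->ost : NULL;
    }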
Replace it with an array of streams in each OutputFile. This more
accurately reflects the actual relationship between OutputStream and
OutputFile, is easier to handle, and will allow further simplifications
in future commits.
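Schematically, the ownership now looks like this (field names are
illustrative; the real structs carry much more state):

    typedef struct OutputStream OutputStream;

    typedef struct OutputFile {
        int            index;
        OutputStream **streams;    /* the streams of this file only */
        int            nb_streams;
    } OutputFile;

    /* code that used to consult a global list of all output streams and
     * compare file indices can now just walk of->streams directly */
    static void for_each_output_stream(OutputFile *of,
                                       void (*visit)(OutputStream *))
    {
        for (int i = 0; i < of->nb_streams; i++)
            visit(of->streams[i]);
    }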
This is now possible since the code allocating OutputFile can see
sizeof(Muxer). Avoids the overhead and extra complexity of allocating
two objects instead of one.
Similar to what is done e.g. for AVStream/FFStream in lavf.
It has been deprecated in favor of the aresample filter for almost 10
years.
Another thing this option can do is drop audio timestamps and have them
generated by the encoding code or the muxer, but:
- for encoding, this can already be done with the setpts filter
- for muxing, this should almost never be done, since timestamp generation
by the muxer is deprecated; people who really want to do this can use the
setts bitstream filter
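For reference, the aresample-based replacement for the deprecated option
looks roughly like this on the command line (the async value here is
arbitrary):

    ffmpeg -i in.mkv -c:v copy -af aresample=async=1000 out.mkv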
update_video_stats() currently uses OutputStream.data_size to print the
total size of the encoded stream so far and the average bitrate.
However, that field is updated in the muxer thread, right before the
packet is sent to the muxer. Not only is this racy, but even if muxing
were done in the main thread, the numbers might not match due to
bitstream filters, filesize limiting, etc.
Introduce a new counter, data_size_enc, for total size of the packets
received from the encoder and use that in update_video_stats(). Rename
data_size to data_size_mux to indicate its semantics more clearly.
No synchronization is needed for data_size_mux, because it is only read
in the main thread in print_final_stats(), which runs after the muxer
threads are terminated.
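A compressed sketch of the idea; the field placement and helper names
below are illustrative, not the actual ffmpeg code:

    #include <stdint.h>

    typedef struct OutputStream {
        uint64_t data_size_enc; /* bytes received from the encoder */
        uint64_t data_size_mux; /* bytes handed to the muxer       */
    } OutputStream;

    /* main thread: a packet just came out of the encoder;
     * update_video_stats() reads data_size_enc */
    static void count_encoded(OutputStream *ost, int pkt_size)
    {
        ost->data_size_enc += pkt_size;
    }

    /* muxer thread: right before the packet is sent to the muxer;
     * data_size_mux is only read after the muxer threads have been
     * joined, so no locking is needed */
    static void count_muxed(OutputStream *ost, int pkt_size)
    {
        ost->data_size_mux += pkt_size;
    }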
The same issues apply to it as to -shortest.
Changes the results of the following tests:
- matroska-flac-extradata-update
The test reencodes two input FLAC streams into three output FLAC
streams. The last output stream is limited to 8 frames. The current
code results in the first two output streams having 12 frames; after
this commit, all three streams have 8 frames and are the same length.
This new result is better, since it is predictable.
- mkv-1242
The test streamcopies one video and one audio stream, video is limited
to 11 frames. The new result shortens the audio stream so that it is
not longer than the video.
The -shortest option (which finishes the output file at the time the
shortest stream ends) is currently implemented by faking the -t option
when an output stream ends. This approach is fragile, since it depends
on the frames/packets being processed in a specific order. E.g. there
are currently some situations in which the output file length will
depend unpredictably on unrelated factors like encoder delay. More
importantly, the present work aiming at splitting various ffmpeg
components into different threads will make this approach completely
unworkable, since the frames/packets will arrive in effectively random
order.
This commit introduces a "sync queue", which is essentially a collection
of FIFOs, one per stream. Frames/packets are submitted to these FIFOs
and are then released for further processing (encoding or muxing) when
it is ensured that the frame in question will not cause its stream to
get ahead of the other streams (the logic is similar to libavformat's
interleaving queue).
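A stripped-down sketch of the release rule follows (this is not the
actual SyncQueue API): packets and frames are reduced to bare
timestamps, and all streams are assumed to share one timebase.

    #include <stdint.h>

    #define MAX_STREAMS 16
    #define QUEUE_SIZE  256

    typedef struct SyncStream {
        int64_t ts[QUEUE_SIZE];   /* queued timestamps, oldest first */
        int     nb;
        int     finished;
    } SyncStream;

    typedef struct SyncQueue {
        SyncStream streams[MAX_STREAMS];
        int        nb_streams;
    } SyncQueue;

    /* Return the index of the stream whose head entry may be released,
     * or -1 if nothing can be released yet.  The head of the stream
     * with the smallest timestamp is safe to release only when every
     * other active stream has something queued, because only then do
     * we know that releasing it cannot let its stream get ahead of
     * the rest. */
    static int sq_release_candidate(const SyncQueue *sq)
    {
        int     best    = -1;
        int64_t best_ts = 0;

        for (int i = 0; i < sq->nb_streams; i++) {
            const SyncStream *ss = &sq->streams[i];
            if (ss->finished)
                continue;
            if (!ss->nb)       /* an active stream has nothing queued, */
                return -1;     /* so we cannot decide yet              */
            if (best < 0 || ss->ts[0] < best_ts) {
                best    = i;
                best_ts = ss->ts[0];
            }
        }
        return best;
    }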
These sync queues are then used for encoding and/or muxing when the
-shortest option is specified.
A new option, -shortest_buf_duration, controls the maximum number of
queued packets, to avoid runaway memory usage.
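A hypothetical invocation combining the two options (values picked
arbitrarily):

    ffmpeg -i video.mkv -i audio.wav -map 0:v -map 1:a \
           -shortest -shortest_buf_duration 2 out.mkv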
This commit changes the results of the following tests:
- copy-shortest[12]: the last audio frame is now gone. This is
correct, since it actually outlasts the last video frame.
- shortest-sub: the video packets following the last subtitle packet are
now gone. This is also correct.
The following commits will add a new buffering stage after bitstream
filters, which should not be taken into account for choosing next
output.
OutputStream.last_mux_dts is also used by the muxing code to make up
missing DTS values; that field is now moved to the muxer-private
MuxStream object.
of_write_packet() is currently called from two places:
- output_packet() in ffmpeg.c, which submits the newly available output
packet to the muxer
- from of_check_init() in ffmpeg_mux.c after the header has been
written, to flush the muxing queue
Some packets will thus be processed by this function twice, so it
requires an extra parameter to indicate the place it is called from and
avoid modifying some state twice.
This is fragile and hard to follow, so split this function into two.
Also rename of_write_packet() to of_submit_packet() to better reflect
its new purpose.
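Schematically, the split looks roughly like this; apart from
of_submit_packet(), the names below are illustrative stand-ins:

    typedef struct OutputFile   OutputFile;
    typedef struct OutputStream OutputStream;
    typedef struct AVPacket     AVPacket;

    /* helpers assumed to exist elsewhere */
    int       header_written(OutputFile *of);
    void      queue_packet  (OutputFile *of, OutputStream *ost, AVPacket *pkt);
    AVPacket *queue_pop     (OutputFile *of, OutputStream *ost);
    void      write_packet  (OutputFile *of, OutputStream *ost, AVPacket *pkt);

    /* called from the encoding/streamcopy path for each new packet */
    void of_submit_packet(OutputFile *of, OutputStream *ost, AVPacket *pkt)
    {
        if (!header_written(of))
            queue_packet(of, ost, pkt);  /* muxer not initialized yet */
        else
            write_packet(of, ost, pkt);
    }

    /* called once from of_check_init() after the header is written */
    void of_flush_queued_packets(OutputFile *of, OutputStream *ost)
    {
        AVPacket *pkt;
        while ((pkt = queue_pop(of, ost)))
            write_packet(of, ost, pkt);
    }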
The muxing queue currently lives in OutputStream, which is a very large
struct storing the state for both encoding and muxing. The muxing queue
is only used by the code in ffmpeg_mux, so it makes sense to restrict it
to that file.
This is the first step towards reducing the scope of OutputStream.
The current code postpones closing the output files until after printing
the final report, which accesses the output file size. Deal with this by
storing the final file size before closing each file.
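A minimal sketch of the idea using the public avio API; the field name
total_size and the function name are illustrative:

    #include <stdint.h>
    #include <libavformat/avformat.h>

    /* record the size while the AVIOContext is still open, then close
     * it; the final report later reads the stored value instead of
     * querying the already-closed file */
    static void close_output_file(AVFormatContext *fc, int64_t *total_size)
    {
        int64_t size = fc->pb ? avio_size(fc->pb) : -1;
        *total_size  = size >= 0 ? size : 0;

        if (!(fc->oformat->flags & AVFMT_NOFILE))
            avio_closep(&fc->pb);
    }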