This enables overriding the rotation as well as horizontal/vertical
flip state of a specific video stream on the input side.
Additionally, switch the single test that was using the rotation
metadata to instead override the input display rotation, leading to the
same result.
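As an illustration (a sketch, not taken from this commit; the option
name -display_rotation is an assumption about how the override is
exposed on the command line), forcing a 90-degree display rotation on
an input without re-encoding could look like:
ffmpeg -display_rotation 90 -i input.mp4 -c copy output.mp4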
It has been deprecated in favor of the aresample filter for almost 10
years.
Another thing this option can do is drop audio timestamps and have them
generated by the encoding code or the muxer, but
- for encoding, this can already be done with the setpts filter (see
the sketches below)
- for muxing, this should almost never be done, as timestamp generation
by the muxer is deprecated; people who really want to do this can use
the setts bitstream filter
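For reference, minimal sketches of the suggested replacements (these
are standard filter invocations from the filter documentation, not
taken from this commit):
Audio sync through resampling:
ffmpeg -i in.mkv -af aresample=async=1 out.mkv
Regenerating audio timestamps before encoding:
ffmpeg -i in.mkv -af asetpts=N/SR/TB out.mkv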
The -shortest option (which finishes the output file at the time the
shortest stream ends) is currently implemented by faking the -t option
when an output stream ends. This approach is fragile, since it depends
on the frames/packets being processed in a specific order. E.g. there
are currently some situations in which the output file length will
depend unpredictably on unrelated factors like encoder delay. More
importantly, the present work aiming at splitting various ffmpeg
components into different threads will make this approach completely
unworkable, since the frames/packets will arrive in effectively random
order.
This commit introduces a "sync queue", which is essentially a collection
of FIFOs, one per stream. Frames/packets are submitted to these FIFOs
and are then released for further processing (encoding or muxing) when
it is ensured that the frame in question will not cause its stream to
get ahead of the other streams (the logic is similar to libavformat's
interleaving queue).
These sync queues are then used for encoding and/or muxing when the
-shortest option is specified.
A new option – -shortest_buf_duration – controls the maximum duration
of queued packets, to avoid runaway memory usage.
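As an illustration (a sketch; assuming the option value is a duration
in seconds), limiting the sync-queue buffering to 5 seconds could look
like:
ffmpeg -i video.mkv -i audio.wav -map 0:v -map 1:a -shortest -shortest_buf_duration 5 out.mkv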
This commit changes the results of the following tests:
- copy-shortest[12]: the last audio frame is now gone. This is
correct, since it actually outlasts the last video frame.
- shortest-sub: the video packets following the last subtitle packet are
now gone. This is also correct.
This is a per-file input option that adjusts an input's timestamps
with reference to another input, so that emitted packet timestamps
account for the difference between the start times of the two inputs.
A typical use case is syncing two or more live inputs, such as from
capture devices. Both the target and reference input source timestamps
should be based on the same clock source.
If either input lacks starting timestamps, then no sync adjustment is made.
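As an illustration (a sketch; assuming this describes the -isync input
option, which takes the index of the reference input), syncing a
capture input against input #0 could look like:
ffmpeg -i reference.mkv -isync 0 -i capture.mkv -map 0 -map 1 out.mkv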
-shortest stops 'recording' when the shortest output stream ends. The
native or even seek-adjusted duration of the source input stream isn't
considered.
Currently, the code doing this is spread over several places and may
behave in unexpected ways. E.g. automatic 'default' marking is only done
for streams fed by complex filtergraphs. It is also applied in the order
in which the output streams are initialized, which is effectively
random.
Move processing the dispositions to the end of open_output_file(), when
we already have all the necessary information.
Apply the automatic default marking only if no explicit -disposition
options were supplied by the user, and apply it to the first stream of
each type (excluding attached pics) when there is more than one stream
of that type and no default markings were copied from the input streams.
Explicitly document the new behavior.
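For reference (a sketch, not from this commit), an explicit disposition
supplied by the user, which now disables the automatic default marking,
could look like:
ffmpeg -i in.mkv -c copy -disposition:a:1 default out.mkv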
Changes the results of some tests, where the output file gets a default
disposition, while it previously did not.
UPD: the last patch set was rebased over current master and DX9 is used
as the default device type.
The dxva2/DX9 device type is selected by default, as before; explicit
d3d11va/DX11 usage covers more HW configurations.
A warning message was added, as the default device type is expected to
change in the future.
Fixes TGL / AV1 decode (which requires DX11) with explicit DX11 type
selection.
Add headless/multi adapter support and fixes:
https://trac.ffmpeg.org/ticket/7511
https://trac.ffmpeg.org/ticket/6827
http://ffmpeg.org/pipermail/ffmpeg-trac/2017-November/041901.html
https://trac.ffmpeg.org/ticket/7933
338fbcd5bb
https://github.com/jellyfin/jellyfin/issues/2626#issuecomment-602153952
Any other fixes are welcome, including an OpenCL interop patch, since I
don't have a proper setup to validate this use case.
Decoding, encoding and transcoding have been validated.
The child_device_type option is responsible for d3d11va/dxva2 device
selection.
Usage examples:
DirectX 11:
-init_hw_device qsv:hw,child_device_type=d3d11va
-init_hw_device qsv:hw,child_device_type=d3d11va,child_device=0
OR
-init_hw_device d3d11va=dx -init_hw_device qsv@dx
DirectX 9 is still supported but requires explicit selection:
-init_hw_device qsv:hw,child_device_type=dxva2
OR
-init_hw_device dxva2=dx -init_hw_device qsv@dx
Signed-off-by: Artem Galin <artem.galin@intel.com>
At present, progress stats are updated at a hardcoded interval of
half a second. For long processes, this can lead to bloated
logs and progress reports.
Users can now set a custom period using the -stats_period option.
The default is kept at 0.5 seconds.
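For example (a sketch; assuming the option value is in seconds),
updating progress stats every 10 seconds could look like:
ffmpeg -stats_period 10 -i input.mkv output.mkv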
This way, the old behavior based on the maximum queue size limit is
kept for streams where each individual packet is large, while more
packets can be buffered for smaller streams (the current default is
50 megabytes per stream).
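If this threshold is the one exposed as the -muxing_queue_data_threshold
output option in current ffmpeg (an assumption about which option this
commit refers to), raising it could look like (value in bytes):
ffmpeg -i in.mkv -c copy -muxing_queue_data_threshold 104857600 out.mkv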
For some explanation: by default, ffmpeg copies packets from before the
appointed seek point/start time and puts them into the local muxing
queue. Previously, this queue was much less likely to be utilized,
since the encoder (and thus the output stream) was initialized as soon
as the filter chain was initialized.
Now, since encoder initialization is being pushed to the point where
the first AVFrame is decoded and filtered - which only happens after
the exact seek point is hit, as packets are ignored until then - this
queue will see much more usage.
In layman's terms, this attempts to fix cases such as the following
(illustrated by the example after this list):
- seek point ends up being 5 seconds before requested time.
- audio is set to copy, and thus immediately begins filling the
muxing queue.
- video is being encoded, and thus all received packets are skipped
until the requested time is hit.
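A sketch of such a command line (file names and encoder choice are
illustrative, not from this commit):
ffmpeg -ss 300 -i input.mkv -c:a copy -c:v libx264 output.mkv
Here the seek point lands before the requested 300-second mark, the
copied audio immediately begins filling the muxing queue, while video
packets are skipped until the requested time is hit.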
Threaded input can increase the smoothness of e.g. x11grab
significantly. Before this patch, in order to activate threaded input
the user had to specify a "dummy" additional input; with this change,
that is no longer required.
Signed-off-by: Marton Balint <cus@passwd.hu>
Currently, ffmpeg inserts a scale filter by default in the filter graph
to force the whole decoded stream to be scaled to the same size as the
first frame. This does not make sense in resolution-changing cases if
the user wants the raw video without any scaling.
Add autoscale/noautoscale as an output option to indicate whether to
automatically insert the scale filter into the filter graph:
-noautoscale or -autoscale 0:
disables the default automatic insertion of the scale filter.
ffmpeg -y -i input.mp4 out1.yuv -noautoscale out2.yuv -autoscale 0 out3.yuv
Update docs.
Suggested-by: Mark Thompson <sw@jkqxz.net>
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: U. Artie Eoff <ullysses.a.eoff@intel.com>
Signed-off-by: Linjie Fu <linjie.fu@intel.com>
The "-deinterlace" was deprecated since d7edd35, over eight years
ago.
Refer to deinterlacing filters instead.
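For example (a sketch; the filter choice is illustrative), explicit
deinterlacing can be requested with:
ffmpeg -i interlaced.ts -vf yadif output.mp4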
Signed-off-by: Moritz Barsnick <barsnick@gmx.net>
Also documents all options supported by the hwdevice.
This lets users enable all extensions they need without writing their own
instance initialization code.