At present, progress stats are updated at a hardcoded interval of
half a second. For long processes, this can lead to bloated
logs and progress reports.
Users can now set a custom period using the -stats_period option.
The default is kept at 0.5 seconds.
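For example (the interval value here is only illustrative), printing
stats every 10 seconds:
ffmpeg -stats_period 10 -i input.mkv output.mkv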
This way, the old max-queue-size-based limit is kept for streams
where each individual packet is large, while for streams with smaller
packets more packets can be buffered (the current default threshold
is 50 megabytes per stream).
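As a sketch, assuming the threshold is exposed as the per-stream
-muxing_queue_data_threshold option taking a size in bytes, it could
be raised for a single stream like this:
ffmpeg -i input.mkv -c copy -muxing_queue_data_threshold:a:0 104857600 output.mkv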
For some explanation: by default ffmpeg copies packets from before
the requested seek point/start time and puts them into the local
muxing queue. Previously this queue was much less likely to be
utilized, since the encoder (and thus the output stream) was
initialized as soon as the filter chain was initialized.
Now that encoder initialization is pushed to the point where the
first AVFrame is decoded and filtered - which only happens after
the exact seek point is hit, as packets are ignored until then -
this queue will see much more usage.
In layman's terms, this attempts to fix cases such as the following
(see the illustrative command after this list):
- the seek point ends up being 5 seconds before the requested time.
- audio is set to copy, and thus immediately begins filling the
muxing queue.
- video is being encoded, and thus all received packets are skipped
until the requested time is hit.
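As an illustration (file names and times are made up), a command that
hits this pattern is:
ffmpeg -ss 00:01:00 -i input.mkv -c:a copy -c:v libx264 output.mkv
Here the copied audio packets start filling the muxing queue right
away, while video packets are discarded until the requested time is
reached and the video encoder can be initialized.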
Threaded input can significantly increase the smoothness of e.g. x11grab.
Before this patch, in order to activate threaded input the user had to
specify a "dummy" additional input; with this change that is no longer
required.
Signed-off-by: Marton Balint <cus@passwd.hu>
Currently, ffmpeg inserts a scale filter by default in the filter graph
to force the whole decoded stream to be scaled to the size of the
first frame. This does not make much sense in resolution-changing cases
if the user wants the raw video without any scaling.
Add autoscale/noautoscale as an output option to indicate whether to
automatically insert the scale filter into the filter graph:
-noautoscale or -autoscale 0:
disable the default automatic insertion of the scale filter.
ffmpeg -y -i input.mp4 out1.yuv -noautoscale out2.yuv -autoscale 0 out3.yuv
Update docs.
Suggested-by: Mark Thompson <sw@jkqxz.net>
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: U. Artie Eoff <ullysses.a.eoff@intel.com>
Signed-off-by: Linjie Fu <linjie.fu@intel.com>
The "-deinterlace" option has been deprecated since d7edd35, over eight
years ago.
Refer to deinterlacing filters instead.
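For example, yadif is one such filter (the filter choice and file names
here are only illustrative):
ffmpeg -i interlaced.mkv -vf yadif output.mkv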
Signed-off-by: Moritz Barsnick <barsnick@gmx.net>
Also documents all options supported by the hwdevice.
This lets users enable all extensions they need without writing their own
instance initialization code.
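As a hedged illustration of passing device options at initialization
(the device type and the debug key are assumptions, not taken from this
patch; the authoritative list is the one documented for the hwdevice):
ffmpeg -init_hw_device vulkan=vk:0,debug=1 -i input.mkv -f null -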
Add two FAQs about running FFmpeg in the background.
The first explains the use of the -nostdin option in
a straightforward way. Text revised based on review.
The second FAQ starts from a confusing error message
and leads to the solution: use of the -nostdin option.
The purpose of the second FAQ is to attract web searches
from people having the problem, and offer them a solution.
Add an anchor to the Main Options section of the ffmpeg
documentation, so that the FAQs can link directly there.
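A minimal background invocation using that option (file names are
illustrative) would look like:
ffmpeg -nostdin -i input.mkv output.mkv &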
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
For some strange reason, only the "-t" option was implemented
for input files, while both "-t" and "-to" were available
for output files. This made extracting a range from an
input file inconvenient.
This patch enables the -to option for input, so one can do:
ffmpeg -ss 1:23:20 -to 1:27:22.3 -i myinput.mkv ...
Signed-off-by: Vitaly _Vi Shukela <vi0oss@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The -map option allows for a trailing ? so that an error is not thrown if
the input stream does not exist.
This capability is extended to the -map_channel option.
This allows an ffmpeg command not to break if an input channel does not
exist, which can be useful (for instance, in scripts processing audio
channels from sources with an unset number of audio channels).
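For instance (the stream and channel indices here are illustrative),
the following no longer fails if the second audio channel is missing:
ffmpeg -i input.mov -map_channel 0.0.1? output.wav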
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This only supports one device globally, but more can be used by
passing them with input streams in hw_frames_ctx or by deriving new
devices inside a filter graph with hwmap.
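A hedged sketch of setting up one such global device and using it from
a filter graph (the -init_hw_device/-filter_hw_device options and the
OpenCL filter chain are standard ffmpeg hardware-filtering usage,
assumed here rather than taken from this patch):
ffmpeg -init_hw_device opencl=ocl -filter_hw_device ocl -i input.mkv -vf hwupload,unsharp_opencl,hwdownload output.mkv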
(cherry picked from commit e669db7610)
Add a per-stream option for setting the encoder timebase.
The following values are allowed:
0 - for video, use 1/frame_rate; for audio, use 1/sample_rate (this is
the default)
-1 - match the input timebase (when possible)
>0 - set the timebase to the provided number
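For example, to make the video encoder keep the input timebase
(assuming the option is exposed per stream as -enc_time_base):
ffmpeg -i input.mkv -c:v libx264 -enc_time_base:v -1 output.mkv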
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
* commit '043b0b9fb1481053b712d06d2c5b772f1845b72b':
Replace leftover uses of -aframes|-dframes|-vframes with -frames:a|d|v
The merge also includes all our own occurrences.
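For example, with the new syntax, limiting output to 100 video frames:
ffmpeg -i input.mkv -frames:v 100 output.mkv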
Merged-by: Clément Bœsch <u@pkh.me>