Vendor id helps to select the desired device when the kernel driver is
unknown or unsupported, since a vendor may use different kernel drivers
on different platforms.
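For illustration only, a hypothetical device-creation string, assuming
the id is exposed as a 'vendor_id' key on a VAAPI device (the key name,
the device-string syntax and the 0x8086 value are assumptions, not
taken from this change):

    ffmpeg -init_hw_device vaapi=va:,vendor_id=0x8086 -i input.mp4 -f null -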
Signed-off-by: Fei Wang <fei.w.wang@intel.com>
This allows one complex filtergraph's output to be sent as input to
another one, which is useful in certain situations (one is described in
the docs).
Chaining filtergraphs was already effectively possible by using a
wrapped_avframe encoder connected to a loopback decoder, but it is ugly,
non-obvious and inefficient.
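A rough sketch of the idea, assuming the connection is made by reusing
an output label of one graph as an input label of a later one (labels
and filters here are arbitrary; see the docs for the actual supported
form):

    ffmpeg -i input.mkv \
        -filter_complex '[0:v]scale=1280:720[small]' \
        -filter_complex '[small]hflip[out]' \
        -map '[out]' -c:v libx264 output.mkv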
The {br}/{abr} directives are not limited to post-encoding; they can
also be used pre-muxing. The already-present {packet} tag describes this
more accurately, so just drop the assertions.
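For example, assuming the pre-muxing stats are controlled by the
-stats_mux_pre/-stats_mux_pre_fmt pair (the format string is only
illustrative), a bitrate directive can now appear there as well:

    ffmpeg -i input.mkv -c:v libx264 \
        -stats_mux_pre mux.log -stats_mux_pre_fmt '{n} {pts} {br}' output.mkv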
Always use the functionality of the latter, which makes more sense as it
avoids losing keyframes due to vsync code dropping frames.
Deprecate the 'source_no_drop' value, as it is now redundant.
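A minimal sketch of the remaining form, which now also covers the
deprecated value's use case:

    ffmpeg -i input.mkv -c:v libx264 -force_key_frames source output.mkv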
It is badly named (should have been -top_field_first, or at least -tff),
underdocumented and underspecified, and (most importantly) entirely
redundant with the setfield filter.
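For example, the field order can be set with the filter instead, here
marking frames as top-field-first:

    ffmpeg -i input.mkv -vf setfield=tff -c:v libx264 output.mkv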
Stop claiming the argument is always a floating point number, which
* confuses floating point and decimal numbers
* is not always true even accounting for the above point
Read the timebase from FrameData rather than the input stream. This
should fix #10393 and generally be more reliable.
Replace the use of '-1' to indicate the demuxing timebase with the
string 'demux'. Also allow requesting the filter timebase with
'-enc_time_base filter'.
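A sketch of the two symbolic values (file names are placeholders):

    ffmpeg -i input.mkv -c:v libx264 -enc_time_base demux  output.mkv
    ffmpeg -i input.mkv -c:v libx264 -enc_time_base filter output.mkv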
This option adds a long string of numbers to the progress line, where
the i-th number contains the base-2 logarithm of the number of times a
frame with QP value i was seen by print_report().
There are multiple problems with this feature:
* despite this existing since 2005, web search shows no indication
that it was ever useful for any meaningful purpose;
* the format of what is printed is entirely undocumented, one has to
find it out from the source code;
* QP values above 31 are silently ignored;
* it only works with one video stream;
* as it relies on global state, it is in conflict with ongoing
architectural changes.
It then seems that the nontrivial cost of maintaining this option is not
worth its negligible (or possibly negative - since it pollutes the
already large option space) value.
Users who really need similar functionality can also implement it
themselves using -vstats.
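For instance, a comparable QP histogram can be derived from the
per-frame log, assuming the default -vstats_file format that prints a
'q=' field for every coded frame (the awk post-processing is only a
sketch):

    ffmpeg -i input.mkv -c:v mpeg4 -vstats_file vstats.log -f null -
    awk '{ for (i = 1; i < NF; i++) if ($i == "q=") hist[$(i+1)]++ }
         END { for (q in hist) print q, hist[q] }' vstats.log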
In particular, add a sentence to introduce the example, and add a
simpler starting example with no options.
Also use different formats for input.avi and output.mp4, to convey
that the conversion also applies to the container format.
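The added starting example amounts to the following, with the output
container inferred from the .mp4 extension:

    ffmpeg -i input.avi output.mp4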
Address issue:
http://trac.ffmpeg.org/ticket/8730
Reference the drawtext textfile option and the ffmpeg
-filter_complex_script and -filter_script options as possible ways to
avoid shell escaping.
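For instance (file names are placeholders), the text or the whole
filtergraph can be read from a file instead of being escaped on the
command line:

    ffmpeg -i input.mp4 -vf drawtext=textfile=caption.txt output.mp4
    ffmpeg -i input.mp4 -filter_script:v filters.txt output.mp4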
Address issue:
http://trac.ffmpeg.org/ticket/9008
Analogous to -enc_stats*, but happens right before muxing. Useful
because bitstream filters and the sync queue can modify packets after
encoding and before muxing. Also has access to the muxing timebase.
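A sketch, assuming the new options follow the existing naming as
-stats_mux_pre/-stats_mux_pre_fmt; the bitstream filter is included
only to show a packet being modified between encoder and muxer:

    ffmpeg -i input.mkv -c:v libx264 -bsf:v h264_metadata \
        -stats_mux_pre mux.log output.mkv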
Splits the currently handled subtitle at random access point
packets of a configurable output stream (the heartbeat stream).
Currently only subtitle streams which are directly mapped into the
same output in which the heartbeat stream resides are affected.
This way the subtitle - which is known to be shown at this time -
can be split and passed to the muxer before its full duration is
known. This is also a drawback, as it essentially outputs
multiple subtitles from a single input subtitle that continues
over multiple random access points. Thus this feature should not
be utilized in cases where subtitle output latency does not matter.
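A sketch of the intended usage, assuming the feature is exposed as the
per-output-stream -fix_sub_duration_heartbeat flag, used together with
-fix_sub_duration on the input, and that the input carries a text
subtitle stream:

    ffmpeg -fix_sub_duration -i input.mkv \
        -map 0 -c:v libx264 -c:a copy -c:s mov_text \
        -fix_sub_duration_heartbeat:v:0 output.mp4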
Co-authored-by: Andrzej Nadachowski <andrzej.nadachowski@24i.com>
Co-authored-by: Bernard Boulay <bernard.boulay@24i.com>
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>