Add the adaptive_i/adaptive_b features to hevc_qsv. adaptive_i allows
the frame type to be changed from P or B to I; adaptive_b allows the
frame type to be changed from B to P.
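A possible invocation, assuming the new options are exposed as
adaptive_i and adaptive_b (matching the existing h264_qsv options):

    ffmpeg -i input.mp4 -c:v hevc_qsv -adaptive_i 1 -adaptive_b 1 output.mp4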
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
According to its documentation it returns "pts of the last muxed packet
+ its duration", but the value it actually returns right now is
(possibly guessed) dts after muxer-internal bitstream filtering (if
any).
This function was added for ffmpeg.c, but it is not used there anymore.
Since the value it returns is ill-defined and so inappropriate for any
serious use, deprecate it.
The present default value of 0 will render the overlay video invisible.
A default of 1.0 is consistent with most common use cases.
Signed-off-by: Fei Wang <fei.w.wang@intel.com>
Reviewed-by: Philip Langdale <philipl@overt.org>
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
The "AYUV" format is defined by Microsoft as their preferred format for
4:4:4 content, and so it is the format used by Intel VAAPI and QSV.
As Microsoft likes to define its byte ordering in little-endian
fashion, the memory order is reversed, and so our pix_fmt, which
follows memory order, has the reversed name (VUYA).
This functionality already exists, but as pointed out in #9672 and #9673,
requiring users to manually include filters is clumsy, error-prone and
hard to use together with tools like ffplay.
To streamline ICC profile support, add a new AVCodecContext flag to
automatically enable reading and writing ICC profiles for all
appropriate media types.
Note that this commit only includes the new API. The implementation is
split off to separate commits for readability.
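A minimal sketch of how a caller might opt in, assuming the flag is
exposed as AV_CODEC_FLAG2_ICC_PROFILES on AVCodecContext.flags2:

    #include <libavcodec/avcodec.h>

    /* Opt in to automatic reading/writing of ICC profiles on this codec
     * context; the flag name is an assumption based on this API. */
    static void enable_icc_profiles(AVCodecContext *avctx)
    {
        avctx->flags2 |= AV_CODEC_FLAG2_ICC_PROFILES;
    }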
Signed-off-by: Niklas Haas <git@haasn.dev>
The -shortest option (which finishes the output file at the time the
shortest stream ends) is currently implemented by faking the -t option
when an output stream ends. This approach is fragile, since it depends
on the frames/packets being processed in a specific order. E.g. there
are currently some situations in which the output file length will
depend unpredictably on unrelated factors like encoder delay. More
importantly, the present work aiming at splitting various ffmpeg
components into different threads will make this approach completely
unworkable, since the frames/packets will arrive in effectively random
order.
This commit introduces a "sync queue", which is essentially a collection
of FIFOs, one per stream. Frames/packets are submitted to these FIFOs
and are then released for further processing (encoding or muxing) when
it is ensured that the frame in question will not cause its stream to
get ahead of the other streams (the logic is similar to libavformat's
interleaving queue).
These sync queues are then used for encoding and/or muxing when the
-shortest option is specified.
A new option, -shortest_buf_duration, controls the maximum number of
queued packets, to avoid runaway memory usage.
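Illustrative usage, assuming the option value is a duration in seconds:

    ffmpeg -i video.mkv -i audio.wav -map 0:v -map 1:a \
           -shortest -shortest_buf_duration 10 out.mkv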
This commit changes the results of the following tests:
- copy-shortest[12]: the last audio frame is now gone. This is
correct, since it actually outlasts the last video frame.
- shortest-sub: the video packets following the last subtitle packet are
now gone. This is also correct.
Using parameters from AVCodecContext to reset the qsv encoder is a
better fit for MFXVideoENCODE_Reset(); per-frame metadata is better
suited to the mfxEncodeCtrl structure passed to
MFXVideoENCODE_EncodeFrameAsync(). Change the code to use the values
from AVCodecContext.
Because q->param is passed as both the "in" and "out" parameter when
calling MFXVideoENCODE_Query(), the value in q->param may be changed.
New variables are added to store the old configuration, so that real
parameter changes can be detected.
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
The only duration field currently present in AVFrame is pkt_duration,
which is semantically restricted to those frames that are output by
decoders.
Add a new field that stores the frame's duration without regard for how
that frame was produced. Deprecate pkt_duration.
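A minimal sketch of the intended migration, assuming the new field is
AVFrame.duration, expressed in the same time base as the frame's
timestamps:

    #include <libavutil/frame.h>

    /* Propagate a frame's duration without relying on the deprecated
     * pkt_duration field. */
    static void copy_duration(const AVFrame *src, AVFrame *dst)
    {
        dst->duration = src->duration;
    }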
This is a per-file input option that adjusts an input's timestamps
with reference to another input, so that emitted packet timestamps
account for the difference between the start times of the two inputs.
The typical use case is to sync two or more live inputs, such as those
from capture devices. The source timestamps of both the target and the
reference input should be based on the same clock source.
If either input lacks starting timestamps, then no sync adjustment is made.
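Possible usage, assuming the option is exposed as -isync and takes the
index of the reference input (here input 0) as an input option:

    ffmpeg -i reference_capture.ts -isync 0 -i other_capture.ts \
           -map 0 -map 1 -c copy synced.mkv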
GSoC'22
libavfilter/vf_chromakey_cuda.cu:the CUDA kernel for the filter
libavfilter/vf_chromakey_cuda.c: the C side that calls the kernel and gets user input
libavfilter/allfilters.c: added the filter to it
libavfilter/Makefile: added the filter to it
cuda/cuda_runtime.h: added two math CUDA functions that are used in the filter
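A sketch of typical use, assuming the filter accepts the same
color/similarity/blend values as the software chromakey filter and is
combined with overlay_cuda:

    ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i background.mp4 \
           -hwaccel cuda -hwaccel_output_format cuda -i greenscreen.mp4 \
           -filter_complex "[1:v]chromakey_cuda=green:0.1:0.12[fg];[0:v][fg]overlay_cuda" \
           -c:v h264_nvenc output.mp4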
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
Add a shorthand parameter for making a fixed-size grid. The existing
xstack layout parameter syntax gets tedious if all one wants is a
matrix-like grid of the input streams. Add a grid option to the xstack
filter that simplifies this use case by simply specifying the number of
rows and columns instead of specific x/y coordinates for each stream.
Also update the filter documentation to explain the new option.
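Illustrative usage, assuming the grid value is given as columns x rows:

    ffmpeg -i in0.mp4 -i in1.mp4 -i in2.mp4 -i in3.mp4 -i in4.mp4 -i in5.mp4 \
           -filter_complex "xstack=inputs=6:grid=3x2" out.mp4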
Signed-off-by: Vignesh Venkatasubramanian <vigneshv@google.com>
-shortest stops 'recording' when the shortest output stream ends. The
native or even seek-adjusted duration of the source input stream isn't
considered.
This allows for wider compatibility with older devices, such as those
running iOS 3. The only difference between HLS version 2 and version 3 is
that version 3 supports non-integer EXTINF values, and as such, we can
default to version 2 if we're using whole-integer EXTINFs anyway, when
`-hls_flags round_durations` is set.
As this code seems to otherwise consistently use the lowest compatible
version, this seems to fit in properly with existing behavior.
Testing confirms that with this patch, HLS output can work all the way back
to iOS 3.
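Example command that should now produce a version 2 playlist, using the
existing muxer options:

    ffmpeg -i input.mp4 -c copy -f hls -hls_time 6 -hls_flags round_durations out.m3u8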
Reviewed-by: Steven Liu <liuqi05@kuaishou.com>
Signed-off-by: Lucy <lucy@absolucy.moe>
Enable dynamic QP configuration at runtime for the qsv encoder. Through
AVFrame->metadata, the key "qsv_config_qp" can be set to change the QP
configuration when encoding video in CQP mode.
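A minimal sketch of setting the key from C before submitting a frame;
the exact value syntax accepted by the encoder is an assumption here:

    #include <stdio.h>
    #include <libavutil/dict.h>
    #include <libavutil/frame.h>

    /* Request a new QP for this and subsequent frames in CQP mode. */
    static void request_qp(AVFrame *frame, int qp)
    {
        char buf[16];
        snprintf(buf, sizeof(buf), "%d", qp);
        av_dict_set(&frame->metadata, "qsv_config_qp", buf, 0);
    }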
Signed-off-by: Yue Heng <yue.heng@intel.com>
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
Reduce the default fragment size from the PulseAudio default of 2 seconds to 50 ms.
This also has an effect on the size of the returned frames, which will be
around 50 ms as well, making timestamps more accurate.
This should fix the regression in ticket #9776.
PulseAudio timestamps for monitor sources are still pretty inaccurate for me,
but I don't see how else we should query latencies from the library.
Signed-off-by: Marton Balint <cus@passwd.hu>
This enables printing to a resource specified with -o OUTPUT.
If no output is specified, it prints to stdout as usual.
Addresses issue: http://trac.ffmpeg.org/ticket/8024
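Illustrative usage, writing the report to a file instead of stdout:

    ffprobe -of json -show_streams -o streams.json input.mp4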
Signed-off-by: Marton Balint <cus@passwd.hu>
This new function makes it possible to use avio_printf() functionality from
a function taking a variable list of arguments.
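A minimal sketch of the intended use, assuming the new function is
avio_vprintf() with the usual vprintf-style signature:

    #include <stdarg.h>
    #include <libavformat/avio.h>

    /* Forward a variable argument list from a printf-like wrapper to an
     * AVIOContext. */
    static int log_to_avio(AVIOContext *pb, const char *fmt, ...)
    {
        va_list ap;
        int ret;

        va_start(ap, fmt);
        ret = avio_vprintf(pb, fmt, ap);
        va_end(ap);
        return ret;
    }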
Signed-off-by: Marton Balint <cus@passwd.hu>
Introduce fifo_size and overrun_nonfatal params to configure fifo buffer
behavior.
Use the newly introduced RIST_DATA_FLAGS_OVERFLOW flag to check for overrun
and error out in that case.
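Possible usage, assuming the new options can be passed as input options
to the rist protocol (the address and sizes below are placeholders):

    ffmpeg -fifo_size 8192 -overrun_nonfatal 1 -i "rist://192.0.2.1:1968" -c copy out.ts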
Signed-off-by: Marton Balint <cus@passwd.hu>
To allow more accurate QP control, add separate min/max QP controls for
I/P/B frames to the qsv encoder. qmax and qmin still work, but the
newly added options take higher priority.
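Possible usage, assuming the new options are exposed as
min_qp_i/max_qp_i, min_qp_p/max_qp_p and min_qp_b/max_qp_b:

    ffmpeg -i input.mp4 -c:v hevc_qsv -q 26 \
           -min_qp_i 20 -max_qp_i 30 -min_qp_p 24 -max_qp_p 34 \
           -min_qp_b 26 -max_qp_b 36 output.mp4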
Signed-off-by: Yue Heng <yue.heng@intel.com>
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Since every DLL can use an individual CRT on Windows, having
an exported function that opens a FILE* won't work if that
FILE* is going to be used from a different DLL (or from user
application code).
Internally within the libraries, the issue can be worked around by
renaming the function to ff_fopen_utf8 (so that it doesn't end up
exported from the DLLs) and duplicating it in all libraries that use it
(this already happened implicitly because the function resided in
file_open.c).
This makes the avpriv_fopen_utf8 / ff_fopen_utf8 function work in
the exact same way as the existing avpriv_open / ff_open, with the
same setup as introduced in e743e7ae6e.
That mechanism doesn't work for external users, thus deprecate the
existing function.
Signed-off-by: Martin Storsjö <martin@martin.st>
Required to remux m2ts to mkv
Minor changes and porting to FFBitStreamFilter done by the committer.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Previously, the default timebase caused two warnings during decoding about not being able to update timestamps for skipped and discarded samples, respectively.
Signed-off-by: Andreas Unterweger <dustsigns@gmail.com>
The last encoded frame is now fetched on EOF. It was previously left in the encoder and caused a "1 frame left in queue" warning.
Signed-off-by: Andreas Unterweger <dustsigns@gmail.com>
This filter is designed to parse embedded ICC profiles and attempt to
extract colorspace tags from them, updating the AVFrame metadata
accordingly.
This is intentionally made a separate filter, rather than being part of
libavcodec itself, so that it's an opt-in behavior for the time being.
This also gives the user more flexibility to e.g. first attach an ICC
profile and then also set the colorspace tags from it.
This makes #9673 possible, though not automatic.
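Possible usage, assuming the filter is registered as iccdetect;
showinfo is added here only to display the updated tags:

    ffmpeg -i input.png -vf iccdetect,showinfo -f null -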
Signed-off-by: Niklas Haas <git@haasn.dev>
This filter is designed to specifically cover the task of generating ICC
profiles (and attaching them to output frames) on demand. Other tasks,
such as ICC profile loading/stripping, or ICC profile application, are
better left to separate filters (or included into e.g. vf_setparams).
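A possible invocation, assuming the filter is registered as iccgen and
that the output encoder writes attached ICC profiles:

    ffmpeg -i input.png -vf iccgen output.png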
Signed-off-by: Niklas Haas <git@haasn.dev>