Change the main loop and every component (demuxers, decoders, filters,
encoders, muxers) to use the previously added transcode scheduler. Every
instance of every such component was already running in a separate
thread, but now they can actually run in parallel.
Changes the results of the ffmpeg-fix_sub_duration_heartbeat test
(verified by JEEB) to be more correct and deterministic.
Partially fixes ticket #798
Reviewed-by: James Almer <jamrial@gmail.com>
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Peter Ross <pross@xvid.org>
Fix rendering of int values within a side data element, which was
broken since commit d2d3a83ad9, where the side data element was
correctly marked as a variable fields element. Logic to render a
string variable was already implemented, but was missing for the
int fields path, which was enabled by that commit.
Also, code and schema are changed in order to account for multiple
variable-fields elements - such as side data, contained within the
same parent. Previously it was assumed that a single variable-fields
element was contained within the parent, which was the case for tags,
but is not the case for side-data.
Previously data was rendered as:
<side_data_list>
<side_data side_data_type="CPB properties" max_bitrate="0" min_bitrate="0" avg_bitrate="0" buffer_size="327680" vbv_delay="-1"/>
</side_data_list>
Now as:
<side_data_list>
<side_data type="CPB properties">
<side_datum key="side_data_type" value="CPB properties"/>
<side_datum key="max_bitrate" value="0"/>
<side_datum key="min_bitrate" value="0"/>
<side_datum key="avg_bitrate" value="0"/>
<side_datum key="buffer_size" value="49152"/>
<side_datum key="vbv_delay" value="-1"/>
</side_data>
</side_data_list>
Variable-fields elements are rendered as a containing element wrapping
generic key/value elements, enabling use of a strict XML schema.
Fix trac issue:
https://trac.ffmpeg.org/ticket/10613
It is badly named (should have been -top_field_first, or at least -tff),
underdocumented and underspecified, and (most importantly) entirely
redundant with the setfield filter.
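For illustration, the equivalent setfield invocation (a sketch; file
names are placeholders):
  # tff marks frames as top-field-first; bff/prog are also available
  ffmpeg -i input.mkv -vf setfield=tff -c:a copy output.mkv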
Add demuxer to probe raw vvc and parse vvcc byte stream format.
Co-authored-by: Nuo Mi <nuomi2021@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
I've been sitting on this for 3 1/2 years now(!), and I finally got
around to fixing the loose ends and convincing myself that it was
correct. It follows the same basic structure as yadif_cuda, including
leaving out the edge handling, to avoid expensive branching.
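A possible invocation (sketch; the filter name bwdif_cuda and the
nvenc encoder choice are assumptions, not stated above):
  # requires frames in CUDA memory, hence -hwaccel_output_format cuda
  ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mkv \
    -vf bwdif_cuda -c:v h264_nvenc output.mkv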
This patch adds another ARIB caption decoder using the libaribcaption
external library.
Unlike libaribb24, it supports 3 types of subtitle outputs:
* text: plain text
* ass: ASS formatted text
* bitmap: bitmap image
The default subtitle type is ass, the same as libaribb24.
Advantages over libaribb24 for ASS subtitles are:
* Subtitle positioning.
* Multi-rect subtitles: some captions are displayed at different
positions at the same time.
* More stability and reproducibility.
To compile with this feature:
* The libaribcaption external library has to be pre-installed:
https://github.com/xqq/libaribcaption
* configure with `--enable-libaribcaption` option.
`--enable-libaribb24` and `--enable-libaribcaption` options are
not mutually exclusive. If both are enabled, libaribcaption takes
precedence, per the order listed in `libavcodec/allcodecs.c`.
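A sketch of selecting the decoder and its output type (the decoder
name libaribcaption and the sub_type option are assumptions here):
  # decode ARIB captions to ASS and mux them as an ass stream
  ffmpeg -c:s libaribcaption -sub_type ass -i input.ts -c:s ass output.mkv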
Signed-off-by: rcombs <rcombs@rcombs.me>
Many filters accept user-provided data that is cumbersome to provide as
text strings - e.g. binary files or very long text. For that reason such
filters typically provide an option whose value is the path from which
the filter loads the actual data.
However, filters doing their own IO internally is a layering violation
that the callers may not expect, and is thus best avoided. With the
recently introduced graph segment parsing API, loading option values
from files can now be handled by the caller.
This commit makes use of the new API in ffmpeg CLI. Any option name in
the filtergraph syntax can now be prefixed with a slash '/'. This will
cause ffmpeg to interpret the value as the path to load the actual value
from.
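For example (the file name is a placeholder), drawtext can read the
value of its text option from a file:
  # the contents of greeting.txt become the value of the text option
  ffmpeg -i input.mkv -vf "drawtext=/text=greeting.txt:fontsize=32" output.mkv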
Analogous to -enc_stats*, but happens right before muxing. Useful
because bitstream filters and the sync queue can modify packets after
encoding and before muxing. Also has access to the muxing timebase.
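A hypothetical use (the option name -mux_stats is assumed here from
the -enc_stats* analogy, not stated above):
  # write per-packet stats right before muxing
  ffmpeg -i input.mkv -c:v libx264 -mux_stats muxed.log output.mkv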
Splits the currently handled subtitle at the random access point
packets of a specific, configurable output stream.
Currently only subtitle streams which are directly mapped into the
same output in which the heartbeat stream resides are affected.
This way the subtitle - which is known to be shown at this time -
can be split and passed to the muxer before its full duration is
known. This is also a drawback, as this essentially outputs multiple
subtitles from a single input subtitle that continues over multiple
random access points. Thus this feature should not be utilized in
cases where subtitle output latency does not matter.
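A sketch of marking the first video output stream as the heartbeat
source (mappings and codecs are placeholders):
  # requires -fix_sub_duration on the input side
  ffmpeg -fix_sub_duration -i input.ts -map 0:v -map 0:s \
    -c:v libx264 -c:s dvbsub -fix_sub_duration_heartbeat:v:0 output.ts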
Co-authored-by: Andrzej Nadachowski <andrzej.nadachowski@24i.com>
Co-authored-by: Bernard Boulay <bernard.boulay@24i.com>
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
Customized SSIM for various projections (and stereo formats) of 360 images and videos.
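A possible comparison run (the filter name ssim360 is an assumption,
not stated above):
  # compare a distorted 360 video against its reference
  ffmpeg -i distorted.mp4 -i reference.mp4 -lavfi ssim360 -f null -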
Further contributions by:
Ashok Mathew Kuruvilla
Matthieu Patou
Yu-Hui Wu
Anton Khirnov
Suggested-By: ffmpeg@fb.com
Signed-off-by: Anton Khirnov <anton@khirnov.net>
It is a URL rewriter for IPFS gateways, not an actual implementation of
IPFS, and naming it as such was both incorrect and misleading.
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Starting with an h264 implementation. Can be extended to support other codecs.
A few caveats:
- OpenGOP streams are currently not supported. The first packet must be an IDR
frame.
- In some streams, a few frames at the end may not get a reordered PTS when
they reference frames past EOS. The code added to derive timestamps from
previous frames needs to be extended.
Addresses ticket #502.
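A sketch (the bitstream filter name dts2pts is an assumption here):
  # derive missing PTS values while stream-copying
  ffmpeg -i input.h264 -c:v copy -bsf:v dts2pts output.mp4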
Signed-off-by: James Almer <jamrial@gmail.com>
With the necessary pixel formats defined, we can now expose support for
the remaining 10/12bit combinations that VAAPI can handle.
Specifically, we are adding support for:
* HEVC
** 12bit 420
** 10bit 422
** 12bit 422
** 10bit 444
** 12bit 444
* VP9
** 10bit 444
** 12bit 444
These obviously require actual hardware support to be usable, but where
that exists, it is now enabled.
Note that unlike YUVA/YUVX, the Intel driver does not formally expose
support for the alphaless formats XV30 and XV36, and so we are
implicitly discarding the alpha from the decoder and passing undefined
values for the alpha to the encoder. If a future encoder iteration
were to actually do something with the alpha bits, we would need to
use a formally alpha-capable format, or the encoder would need to
explicitly accept the alphaless format.
Sufficiently recent Intel hardware is able to do encoding of 8bit 4:4:4
content in HEVC and VP9. The main requirement here is that the frames
must be provided in the AYUV format.
Enabling support is done by adding the appropriate encoding profiles
and noting that AYUV is officially a four-channel format with alpha,
so we must state that we expect all four channels.
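A hypothetical encode path (assumes the packed 4:4:4 format is
exposed as vuya on the libavutil side; the device path is a
placeholder):
  ffmpeg -init_hw_device vaapi=va:/dev/dri/renderD128 -filter_hw_device va \
    -i input.mkv -vf format=vuya,hwupload -c:v hevc_vaapi output.mkv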
Now that we have a combination of capable hardware (new enough Intel)
and a mutually understood format ("AYUV"), we can declare support for
decoding 8bit 4:4:4 content via VAAPI.
This requires listing AYUV as a supported format, and then adding VAAPI
as a supported hwaccel for the relevant codecs (HEVC and VP9). I also
had to add VP9Profile1 to the set of supported profiles for VAAPI as it
was never relevant before.
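To exercise the new hwaccel path on capable hardware (sketch; the
input file is a placeholder):
  ffmpeg -hwaccel vaapi -i vp9_profile1_444.webm -f null -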
The -shortest option (which finishes the output file at the time the
shortest stream ends) is currently implemented by faking the -t option
when an output stream ends. This approach is fragile, since it depends
on the frames/packets being processed in a specific order. E.g. there
are currently some situations in which the output file length will
depend unpredictably on unrelated factors like encoder delay. More
importantly, the present work aiming at splitting various ffmpeg
components into different threads will make this approach completely
unworkable, since the frames/packets will arrive in effectively random
order.
This commit introduces a "sync queue", which is essentially a collection
of FIFOs, one per stream. Frames/packets are submitted to these FIFOs
and are then released for further processing (encoding or muxing) when
it is ensured that the frame in question will not cause its stream to
get ahead of the other streams (the logic is similar to libavformat's
interleaving queue).
These sync queues are then used for encoding and/or muxing when the
-shortest option is specified.
A new option – -shortest_buf_duration – controls the maximum number of
queued packets, to avoid runaway memory usage.
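For example (inputs are placeholders), capping the sync queues at
roughly two seconds of buffered data:
  ffmpeg -i video.mkv -i audio.mkv -map 0:v -map 1:a \
    -shortest -shortest_buf_duration 2 output.mkv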
This commit changes the results of the following tests:
- copy-shortest[12]: the last audio frame is now gone. This is
correct, since it actually outlasts the last video frame.
- shortest-sub: the video packets following the last subtitle packet are
now gone. This is also correct.
GSoC'22
libavfilter/vf_chromakey_cuda.cu: the CUDA kernel for the filter
libavfilter/vf_chromakey_cuda.c: the C side that calls the kernel and gets user input
libavfilter/allfilters.c: added the filter to it
libavfilter/Makefile: added the filter to it
cuda/cuda_runtime.h: added two math CUDA functions that are used in the filter
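A possible invocation (sketch; assumes chromakey-style positional
parameters color:similarity:blend):
  # key out green on CUDA frames, discard output
  ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mkv \
    -vf chromakey_cuda=green:0.1:0.12 -f null -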
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
Support for VDPAU accelerated AV1 decoding was added with libvdpau-1.5.
This patch adds support for the same in ffmpeg. Profiles related to
VDPAU AV1 can be found in the latest vdpau.h shipped with libvdpau-1.5.
Adds AV1 VDPAU to the list of hwaccels and supported formats, adds the
file vdpau_av1.c, and modifies configure to add VDPAU AV1 support. Maps
AV1 profiles to VDPAU AV1 profiles and populates the codec-specific
parameters that need to be passed to VDPAU.
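A decode smoke test on supported hardware might look like (sketch;
the input file is a placeholder):
  ffmpeg -hwaccel vdpau -i input_av1.mkv -f null -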
Signed-off-by: Philip Langdale <philipl@overt.org>