Remove the native backend, change the default backend in the filters
accordingly, and remove the Python scripts which generate native model files.
Signed-off-by: Ting Fu <ting.fu@intel.com>
Some testing revealed this to be a very efficient and reliable method of
ingesting GPU frames into libplacebo, so it's a good idea to provide it as
an example.
This example being first is now misleading because round-tripping
through hwdownload/hwupload is neither required nor recommended. Also,
the comment about avoiding format conversion is unnecessary because
`libplacebo` will now inherit the input frame format by default.
This option adds a long string of numbers to the progress line, where
the i-th number contains the base-2 logarithm of the number of times a frame
with this QP value was seen by print_report().
There are multiple problems with this feature:
* despite the feature existing since 2005, a web search shows no
indication that it was ever useful for any meaningful purpose;
* the format of what is printed is entirely undocumented; one has to
find it out from the source code;
* QP values above 31 are silently ignored;
* it only works with one video stream;
* as it relies on global state, it is in conflict with ongoing
architectural changes.
It then seems that the nontrivial cost of maintaining this option is not
worth its negligible (or possibly negative - since it pollutes the
already large option space) value.
Users who really need similar functionality can also implement it
themselves using -vstats.
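For instance, a rough sketch of such a replacement (the exact fields in
the vstats log depend on the build and the encoder used):
  ffmpeg -i input.mkv -c:v libx264 -vstats -f null -
This writes a vstats_*.log file containing a q= value for every video
frame, which can then be post-processed into a histogram as desired.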
The function now accepts an existing buffer to avoid unnecessary allocations,
and only reports the needed amount of bytes if you pass a NULL pointer as
input for data.
For this, both parameters become input and output, and data becomes optional.
This is backwards compatible, and as such does not break any existing use of
the function in external code (if there is any).
Signed-off-by: James Almer <jamrial@gmail.com>
In particular, add a sentence to introduce the example, and add a
simpler starting example with no options.
Also use different formats for input.avi and output.mp4, to convey
that the conversion also works on the container format.
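The added starting example is along these lines:
  ffmpeg -i input.avi output.mp4
which re-encodes with the default settings while also changing the
container format.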
Address issue:
http://trac.ffmpeg.org/ticket/8730
Document metadata entries set by the filter, extend and clarify
options, add additional example showing how to extract the generated
data.
Address issue:
http://trac.ffmpeg.org/ticket/8766
Drop the mention of the unsupported N:M syntax, dropped since 0ed61546c4.
Also, drop the reference to common expression constants, and fix the
description of an expression parameter.
Fix issue:
http://trac.ffmpeg.org/ticket/9974
Reference the drawtext textfile option and the ffmpeg -filter_complex_script
and -filter_script options as possible solutions to avoid shell escaping.
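For example, instead of escaping the string inline, the text can be kept
in a file (greeting.txt is a hypothetical name):
  ffmpeg -i input.mp4 -vf drawtext=textfile=greeting.txt output.mp4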
Address issue:
http://trac.ffmpeg.org/ticket/9008
This patch adds another ARIB caption decoder using the external
libaribcaption library.
Unlike libaribb24, it supports 3 types of subtitle outputs:
* text: plain text
* ass: ASS formatted text
* bitmap: bitmap image
The default subtitle type is ass, the same as libaribb24.
Advantages over libaribb24 for ASS subtitles are:
* Subtitle positioning.
* Multi-rect subtitles: some captions are displayed at different
positions at the same time.
* More stability and reproducibility.
To compile with this feature:
* the libaribcaption external library has to be pre-installed:
https://github.com/xqq/libaribcaption
* configure with the `--enable-libaribcaption` option.
The `--enable-libaribb24` and `--enable-libaribcaption` options are
not mutually exclusive. If both are enabled, libaribcaption takes
precedence, following the order listed in `libavcodec/allcodecs.c`.
Signed-off-by: rcombs <rcombs@rcombs.me>
These fields are supposed to store information about the packet the
frame was decoded from, specifically the byte offset it was stored at
and its size.
However,
- the fields are highly ad-hoc - there is no strong reason why
specifically those (and not any other) packet properties should have a
dedicated field in AVFrame; unlike e.g. the timestamps, there is no
fundamental link between coded packet offset/size and decoded frames
- they only make sense for frames produced by decoding demuxed packets,
and even then it is not always the case that the encoded data was
stored in the file as a contiguous sequence of bytes (in order for pos
to be well-defined)
- pkt_pos was added without much explanation, apparently to allow
passthrough of this information through lavfi in order to handle byte
seeking in ffplay. That is now implemented using arbitrary user data
passthrough in AVFrame.opaque_ref.
- several filters use pkt_pos as a variable available to user-supplied
expressions, but there seems to be no established motivation for using it.
- pkt_size was added for use in ffprobe, but that too is now handled
without using this field. Additionally, the values of this field
produced by libavcodec are flawed, as described in the previous
ffprobe conversion commit.
In summary - these fields are ill-defined and insufficiently motivated,
so deprecate them.
This has not been functional for about a year, including in our current
minimum dependency of libplacebo (v4.192.0). It also causes build errors
against libplacebo v6, so it needs to be removed from the code. We can
keep the option around for now, but it should also be removed soon.
Signed-off-by: Niklas Haas <git@haasn.dev>
Signed-off-by: James Almer <jamrial@gmail.com>
In texi source, @ characters need to be escaped.
This fixes the following build errors:
community.texi:59: unknown command `ffmpeg'
community.texi:143: unknown command `ffmpeg'
Signed-off-by: Martin Storsjö <martin@martin.st>
Remove the doc/dev_community markup files completely, as they are in the wrong place.
Create a new community page, merging all of doc/dev_community and the Code of Conduct subsection into a common place.
The corresponding patch to ffmpeg-web puts the Organisation & Code of Conduct into a separate community chapter on the FFmpeg website.
Described in HEVC spec A.3.7. Bump minor version and add an APIchanges
entry for the newly added profile.
Signed-off-by: Linjie Fu <linjie.justin.fu@gmail.com>
Signed-off-by: Fei Wang <fei.w.wang@intel.com>
Their usefulness is questionable, very few decoders set them, and their type
should have been int64_t. A replacement field can be added later if a valid use
case is found.
Signed-off-by: Marton Balint <cus@passwd.hu>
Frame counters can overflow relatively easily (INT_MAX frames is
slightly more than 1 year of 60 fps content), so make sure we use 64-bit
values for them.
Also deprecate the old 32-bit frame_number attribute.
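A minimal sketch of the change on the API user side, assuming the new
counter is exposed as AVCodecContext.frame_num:

  #include <inttypes.h>
  #include <libavcodec/avcodec.h>

  static void log_progress(const AVCodecContext *avctx)
  {
      /* frame_num is int64_t, so it does not overflow after
       * ~1 year of 60 fps content like the old int counter did */
      av_log(NULL, AV_LOG_INFO, "processed %"PRId64" frames\n",
             avctx->frame_num);
  }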
Signed-off-by: Marton Balint <cus@passwd.hu>
Many filters accept user-provided data that is cumbersome to provide as
text strings - e.g. binary files or very long text. For that reason such
filters typically provide an option whose value is the path from which
the filter loads the actual data.
However, filters doing their own IO internally is a layering violation
that the callers may not expect, and is thus best avoided. With the
recently introduced graph segment parsing API, loading option values
from files can now be handled by the caller.
This commit makes use of the new API in ffmpeg CLI. Any option name in
the filtergraph syntax can now be prefixed with a slash '/'. This will
cause ffmpeg to interpret the value as the path to load the actual value
from.
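An illustrative sketch (greeting.txt is a hypothetical file holding the
text to draw):
  ffmpeg -i input.mkv -vf drawtext=/text=greeting.txt output.mkv
The leading '/' makes ffmpeg read the contents of greeting.txt and use
them as the value of the drawtext text option.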
Callers currently have two ways of adding filters to a graph - they can
either
- create, initialize, and link them manually
- use one of the avfilter_graph_parse*() functions, which take a
(typically end-user-written) string, split it into individual filter
definitions+options, then create filters, apply options, initialize
filters, and finally link them - all based on information from this
string.
A major problem with the second approach is that it performs many
actions as a single atomic unit, leaving the caller no space to
intervene in between. Such intervention would be useful e.g. to
- modify filter options;
- supply hardware device contexts;
both of which typically must be done before the filter is initialized.
Callers who need such intervention are then forced to invent their own
filtergraph parsing, which is clearly suboptimal.
This commit aims to address this problem by adding a new modular
filtergraph parsing API. It adds a new avfilter_graph_segment_parse()
function to parse a string filtergraph description into an intermediate
tree-like representation (AVFilterGraphSegment and its children).
This intermediate form may then be applied step by step using further
new avfilter_graph_segment*() functions, with user intervention possible
between each step.
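A rough sketch of the intended usage, with error handling abbreviated;
supplying a hardware device is just one example of a possible
intervention, and a real caller would connect the returned dangling
pads before freeing them:

  #include <libavfilter/avfilter.h>
  #include <libavutil/buffer.h>

  static int build_graph(AVFilterGraph *graph, const char *desc,
                         AVBufferRef *hw_device)
  {
      AVFilterGraphSegment *seg = NULL;
      AVFilterInOut *inputs = NULL, *outputs = NULL;
      int ret;

      if ((ret = avfilter_graph_segment_parse(graph, desc, 0, &seg)) < 0)
          goto end;
      /* create the filter instances, but do not initialize them yet */
      if ((ret = avfilter_graph_segment_create_filters(seg, 0)) < 0)
          goto end;

      /* the caller may now intervene, e.g. supply hardware device contexts */
      for (unsigned i = 0; i < graph->nb_filters; i++)
          graph->filters[i]->hw_device_ctx = av_buffer_ref(hw_device);

      /* remaining steps: apply options, initialize, link */
      if ((ret = avfilter_graph_segment_apply_opts(seg, 0)) < 0 ||
          (ret = avfilter_graph_segment_init(seg, 0)) < 0 ||
          (ret = avfilter_graph_segment_link(seg, 0, &inputs, &outputs)) < 0)
          goto end;

  end:
      avfilter_graph_segment_free(&seg);
      avfilter_inout_free(&inputs);
      avfilter_inout_free(&outputs);
      return ret;
  }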
This script generates the current general assembly voters according to
the criterion of '20 commits in the last 36 months'.
Signed-off-by: J. Dekker <jdek@itanimul.li>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
libavutil/color_utils contains some avpriv_ symbols that map
enum AVTransferCharacteristic values to gamma-curve approximations and
to the actual transfer functions to invert them (i.e. -> linear).
There are two issues with this:
(1) avpriv is evil and should be avoided whenever possible
(2) libavutil/csp.h exposes a public API for handling color that
already handles primaries and matrices
I don't see any reason this API has to be private, so this commit takes
the functionality from avutil/color_utils and merges it into avutil/csp
with an exposed av_ API rather than the previous avpriv_ API.
Every reference to the previous API has been updated to point to the
new one. color_utils.h has been deleted as well. This should not break
any applications as it only contained avpriv_ symbols in the first
place, so nothing in that header could be referenced by other
applications.
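For illustration, a sketch of how the merged API can be used, assuming
the exposed names are av_csp_approximate_trc_gamma() and
av_csp_trc_func_from_id():

  #include <libavutil/csp.h>

  static double encode_bt709(double linear)
  {
      /* fetch the BT.709 transfer function and apply it to a linear sample */
      av_csp_trc_function trc = av_csp_trc_func_from_id(AVCOL_TRC_BT709);
      return trc ? trc(linear) : linear;
  }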
Signed-off-by: Leo Izen <leo.izen@gmail.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Analogous to -enc_stats*, but happens right before muxing. Useful
because bitstream filters and the sync queue can modify packets after
encoding and before muxing. Also has access to the muxing timebase.
The current HLS implementation simply skips a failed segment to catch up
with the stream, but this is not optimal for some use cases, like
livestream recording.
Add an option to retry a failed segment, to ensure the output file is
a complete stream.
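A sketch of the intended use, assuming the option is exposed as
seg_max_retry:
  ffmpeg -seg_max_retry 3 -i https://example.com/live.m3u8 -c copy out.ts
This retries a failed segment up to 3 times before giving up on it.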
Signed-off-by: gnattu <gnattuoc@me.com>
Reviewed-by: Steven Liu <liuqi05@kuaishou.com>
Splits the currently handled subtitle at random access point
packets that can be configured to follow a specific output stream.
Currently only subtitle streams which are directly mapped into the
same output in which the heartbeat stream resides are affected.
This way the subtitle - which is known to be shown at this time -
can be split and passed to the muxer before its full duration is
known. This is also a drawback, as this essentially outputs
multiple subtitles from a single input subtitle that continues
over multiple random access points. Thus this feature should not
be utilized in cases where subtitle output latency does not matter.
Co-authored-by: Andrzej Nadachowski <andrzej.nadachowski@24i.com>
Co-authored-by: Bernard Boulay <bernard.boulay@24i.com>
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
This is intended to be a more convenient replacement for
reordered_opaque.
Add support for it in the two encoders that offer
AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE: libx264 and libx265. Other
encoders will be supported in future commits.
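A minimal usage sketch, assuming the mechanism is requested via
AV_CODEC_FLAG_COPY_OPAQUE and the data travels in AVFrame.opaque:

  #include <libavcodec/avcodec.h>

  static void attach_user_data(AVCodecContext *enc, AVFrame *frame,
                               void *user_state)
  {
      /* set before avcodec_open2(): ask the encoder to carry opaque
       * data from input frames to the matching output packets */
      enc->flags |= AV_CODEC_FLAG_COPY_OPAQUE;
      /* user_state will reappear as pkt->opaque on the packet that
       * the encoder produces from this frame */
      frame->opaque = user_state;
  }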
This reverts commit dea673d0d5.
This change cannot work for several reasons, the most obvious ones are:
- the alpha is part of the scoring of the color difference, even
though we cannot interpret the alpha as part of the perception of the
color (we don't even know if it's premultiplied or postmultiplied)
- the colors are averaged with their alpha values, which simply cannot
work
The command proposed in the original thread of the patch actually
produces a completely broken file:
ffmpeg -y -loglevel verbose -i fate-suite/apng/o_sample.png -filter_complex "split[split1][split2];[split1]palettegen=max_colors=254:use_alpha=1[pal1];[split2][pal1]paletteuse=use_alpha=1" -frames:v 1 out.png
We can see that many color pixels are off, but more importantly some
colors have a random alpha value: https://imgur.com/eFQ2UK7
I don't see any easy fix for this unfortunately, the approach appears to
be flawed by design.
The new option can be used to avoid creating very short segments with
inputs whose GOP size is variable or does not harmonize with
segment_time.
It is only effective together with segment_time.
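A sketch, assuming the option is named min_seg_duration:
  ffmpeg -i input.mkv -c copy -f segment -segment_time 10 \
    -min_seg_duration 2 out%03d.ts
This avoids emitting a segment shorter than 2 seconds when a keyframe
happens to land just after a cut point.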
We shouldn't be providing links to unverified and non-FFmpeg-controlled
content in our documentation.
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
We do not endorse or recommend specific third party servers or companies
that users' data will be funneled through.
It also incorrectly describes how FFmpeg currently works.
Should have been part of 412922cc6f but was
missed.
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Bring it up to date with current practice, as the current text does not
cover everything. Drop the reference to unprefixed exported symbols,
which do not exist anymore.
It describes a general development policy, not code formatting. It is
also not true, as these days we tend to prioritize correctness, safety,
and completeness over code size.
It is currently very out of touch with reality.
* declare we are using C99 fully, rather than C90 plus extensions
* mention our use of stdatomic.h
* mention forbidden C99 features, like VLAs and complex numbers
VP6 alpha in EA format is a second VP6 encoded video stream where only the Y
component is used and is interpreted as the alpha channel of the first VP6
stream. The alpha VP6 stream is muxed separately from the main VP6 stream, and
has its own stream and packet headers. In theory the two streams might not even
have the same resolution (although most likely that is not something that is
seen or supported in the wild), but the format is capable of it.
Merged VP6 alpha (also known as the VP6A codec) means that a packet of the
video stream contains the corresponding packet of both VP6 substreams like
this:
{OffsetOfAlpha, DataPacket, AlphaDataPacket}
So the data and alpha data of a frame are merged into a single packet; this is
how VP6 video with alpha is muxed in FLV and SWF.
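A sketch of splitting such a merged packet back into its two substreams,
assuming OffsetOfAlpha is a 24-bit big-endian field counting bytes from
the end of the header to the alpha data:

  #include <stdint.h>
  #include <libavutil/intreadwrite.h>

  /* split a merged VP6A packet into the main and alpha VP6 packets */
  static int split_vp6a(const uint8_t *buf, int size,
                        const uint8_t **main_data, int *main_size,
                        const uint8_t **alpha_data, int *alpha_size)
  {
      if (size < 3)
          return -1;
      unsigned offset = AV_RB24(buf);      /* OffsetOfAlpha */
      if (offset > (unsigned)size - 3)
          return -1;
      *main_data  = buf + 3;
      *main_size  = offset;
      *alpha_data = buf + 3 + offset;
      *alpha_size = size - 3 - offset;
      return 0;
  }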
The first approach is more like how the demuxer sees data in the EA format;
unfortunately it is different from what the FLV or SWF format expects, so -
having no better place for it in the framework - I decided to do an optional
format conversion in the EA demuxer.
Signed-off-by: Marton Balint <cus@passwd.hu>
Add the qsv_transcode example, which shows how to use QSV to do
hardware-accelerated transcoding; it also shows how to dynamically set
encoding parameters.
Examples:
Normal usage:
qsv_transcode input.mp4 h264_qsv output.mp4 "g 60"
Dynamic setting usage:
qsv_transcode input.mp4 hevc_qsv output.mp4 "g 60 async_depth 1" 100 "g 120"
This command initializes the codec with gop_size 60 and changes it to
120 after 100 frames.
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
As a result of a typo in the source code, this option was completely
non-functional. In order to fix it without breaking the current default
behavior, explicitly change the default to 0.
This behavior is also consistent with how other scale filters behave by
default, so it's probably best to enshrine it anyway.
Drop the reference to directly committing code, because
- it is highly discouraged and very rarely done these days
- there is no good reason NOT to submit patches for review
Add a link to the development policy chapter.