Requires some extraneous top side and bottom front channels to be
defined.
According to STD-B59v2, the defined channel layout is:
- FL
- FR
- FC
- LFE1
- BL
- BR
- FLc
- FRc
- BC
- LFE2
- SiL
- SiR
- TpFL
- TpFR
- TpFC
- TpC
- TpBL
- TpBR
- TpSiL
- TpSiR
- TpBC
- BtFC
- BtFL
- BtFR
This utility helps avoid undefined behavior when doing things like
checking how much memory we need to allocate for an image before we have
allocated a buffer.
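A minimal sketch of the size-before-allocation pattern this enables, shown
here with the long-standing av_image_get_buffer_size() for illustration (the
specific new helper is not named above); width and height are assumed to be
caller-provided:

    #include <libavutil/imgutils.h>
    #include <libavutil/mem.h>

    /* query how much memory the image needs before touching any buffer */
    int size = av_image_get_buffer_size(AV_PIX_FMT_YUV420P, width, height, 1);
    if (size < 0)
        return size;                 /* invalid or overflowing dimensions */
    uint8_t *buf = av_malloc(size);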
Signed-off-by: Brian Kim <bkkim@google.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Use opaque iteration state instead of the previous child class. This
mirrors similar changes done in lavf/lavc.
Deprecate the av_opt_child_class_next() API.
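A minimal sketch of the new iteration style, assuming the
av_opt_child_class_iterate() replacement and an arbitrary parent_class:

    #include <libavutil/opt.h>

    const AVClass *child = NULL;
    void *iter = NULL;               /* opaque iteration state, owned by the caller */
    while ((child = av_opt_child_class_iterate(parent_class, &iter)))
        av_log(NULL, AV_LOG_INFO, "child class: %s\n", child->class_name);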
This allows users who derive devices to set options on the
new device context they derive.
The main use case of this is to allow users to enable extensions
(such as surface drawing extensions) in Vulkan while deriving from
the device their frames are on. That way, users don't need to write
any initialization code themselves, since the Vulkan spec invalidates
mixing instances, physical devices and active devices.
Apart from Vulkan, other hwcontexts ignore the opts argument since they
don't support options at all (or in VAAPI and OpenCL's case, options are
currently only used for device selection, which device_derive overrides).
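A rough sketch of deriving a Vulkan device with options; existing_dev_ref and
the option name in the dictionary are assumptions for illustration, only the
av_hwdevice_ctx_create_derived_opts() call itself is what this change adds:

    AVDictionary *opts = NULL;
    AVBufferRef *vk_dev = NULL;
    /* hypothetical option enabling an extra instance extension */
    av_dict_set(&opts, "instance_extensions", "VK_KHR_surface", 0);
    int ret = av_hwdevice_ctx_create_derived_opts(&vk_dev, AV_HWDEVICE_TYPE_VULKAN,
                                                  existing_dev_ref, opts, 0);
    av_dict_free(&opts);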
This will be used for AVCodecContext->profile. By specifying constants in the
encoders, we won't have to use the common AVCodecContext options table, and
different encoders can use the same profile name even with different values.
Signed-off-by: Marton Balint <cus@passwd.hu>
Both are codec properties and not encoder capabilities. The relevant
AVCodecDescriptor.props flags exist for this purpose.
Signed-off-by: James Almer <jamrial@gmail.com>
This is intended to replace the deprecated AV_FRAME_DATA_QP_TABLE*
API and extend it to a wider range of codecs.
In the future, it may also be extended to support other encoding
parameters such as motion vectors.
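A hedged sketch of how a consumer might read the new side data, assuming the
AVVideoEncParams accessors from video_enc_params.h:

    #include <libavutil/video_enc_params.h>

    AVFrameSideData *sd = av_frame_get_side_data(frame, AV_FRAME_DATA_VIDEO_ENC_PARAMS);
    if (sd) {
        AVVideoEncParams *par = (AVVideoEncParams *)sd->data;
        for (unsigned i = 0; i < par->nb_blocks; i++) {
            AVVideoBlockParams *b = av_video_enc_params_block(par, i);
            /* effective per-block quantiser is par->qp + b->delta_qp */
        }
    }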
Additional changes by Anton Khirnov <anton@khirnov.net> with suggestions
by Lynne <dev@lynne.ee>.
Signed-off-by: Juan De León <juandl@google.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
This solves a huge oversight - it lets users reliably use their own
AVVulkanDeviceContext. Otherwise, the extensions supplied and enabled
are not discoverable by anything outside of hwcontext_vulkan.
Also clarifies that any user-supplied VkInstance must be at least 1.1.
Bump the minor version for the DOVI side data, since dovi_meta.h was added
as part of the lavu API. Also update APIchanges.
Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
Previously, there was no way to flush an encoder such that after
draining, the encoder could be used again. We generally suggested
that clients teardown and replace the encoder instance in these
situations. However, for at least some hardware encoders, the cost of
this tear down/replace cycle is very high, which can get in the way of
some use-cases - for example: segmented encoding with nvenc.
To help address that use case, we added support for calling
avcodec_flush_buffers() to nvenc and things worked in practice,
although it was not clearly documented as to whether this should work
or not. There was only one previous example of an encoder implementing
the flush callback (audiotoolboxenc) and it's unclear if that was
intentional or not. However, it was clear that calling
avcodec_flush_buffers() on any other encoder would leave the encoder in
an undefined state, and that's not great.
As part of cleaning this up, this change introduces a formal capability
flag for encoders that support flushing and ensures a flush call is a
no-op for any other encoder. This allows client code to check if it is
meaningful to call flush on an encoder before actually doing it.
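In practice, client code can gate the call on the new capability, roughly
like this:

    if (avctx->codec->capabilities & AV_CODEC_CAP_ENCODER_FLUSH) {
        avcodec_flush_buffers(avctx);   /* encoder is reusable after draining */
    } else {
        /* fall back to tearing down and recreating the encoder instance */
    }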
I have not attempted to separate the steps taken inside
avcodec_flush_buffers() because it's not doing anything that's wrong
for an encoder. But I did add a sanity check to reject attempts to
flush a frame threaded encoder because I couldn't wrap my head around
whether that code path was actually safe or not. As this combination
doesn't exist today, we'll deal with it if it ever comes up.
This commit updates the documentation of av_read_frame() to match its
actual behaviour in several ways:
1. On success, av_read_frame() always returns refcounted packets.
2. It can handle uninitialized packets.
3. On error, it always returns blank packets.
This will allow callers to not initialize or unref unnecessarily.
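A minimal read loop under the documented behaviour (a sketch; fmt_ctx is an
already opened AVFormatContext):

    AVPacket *pkt = av_packet_alloc();
    int ret;
    while ((ret = av_read_frame(fmt_ctx, pkt)) >= 0) {
        /* pkt is refcounted on success; process it here */
        av_packet_unref(pkt);
    }
    /* on error pkt is blank, so no extra unref is required here */
    av_packet_free(&pkt);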
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Up until now, it was completely unspecified what the content of the
destination packet dst was on error. Depending upon where the error
happened, calling av_packet_unref() on dst might be dangerous.
This commit changes this by making sure that dst is blank on error, so
unreferencing it again is safe (and still pointless). This behaviour is
documented.
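Assuming this refers to av_packet_ref(), the caller-side consequence is
simply:

    if (av_packet_ref(dst, src) < 0) {
        /* dst is guaranteed blank here; av_packet_unref(dst) is safe but unnecessary */
    }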
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
This required only minimal changes to the code, so it made sense to implement.
FFT and MDCT tested, the output of both was properly rounded.
Fun fact: the non-power-of-two fixed-point FFT and MDCT are the fastest ever
non-power-of-two fixed-point FFT and MDCT written.
This can replace the power of two integer MDCTs in aac and ac3 if the
MIPS optimizations are ported across.
Unfortunately the ac3 encoder uses a 16-bit fixed-point forward transform,
unlike the decoder, which uses a 32-bit inverse transform, so some
modifications might be required there.
The 3-point FFT is somewhat less accurate than it otherwise could be,
having minor rounding errors with bigger transforms. However, this
could be improved later, and the way it's currently written is the way one
would write assembly for it.
Similar rounding errors can be found throughout the power of two FFTs as
well, though those are more difficult to correct.
Despite this, the integer transforms are more than accurate enough.
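A small sketch of initializing one of the new fixed-point transforms through
the av_tx API; the scale type and the MDCT input/output sizes are assumptions
based on the float transforms:

    #include <libavutil/tx.h>

    AVTXContext *ctx = NULL;
    av_tx_fn tx;
    float scale = 1.0f;                     /* assumed: scale is passed as a float */
    int32_t in[2 * 960], out[960];          /* assumed: forward MDCT reads 2*len, writes len */
    int ret = av_tx_init(&ctx, &tx, AV_TX_INT32_MDCT, 0 /* forward */, 960, &scale, 0);
    if (ret >= 0) {
        tx(ctx, out, in, sizeof(int32_t));  /* stride is in bytes */
        av_tx_uninit(&ctx);
    }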
Compared to ad-hoc if(printed) ... code, this allows the user to disable
the message by adjusting the log level.
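A sketch of the intended usage, assuming a state-carrying av_log_once() with
an initial and a follow-up log level:

    static int warned;
    av_log_once(ctx, AV_LOG_WARNING, AV_LOG_DEBUG, &warned,
                "feature X is not supported, output may be wrong\n");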
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This commit adds the necessary code to initialize and use a Vulkan device
within the hwcontext libavutil framework.
Currently direct mapping to VAAPI and DRM frames is functional, and
transfers to CUDA and native frames are supported.
Let's hope the future Vulkan video decode extension fits well within this
framework.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Suggested-by: Hendrik Leppkes <h.leppkes@gmail.com>
Suggested-by: Nicolas George <george@nsup.org>
Signed-off-by: Steven Liu <lq@chinaffmpeg.org>
This allows access to the original opaque parameter of a buffer in the buffer
pool. (The buffer pool implementation overrides the normal opaque parameter,
but also saves it so it remains accessible.)
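A minimal sketch of retrieving the original opaque, assuming the
av_buffer_pool_buffer_get_opaque() accessor:

    #include <libavutil/buffer.h>

    /* buf came from an AVBufferPool whose alloc callback created it with
     * av_buffer_create(data, size, free_cb, my_opaque, 0); the pool replaces
     * buf->opaque with its own bookkeeping, so fetch the original like this: */
    void *my_opaque = av_buffer_pool_buffer_get_opaque(buf);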
v2: add assertion check before dereferencing the BufferPoolEntry.
Signed-off-by: Marton Balint <cus@passwd.hu>
This is an alias for JEDEC P22.
The name associated with the value is also changed
from jedec-p22 to ebu3213 to match ITU-T H.273.
Signed-off-by: Raphaël Zumer <rzumer@tebako.net>
Signed-off-by: James Almer <jamrial@gmail.com>
These functions can be used to print a variable number of strings consecutively
to the IO context. Unlike av_bprintf, no temporary buffer is necessary.
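A small usage sketch, assuming avio_print() is a convenience wrapper around
avio_print_string_array() and pb is an open AVIOContext:

    /* write several strings straight to the IO context, no av_bprintf buffer */
    avio_print(pb, "#EXTM3U\n#EXT-X-VERSION:", version_str, "\n");

    /* equivalent explicit form */
    const char *strings[] = { "#EXTM3U\n", NULL };
    avio_print_string_array(pb, strings);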
Signed-off-by: Marton Balint <cus@passwd.hu>
Simply moves and templates the actual transforms to support an
additional data type.
Unlike the float version, which is equal to or better than libfftw3f, the
double-precision output is bit-identical to libfftw3.
FF_DECODE_ERROR_CONCEALMENT_ACTIVE is set when the decoded frame has errors,
but the value returned from avcodec_receive_frame() is zero, i.e. the errors
were concealed.
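Caller-side check, roughly:

    ret = avcodec_receive_frame(avctx, frame);
    if (ret >= 0 && (frame->decode_error_flags & FF_DECODE_ERROR_CONCEALMENT_ACTIVE)) {
        /* the frame was returned successfully, but contains concealed errors */
    }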
Signed-off-by: Amir Pauker <amir@livelyvideo.tv>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Encoders such as libx264 support different QP offsets for different MBs,
which makes ROI-based encoding possible. It makes sense to add support
within ffmpeg to generate/accept ROI info and pass it to the encoders.
Typical usage: after an AVFrame is decoded, an ffmpeg filter or the user's
code generates ROI info for that frame, and the encoder finally does the
ROI-based encoding.
The ROI info is maintained as side data of AVFrame.
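A hedged sketch of attaching ROI info to a decoded frame, assuming the
AVRegionOfInterest layout from frame.h:

    AVFrameSideData *sd = av_frame_new_side_data(frame, AV_FRAME_DATA_REGIONS_OF_INTEREST,
                                                 sizeof(AVRegionOfInterest));
    if (sd) {
        AVRegionOfInterest *roi = (AVRegionOfInterest *)sd->data;
        roi->self_size = sizeof(*roi);
        roi->top     = 0;
        roi->bottom  = frame->height / 2;
        roi->left    = 0;
        roi->right   = frame->width / 2;
        roi->qoffset = (AVRational){ -1, 1 };   /* negative offset: spend more bits here */
    }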
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
This is needed because of 32-bit float formats (which are difficult to
store in 16 bits).
This also fixes undefined behavior found by FATE.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The existing av_mediacodec_release_buffer allows the user to render
or discard the Surface-backed frame. This new method allows the user
to control exactly when the frame will be rendered to its SurfaceView.
Available since Android API 21.
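A sketch of the intended call; the frame->data[3] location of the
AVMediaCodecBuffer and the nanosecond timestamp (as in
MediaCodec.releaseOutputBuffer(index, timestampNs)) are assumptions here:

    AVMediaCodecBuffer *buffer = (AVMediaCodecBuffer *)frame->data[3];
    av_mediacodec_render_buffer_at_time(buffer, render_time_ns);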
Signed-off-by: Aman Gupta <aman@tmm1.net>
Create a new AVPacket side data type for Active Format Description,
which mirrors the side data type found in AVFrame. The primary
use case for this is ensuring AFD gets preserved in the V210
encoder, so that the decklink libavdevice can output AFD.
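Attaching the one-byte AFD value to a packet could look roughly like this:

    uint8_t *sd = av_packet_new_side_data(pkt, AV_PKT_DATA_AFD, 1);
    if (sd)
        sd[0] = 0x0A;   /* e.g. AFD code 10: 16:9 image, coded frame 16:9 */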
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
These fields will allow the mpegts demuxer to expose details about
the PMT/program which created the AVProgram and its AVStreams.
In mpegts, a PMT which advertises streams has a version number
which can be incremented at any time. When the version changes,
the PIDs which correspond to each of its streams can also change.
Since ffmpeg creates a new AVStream per pid by default, an API user
needs the ability to (a) detect when the PMT changed, and (b) tell
which AVStreams were added to replace earlier streams.
This has been a long-standing issue with ffmpeg's handling of mpegts
streams with PMT changes, and I found two related patches in the wild
that attempt to solve the same problem:
The first is in MythTV's ffmpeg fork, where they added a
void (*streams_changed)(void*); to AVFormatContext and call it from
their fork of the mpegts demuxer whenever the PMT changes.
The second was proposed by XBMC in
https://ffmpeg.org/pipermail/ffmpeg-devel/2012-December/135036.html,
where they created a new AVMEDIA_TYPE_DATA stream with id=0 and
attempted to send packets to it whenever the PMT changed.
Signed-off-by: Aman Gupta <aman@tmm1.net>
Parses the video_stream_descriptor (H.222 2.6.2) to look
for the still_picture_flag. This is exposed to the user
via a new AV_DISPOSITION_STILL_IMAGE.
See for example https://tmm1.s3.amazonaws.com/music-choice.ts,
whose video stream only updates every ~6 seconds.
Signed-off-by: Aman Gupta <aman@tmm1.net>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
PSEUDOPAL pixel formats are not paletted, but carry a palette with the
intention of allowing code to treat unpaletted formats as paletted. The
palette simply mapped the byte values to the resulting RGB values,
making it some sort of LUT for RGB conversion.
It was used for 1 byte formats only: RGB4_BYTE, BGR4_BYTE, RGB8, BGR8,
GRAY8. The first 4 are awfully obscure, used only by some ancient bitmap
formats. The last one, GRAY8, is more common, but its treatment is
grossly incorrect. It considers full range GRAY8 only, so GRAY8 coming
from typical Y video planes was not mapped to the correct RGB values.
This cannot be fixed, because AVFrame.color_range can be freely changed
at runtime, and there is nothing to ensure the pseudo palette is
updated.
Also, nothing actually used the PSEUDOPAL palette data, except xwdenc
(trivially changed in the previous commit). All other code had to treat
it as a special case, just to ignore or to propagate palette data.
In conclusion, this was just a very strange old mechanism that has no
real justification to exist anymore (although it may have been nice and
useful in the past). Now it's an artifact that makes the API harder to
use: API users who allocate their own pixel data have to be aware that
they need to allocate the palette, or FFmpeg will crash on them in
_some_ situations. On top of this, there was no API to allocate the
pseudo palette outside of av_frame_get_buffer().
This patch not only deprecates AV_PIX_FMT_FLAG_PSEUDOPAL, but also makes
the pseudo palette optional. Nothing accesses it anymore, though if it's
set, it's propagated. It's still allocated and initialized for
compatibility with API users that rely on this feature. But new API
users do not need to allocate it. This was an explicit goal of this
patch.
Most changes replace AV_PIX_FMT_FLAG_PSEUDOPAL with FF_PSEUDOPAL. I
first tried #ifdefing all code, but it was a mess. The FF_PSEUDOPAL
macro reduces the mess, and still allows defining FF_API_PSEUDOPAL to 0.
Passes FATE with FF_API_PSEUDOPAL enabled and disabled. In addition,
FATE passes with FF_API_PSEUDOPAL set to 1, but with allocation
functions manually changed to not allocate a palette.
It works as a drop-in replacement for the deprecated av_dup_packet(),
to ensure a packet is reference counted.
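Migration is roughly a one-liner:

    /* previously: av_dup_packet(&pkt); */
    int ret = av_packet_make_refcounted(pkt);
    if (ret < 0)
        return ret;   /* allocation failed */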
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: James Almer <jamrial@gmail.com>
This reverts commit 0fd475704e.
Revert "lavd: fix iterating of input and output devices"
This reverts commit ce1d77a5e7.
Signed-off-by: Josh de Kock <josh@itanimul.li>
This is for applications which want to check for invalid UTF-8 themselves,
and take actions that are better than dropping invalid
subtitles silently. (It's pretty much silent because sporadic avcodec
error messages are so common that you can't reasonably display them in a
prominent and meaningful way in an application GUI.)
This new side-data will contain info on how a packet is encrypted.
This allows the app to handle packet decryption.
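A hedged sketch of reading the new side data, assuming the helpers declared
in libavutil/encryption_info.h (the integer type of the side-data size
argument has changed across releases):

    #include <libavutil/encryption_info.h>

    size_t size;
    uint8_t *sd = av_packet_get_side_data(pkt, AV_PKT_DATA_ENCRYPTION_INFO, &size);
    if (sd) {
        AVEncryptionInfo *info = av_encryption_info_get_side_data(sd, size);
        if (info) {
            /* info->scheme, info->key_id, info->iv and info->subsamples drive decryption */
            av_encryption_info_free(info);
        }
    }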
Signed-off-by: Jacob Trimble <modmaker@google.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This adds a way for an API user to transfer QP data and metadata without
having to keep a reference to the AVFrame, and without having to
explicitly care about QP APIs. It might also provide a way to finally
remove the deprecated QP related fields. In the end, the QP table should
be handled in a very similar way to e.g. AV_FRAME_DATA_MOTION_VECTORS.
There are two side data types, because I didn't care about having to
repack the QP data so the table and the metadata are in a single
AVBufferRef. Otherwise it would have either required a copy on decoding
(extra slowdown for something as obscure as the QP data), or would have
required making intrusive changes to the codecs which support export of
this data.
The new side data types are added under deprecation guards, because I
don't intend to change the status of the QP export as being deprecated
(as it was before this patch too).
The default behavior of the mediacodec decoder before this commit
was to delay flushes until all pending hardware frames were
returned to the decoder. This was useful for certain types of
applications, but was unexpected behavior for others.
The new default behavior with this commit is now to execute
flushes immediately to invalidate all pending frames. The old
behavior can be enabled by setting delay_flush=1.
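Opting back into the old behaviour is a matter of passing the decoder
option, e.g.:

    AVDictionary *opts = NULL;
    av_dict_set(&opts, "delay_flush", "1", 0);   /* restore the pre-change flush behaviour */
    ret = avcodec_open2(avctx, decoder, &opts);
    av_dict_free(&opts);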
With the new behavior, video players implementing seek can simply
call flush on the decoder without having to worry about whether
they have one or more mediacodec frames still buffered in their
rendering pipeline. Previously, all these frames had to be
explicitly freed (or rendered) before the seek/flush would execute.
The new behavior matches the behavior of all other lavc decoders,
reducing the amount of special casing required when using the
mediacodec decoder.
Signed-off-by: Aman Gupta <aman@tmm1.net>
Signed-off-by: Matthieu Bouron <matthieu.bouron@gmail.com>
* commit '6d86cef06ba36c0ed591e14a2382e9630059fc5d':
lavfi: Add support for increasing hardware frame pool sizes
Merged-by: Mark Thompson <sw@jkqxz.net>
* commit '5b145290df2998a9836a93eb925289c6c8b63af0':
lavc: Add support for increasing hardware frame pool sizes
Merged-by: Mark Thompson <sw@jkqxz.net>
AVCodecContext.extra_hw_frames is added to the size of hardware frame
pools created by libavcodec for APIs which require fixed-size pools.
This allows the user to keep references to a greater number of frames
after decode, which may be necessary for some use-cases.
It is also added to the initial_pool_size value returned by
avcodec_get_hw_frames_parameters() if a fixed-size pool is required.
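Typical use is just setting the field before opening the decoder, e.g.:

    /* keep 8 additional decoded surfaces referenced after decode */
    avctx->extra_hw_frames = 8;
    ret = avcodec_open2(avctx, codec, NULL);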
avcodec bump missed in 7e8eba2d87
avformat bump missed in ff46124b0d and
0694d87024
avdevice bump missed in 0fd475704e
Signed-off-by: James Almer <jamrial@gmail.com>
The "timeout" option name inherently clashes with the meaning of the
HTTP libavformat protocol option with the same name. Rename it after a
deprecation period to make it compatible with the HTTP one.
This will replace the 1024-character limited filename field. Compatibility
for output contexts is provided by copying the filename field to url if url
is unset, and by providing an internal function for muxers to set both url
and filename at once.
Signed-off-by: Marton Balint <cus@passwd.hu>
The seek function can just return an error if seeking is unavailable,
but often this is too late. Add a flag that signals that the stream is
unseekable, and use it in HLS.
It was sort of optional before - if you didn't call it, networking was
initialized on demand, and an ugly warning was logged. Also, the doxygen
comments threatened that it would be made strictly required one day.
Make it explicitly optional. I would prefer to deprecate it fully, but
there might still be legitimate reasons to use this. But the average
user won't need it.
This is needed only for two reasons: to initialize TLS libraries like
OpenSSL and GnuTLS, and winsock.
OpenSSL and GnuTLS were already silently initialized on demand if the
global network init function was not called. They also have various
thread-safety acrobatics, which make concurrent initialization within
libavformat safe. In addition, the libraries are moving towards making
their global init functions safe, which removes all need for central
global init. In particular, GnuTLS 3.5.16 and OpenSSL 1.1.0g have been
found to have safe init functions. In all cases, they use internal
reference counters to avoid that the global uninit functions interfere
with concurrent uses of the library by other API users who called global
init.
winsock should be thread-safe as well, and it also maintains an internal
reference counter.
Since we still support ancient TLS libraries, which do not have this
fixed, and since it's unknown whether winsock and GnuTLS
reinitialization is costly in any way, don't deprecate the libavformat
functions yet.
Deprecate the entire library. Merged years ago to provide compatibility
with Libav, it remained unmaintained by the FFmpeg project and duplicated
functionality provided by libswresample.
In order to improve consistency and reduce attack surface, as well as to ease
burden on maintainers, it has been deprecated. Users of this library are asked
to migrate to libswresample, which, as well as providing more functionality,
is faster and has higher accuracy.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
Use static mutexes instead of requiring a lock manager. The behavior
should be roughly the same before and after this change for API users
which did not set the lock manager at all (except that a minor memory
leak disappears).
Explicitly identify decoder/encoder wrappers with a common name. This
saves API users from guessing by the name suffix. For example, they
don't have to guess that "h264_qsv" is the h264 QSV implementation, and
instead they can just check the AVCodec .codec and .wrapper_name fields.
Explicitly mark AVCodec entries that are hardware decoders or most
likely hardware decoders with new AV_CODEC_CAPs. The purpose is to allow
API users to list hardware decoders in a more generic way. The proposed
AVCodecHWConfig does not provide this information fully, because it's
concerned with decoder configuration, not information about the fact
whether the hardware is used or not.
AV_CODEC_CAP_HYBRID exists specifically for QSV, which can have software
implementations in case the hardware is not capable.
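Listing likely hardware decoders then becomes a generic loop; a sketch
assuming the current av_codec_iterate() iteration API:

    #include <stdio.h>

    const AVCodec *c = NULL;
    void *iter = NULL;
    while ((c = av_codec_iterate(&iter))) {
        if (!av_codec_is_decoder(c))
            continue;
        if (c->capabilities & (AV_CODEC_CAP_HARDWARE | AV_CODEC_CAP_HYBRID))
            printf("%s (wrapper: %s)\n", c->name,
                   c->wrapper_name ? c->wrapper_name : "none");
    }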
Based on a patch by Philip Langdale <philipl@overt.org>.
Merges Libav commit 47687a2f8a.
This was added in early 2013 and abandoned several months later; as far as
I can tell, there are no external users. Future OpenCL use will be via
hwcontext, which requires neither special OpenCL-only API nor global state
in libavutil.
All internal users are also deleted - this is just the unsharp filter
(replaced by unsharp_opencl, which is more flexible) and the deshake filter
(no replacement).
* commit 'b46a77f19ddc4b2b5fa3187835ceb602a5244e24':
lavc: external hardware frame pool initialization
Includes the fix from e724bdfffb
Merged-by: James Almer <jamrial@gmail.com>
This adds a new API, which allows the API user to query the required
AVHWFramesContext parameters. This also reduces code duplication across
the hwaccels by introducing ff_decode_get_hw_frames_ctx(), which uses
the new API function. It takes care of initializing the hw_frames_ctx
if needed, and does additional error handling and API usage checking.
Support for VDA and Cuvid missing.
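A rough sketch of the user-facing half, typically from the get_format
callback; the exact signature of avcodec_get_hw_frames_parameters() is
assumed from the description above:

    AVBufferRef *frames_ref = NULL;
    int ret = avcodec_get_hw_frames_parameters(avctx, avctx->hw_device_ctx,
                                               hw_pix_fmt, &frames_ref);
    if (ret >= 0) {
        /* optionally tweak the AVHWFramesContext here, then initialize it */
        ret = av_hwframe_ctx_init(frames_ref);
        if (ret >= 0)
            avctx->hw_frames_ctx = frames_ref;
        else
            av_buffer_unref(&frames_ref);
    }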
Signed-off-by: Anton Khirnov <anton@khirnov.net>
* commit 'e6bff23f1e11aefb16a2b5d6ee72bf7469c5a66e':
cpu: add a function for querying maximum required data alignment
Adapted to work with the arbitrary runtime cpuflag changes av_force_cpu_flags()
can generate.
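A small sketch of using it for caller-side allocations:

    #include <stdlib.h>
    #include <libavutil/cpu.h>
    #include <libavutil/macros.h>

    size_t align = av_cpu_max_align();          /* e.g. 32 with AVX2, 64 with AVX-512 */
    size_t size  = FFALIGN(4096, align);        /* round the buffer size up as well */
    void  *buf   = aligned_alloc(align, size);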
Merged-by: James Almer <jamrial@gmail.com>
This flag replaces the deprecated, non-prefixed HWACCEL_CODEC_CAP_EXPERIMENTAL
one.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: James Almer <jamrial@gmail.com>
Main use-case is proxying avio through a foreign I/O layer and a custom
AVIO context, without losing latency and performance characteristics.
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Merged from Libav commit 173b56218f.
Before this commit, an AVIOContext was to be freed with a plain av_free(),
which prevented adding any deeper structure to it.
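After this change the teardown for a custom AVIO context looks roughly like:

    av_freep(&avio_ctx->buffer);   /* the I/O buffer remains the caller's responsibility */
    avio_context_free(&avio_ctx);  /* instead of the old plain av_free(avio_ctx) */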
(cherry picked from commit 99684f3ae7)
Signed-off-by: James Almer <jamrial@gmail.com>
Main use-case is proxying avio through a foreign I/O layer and a custom
AVIO context, without losing latency and performance characteristics.
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Black isn't always just memset(ptr, 0, size). Limited YUV in particular
requires relatively non-obvious values, and filling a frame with
repeating 0 bytes is disallowed in some contexts. With component sizes
larger than 8 or packed YUV, this can become relatively complicated. So
having a generic function for this seems helpful.
In order to handle the complex cases in a generic way without destroying
performance, this code attempts to compute a black pixel, and then uses
that value to clear the image data quickly by using a function like
memset.
Common cases like yuv420p10 or rgba can't be handled with a simple
memset, so there is some code to fill memory with 2/4/8 byte patterns.
For the remaining cases, a generic slow fallback is used.
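A sketch of the intended use, assuming av_image_fill_black()'s imgutils.h
signature (note the ptrdiff_t linesizes, unlike AVFrame's int ones):

    #include <libavutil/imgutils.h>

    ptrdiff_t linesize[4];
    for (int i = 0; i < 4; i++)
        linesize[i] = frame->linesize[i];
    int ret = av_image_fill_black(frame->data, linesize, frame->format,
                                  frame->color_range, frame->width, frame->height);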
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Merged from Libav commit 45df7adc1d.