This also makes it possible to use only pointers to const AVCodec in
fftools/ffmpeg_opt.c.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Deprecated in c29038f304.
The resample filter based upon this library has been removed as well.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Signed-off-by: James Almer <jamrial@gmail.com>
av_adler32_update() is used by av_hash_update() which will be switched
to size_t at the next bump. So it also has to be made to use size_t.
This is also necessary for framecrcenc.c, because the size of side data
will become a size_t, too.
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
64 bits are needed in order to retain the uid values of Matroska
chapters; the type is kept signed because the semantics of NUT chapters
depend upon whether the id is > 0 or < 0.
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
This cap is currently used to mark multithreading-capable codecs that
wrap external libraries with their own multithreading code. The name is
highly confusing for our API users, since libavcodec ALWAYS handles
thread_count=0 (see commit message in previous commit). Therefore rename
the cap and update its documentation to make its meaning clear.
The old name is kept deprecated until next+1 major bump.
This callback is the encoder counterpart of get_buffer2() for decoders; it
implements, for the new encode API, the functionality the old encode API had
where the user could provide their own buffers.
Reviewed-by: Lynne <dev@lynne.ee>
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Reviewed-by: Mark Thompson <sw@jkqxz.net>
Signed-off-by: James Almer <jamrial@gmail.com>
av_stream_add_side_data() already defines size as a size_t, so this makes it
consistent across all side data functions.
Signed-off-by: James Almer <jamrial@gmail.com>
av_packet_add_side_data() already defines size as a size_t, so this makes it
consistent across all side data functions.
Signed-off-by: James Almer <jamrial@gmail.com>
Enables the use of values such as AV_EF_EXPLODE in encoders, which can be
useful for cases such as subtitle encoders, where the encoder is responsible
for validating the correctness of an incoming ASS dialog line.
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
This flag was added in 492026209b
in conjunction with av_demuxer_open() to allow passing private
options to demuxers. It worked as follows: av_open_input_stream()
(the predecessor of avformat_open_input()) would not call the
read_header function if this flag was set. Instead, the user could set
private options of the demuxer via the format's private class after
avformat_open_input() and then call av_demuxer_open() which called
the format's read_header function.
This approach was abandoned in e37f161e66
and av_demuxer_open() deprecated; instead the AVDictionary based way of
passing private options to the demuxer was chosen. Yet
AVFMT_FLAG_PRIV_OPT has never been deprecated and av_demuxer_open()
never removed. This commit implements the deprecation of the flag and
schedules av_demuxer_open() for removal at the next major bump.
Given that av_demuxer_open() was deprecated in 2012 and that this
flag is useless without it, the flag will be ignored after the next
major version bump.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
AVFrame has not been a struct defined in libavcodec for a decade now;
it was moved to libavutil.
Found-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
This commit adds support for in-place FFT transforms. Since our
internal transforms were all in-place anyway, this only changes
the permutation on the input.
Unfortunately, research papers were of no help here. All focused
on dry hardware implementations, where permutes are free, or on
software implementations where binary bloat is of no concern so
storing the transforms dozens of times, once per permutation and version,
is not considered bad practice.
Still, for a pure C implementation, it's only around 28% slower
than the multi-megabyte FFTW3 in unaligned mode.
Unlike a closed permutation like with PFA, split-radix FFT bit-reversals
contain multiple NOPs, multiple simple swaps, and a few chained swaps,
so regular single-loop single-state permute loops were not possible.
Instead, we filter out parts of the input indices which are redundant.
This allows for a single branch, and with some clever AVX512 asm,
could possibly be SIMD'd without refactoring.
The inplace_idx array is guaranteed to never be larger than the
revtab array, and in practice only requires around log2(len) entries.
The power-of-two MDCTs can be done in-place as well. And it's
possible to eliminate a copy in the compound MDCTs too, however
it'll be slower than doing them out of place, and we'd need to dirty
the input array.
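A rough sketch of the cycle-walking idea described above (the names inplace_idx
and revtab follow this commit message; the actual lavu/tx code differs in detail):

    #include <stddef.h>

    typedef struct { float re, im; } cplx;

    /* Move each element from index i to revtab[i] in place, starting a
     * cycle walk only from the pre-filtered chain-start indices. */
    static void permute_inplace(cplx *z, const int *revtab,
                                const int *inplace_idx, size_t nb_idx)
    {
        for (size_t i = 0; i < nb_idx; i++) {
            int src  = inplace_idx[i];
            int dst  = revtab[src];
            cplx tmp = z[src];           /* value displaced at the chain start */
            while (dst != src) {         /* follow the swap chain until it closes */
                cplx tmp2 = z[dst];
                z[dst] = tmp;
                tmp = tmp2;
                dst = revtab[dst];
            }
            z[src] = tmp;
        }
    }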
It was added in 6db42a2b6b,
yet since then none of the necessary create/free_device_capabilities
functions have been implemented, making this API completely useless.
Because of this, one can already simplify
avdevice_capabilities_free/create and can already remove the function
pointers at the next major bump; given that the documentation explicitly
states that av_device_capabilities is not to be used by a user, its
options can already be removed (save for the sentinel).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Do it only when requested with the AV_CODEC_EXPORT_DATA_VIDEO_ENC_PARAMS
flag.
Drop previous code using the long-deprecated AV_FRAME_DATA_QP_TABLE*
API. Temporarily disable fate-filter-pp, fate-filter-pp7,
fate-filter-spp. They will be re-enabled once these filters are converted
in the following commits.
They add considerable complexity to the frame-threading implementation,
which includes an unavoidably leaking error path, while the advantages
of this option to the users are highly dubious.
It should always be possible and desirable for callers to make their
get_buffer2() implementation thread-safe, so deprecate this option.
This patch introduces a new frame side data type AVFilmGrainParams for use
with video codecs which support it.
It can save a lot of memory used for duplicate processed reference frames and
reduce copies when applying film grain during presentation.
A common pattern e.g. in libavcodec is replacing/updating buffer
references: unref old one, ref new one. This function allows simplifying
such code and avoiding unnecessary refs+unrefs if the references are
already equivalent.
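Assuming the function in question is av_buffer_replace(), the simplification
looks roughly like this (a sketch, not taken from the actual patch):

    #include <libavutil/buffer.h>
    #include <libavutil/error.h>

    /* Old pattern: always unref the previous reference and take a new one. */
    static int update_ref_old(AVBufferRef **dst, AVBufferRef *src)
    {
        av_buffer_unref(dst);
        if (src) {
            *dst = av_buffer_ref(src);
            if (!*dst)
                return AVERROR(ENOMEM);
        }
        return 0;
    }

    /* New pattern: one call, and a no-op if both already refer to the same data. */
    static int update_ref_new(AVBufferRef **dst, const AVBufferRef *src)
    {
        return av_buffer_replace(dst, src);
    }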
Requires some extraneous top side and bottom front channels to be
defined.
According to STD-B59v2, the defined channel layout is:
- FL
- FR
- FC
- LFE1
- BL
- BR
- FLc
- FRc
- BC
- LFE2
- SiL
- SiR
- TpFL
- TpFR
- TpFC
- TpC
- TpBL
- TpBR
- TpSiL
- TpSiR
- TpBC
- BtFC
- BtFL
- BtFR
This utility helps avoid undefined behavior when doing things like
checking how much memory we need to allocate for an image before we have
allocated a buffer.
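If the utility being referred to is av_image_fill_plane_sizes() (an assumption
on our part), computing an allocation size without a buffer could look like this:

    #include <stdint.h>
    #include <libavutil/imgutils.h>

    /* Compute how many bytes an image of the given format/size would need,
     * without allocating anything and without risking overflow in the caller. */
    static int64_t image_buffer_size(enum AVPixelFormat fmt, int w, int h)
    {
        int       linesizes[4];
        ptrdiff_t linesizes1[4];
        size_t    plane_sizes[4];
        int64_t   total = 0;
        int       ret;

        ret = av_image_fill_linesizes(linesizes, fmt, w);
        if (ret < 0)
            return ret;
        for (int i = 0; i < 4; i++)
            linesizes1[i] = linesizes[i];

        ret = av_image_fill_plane_sizes(plane_sizes, fmt, h, linesizes1);
        if (ret < 0)
            return ret;
        for (int i = 0; i < 4; i++)
            total += plane_sizes[i];
        return total;
    }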
Signed-off-by: Brian Kim <bkkim@google.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Use opaque iteration state instead of the previous child class. This
mirrors similar changes done in lavf/lavc.
Deprecate the av_opt_child_class_next() API.
This allows for users who derive devices to set options for the
new device context they derive.
The main use case of this is to allow users to enable extensions
(such as surface drawing extensions) in Vulkan while deriving from
the device their frames are on. That way, users don't need to write
any initialization code themselves, since the Vulkan spec invalidates
mixing instances, physical devices and active devices.
Apart from Vulkan, other hwcontexts ignore the opts argument since they
don't support options at all (or in VAAPI and OpenCL's case, options are
currently only used for device selection, which device_derive overrides).
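If the new entry point is av_hwdevice_ctx_create_derived_opts() (assumed here),
a caller deriving a Vulkan device with options might do something like:

    #include <libavutil/hwcontext.h>
    #include <libavutil/dict.h>

    /* Derive a Vulkan device from an existing device context while passing
     * options to the new context; the option name below is purely illustrative. */
    static int derive_vulkan_with_opts(AVBufferRef **dst, AVBufferRef *src)
    {
        AVDictionary *opts = NULL;
        int ret;

        av_dict_set(&opts, "debug", "1", 0);  /* hypothetical option */
        ret = av_hwdevice_ctx_create_derived_opts(dst, AV_HWDEVICE_TYPE_VULKAN,
                                                  src, opts, 0);
        av_dict_free(&opts);
        return ret;
    }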
This will be used for AVCodecContext->profile. By specifying constants in the
encoders we won't have to use the common AVCodecContext options table and
different encoders can use the same profile name even with different values.
Signed-off-by: Marton Balint <cus@passwd.hu>
Both are codec properties and not encoder capabilities. The relevant
AVCodecDescriptor.props flags exist for this purpose.
Signed-off-by: James Almer <jamrial@gmail.com>
This is intended to replace the deprecated AV_FRAME_DATA_QP_TABLE*
API and extend it to a wider range of codecs.
In the future, it may also be extended to support other encoding
parameters such as motion vectors.
Additional changes by Anton Khirnov <anton@khirnov.net> with suggestions
by Lynne <dev@lynne.ee>.
Signed-off-by: Juan De León <juandl@google.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
This solves a huge oversight - it lets users reliably use their own
AVVulkanDeviceContext. Otherwise, the extensions supplied and enabled
are not discoverable by anything outside of hwcontext_vulkan.
Also clarifies that any user-supplied VkInstance must be at least 1.1.
Bump the minor version for the DOVI side data, because dovi_meta.h was added
as part of the lavu API. Also update APIchanges.
Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
Previously, there was no way to flush an encoder such that after
draining, the encoder could be used again. We generally suggested
that clients teardown and replace the encoder instance in these
situations. However, for at least some hardware encoders, the cost of
this tear down/replace cycle is very high, which can get in the way of
some use-cases - for example: segmented encoding with nvenc.
To help address that use case, we added support for calling
avcodec_flush_buffers() to nvenc and things worked in practice,
although it was not clearly documented as to whether this should work
or not. There was only one previous example of an encoder implementing
the flush callback (audiotoolboxenc) and it's unclear if that was
intentional or not. However, it was clear that calling
avcodec_flush_buffers() on any other encoder would leave the encoder in
an undefined state, and that's not great.
As part of cleaning this up, this change introduces a formal capability
flag for encoders that support flushing and ensures a flush call is a
no-op for any other encoder. This allows client code to check if it is
meaningful to call flush on an encoder before actually doing it.
I have not attempted to separate the steps taken inside
avcodec_flush_buffers() because it's not doing anything that's wrong
for an encoder. But I did add a sanity check to reject attempts to
flush a frame threaded encoder because I couldn't wrap my head around
whether that code path was actually safe or not. As this combination
doesn't exist today, we'll deal with it if it ever comes up.
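Client code can then gate the flush on the new AV_CODEC_CAP_ENCODER_FLUSH
capability, roughly along these lines (a sketch):

    #include <libavcodec/avcodec.h>

    /* Reuse an encoder after draining only if it declares flush support;
     * otherwise the caller still has to tear it down and recreate it. */
    static int reuse_encoder(AVCodecContext *enc)
    {
        if (enc->codec->capabilities & AV_CODEC_CAP_ENCODER_FLUSH) {
            avcodec_flush_buffers(enc);
            return 0;
        }
        return AVERROR(ENOSYS);
    }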
This commit updates the documentation of av_read_frame() to match its
actual behaviour in several ways:
1. On success, av_read_frame() always returns refcounted packets.
2. It can handle uninitialized packets.
3. On error, it always returns blank packets.
This will allow callers to not initialize or unref unnecessarily.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Up until now, it was completely unspecified what the content of the
destination packet dst was on error. Depending upon where the error
happened calling av_packet_unref() on dst might be dangerous.
This commit changes this by making sure that dst is blank on error, so
unreferencing it again is safe (and still pointless). This behaviour is
documented.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Required only minimal changes to the code, so it made sense to implement.
FFT and MDCT tested, the output of both was properly rounded.
Fun fact: the non-power-of-two fixed-point FFT and MDCT are the fastest ever
non-power-of-two fixed-point FFT and MDCT written.
This can replace the power of two integer MDCTs in aac and ac3 if the
MIPS optimizations are ported across.
Unfortunately, the ac3 encoder uses a 16-bit fixed-point forward transform,
unlike the decoder, which uses a 32-bit inverse transform, so some modifications
might be required there.
The 3-point FFT is somewhat less accurate than it otherwise could be,
having minor rounding errors with bigger transforms. However, this
could be improved later, and the way it's currently written is the way one
would write assembly for it.
Similar rounding errors can be found throughout the power-of-two FFTs
as well, though those are more difficult to correct.
Despite this, the integer transforms are more than accurate enough.
Compared to ad-hoc if(printed) ... code, this allows the user to disable
it by adjusting the log level.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This commit adds the necessary code to initialize and use a Vulkan device
within the hwcontext libavutil framework.
Currently direct mapping to VAAPI and DRM frames is functional, and
transfers to CUDA and native frames are supported.
Let's hope the future Vulkan video decode extension fits well within this
framework.
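For reference, creating such a device through the generic hwcontext API might
look like this (a minimal sketch):

    #include <libavutil/hwcontext.h>

    /* Create a Vulkan hardware device context on the default physical device. */
    static AVBufferRef *create_vulkan_device(void)
    {
        AVBufferRef *device = NULL;
        int ret = av_hwdevice_ctx_create(&device, AV_HWDEVICE_TYPE_VULKAN,
                                         NULL, NULL, 0);
        return ret < 0 ? NULL : device;
    }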
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Suggested-by: Hendrik Leppkes <h.leppkes@gmail.com>
Suggested-by: Nicolas George <george@nsup.org>
Signed-off-by: Steven Liu <lq@chinaffmpeg.org>
This allows accessing the original opaque parameter of a buffer in the buffer
pool. (The buffer pool implementation overrides the normal opaque parameter,
but also saves it so it remains accessible.)
v2: add assertion check before dereferencing the BufferPoolEntry.
Signed-off-by: Marton Balint <cus@passwd.hu>
This is an alias for JEDEC P22.
The name associated with the value is also changed
from jedec-p22 to ebu3213 to match ITU-T H.273.
Signed-off-by: Raphaël Zumer <rzumer@tebako.net>
Signed-off-by: James Almer <jamrial@gmail.com>
These functions can be used to print a variable number of strings consecutively
to the IO context. Unlike av_bprintf, no temporary buffer is necessary.
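A minimal usage sketch (assuming the new helpers are avio_print() and
avio_print_string_array()):

    #include <libavformat/avio.h>

    /* Write "duration=<value>\n" directly to the IO context, with no
     * intermediate formatting buffer. */
    static void write_duration(AVIOContext *pb, const char *value)
    {
        avio_print(pb, "duration=", value, "\n");
    }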
Signed-off-by: Marton Balint <cus@passwd.hu>
Simply moves and templates the actual transforms to support an
additional data type.
Unlike the float version, which is equal or better than libfftw3f,
double precision output is bit identical with libfftw3.
FF_DECODE_ERROR_CONCEALMENT_ACTIVE is set when the decoded frame has error(s) but the return value of
avcodec_receive_frame() is zero, i.e. the errors were concealed.
Signed-off-by: Amir Pauker <amir@livelyvideo.tv>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Encoders such as libx264 support different QP offsets for different MBs,
which makes ROI-based encoding possible. It makes sense to add support
within ffmpeg to generate/accept ROI info and pass it into encoders.
Typical usage: after an AVFrame is decoded, an ffmpeg filter or the user's
code generates ROI info for that frame, and the encoder finally does the
ROI-based encoding.
The ROI info is maintained as side data of AVFrame.
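A sketch of attaching such ROI info to a decoded frame (one region only; real
filters build an array of AVRegionOfInterest entries):

    #include <libavutil/frame.h>
    #include <libavutil/error.h>

    /* Mark the top-left quadrant of the frame as a region of interest with a
     * negative QP offset, i.e. encode it with higher quality. */
    static int add_single_roi(AVFrame *frame)
    {
        AVFrameSideData *sd = av_frame_new_side_data(frame,
                                                     AV_FRAME_DATA_REGIONS_OF_INTEREST,
                                                     sizeof(AVRegionOfInterest));
        AVRegionOfInterest *roi;

        if (!sd)
            return AVERROR(ENOMEM);
        roi = (AVRegionOfInterest *)sd->data;
        roi->self_size = sizeof(*roi);
        roi->top       = 0;
        roi->bottom    = frame->height / 2;
        roi->left      = 0;
        roi->right     = frame->width / 2;
        roi->qoffset   = (AVRational){ -1, 10 };
        return 0;
    }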
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
This is needed because of 32-bit float formats (which are difficult to
store in 16 bits).
This also fixes undefined behavior found by FATE.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The existing av_mediacodec_release_buffer allows the user to render
or discard the Surface-backed frame. This new method allows the user
to control exactly when the frame will be rendered to its SurfaceView.
Available since Android API 21.
Signed-off-by: Aman Gupta <aman@tmm1.net>
Create a new AVPacket side data type for Active Format Description,
which mirrors the side data type found in AVFrame. The primary
use case for this is ensuring AFD gets preserved in the V210
encoder, so that the decklink libavdevice can output AFD.
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
These fields will allow the mpegts demuxer to expose details about
the PMT/program which created the AVProgram and its AVStreams.
In mpegts, a PMT which advertises streams has a version number
which can be incremented at any time. When the version changes,
the pids which correspond to each of its streams can also change.
Since ffmpeg creates a new AVStream per pid by default, an API user
needs the ability to (a) detect when the PMT changed, and (b) tell
which AVStreams were added to replace earlier streams.
This has been a long-standing issue with ffmpeg's handling of mpegts
streams with PMT changes, and I found two related patches in the wild
that attempt to solve the same problem:
The first is in MythTV's ffmpeg fork, where they added a
void (*streams_changed)(void*); to AVFormatContext and call it from
their fork of the mpegts demuxer whenever the PMT changes.
The second was proposed by XBMC in
https://ffmpeg.org/pipermail/ffmpeg-devel/2012-December/135036.html,
where they created a new AVMEDIA_TYPE_DATA stream with id=0 and
attempted to send packets to it whenever the PMT changed.
Signed-off-by: Aman Gupta <aman@tmm1.net>
Parses the video_stream_descriptor (H.222 2.6.2) to look
for the still_picture_flag. This is exposed to the user
via a new AV_DISPOSITION_STILL_IMAGE.
See for example https://tmm1.s3.amazonaws.com/music-choice.ts,
whose video stream only updates every ~6 seconds.
Signed-off-by: Aman Gupta <aman@tmm1.net>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
PSEUDOPAL pixel formats are not paletted, but carried a palette with the
intention of allowing code to treat unpaletted formats as paletted. The
palette simply mapped the byte values to the resulting RGB values,
making it some sort of LUT for RGB conversion.
It was used for 1 byte formats only: RGB4_BYTE, BGR4_BYTE, RGB8, BGR8,
GRAY8. The first 4 are awfully obscure, used only by some ancient bitmap
formats. The last one, GRAY8, is more common, but its treatment is
grossly incorrect. It considers full range GRAY8 only, so GRAY8 coming
from typical Y video planes was not mapped to the correct RGB values.
This cannot be fixed, because AVFrame.color_range can be freely changed
at runtime, and there is nothing to ensure the pseudo palette is
updated.
Also, nothing actually used the PSEUDOPAL palette data, except xwdenc
(trivially changed in the previous commit). All other code had to treat
it as a special case, just to ignore or to propagate palette data.
In conclusion, this was just a very strange old mechanism that has no
real justification to exist anymore (although it may have been nice and
useful in the past). Now it's an artifact that makes the API harder to
use: API users who allocate their own pixel data have to be aware that
they need to allocate the palette, or FFmpeg will crash on them in
_some_ situations. On top of this, there was no API to allocate the
pseudo palette outside of av_frame_get_buffer().
This patch not only deprecates AV_PIX_FMT_FLAG_PSEUDOPAL, but also makes
the pseudo palette optional. Nothing accesses it anymore, though if it's
set, it's propagated. It's still allocated and initialized for
compatibility with API users that rely on this feature. But new API
users do not need to allocate it. This was an explicit goal of this
patch.
Most changes replace AV_PIX_FMT_FLAG_PSEUDOPAL with FF_PSEUDOPAL. I
first tried #ifdefing all code, but it was a mess. The FF_PSEUDOPAL
macro reduces the mess, and still allows defining FF_API_PSEUDOPAL to 0.
Passes FATE with FF_API_PSEUDOPAL enabled and disabled. In addition,
FATE passes with FF_API_PSEUDOPAL set to 1, but with allocation
functions manually changed to not allocating a palette.
It works as a drop-in replacement for the deprecated av_dup_packet(),
to ensure a packet is reference counted.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: James Almer <jamrial@gmail.com>
This reverts commit 0fd475704e.
Revert "lavd: fix iterating of input and output devices"
This reverts commit ce1d77a5e7.
Signed-off-by: Josh de Kock <josh@itanimul.li>
This is for applications which want to explicitly check for invalid
UTF-8 manually, and take actions that are better than dropping invalid
subtitles silently. (It's pretty much silent because sporadic avcodec
error messages are so common that you can't reasonably display them in a
prominent and meaningful way in an application GUI.)
This new side-data will contain info on how a packet is encrypted.
This allows the app to handle packet decryption.
Signed-off-by: Jacob Trimble <modmaker@google.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This adds a way for an API user to transfer QP data and metadata without
having to keep the reference to AVFrame, and without having to
explicitly care about QP APIs. It might also provide a way to finally
remove the deprecated QP related fields. In the end, the QP table should
be handled in a very similar way to e.g. AV_FRAME_DATA_MOTION_VECTORS.
There are two side data types, because I didn't care about having to
repack the QP data so the table and the metadata are in a single
AVBufferRef. Otherwise it would have either required a copy on decoding
(extra slowdown for something as obscure as the QP data), or would have
required making intrusive changes to the codecs which support export of
this data.
The new side data types are added under deprecation guards, because I
don't intend to change the status of the QP export as being deprecated
(as it was before this patch too).
The default behavior of the mediacodec decoder before this commit
was to delay flushes until all pending hardware frames were
returned to the decoder. This was useful for certain types of
applications, but was unexpected behavior for others.
The new default behavior with this commit is now to execute
flushes immediately to invalidate all pending frames. The old
behavior can be enabled by setting delay_flush=1.
With the new behavior, video players implementing seek can simply
call flush on the decoder without having to worry about whether
they have one or more mediacodec frames still buffered in their
rendering pipeline. Previously, all these frames had to be
explicitly freed (or rendered) before the seek/flush would execute.
The new behavior matches the behavior of all other lavc decoders,
reducing the amount of special casing required when using the
mediacodec decoder.
Signed-off-by: Aman Gupta <aman@tmm1.net>
Signed-off-by: Matthieu Bouron <matthieu.bouron@gmail.com>
* commit '6d86cef06ba36c0ed591e14a2382e9630059fc5d':
lavfi: Add support for increasing hardware frame pool sizes
Merged-by: Mark Thompson <sw@jkqxz.net>
* commit '5b145290df2998a9836a93eb925289c6c8b63af0':
lavc: Add support for increasing hardware frame pool sizes
Merged-by: Mark Thompson <sw@jkqxz.net>
AVCodecContext.extra_hw_frames is added to the size of hardware frame
pools created by libavcodec for APIs which require fixed-size pools.
This allows the user to keep references to a greater number of frames
after decode, which may be necessary for some use-cases.
It is also added to the initial_pool_size value returned by
avcodec_get_hw_frames_parameters() if a fixed-size pool is required.
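A sketch of how a caller might use this when opening a hardware decoder (the
headroom value below is arbitrary):

    #include <libavcodec/avcodec.h>

    /* Over-allocate the fixed-size hardware frame pool so the caller can keep
     * additional references to decoded frames after avcodec_receive_frame(). */
    static int open_hw_decoder(AVCodecContext *avctx, const AVCodec *codec,
                               AVBufferRef *hw_device_ref)
    {
        avctx->hw_device_ctx = av_buffer_ref(hw_device_ref);
        if (!avctx->hw_device_ctx)
            return AVERROR(ENOMEM);
        avctx->extra_hw_frames = 8;  /* arbitrary headroom for this example */
        return avcodec_open2(avctx, codec, NULL);
    }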
avcodec bump missed in 7e8eba2d87
avformat bump missed in ff46124b0d and
0694d87024
avdevice bump missed in 0fd475704e
Signed-off-by: James Almer <jamrial@gmail.com>
The "timeout" option name inherently clashes with the meaning of the
HTTP libavformat protocol option with the same name. Rename it after a
deprecation period to make it compatible with the HTTP one.
This will replace the 1024-character-limited filename field. Compatibility for
output contexts is provided by copying the filename field to URL if URL is unset
and by providing an internal function for muxers to set both url and filename
at once.
Signed-off-by: Marton Balint <cus@passwd.hu>
The seek function can just return an error if seeking is unavailable,
but often this is too late. Add a flag that signals that the stream is
unseekable, and use it in HLS.
It was sort of optional before - if you didn't call it, networking was
initialized on demand, and an ugly warning was logged. Also, the doxygen
comments threatened that it would be made strictly required one day.
Make it explicitly optional. I would prefer to deprecate it fully, but
there might still be legitimate reasons to use this. But the average
user won't need it.
This is needed only for two reasons: to initialize TLS libraries like
OpenSSL and GnuTLS, and winsock.
OpenSSL and GnuTLS were already silently initialized on demand if the
global network init function was not called. They also have various
thread-safety acrobatics, which make concurrent initialization within
libavformat safe. In addition, the libraries are moving towards making
their global init functions safe, which removes all need for central
global init. In particular, GnuTLS 3.5.16 and OpenSSL 1.1.0g have been
found to have safe init functions. In all cases, they use internal
reference counters to avoid that the global uninit functions interfere
with concurrent uses of the library by other API users who called global
init.
winsock should be thread-safe as well, and it likewise maintains an internal
reference counter.
Since we still support ancient TLS libraries, which do not have this
fixed, and since it's unknown whether winsock and GnuTLS
reinitialization is costly in any way, don't deprecate the libavformat
functions yet.
Deprecate the entire library. Merged years ago to provide compatibility
with Libav, it remained unmaintained by the FFmpeg project and duplicated
functionality provided by libswresample.
In order to improve consistency and reduce attack surface, as well as to ease
burden on maintainers, it has been deprecated. Users of this library are asked
to migrate to libswresample, which, as well as providing more functionality,
is faster and has higher accuracy.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
Use static mutexes instead of requiring a lock manager. The behavior
should be roughly the same before and after this change for API users
which did not set the lock manager at all (except that a minor memory
leak disappears).
Explicitly identify decoder/encoder wrappers with a common name. This
saves API users from guessing by the name suffix. For example, they
don't have to guess that "h264_qsv" is the h264 QSV implementation, and
instead they can just check the AVCodec .codec and .wrapper_name fields.
Explicitly mark AVCodec entries that are hardware decoders or most
likely hardware decoders with new AV_CODEC_CAPs. The purpose is to allow
API users to list hardware decoders in a more generic way. The proposed
AVCodecHWConfig does not provide this information fully, because it's
concerned with decoder configuration, not information about the fact
whether the hardware is used or not.
AV_CODEC_CAP_HYBRID exists specifically for QSV, which can have software
implementations in case the hardware is not capable.
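For example, an API user who wants only native implementations could skip
wrappers like this (a sketch; av_codec_iterate() is used purely for illustration):

    #include <libavcodec/avcodec.h>

    /* Find a native (non-wrapper) decoder for a codec id, skipping entries such
     * as "h264_qsv", whose wrapper_name would be set to "qsv". */
    static const AVCodec *find_native_decoder(enum AVCodecID id)
    {
        void *iter = NULL;
        const AVCodec *c;

        while ((c = av_codec_iterate(&iter)))
            if (av_codec_is_decoder(c) && c->id == id && !c->wrapper_name)
                return c;
        return NULL;
    }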
Based on a patch by Philip Langdale <philipl@overt.org>.
Merges Libav commit 47687a2f8a.
This was added in early 2013 and abandoned several months later; as far as
I can tell, there are no external users. Future OpenCL use will be via
hwcontext, which requires neither special OpenCL-only API nor global state
in libavutil.
All internal users are also deleted - this is just the unsharp filter
(replaced by unsharp_opencl, which is more flexible) and the deshake filter
(no replacement).
* commit 'b46a77f19ddc4b2b5fa3187835ceb602a5244e24':
lavc: external hardware frame pool initialization
Includes the fix from e724bdfffb
Merged-by: James Almer <jamrial@gmail.com>
This adds a new API, which allows the API user to query the required
AVHWFramesContext parameters. This also reduces code duplication across
the hwaccels by introducing ff_decode_get_hw_frames_ctx(), which uses
the new API function. It takes care of initializing the hw_frames_ctx
if needed, and does additional error handling and API usage checking.
Support for VDA and Cuvid missing.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
* commit 'e6bff23f1e11aefb16a2b5d6ee72bf7469c5a66e':
cpu: add a function for querying maximum required data alignment
Adapted to work with the arbitrary runtime cpuflag changes av_force_cpu_flags()
can generate.
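A small sketch of how the new query can be used when allocating frame buffers:

    #include <libavutil/cpu.h>
    #include <libavutil/frame.h>

    /* Allocate video frame buffers using the strictest alignment that any
     * enabled SIMD code path in this process may require at runtime. */
    static int alloc_video_frame(AVFrame *frame, int w, int h,
                                 enum AVPixelFormat fmt)
    {
        frame->width  = w;
        frame->height = h;
        frame->format = fmt;
        return av_frame_get_buffer(frame, (int)av_cpu_max_align());
    }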
Merged-by: James Almer <jamrial@gmail.com>
This flag replaces the deprecated, non-prefixed HWACCEL_CODEC_CAP_EXPERIMENTAL
one.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: James Almer <jamrial@gmail.com>
Main use-case is proxying avio through a foreign I/O layer and a custom
AVIO context, without losing latency and performance characteristics.
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Merged from Libav commit 173b56218f.
Before this commit, AVIOContext is to be freed with a plain av_free(),
which prevents us from adding any deeper structure to it.
(cherry picked from commit 99684f3ae7)
Signed-off-by: James Almer <jamrial@gmail.com>
Main use-case is proxying avio through a foreign I/O layer and a custom
AVIO context, without losing latency and performance characteristics.
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Black isn't always just memset(ptr, 0, size). Limited YUV in particular
requires relatively non-obvious values, and filling a frame with
repeating 0 bytes is disallowed in some contexts. With component sizes
larger than 8 or packed YUV, this can become relatively complicated. So
having a generic function for this seems helpful.
In order to handle the complex cases in a generic way without destroying
performance, this code attempts to compute a black pixel, and then uses
that value to clear the image data quickly by using a function like
memset.
Common cases like yuv410p10 or rgba can't be handled with a simple
memset, so there is some code to fill memory with 2/4/8 byte patterns.
For the remaining cases, a generic slow fallback is used.
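A sketch of using it to clear a writable AVFrame to black:

    #include <libavutil/frame.h>
    #include <libavutil/imgutils.h>

    /* Fill a writable video frame with black, taking the pixel format and
     * color range (limited vs. full) into account. */
    static int clear_frame_to_black(AVFrame *frame)
    {
        ptrdiff_t linesizes[4];

        for (int i = 0; i < 4; i++)
            linesizes[i] = frame->linesize[i];
        return av_image_fill_black(frame->data, linesizes,
                                   (enum AVPixelFormat)frame->format,
                                   frame->color_range,
                                   frame->width, frame->height);
    }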
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Merged from Libav commit 45df7adc1d.
Black isn't always just memset(ptr, 0, size). Limited YUV in particular
requires relatively non-obvious values, and filling a frame with
repeating 0 bytes is disallowed in some contexts. With component sizes
larger than 8 or packed YUV, this can become relatively complicated. So
having a generic function for this seems helpful.
In order to handle the complex cases in a generic way without destroying
performance, this code attempts to compute a black pixel, and then uses
that value to clear the image data quickly by using a function like
memset.
Common cases like yuv410p10 or rgba can't be handled with a simple
memset, so there is some code to fill memory with 2/4/8 byte patterns.
For the remaining cases, a generic slow fallback is used.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
This also adds support to avconv (which is trivial due to the new
hwaccel API being generic enough).
The new decoder setup code in dxva2.c is significantly based on work by
Steve Lhomme <robux4@gmail.com>, but with heavy changes/rewrites.
Merges Libav commit f9e7a2f95a.
Also adds untested VP9 support.
The check for DXVA2 COBJs is removed. Just update your MinGW to
something newer than a 5-year-old release.
Signed-off-by: Diego Biurrun <diego@biurrun.de>
To be used with the new d3d11 hwaccel decode API.
With the new hwaccel API, we don't want surfaces to depend on the
decoder (other than the required dimension and format). The old D3D11VA
pixfmt uses ID3D11VideoDecoderOutputView pointers, which include the
decoder configuration, and thus is incompatible with the new hwaccel
API. This patch introduces AV_PIX_FMT_D3D11, which uses ID3D11Texture2D
and an index. It's simpler and compatible with the new hwaccel API.
The introduced hwcontext supports only the new pixfmt.
Frame upload code untested.
Significantly based on work by Steve Lhomme <robux4@gmail.com>, but with
heavy changes/rewrites.
Merges Libav commit fff90422d1.
Signed-off-by: Diego Biurrun <diego@biurrun.de>
Use the flags argument of av_hwframe_ctx_create_derived() to pass the
mapping flags which will be used on allocation. Also, set the format
and hardware context on the allocated frame automatically - the user
should not be required to do this themselves.
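With this change, a caller can request that derived frames be created
pre-mapped, roughly like so (a sketch):

    #include <libavutil/hwcontext.h>

    /* Derive a frames context on another device and ask for the frames to be
     * mapped for both reading and writing at allocation time. */
    static int derive_mapped_frames(AVBufferRef **dst_frames,
                                    enum AVPixelFormat dst_format,
                                    AVBufferRef *dst_device,
                                    AVBufferRef *src_frames)
    {
        return av_hwframe_ctx_create_derived(dst_frames, dst_format,
                                             dst_device, src_frames,
                                             AV_HWFRAME_MAP_READ |
                                             AV_HWFRAME_MAP_WRITE);
    }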
(cherry picked from commit c5714b51aa)
Adds functions to convert to/from strings and a function to iterate
over all supported device types. Also adds a new invalid type
AV_HWDEVICE_TYPE_NONE, which acts as a sentinel value.
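A minimal sketch of iterating the supported device types:

    #include <stdio.h>
    #include <libavutil/hwcontext.h>

    /* Print the name of every hardware device type known to this build. */
    static void list_hw_device_types(void)
    {
        enum AVHWDeviceType type = AV_HWDEVICE_TYPE_NONE;

        while ((type = av_hwdevice_iterate_types(type)) != AV_HWDEVICE_TYPE_NONE)
            printf("%s\n", av_hwdevice_get_type_name(type));
    }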
(cherry picked from commit b7487f4f3c)
This also adds support to avconv (which is trivial due to the new
hwaccel API being generic enough).
The new decoder setup code in dxva2.c is significantly based on work by
Steve Lhomme <robux4@gmail.com>, but with heavy changes/rewrites.
Signed-off-by: Diego Biurrun <diego@biurrun.de>
To be used with the new d3d11 hwaccel decode API.
With the new hwaccel API, we don't want surfaces to depend on the
decoder (other than the required dimension and format). The old D3D11VA
pixfmt uses ID3D11VideoDecoderOutputView pointers, which include the
decoder configuration, and thus is incompatible with the new hwaccel
API. This patch introduces AV_PIX_FMT_D3D11, which uses ID3D11Texture2D
and an index. It's simpler and compatible with the new hwaccel API.
The introduced hwcontext supports only the new pixfmt.
Frame upload code untested.
Significantly based on work by Steve Lhomme <robux4@gmail.com>, but with
heavy changes/rewrites.
Signed-off-by: Diego Biurrun <diego@biurrun.de>
This adds tons of code for no other benefit than making VideoToolbox
support conform with the new hwaccel API (using hw_device_ctx and
hw_frames_ctx).
Since VideoToolbox decoding does not actually require the user to
allocate frames, the new code does mostly nothing.
One benefit is that ffmpeg_videotoolbox.c can be dropped once generic
hwaccel support for ffmpeg.c is merged from Libav.
Does not consider VDA or VideoToolbox encoding.
Fun fact: the frame transfer functions are copied from vaapi, as the
mapping makes copying generic boilerplate. Mapping itself is not
exported by the VT code, because I don't know how to test.
* commit '019ab88a95cb31b698506d90e8ce56695a7f1cc5':
lavc: add an option for exporting cropping information to the caller
Merged-by: James Almer <jamrial@gmail.com>
* commit 'e435beb1ea5380a90774dbf51fdc8c941e486551':
crypto: consistently use size_t as type for length parameters
Merged-by: Clément Bœsch <cboesch@gopro.com>
This is a newer API that is intended for decoders like the cuvid
wrapper. Until now, the wrapper required setting an awkward
"incomplete" hw_frames_ctx to set the device. Now the device
can be set directly, and the user can get AV_PIX_FMT_CUDA output
for a specific device simply by setting hw_device_ctx.
This still does a dummy ff_get_format() call at init time, and should
be fully backward compatible.
Use the flags argument of av_hwframe_ctx_create_derived() to pass the
mapping flags which will be used on allocation. Also, set the format
and hardware context on the allocated frame automatically - the user
should not be required to do this themselves.
The old "API" that signaled rotation as a metadata value has been
replaced by DISPLAYMATRIX side data quite a while ago.
There is no reason to make muxers/demuxers/API users support both. In
addition, the metadata API is dangerous, as user tags could "leak" into
it, creating unintended features or bugs.
ffmpeg CLI has to be updated to use the new API. In particular, we must
not allow the "rotate" tag to leak into the muxer. Some muxers will
catch this properly (like mov), but others (like mkv) can add it as
generic tag. Note that applications which use libavformat and assume the
old rotate API will interpret such "rotate" user tags as rotate
metadata (which they are not) and incorrectly rotate the video.
The ffmpeg/ffplay tools drop the use of the old API for muxing and
demuxing, as all muxers/demuxers support the new API. This will mean
that the tools will not mistakenly interpret per-track "rotate" user
tags as rotate metadata. It will _not_ be treated as a regression.
Unfortunately, hacks have been added, that allow the user to override
rotation by setting metadata explicitly, e.g. via
-metadata:s:v:0 rotate=0
See references to trac #4560. fate-filter-meta-4560-rotate0 tests this.
It's easier to adjust the hack for supporting it than arguing for its
removal, so ffmpeg CLI now explicitly catches this case, and essentially
replaces the "rotate" value with a display matrix side data. (It would
be easier for both user and implementation to create an explicit option
for rotation.)
When the code under FF_API_OLD_ROTATE_API is disabled, one FATE
reference file has to be updated (because "rotate" is not exported
anymore).
Tested-by: Michael Niedermayer <michael@niedermayer.cc>
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
This supports retrieving the device from a provided hw_frames_ctx, and
automatically creating a hw_frames_ctx if hw_device_ctx is set.
The old API is not deprecated yet. The user can still use
av_vdpau_bind_context() (with or without setting hw_frames_ctx), or use
the API before that by allocating and setting hwaccel_context manually.
Cherry-picked from Libav commit 1a7ddba5.
(Adds missing APIchanges entry to the Libav version.)
Reviewed-by: Mark Thompson <sw@jkqxz.net>
This "reuses" the flags introduced for the av_vdpau_bind_context() API
function, and makes them available to all hwaccels. This does not affect
the current vdpau API, as av_vdpau_bind_context() should obviously
override the AVCodecContext.hwaccel_flags flags for the sake of
compatibility.
Cherry-picked from Libav commit 16a163b5.
Reviewed-by: Mark Thompson <sw@jkqxz.net>
* commit '8ea35af7620e4f73f9e8c072e1c0fac9a04ec161':
avio: add a new flag for marking streams seekable by timestamp
Merged-by: James Almer <jamrial@gmail.com>
This patch deprecates anything that has to do with merging/splitting
side data. Automatic side data merging (and splitting), as well as all
API symbols involved in it, are removed completely.
Two FF_API_ defines are dedicated to deprecating API symbols related to
this: FF_API_MERGE_SD_API removes av_packet_split/merge_side_data in
libavcodec, and FF_API_LAVF_KEEPSIDE_FLAG deprecates
AVFMT_FLAG_KEEP_SIDE_DATA in libavformat.
Since it was claimed that changing the default from merging side data to
not doing it is an ABI change, there are two additional FF_API_ defines,
which stop using the side data merging/splitting by default (and remove
any code in avformat/avcodec doing this): FF_API_MERGE_SD in libavcodec,
and FF_API_LAVF_MERGE_SD in libavformat.
It is very much intended that FF_API_MERGE_SD and FF_API_LAVF_MERGE_SD
are quickly defined to 0 in the next ABI bump, while the API symbols are
retained for a longer time for the sake of compatibility.
AVFMT_FLAG_KEEP_SIDE_DATA will (very much intentionally) do nothing for
most of the time it will still be defined. Keep in mind that no code
exists that actually tries to unset this flag for any reason, nor does
such code need to exist. Code setting this flag explicitly will work as
before. Thus it's ok for AVFMT_FLAG_KEEP_SIDE_DATA to do nothing once
side data merging has been removed from libavformat.
In order to avoid that anyone in the future does this incorrectly, here
is a small guide how to update the internal code on bumps:
- next ABI bump (probably soon):
- define FF_API_LAVF_MERGE_SD to 0, and remove all code covered by it
- define FF_API_MERGE_SD to 0, and remove all code covered by it
- next API bump (typically two years in the future or so):
- define FF_API_LAVF_KEEPSIDE_FLAG to 0, and remove all code covered
by it
- define FF_API_MERGE_SD_API to 0, and remove all code covered by it
This forces anyone who actually wants packet side data to temporarily
use deprecated API to get it all. If you ask me, this is batshit fucked
up crazy, but it's how we roll. Making AVFMT_FLAG_KEEP_SIDE_DATA to be
set by default was rejected as an ABI change, so I'm going all the way
to get rid of this once and for all.
Reviewed-by: James Almer <jamrial@gmail.com>
Reviewed-by: Rostislav Pehlivanov <atomnuker@gmail.com>
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
This "reuses" the flags introduced for the av_vdpau_bind_context() API
function, and makes them available to all hwaccels. This does not affect
the current vdpau API, as av_vdpau_bind_context() should obviously
override the AVCodecContext.hwaccel_flags flags for the sake of
compatibility.
Adds functions to convert to/from strings and a function to iterate
over all supported device types. Also adds a new invalid type
AV_HWDEVICE_TYPE_NONE, which acts as a sentinel value.
* commit 'd7bc52bf456deba0f32d9fe5c288ec441f1ebef5':
imgutils: add a function for copying image data from GPU mapped memory
Merged-by: Clément Bœsch <u@pkh.me>
Not used by anything at all since we don't auto insert lavr filters.
Reviewed-by: wm4 <nfxjfg@googlemail.com>
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
If AVVideotoolboxContext.cv_pix_fmt_type is set to 0, don't set the
kCVPixelBufferPixelFormatTypeKey value on the VT decoder.
This makes VT output its native format, which can be much faster on
some hardware iterations (if the native format does not match with
the requested format, it will be converted, which is slow).
The default is still forcing nv12.
Allow all struct fields to be accessed directly, as long as they're
public.
Before this change, many fields were "public", but could be accessed via
AVOption only. This meant they were effectively not public, but were
present for documentation purposes, which was incredibly confusing at
best.
This is an extended version of the AVFrame.opaque field, which can be
used to attach arbitrary user information to an AVFrame.
The usefulness of the opaque field is rather limited, because it can
store only up to 32 bits of information (or 64 bits on 64-bit systems).
It's not possible to set this field to a memory allocation, because
there is no way to deallocate it correctly.
The opaque_ref field circumvents this by letting the user set an
AVBuffer, which makes the user data refcounted.
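A sketch of attaching refcounted user data via the new field:

    #include <stdint.h>
    #include <libavutil/frame.h>
    #include <libavutil/error.h>

    typedef struct MyFrameInfo {
        int64_t capture_time_us;  /* example payload; any user data works */
    } MyFrameInfo;

    /* Attach per-frame user data that is freed together with the last frame ref. */
    static int attach_frame_info(AVFrame *frame, int64_t capture_time_us)
    {
        frame->opaque_ref = av_buffer_allocz(sizeof(MyFrameInfo));
        if (!frame->opaque_ref)
            return AVERROR(ENOMEM);
        ((MyFrameInfo *)frame->opaque_ref->data)->capture_time_us = capture_time_us;
        return 0;
    }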
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Merges Libav commit 04f3bd3496.