This also adds support to avconv (which is trivial due to the new
hwaccel API being generic enough).
The new decoder setup code in dxva2.c is significantly based on work by
Steve Lhomme <robux4@gmail.com>, but with heavy changes/rewrites.
Merges Libav commit f9e7a2f95a.
Also adds untested VP9 support.
The check for DXVA2 COBJs is removed. Just update your MinGW to
something newer than a 5-year-old release.
Signed-off-by: Diego Biurrun <diego@biurrun.de>
To be used with the new d3d11 hwaccel decode API.
With the new hwaccel API, we don't want surfaces to depend on the
decoder (other than the required dimension and format). The old D3D11VA
pixfmt uses ID3D11VideoDecoderOutputView pointers, which include the
decoder configuration, and thus is incompatible with the new hwaccel
API. This patch introduces AV_PIX_FMT_D3D11, which uses ID3D11Texture2D
and an index. It's simpler and compatible with the new hwaccel API.
The introduced hwcontext supports only the new pixfmt.
Frame upload code untested.
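For downstream consumers, a frame in the new format would be unpacked
roughly like this (a sketch, assuming the data[] layout of texture
pointer plus array index described above; the helper name is made up):

    #include <stdint.h>
    #include <d3d11.h>
    #include <libavutil/frame.h>

    /* Sketch: pull the texture and array slice out of an AV_PIX_FMT_D3D11
     * frame. data[0] is assumed to hold the ID3D11Texture2D pointer and
     * data[1] the texture array index. */
    static void consume_d3d11_frame(const AVFrame *frame)
    {
        ID3D11Texture2D *tex   = (ID3D11Texture2D *)frame->data[0];
        intptr_t         slice = (intptr_t)frame->data[1];

        /* e.g. hand tex/slice to a video processor, or copy the slice out
         * with ID3D11DeviceContext::CopySubresourceRegion(). */
        (void)tex;
        (void)slice;
    }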
Significantly based on work by Steve Lhomme <robux4@gmail.com>, but with
heavy changes/rewrites.
Merges Libav commit fff90422d1.
Signed-off-by: Diego Biurrun <diego@biurrun.de>
Use the flags argument of av_hwframe_ctx_create_derived() to pass the
mapping flags which will be used on allocation. Also, set the format
and hardware context on the allocated frame automatically - the user
should not be required to do this themselves.
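A usage sketch (the variable names are hypothetical; the flags are the
AV_HWFRAME_MAP_* values from hwcontext.h):

    #include <libavutil/hwcontext.h>
    #include <libavutil/pixfmt.h>

    /* Sketch: derive a VAAPI frames context whose frames are mapped
     * (read/write) from an existing frames context on another device. */
    static AVBufferRef *derive_mapped_frames(AVBufferRef *dst_device_ref,
                                             AVBufferRef *src_frames_ref)
    {
        AVBufferRef *derived = NULL;
        int ret = av_hwframe_ctx_create_derived(&derived, AV_PIX_FMT_VAAPI,
                                                dst_device_ref, src_frames_ref,
                                                AV_HWFRAME_MAP_READ |
                                                AV_HWFRAME_MAP_WRITE);
        return ret < 0 ? NULL : derived;
    }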
(cherry picked from commit c5714b51aa)
Adds functions to convert to/from strings and a function to iterate
over all supported device types. Also adds a new invalid type
AV_HWDEVICE_TYPE_NONE, which acts as a sentinel value.
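A small sketch of the new helpers working together:

    #include <stdio.h>
    #include <libavutil/hwcontext.h>

    /* Print every hwdevice type known to this libavutil build, and look
     * one up from a user-supplied string. */
    static void list_hwdevice_types(void)
    {
        enum AVHWDeviceType type = AV_HWDEVICE_TYPE_NONE;
        while ((type = av_hwdevice_iterate_types(type)) != AV_HWDEVICE_TYPE_NONE)
            printf("%s\n", av_hwdevice_get_type_name(type));

        if (av_hwdevice_find_type_by_name("vaapi") == AV_HWDEVICE_TYPE_NONE)
            printf("vaapi is not recognized by this build\n");
    }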
(cherry picked from commit b7487f4f3c)
This also adds support to avconv (which is trivial due to the new
hwaccel API being generic enough).
The new decoder setup code in dxva2.c is significantly based on work by
Steve Lhomme <robux4@gmail.com>, but with heavy changes/rewrites.
Signed-off-by: Diego Biurrun <diego@biurrun.de>
To be used with the new d3d11 hwaccel decode API.
With the new hwaccel API, we don't want surfaces to depend on the
decoder (other than the required dimension and format). The old D3D11VA
pixfmt uses ID3D11VideoDecoderOutputView pointers, which include the
decoder configuration, and thus is incompatible with the new hwaccel
API. This patch introduces AV_PIX_FMT_D3D11, which uses ID3D11Texture2D
and an index. It's simpler and compatible with the new hwaccel API.
The introduced hwcontext supports only the new pixfmt.
Frame upload code untested.
Significantly based on work by Steve Lhomme <robux4@gmail.com>, but with
heavy changes/rewrites.
Signed-off-by: Diego Biurrun <diego@biurrun.de>
This adds tons of code for no other benefit than making VideoToolbox
support conform with the new hwaccel API (using hw_device_ctx and
hw_frames_ctx).
Since VideoToolbox decoding does not actually require the user to
allocate frames, the new code does mostly nothing.
One benefit is that ffmpeg_videotoolbox.c can be dropped once generic
hwaccel support for ffmpeg.c is merged from Libav.
Does not consider VDA or VideoToolbox encoding.
Fun fact: the frame transfer functions are copied from vaapi, as
mapping makes the copying code generic boilerplate. Mapping itself is
not exported by the VT code, because I don't know how to test it.
* commit '019ab88a95cb31b698506d90e8ce56695a7f1cc5':
lavc: add an option for exporting cropping information to the caller
Merged-by: James Almer <jamrial@gmail.com>
* commit 'e435beb1ea5380a90774dbf51fdc8c941e486551':
crypto: consistently use size_t as type for length parameters
Merged-by: Clément Bœsch <cboesch@gopro.com>
This is a newer API that is intended for decoders like the cuvid
wrapper. Until now, the wrapper required the user to set an awkward
"incomplete" hw_frames_ctx just to set the device. Now the device
can be set directly, and the user can get AV_PIX_FMT_CUDA output
for a specific device simply by setting hw_device_ctx.
This still does a dummy ff_get_format() call at init time, and should
be fully backward compatible.
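A rough usage sketch (error handling abbreviated; the wrapper is found
as usual, e.g. with avcodec_find_decoder_by_name("h264_cuvid")):

    #include <libavcodec/avcodec.h>
    #include <libavutil/hwcontext.h>

    /* Sketch: the wrapper picks up the device from hw_device_ctx and
     * outputs AV_PIX_FMT_CUDA frames on that device. */
    static AVCodecContext *open_cuda_decoder(const AVCodec *codec)
    {
        AVCodecContext *avctx = avcodec_alloc_context3(codec);
        if (!avctx)
            return NULL;

        if (av_hwdevice_ctx_create(&avctx->hw_device_ctx, AV_HWDEVICE_TYPE_CUDA,
                                   NULL, NULL, 0) < 0 ||
            avcodec_open2(avctx, codec, NULL) < 0) {
            avcodec_free_context(&avctx);
            return NULL;
        }
        return avctx;
    }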
Use the flags argument of av_hwframe_ctx_create_derived() to pass the
mapping flags which will be used on allocation. Also, set the format
and hardware context on the allocated frame automatically - the user
should not be required to do this themselves.
The old "API" that signaled rotation as a metadata value has been
replaced by DISPLAYMATRIX side data quite a while ago.
There is no reason to make muxers/demuxers/API users support both. In
addition, the metadata API is dangerous, as user tags could "leak" into
it, creating unintended features or bugs.
ffmpeg CLI has to be updated to use the new API. In particular, we must
not allow the "rotate" tag to leak into the muxer. Some muxers will
catch this properly (like mov), but others (like mkv) would add it as a
generic tag. Note that applications which use libavformat and assume the
old rotate API will interpret such "rotate" user tags as rotation
metadata (which they are not) and incorrectly rotate the video.
The ffmpeg/ffplay tools drop the use of the old API for muxing and
demuxing, as all muxers/demuxers support the new API. This means the
tools will no longer mistakenly interpret per-track "rotate" user tags
as rotation metadata. This will _not_ be treated as a regression.
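For API users, reading the rotation from the side data looks roughly
like this (a sketch; error handling and the demuxing code are omitted):

    #include <stdint.h>
    #include <libavformat/avformat.h>
    #include <libavutil/display.h>

    /* Read the rotation angle (in degrees) from DISPLAYMATRIX side data
     * instead of the old "rotate" metadata tag. */
    static double stream_rotation(AVStream *st)
    {
        uint8_t *sd = av_stream_get_side_data(st, AV_PKT_DATA_DISPLAYMATRIX,
                                              NULL);
        return sd ? av_display_rotation_get((const int32_t *)sd) : 0.0;
    }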
Unfortunately, hacks have been added, that allow the user to override
rotation by setting metadata explicitly, e.g. via
-metadata:s:v:0 rotate=0
See references to trac #4560. fate-filter-meta-4560-rotate0 tests this.
It's easier to adjust the hack to keep supporting this than to argue
for its removal, so ffmpeg CLI now explicitly catches this case and
essentially replaces the "rotate" value with display matrix side data.
(It would
be easier for both user and implementation to create an explicit option
for rotation.)
When the code under FF_API_OLD_ROTATE_API is disabled, one FATE
reference file has to be updated (because "rotate" is not exported
anymore).
Tested-by: Michael Niedermayer <michael@niedermayer.cc>
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
This supports retrieving the device from a provided hw_frames_ctx, and
automatically creating a hw_frames_ctx if hw_device_ctx is set.
The old API is not deprecated yet. The user can still use
av_vdpau_bind_context() (with or without setting hw_frames_ctx), or use
the API before that by allocating and setting hwaccel_context manually.
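A sketch of the caller side under the new scheme (the get_format
callback below is hypothetical glue; with hw_device_ctx set on the codec
context, the hwaccel creates hw_frames_ctx on its own):

    #include <libavcodec/avcodec.h>

    /* Pick AV_PIX_FMT_VDPAU when it is offered; everything else is
     * derived from AVCodecContext.hw_device_ctx by the hwaccel. */
    static enum AVPixelFormat get_vdpau_format(AVCodecContext *avctx,
                                               const enum AVPixelFormat *fmts)
    {
        for (; *fmts != AV_PIX_FMT_NONE; fmts++)
            if (*fmts == AV_PIX_FMT_VDPAU)
                return AV_PIX_FMT_VDPAU;
        return AV_PIX_FMT_NONE;
    }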
Cherry-picked from Libav commit 1a7ddba5.
(Adds missing APIchanges entry to the Libav version.)
Reviewed-by: Mark Thompson <sw@jkqxz.net>
This "reuses" the flags introduced for the av_vdpau_bind_context() API
function, and makes them available to all hwaccels. This does not affect
the current vdpau API, as av_vdpau_bind_context() should obviously
override the AVCodecContext.hwaccel_flags flags for the sake of
compatibility.
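For example, any hwaccel user can now do (the flag names are the ones
already defined for av_vdpau_bind_context()):

    #include <libavcodec/avcodec.h>

    /* Sketch: relax hwaccel constraints globally, not only for vdpau. */
    static void relax_hwaccel_checks(AVCodecContext *avctx)
    {
        avctx->hwaccel_flags |= AV_HWACCEL_FLAG_IGNORE_LEVEL |
                                AV_HWACCEL_FLAG_ALLOW_HIGH_DEPTH;
    }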
Cherry-picked from Libav commit 16a163b5.
Reviewed-by: Mark Thompson <sw@jkqxz.net>
* commit '8ea35af7620e4f73f9e8c072e1c0fac9a04ec161':
avio: add a new flag for marking streams seekable by timestamp
Merged-by: James Almer <jamrial@gmail.com>
This patch deprecates anything that has to do with merging/splitting
side data. Automatic side data merging (and splitting), as well as all
API symbols involved in it, will eventually be removed completely.
Two FF_API_ defines are dedicated to deprecating API symbols related to
this: FF_API_MERGE_SD_API removes av_packet_split/merge_side_data in
libavcodec, and FF_API_LAVF_KEEPSIDE_FLAG deprecates
AVFMT_FLAG_KEEP_SIDE_DATA in libavformat.
Since it was claimed that changing the default from merging side data to
not doing it is an ABI change, there are two additional FF_API_ defines,
which stop using the side data merging/splitting by default (and remove
any code in avformat/avcodec doing this): FF_API_MERGE_SD in libavcodec,
and FF_API_LAVF_MERGE_SD in libavformat.
It is very much intended that FF_API_MERGE_SD and FF_API_LAVF_MERGE_SD
are quickly defined to 0 in the next ABI bump, while the API symbols are
retained for a longer time for the sake of compatibility.
AVFMT_FLAG_KEEP_SIDE_DATA will (very much intentionally) do nothing for
most of the time it will still be defined. Keep in mind that no code
exists that actually tries to unset this flag for any reason, nor does
such code need to exist. Code setting this flag explicitly will work as
before. Thus it's ok for AVFMT_FLAG_KEEP_SIDE_DATA to do nothing once
side data merging has been removed from libavformat.
To avoid anyone doing this incorrectly in the future, here is a small
guide on how to update the internal code on bumps:
- next ABI bump (probably soon):
- define FF_API_LAVF_MERGE_SD to 0, and remove all code covered by it
- define FF_API_MERGE_SD to 0, and remove all code covered by it
- next API bump (typically two years in the future or so):
- define FF_API_LAVF_KEEPSIDE_FLAG to 0, and remove all code covered
by it
- define FF_API_MERGE_SD_API to 0, and remove all code covered by it
This forces anyone who actually wants packet side data to temporarily
use the deprecated API to get it all. If you ask me, this is batshit
fucked up crazy, but it's how we roll. Making AVFMT_FLAG_KEEP_SIDE_DATA
be set by default was rejected as an ABI change, so I'm going all the
way to get rid of this once and for all.
Reviewed-by: James Almer <jamrial@gmail.com>
Reviewed-by: Rostislav Pehlivanov <atomnuker@gmail.com>
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
This "reuses" the flags introduced for the av_vdpau_bind_context() API
function, and makes them available to all hwaccels. This does not affect
the current vdpau API, as av_vdpau_bind_context() should obviously
override the AVCodecContext.hwaccel_flags flags for the sake of
compatibility.
Adds functions to convert to/from strings and a function to iterate
over all supported device types. Also adds a new invalid type
AV_HWDEVICE_TYPE_NONE, which acts as a sentinel value.
* commit 'd7bc52bf456deba0f32d9fe5c288ec441f1ebef5':
imgutils: add a function for copying image data from GPU mapped memory
Merged-by: Clément Bœsch <u@pkh.me>
Not used by anything at all since we don't auto insert lavr filters.
Reviewed-by: wm4 <nfxjfg@googlemail.com>
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
If AVVideotoolboxContext.cv_pix_fmt_type is set to 0, don't set the
kCVPixelBufferPixelFormatTypeKey value on the VT decoder.
This makes VT output its native format, which can be much faster on
some hardware iterations (if the native format does not match the
requested format, it will be converted, which is slow).
The default is still forcing nv12.
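A hedged sketch of opting in to native output via the existing helpers
(assuming the usual av_videotoolbox_alloc_context()/_default_init2()
setup path):

    #include <libavcodec/avcodec.h>
    #include <libavcodec/videotoolbox.h>

    /* cv_pix_fmt_type == 0 now means "do not force a pixel format",
     * letting VT return whatever its native format is. */
    static int setup_vt_native_output(AVCodecContext *avctx)
    {
        AVVideotoolboxContext *vtctx = av_videotoolbox_alloc_context();
        if (!vtctx)
            return AVERROR(ENOMEM);
        vtctx->cv_pix_fmt_type = 0;
        return av_videotoolbox_default_init2(avctx, vtctx);
    }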
Allow all struct fields to be accessed directly, as long as they're
public.
Before this change, many fields were "public", but could be accessed via
AVOption only. This meant they were effectively not public, but were
present for documentation purposes, which was incredibly confusing at
best.
This is an extended version of the AVFrame.opaque field, which can be
used to attach arbitrary user information to an AVFrame.
The usefulness of the opaque field is rather limited, because it can
store only up to 32 bits of information (or 64 bits on 64-bit systems).
It's not possible to set this field to a memory allocation, because
there is no way to deallocate it correctly.
The opaque_ref field circumvents this by letting the user set an
AVBuffer, which makes the user data refcounted.
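For example, a sketch of attaching refcounted per-frame user data (the
struct is made up):

    #include <stdint.h>
    #include <libavutil/buffer.h>
    #include <libavutil/error.h>
    #include <libavutil/frame.h>

    struct my_frame_info { int64_t capture_id; };

    /* Attach refcounted user data to a frame; it follows the frame
     * through av_frame_ref() and is freed with the last reference. */
    static int attach_info(AVFrame *frame, int64_t capture_id)
    {
        frame->opaque_ref = av_buffer_alloc(sizeof(struct my_frame_info));
        if (!frame->opaque_ref)
            return AVERROR(ENOMEM);
        ((struct my_frame_info *)frame->opaque_ref->data)->capture_id =
            capture_id;
        return 0;
    }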
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Merges Libav commit 04f3bd3496.
This is an extended version of the AVFrame.opaque field, which can be
used to attach arbitrary user information to an AVFrame.
The usefulness of the opaque field is rather limited, because it can
store only up to 32 bits of information (or 64 bits on 64-bit systems).
It's not possible to set this field to a memory allocation, because
there is no way to deallocate it correctly.
The opaque_ref field circumvents this by letting the user set an
AVBuffer, which makes the user data refcounted.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
No deprecation guards, because the old decode API (for which this field
is needed) doesn't have any either.
This field should be removed together with the old decode calls.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
* commit '7d7355aa92bb36ca0765c49a569a999bcb96f332':
x86: Add SSSE3_SLOW CPU flag and related convenience macros
Merged-by: James Almer <jamrial@gmail.com>
Return a channel layout and the number of channels based on the specified name.
This function is similar to av_get_channel_layout(), but can also parse unknown
channel layout specifications.
An unknown channel layout specification is a decimal number followed by
a capital 'C' suffix; the capital letter avoids breaking compatibility
with the lowercase 'c' suffix, which is used for a guessed channel
layout with the specified number of channels.
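A sketch of the intended usage (assuming the new function is the
av_get_extended_channel_layout() added by this change):

    #include <inttypes.h>
    #include <stdio.h>
    #include <libavutil/channel_layout.h>

    /* "5.1" parses as a known layout; "13C" as an unknown layout with
     * 13 channels. */
    static void parse_layout(const char *spec)
    {
        uint64_t layout = 0;
        int channels = 0;
        if (av_get_extended_channel_layout(spec, &layout, &channels) < 0) {
            fprintf(stderr, "cannot parse '%s'\n", spec);
            return;
        }
        printf("%s: layout 0x%"PRIx64", %d channels\n", spec, layout, channels);
    }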
Signed-off-by: Marton Balint <cus@passwd.hu>
This commit adds the avio_get_dyn_buf function which allows accessing
the content of a DynBuffer without destroying it.
This is required in matroskaenc for writing preliminary (correct) mkv
headers.
Context for this change is fixing regression bug #5977.
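A sketch of peeking at a dynamic buffer without tearing it down:

    #include <libavformat/avio.h>
    #include <libavutil/mem.h>

    /* Write into a dynamic buffer, peek at its current contents, then
     * finalize it as usual. */
    static int dyn_buf_peek_demo(void)
    {
        AVIOContext *dyn = NULL;
        uint8_t *buf = NULL;
        int peek_size, final_size;
        int ret = avio_open_dyn_buf(&dyn);
        if (ret < 0)
            return ret;

        avio_write(dyn, (const unsigned char *)"hdr", 3);

        /* Unlike avio_close_dyn_buf(), this does not destroy the buffer. */
        peek_size = avio_get_dyn_buf(dyn, &buf);
        (void)peek_size; /* inspect buf[0..peek_size), e.g. a preliminary header */

        final_size = avio_close_dyn_buf(dyn, &buf);
        av_free(buf);
        return final_size;
    }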
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
While no decoder currently exports spherical information, this type
represents a frame property that has to be passed through from container
to frames.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
While no decoder currently exports spherical information, this type
represents a frame property that has to be passed through from container
to frames.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
Functionally similar to av_packet_add_side_data(). Allows the use of an
already allocated buffer as stream side data.
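For instance (a sketch; the function is assumed to be the
av_stream_add_side_data() counterpart of av_packet_add_side_data()):

    #include <libavformat/avformat.h>
    #include <libavutil/mem.h>

    /* Hand an already allocated buffer over to the stream as side data;
     * on success the stream owns the buffer. */
    static int add_stream_side_data(AVStream *st, enum AVPacketSideDataType type,
                                    size_t size)
    {
        uint8_t *data = av_mallocz(size);
        int ret;
        if (!data)
            return AVERROR(ENOMEM);
        ret = av_stream_add_side_data(st, type, data, size);
        if (ret < 0)
            av_free(data); /* ownership is not transferred on failure */
        return ret;
    }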
Signed-off-by: James Almer <jamrial@gmail.com>
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
Functionally similar to av_packet_add_side_data(). Allows the use of an
already allocated buffer as stream side data.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: James Almer <jamrial@gmail.com>
The Intel binary iHD driver does not support the
VASurfaceAttribMemoryType, so surface allocation will fail when using
it.
(cherry picked from commit 2124711b95)
The driver being used is detected inside av_hwdevice_ctx_init() and
the quirks field is then set from a table of known devices. If this
behaviour is unwanted, the user can also set the quirks field
manually.
Also adds the Intel i965 driver quirk (it does not destroy parameter
buffers used in a call to vaRenderPicture()) and detects that driver
to set it.
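Callers that need to adapt to a specific driver can check the quirks
after init, e.g. (a sketch using the vaapi hwcontext types):

    #include <libavutil/hwcontext.h>
    #include <libavutil/hwcontext_vaapi.h>

    /* driver_quirks is filled in by av_hwdevice_ctx_init()/_create()
     * unless the user set it beforehand. */
    static int needs_param_buffer_workaround(AVBufferRef *device_ref)
    {
        AVHWDeviceContext *ctx = (AVHWDeviceContext *)device_ref->data;
        AVVAAPIDeviceContext *hwctx = ctx->hwctx;
        return !!(hwctx->driver_quirks &
                  AV_VAAPI_DRIVER_QUIRK_RENDER_PARAM_BUFFERS);
    }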
(cherry picked from commit 4926fa9a4a)
Adds the new av_hwframe_map() function, which allows mapping between
hardware frames and normal memory, along with internal support for
implementing it.
Also adds av_hwframe_ctx_create_derived(), for creating a hardware
frames context associated with one device using frames mapped from
another by some hardware-specific means.
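A minimal sketch of mapping a hardware frame into system memory with
the new call (NV12 is only an example target format; real code should
query av_hwframe_transfer_get_formats()):

    #include <libavutil/frame.h>
    #include <libavutil/hwcontext.h>
    #include <libavutil/pixfmt.h>

    /* Map hw_frame read-only; on success sw_frame->data points at the
     * mapped planes until av_frame_unref(sw_frame) releases the mapping. */
    static int map_for_reading(AVFrame *sw_frame, const AVFrame *hw_frame)
    {
        sw_frame->format = AV_PIX_FMT_NV12;
        return av_hwframe_map(sw_frame, hw_frame, AV_HWFRAME_MAP_READ);
    }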
This allows a consumer to run the muxer's init function without actually
writing the header, which is useful in chained muxers that support
automatic bitstream filtering.
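Presumably this is the avformat_init_output() entry point; a sketch of
the split flow from the caller's point of view:

    #include <libavformat/avformat.h>

    /* Sketch: run muxer init separately from writing the header. */
    static int init_then_write_header(AVFormatContext *oc, AVDictionary **opts)
    {
        int ret = avformat_init_output(oc, opts);
        if (ret < 0)
            return ret;
        /* ret distinguishes AVSTREAM_INIT_IN_WRITE_HEADER from
         * AVSTREAM_INIT_IN_INIT_OUTPUT; the header is written here. */
        return avformat_write_header(oc, NULL);
    }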
* commit '32c8359093d1ff4f45ed19518b449b3ac3769d27':
lavc: export the timestamps when decoding in AVFrame.pts
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
This will be used to allow writing file sequences with the tee output
to multiple places in parallel.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The driver being used is detected inside av_hwdevice_ctx_init() and
the quirks field is then set from a table of known devices. If this
behaviour is unwanted, the user can also set the quirks field
manually.
Also adds the Intel i965 driver quirk (it does not destroy parameter
buffers used in a call to vaRenderPicture()) and detects that driver
to set it.
P010 is the 10-bit variant of NV12 (planar luma, packed chroma), using two
bytes per component to store 10-bit data plus 6-bit zeroes in the LSBs.
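In other words, each stored little-endian 16-bit word carries the
sample in its top 10 bits; a sketch of unpacking one sample:

    #include <stddef.h>
    #include <stdint.h>

    /* P010 (LE): 10 significant bits in the MSBs of a 16-bit word. */
    static inline unsigned p010_sample(const uint8_t *plane, size_t offset)
    {
        uint16_t word = plane[offset] | (plane[offset + 1] << 8);
        return word >> 6; /* drop the 6 zero LSBs */
    }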
Signed-off-by: Anton Khirnov <anton@khirnov.net>
* commit 'e47b8bbf0b54599d44b9330eb4d68cdde4f6d298':
avcodec: Bump micro version after changing public JPEG 2000 defines
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
* commit 'db7968bff4851c2be79b15b2cb2ae747424d2fca':
avio: Allow custom IO users to get labels for the output bytestream
Merged-by: Matthieu Bouron <matthieu.bouron@stupeflix.com>
* commit '0c4468dc185fa8b9e7d6add914595c5e928b24fd':
stereo3d: Add API to get name from value or value from name
Merged-by: Clément Bœsch <clement@stupeflix.com>
Currently it's exported as AVFrame.pkt_pts, which is also the only use
for that field. The reason it is done like this is that lavc used to
export various codec-specific "timing" information in AVFrame.pts, which
is not done anymore.
Since it is confusing to the callers to have a separate field which is
used only for decoder timestamps and nothing else, deprecate pkt_pts and
use just AVFrame.pts everywhere.
This allows callers with avio write callbacks to get the bytestream
positions that correspond to keyframes, suitable for live streaming.
In the simplest form, a caller could expect that a header is written
to the bytestream during avformat_write_header, and that the data
output to the avio context during e.g. av_write_frame corresponds
exactly to the current packet passed in.
When combined with av_interleaved_write_frame, and with muxers that
do buffering (most muxers that do some sort of fragmenting or
clustering), the mapping from input data to bytestream positions
is nontrivial.
This allows callers to get directly information about what part
of the bytestream is what, without having to resort to assumptions
about the muxer behaviour.
One keyframe/fragment/block can still be split into multiple calls (if
it is larger than the AVIOContext buffer); in that case the callback is
invoked with e.g. AVIO_DATA_MARKER_SYNC_POINT for the first part,
followed by AVIO_DATA_MARKER_UNKNOWN when it is called again with the
remaining data.
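A sketch of what a custom-IO caller can hook up (the callback body is
hypothetical, and the exact signature of the write_data_type field
should be taken from the avio.h of the libavformat version in use):

    #include <libavformat/avio.h>

    /* Called for each chunk written to the custom IO context, together
     * with a label describing what the chunk is. */
    static int on_data(void *opaque, uint8_t *buf, int buf_size,
                       enum AVIODataMarkerType type, int64_t time)
    {
        if (type == AVIO_DATA_MARKER_SYNC_POINT) {
            /* remember the current bytestream position as a cut point */
        }
        /* forward/store buf as with a regular write_packet callback */
        return buf_size;
    }

    /* After avio_alloc_context(...): */
    static void install_marker_callback(AVIOContext *avio)
    {
        avio->write_data_type = on_data;
    }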
Signed-off-by: Martin Storsjö <martin@martin.st>
The new function behaves the same as av_log_format_line, but also forwards
the return value from the underlying snprintf call. This will allow
callers to accurately determine the size requirements for the line buffer.
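Presumably this is av_log_format_line2(); a sketch of sizing a buffer
with it:

    #include <stdarg.h>
    #include <libavutil/log.h>
    #include <libavutil/mem.h>

    /* Format a log line, growing the buffer if the first try was too small. */
    static char *format_log_line(void *ptr, int level, const char *fmt,
                                 va_list vl)
    {
        int print_prefix = 1;
        char small[256];
        char *big;
        int needed;
        va_list tmp;

        va_copy(tmp, vl);
        needed = av_log_format_line2(ptr, level, fmt, tmp, small,
                                     sizeof(small), &print_prefix);
        va_end(tmp);
        if (needed < 0)
            return NULL;
        if (needed < (int)sizeof(small))
            return av_strdup(small);

        big = av_malloc(needed + 1); /* snprintf-style return: retry exactly */
        if (big) {
            print_prefix = 1;
            va_copy(tmp, vl);
            av_log_format_line2(ptr, level, fmt, tmp, big, needed + 1,
                                &print_prefix);
            va_end(tmp);
        }
        return big;
    }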
Signed-off-by: Andreas Weis <github@ghulbus-inc.de>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Until now, the decoding API was restricted to outputting 0 or 1 frames
per input packet. It also enforced a somewhat rigid dataflow in general.
This new API seeks to relax these restrictions by decoupling input and
output. Instead of doing a single call on each decode step, which may
consume the packet and may produce output, the new API requires the user
to send input first, and then ask for output.
For now, there are no codecs supporting this API. The API can work with
codecs using the old API, and most code added here is to make them
interoperate. The reverse is not possible, although for audio it might be.
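The canonical decode loop with the new calls looks like this (a sketch;
error handling is abbreviated):

    #include <libavcodec/avcodec.h>

    /* Feed one packet, then drain every frame it produced. */
    static int decode_packet(AVCodecContext *avctx, const AVPacket *pkt,
                             AVFrame *frame)
    {
        int ret = avcodec_send_packet(avctx, pkt); /* pkt == NULL flushes */
        if (ret < 0)
            return ret;

        while ((ret = avcodec_receive_frame(avctx, frame)) >= 0) {
            /* ... use frame ... */
            av_frame_unref(frame);
        }
        return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
    }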
From Libav commit 05f66706d1.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
* commit '3e8fd93b6ab219221e17fa2b6243cc72cf2d69dc':
lavf: add a missing bump and APIchanges for the codecpar switch
Merged-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
* commit 'a8068346e48e123f8d3bdf4d64464d81e53e5fc7':
lavc: add a variant of av_get_audio_frame_duration working with AVCodecParameters
Fixes from jamrial incorporated.
Merged-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
* commit '998e1b8f521b73e1ed3a13caaabcf79eb401cf0d':
lavc: add codec parameters API
Fixes added in:
- bit_rate has been made int64_t to match.
- profile and level are properly initialized.
Merged-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Until now, the decoding API was restricted to outputting 0 or 1 frames
per input packet. It also enforced a somewhat rigid dataflow in general.
This new API seeks to relax these restrictions by decoupling input and
output. Instead of doing a single call on each decode step, which may
consume the packet and may produce output, the new API requires the user
to send input first, and then ask for output.
For now, there are no codecs supporting this API. The API can work with
codecs using the old API, and most code added here is to make them
interoperate. The reverse is not possible, although for audio it might be.
Signed-off-by: Anton Khirnov <anton@khirnov.net>