Right now, if we attempt to use cuvid in a media player and then
try to seek, the decoder will happily pass out whatever frames were
already in flight before the seek.
There are both the output queue in our code and some number of frames
within the cuvid decoder that need to be accounted for.
cuvid doesn't support flush, so our only choice is to do a brute-force
re-creation of the decoder, which also implies re-creating the parser,
but this is fine.
The only subtlety is that there is sanity-check code in decoder
initialisation that wants to make sure the AVHWFramesContext hasn't already
been initialised. This is a fair check to do at the beginning, but not
after a flush, so it has to be made conditional.
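To illustrate, a flush along these lines might look roughly as follows
(field and function names are illustrative rather than the actual
cuvid.c layout; error handling omitted):

    static void cuvid_flush(AVCodecContext *avctx)
    {
        CuvidContext *ctx = avctx->priv_data;

        /* drop frames already decoded but not yet returned */
        av_fifo_reset(ctx->frame_queue);

        /* cuvid has no flush entry point: tear down and rebuild
         * the decoder, parser included */
        if (ctx->cuparser)
            cuvidDestroyVideoParser(ctx->cuparser);
        if (ctx->cudecoder)
            cuvidDestroyDecoder(ctx->cudecoder);

        /* re-run the normal init path; the AVHWFramesContext sanity
         * check in there must be skipped on this second pass */
        ctx->flushing = 1;               /* hypothetical flag */
        cuvid_decode_init(avctx);
    }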
Signed-off-by: Philip Langdale <philipl@overt.org>
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
cuvid/nvdecode also supports mpeg1, mpeg2, h.263/mpeg4-asp and mjpeg.
It should, in theory, also support wmv3 via the vc1 support, given
that vdpau supports this. However, it failed to play wmv3 samples
which vdpau played correctly, so I'm not sure what to make of it.
Signed-off-by: Philip Langdale <philipl@overt.org>
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
While it is less featureful (and slower) than the built-in H264
decoder, one could potentially want to use it to take advantage
of the Cisco patent license offer.
Signed-off-by: Martin Storsjö <martin@martin.st>
* commit 'e47b8bbf0b54599d44b9330eb4d68cdde4f6d298':
avcodec: Bump micro version after changing public JPEG 2000 defines
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
Currently it's exported as AVFrame.pkt_pts, which is also the only use
for that field. The reason it is done like this is that lavc used to
export various codec-specific "timing" information in AVFrame.pts, which
is not done anymore.
Since it is confusing to the callers to have a separate field which is
used only for decoder timestamps and nothing else, deprecate pkt_pts and
use just AVFrame.pts everywhere.
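In user code the change boils down to reading one field instead of the
other; a minimal sketch:

    AVFrame *frame = av_frame_alloc();
    if (avcodec_receive_frame(avctx, frame) >= 0) {
        int64_t pts = frame->pts;    /* previously: frame->pkt_pts */
        /* ... schedule presentation at 'pts' ... */
    }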
This avoids enabling and building the x264rgb encoder when it is actually not
supported and thus would not work.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This function is supposed to "reset" a codec context to a clean state so
that it can be opened again. The only reason it exists is to allow using
AVStream.codec as a decoding context (after it was already
opened/used/closed by avformat_find_stream_info()). Since that behaviour
is now deprecated, there is no reason for this function to exist
anymore.
Since AVCodecContext contains a lot of complex state, copying a codec
context is not a well-defined operation. The purpose for which it is
typically used (which is well-defined) is copying the stream parameters
from one codec context to another. That is now possible through the
AVCodecParameters API. Therefore, there is no reason for
avcodec_copy_context() to exist.
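The replacement pattern, sketched with error checks omitted:

    AVCodecParameters *par = avcodec_parameters_alloc();
    avcodec_parameters_from_context(par, src);  /* context -> parameters */
    avcodec_parameters_to_context(dst, par);    /* parameters -> context */
    avcodec_parameters_free(&par);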
* commit '1bb56abb9b37bd208a66164339c92cad59b1087b':
omx: Add support for zerocopy input of frames
Merged-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
* commit 'f1cd9b03f3fa875eb5e394281b4b688cec611658':
omx: Add support for broadcom OMX on raspberry pi
Merged-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
* commit 'e8919ec486a5559fdcf366e347be0656d904a87f':
libavcodec: Add H264/MPEG4 encoders based on OpenMAX IL
Merged-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Register mmaldec as an mpeg2 decoder. Supporting mpeg2 in mmaldec is just a
matter of setting the correct MMAL_ENCODING on the input port. To ease the
addition of further supported mmal codecs, a macro is introduced to generate
the decoder and decoder class structs.
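A sketch of the kind of generator macro meant here (names and fields
are illustrative; the real macro in mmaldec.c may differ):

    #define FFMMAL_DEC(NAME, ID)                                  \
        static const AVClass ffmmal_##NAME##_dec_class = {        \
            .class_name = #NAME "_mmal decoder",                  \
            .option     = options,                                \
            .version    = LIBAVUTIL_VERSION_INT,                  \
        };                                                        \
        AVCodec ff_##NAME##_mmal_decoder = {                      \
            .name           = #NAME "_mmal",                      \
            .type           = AVMEDIA_TYPE_VIDEO,                 \
            .id             = ID,                                 \
            .priv_data_size = sizeof(MMALDecodeContext),          \
            .init           = ffmmal_init_decoder,                \
            .close          = ffmmal_close_decoder,               \
            .decode         = ffmmal_decode,                      \
            .priv_class     = &ffmmal_##NAME##_dec_class,         \
        };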
Signed-off-by: Julian Scheel <julian@jusst.de>
Signed-off-by: wm4 <nfxjfg@googlemail.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Until now, the decoding API was restricted to outputting 0 or 1 frames
per input packet. It also enforced a somewhat rigid dataflow in general.
This new API seeks to relax these restrictions by decoupling input and
output. Instead of doing a single call on each decode step, which may
consume the packet and may produce output, the new API requires the user
to send input first, and then ask for output.
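In code, the decoupled flow looks like this (minimal sketch, error
handling trimmed; process_frame() is a placeholder):

    avcodec_send_packet(avctx, pkt);              /* feed input first */
    while (avcodec_receive_frame(avctx, frame) == 0)
        process_frame(frame);                     /* 0..N frames per packet */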
For now, there are no codecs supporting this API. The API can work with
codecs using the old API, and most code added here is to make them
interoperate. The reverse is not possible, although for audio it might be.
From Libav commit 05f66706d1.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
The bits_per_raw_sample field represents the number of bits of precision per sample.
The field is added at the logical place, not at the end, as the code was
only recently added.
This fixes the regression of losing the audio sample precision information.
The change in the fate test checksum undoes the change from the merge.
Previous version reviewed by: wm4 <nfxjfg@googlemail.com>
Previous version reviewed by: Dominik 'Rathann' Mierzejewski <dominik@greysector.net>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This can only be used if the input data happens to be laid out
exactly correctly.
This might not be supported on all encoders, so only enable it
with an option, but enable it automatically on raspberry pi,
where it is known to be supported.
Signed-off-by: Martin Storsjö <martin@martin.st>
The raspberry pi uses the alternative API/ABI for OMX; this makes
such builds incompatible with all the normal OpenMAX implementations.
Since this can't easily be detected at configure time (one can
build for raspberry pi's OMX just fine using the generic, pristine
Khronos OpenMAX IL headers, no need for their own extensions),
require a separate configure switch for it instead.
The broadcom host library can't be unloaded once loaded and started;
the deinit function that it provides is a no-op, and once started,
it has background threads running, so dlclosing it makes it
crash.
Signed-off-by: Martin Storsjö <martin@martin.st>
* commit '998e1b8f521b73e1ed3a13caaabcf79eb401cf0d':
lavc: add codec parameters API
Fixes added in:
- bit_rate has been made int64_t to match.
- profile and level are properly initialized.
Merged-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Until now, the decoding API was restricted to outputting 0 or 1 frames
per input packet. It also enforced a somewhat rigid dataflow in general.
This new API seeks to relax these restrictions by decoupling input and
output. Instead of doing a single call on each decode step, which may
consume the packet and may produce output, the new API requires the user
to send input first, and then ask for output.
For now, there are no codecs supporting this API. The API can work with
codecs using the old API, and most code added here is to make them
interoperate. The reverse is not possible, although for audio it might be.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Check that the required plane pointers, and only
those, are set up.
Currently does not enforce anything for the palette
pointer of pseudopal formats as I am unsure about the
requirements.
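The core of such a check might look like this (a sketch; the palette
pointer of pseudopal formats is left out, as noted above):

    int planes = av_pix_fmt_count_planes(frame->format);
    for (int i = 0; i < AV_NUM_DATA_POINTERS; i++) {
        if (i <  planes && !frame->data[i])
            return AVERROR(EINVAL);  /* required plane pointer missing */
        if (i >= planes &&  frame->data[i])
            return AVERROR(EINVAL);  /* pointer set that should not be */
    }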
Signed-off-by: Reimar Döffinger <Reimar.Doeffinger@gmx.de>
Some containers, like webm/mkv, will contain this mastering metadata.
This is analogous to the way 3D fpa data is handled (in frame and
packet side data).
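For reference, attaching it to a frame goes through the libavutil
helper; a minimal sketch with made-up values:

    AVMasteringDisplayMetadata *md =
        av_mastering_display_metadata_create_side_data(frame);
    if (md) {
        md->max_luminance = av_make_q(1000, 1);  /* cd/m^2 */
        md->min_luminance = av_make_q(5, 10000);
        md->has_luminance = 1;
    }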
Signed-off-by: Neil Birkbeck <neil.birkbeck@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
* commit '7b3214d0050613bd347a2e41c9f78ffb766da25e':
lavc: add a field for passing AVHWFramesContext to encoders
Merged-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
This allows copying information related to the stream ID from the demuxer
to the muxer, thus allowing for example to retain information related to
synchronous and asynchronous KLV data packets. This information is used
in the muxer when remuxing to distinguish the two kinds of packets (if the
information is lacking, data packets are considered synchronous).
The fate reference changes are due to the use of
av_packet_merge_side_data(), which increases the size of the output
packet, since side data is merged into the packet data.
This API is intended to allow passing around codec parameters without
using full AVCodecContext (which also contains codec options and
encoder/decoder state).
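Typical use when setting up a decoder from a demuxed stream (sketch,
error checks omitted):

    AVStream *st = fmt_ctx->streams[stream_idx];
    const AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
    AVCodecContext *avctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(avctx, st->codecpar);
    avcodec_open2(avctx, dec, NULL);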
This commit adds a new encoder capable of creating BBC/SMPTE Dirac/VC-2 HQ
profile files.
Dirac is a wavelet-based codec created by the BBC a little more than 10
years ago. Since then, wavelets have mostly gone out of style as they
did not provide adequate encoding gains at lower bitrates. Dirac was a
fully featured video codec equipped with perceptual masking, support for
most popular pixel formats, interlacing, overlapped-block motion
compensation, and other features. It found new life after being stripped
of various features and standardized by the SMPTE as the VC-2 codec, with
an extra profile added: the HQ profile that this encoder supports.
The HQ profile was based on the Low-Delay profile previously
existing in Dirac. The profile forbids DC prediction and arithmetic
coding to focus on high performance and low delay at higher bitrates.
The standard bitrates for this profile vary but generally 1:4
compression is expected (~525 Mbps vs the 2200 Mbps for uncompressed
1080p50). The codec only supports I-frames, hence the high bitrates.
The structure of this encoder is simple: do a DWT on the
entire image, split it into multiple slices (specified by the user) and
encode them in parallel. All of the slices are of the same size, making
rate control and threading trivial. Although written only in C, this encoder
is capable of 30 frames per second on a 4-core, 8-thread Ivy Bridge.
A lookup table is used to encode most of the coefficients.
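As an illustration of the transform stage, here is a one-dimensional
LeGall 5/3 lifting step of the kind such a DWT is built from (not the
actual vc2enc code; edge handling simplified, n assumed even):

    static void legall53_1d(int *x, int n)
    {
        /* predict: odd samples become high-pass coefficients */
        for (int i = 1; i < n - 1; i += 2)
            x[i] -= (x[i - 1] + x[i + 1]) >> 1;
        x[n - 1] -= x[n - 2];
        /* update: even samples become low-pass coefficients */
        x[0] += (x[1] + 1) >> 1;
        for (int i = 2; i < n; i += 2)
            x[i] += (x[i - 1] + x[i + 1] + 2) >> 2;
    }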
No code was used from the GSoC encoder from 2007 except for the 2
transform functions in diracenc_transforms.c. All other code was written
from scratch.
This encoder outperforms any other encoder in quality, usability and
features. Other existing implementations do not support 4-level
transforms or 64x64 blocks (slices), which greatly increase compression.
As previously said, the codec is meant for broadcasting, hence support
for non-broadcast image widths, heights, bit depths, aspect ratios,
etc. is limited by the "level". Although this codec supports a few
chroma subsamplings (420, 422, 444), signalling those is generally
outside the specifications of the level used (3) and the reference
decoder will outright refuse to read any image with such a flag
signalled (it only supports 1920x1080 yuv422p10). However, most
implementations will happily read files with alternate dimensions,
framerates and formats signalled.
Therefore, in order to encode files other than 1080p50 yuv422p10le, you
need to provide an "-strict -2" argument to the command line. The FFmpeg
decoder will happily read any files made with non-standard parameters,
dimensions and subsamplings, and so will other implementations. IMO this
should be "-strict -1", but I'll leave that up for discussion.
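A corresponding invocation might look like this (encoder name as
registered; input/output names are placeholders):

    ffmpeg -i input.mov -c:v vc2 -strict -2 output.mov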
There is still plenty to implement; for instance, 5 more
wavelet transforms are still in the specs and supported by the decoder.
The encoder can be lossless, given a high enough bitrate.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
* commit 'd43a165bda0eae95f4c7a168c7d13d94966c1a09':
imgconvert: Add the proper API guards to a deprecated function
Merged-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
The b_frame_strategy option is only used by mpegvideoenc, qsv, x264, and
xavs, while b_sensitivity is only used by mpegvideoenc.
These are very codec-specific options, so deprecate the global variants.
Set proper limits to the maximum allowed values.
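The private replacement is a plain AVOption on the affected encoders; a
sketch of such an options-table entry (OFFSET/VE macros as in the usual
tables):

    { "b_strategy", "Strategy to choose between I/P/B-frames",
      OFFSET(b_frame_strategy), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 2, VE },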
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
It serves absolutely no purpose other than to confuse potential
Android developers about how to use hardware acceleration properly
on the platform. The stagefright "API" is not public, and the
MediaCodec API is the proper way to do this.
Furthermore, stagefright support in avcodec needs a series of
magic incantations and version-specific stuff, such that
using it actually provides downsides compared to just using the actual
Android frameworks properly, in that it is a lot more work and confusion
to get it even running. It also leads to a lot of misinformation, like
these sorts of comments (in [1]) that are absolutely incorrect.
[1] http://stackoverflow.com/a/29362353/3115956
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
* commit '31c51f7441de07b88cfea2550245bf1f5140cb8f':
avpacket: add a function for wrapping existing data as side data
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
These variables come from mpegvideoenc, where they are supposedly used
as bit counters for various frame properties. However their use is
unclear, as they lack documentation, are available only from a very small
subset of encoders, and they are hardly used in the wild. Also, frame_bits
in aacenc is employed in a similar way.
Remove this functionality from AVCodecContext: these variables are mostly
frame properties, and too few encoders support setting them with anything
useful.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
Most option values are simply unused or ignored and in practice the
majority of codecs only need to check whether to enable rle or not.
Add appropriate codec private options which better expose the allowed
features.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
* commit '79ae1e630b476889c251fc905687a3831b43ab5e':
avcodec: Define side data type for fallback track
Merged-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
This function returns the encoded data of a frame one slice at a time,
directly when that slice is encoded, instead of waiting for the full
frame to be done. However, this field is of debatable usefulness, since
it looks like it is just a convoluted way to get data at the lowest
possible latency, or a somewhat hacky way to store h263 in RFC-2190
rtp encapsulation.
Moreover, when multi-threading is enabled (which it is by default) the order
of returned slices is not deterministic at all, making the use of this
function unreliable (or, at the very least, more complicated
than it should be).
So, for the reasons stated above, and because it is used by only a single
encoder family (mpegvideo), this field is deemed unnecessary, overcomplicated,
and not really belonging in libavcodec. Libavformat features a complete
implementation of RFC-2190, for any other case.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
This side data type is meant to be added to AVStream side data.
A fallback track indicates an alternate track to use when the
current track cannot be decoded for some reason, e.g. no
decoder being available for its codec.
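Attaching it is a matter of storing the fallback stream's index as
stream side data; a sketch:

    uint8_t *sd = av_stream_new_side_data(st, AV_PKT_DATA_FALLBACK_TRACK,
                                          sizeof(int));
    if (sd)
        *(int *)sd = fallback_stream_index;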
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Use the new fields directly instead of the ones from AVPicture.
This removes a layer of indirection which serves no practical purpose
whatsoever, and will help in removing the AVPicture structure completely
later.
Every subtitle encoder/decoder seamlessly points to the new arrays,
so it is possible to deprecate AVSubtitleRect.pict.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
This commit introduces the first step of adding support for the
Daala next-generation video codec to FFmpeg. Although still in
development, the codec is showing good progress, with work being
exchanged through IETF drafts. The companies behind Daala are also
participating in the Alliance for Open Media, so it is likely that
elements from Daala will end up in whatever those collaborations
produce, or perhaps the codec itself could be the result.
* commit '948f3c19a8bd069768ca411212aaf8c1ed96b10d':
lavc: Make AVPacket.duration int64, and deprecate convergence_duration
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
Note that convergence_duration had another meaning, one which was in
practice never used. The only real use for it was a 64 bit replacement
for the duration field. It's better just to make duration 64 bits, and
to get rid of it.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
This function intrinsically cannot deal with codec profile fallback
(for H.264 Constrained Baseline especially), and was made redundant
by av_vdpau_bind_context().
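The replacement call, for reference (device and get_proc_address come
from the application's own VDPAU setup):

    av_vdpau_bind_context(avctx, vdp_device, vdp_get_proc_address, 0);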
Signed-off-by: Anton Khirnov <anton@khirnov.net>
* commit 'e3d4784eb31b3ea4a97f2d4c698a75fab9bf3d86':
d3d11va: WindowsPhone requires a mutex around ID3D11VideoContext
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>