The way videotoolbox hooks in as a hwaccel is pretty hacky. The VT decode
API is not invoked until end_frame(), so alloc_frame() returns a dummy
frame with a 1-byte buffer. When end_frame() is eventually called, the
dummy buffer is replaced with the actual decoded data from
VTDecompressionSessionDecodeFrame().
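Roughly, the flow looks like this (a simplified sketch with illustrative
names and signatures, not the literal code in videotoolbox.c):

    #include <libavcodec/avcodec.h>
    #include <libavutil/buffer.h>

    static int vt_alloc_frame_sketch(AVCodecContext *avctx, AVFrame *frame)
    {
        /* VTDecompressionSessionDecodeFrame() has not run yet, so hand
         * the h264 decoder a placeholder backed by a 1-byte buffer. */
        frame->buf[0] = av_buffer_alloc(1);
        return frame->buf[0] ? 0 : AVERROR(ENOMEM);
    }

    static int vt_end_frame_sketch(AVFrame *frame, AVBufferRef *decoded)
    {
        /* The actual VT decode happens here; on success, the dummy
         * buffer is swapped for one wrapping the decoded data. */
        av_buffer_unref(&frame->buf[0]);
        frame->buf[0] = decoded;
        return 0;
    }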
When the VT decoder fails, the frame returned to the h264 decoder from
alloc_frame() remains invalid and should not be used. Before
9747219958, it was accidentally being
returned all the way up to the API user. After that commit, the dummy
frame was unref'd so the user received an error.
However, since that commit, VT hwaccel failures started causing random
segfaults in the h264 decoder. This happened more often on iOS where the
VT implementation is more likely to throw errors on bitstream anomalies.
A recent report of this issue can be seen at
http://ffmpeg.org/pipermail/libav-user/2016-November/009831.html
The issue here is that the dummy frame is still referenced internally by the
h264 decoder, as part of the reflist and cur_pic_ptr. Deallocating the
frame causes assertions like this one to trip later on during decoding:
Assertion h->cur_pic_ptr->f->buf[0] failed at src/libavcodec/h264_slice.c:1340
With this commit, we leave the dummy 1-byte frame intact, but avoid returning it
to the user.
This reverts commit 9747219958.
Signed-off-by: wm4 <nfxjfg@googlemail.com>
If AVVideotoolboxContext.cv_pix_fmt_type is set to 0, don't set the
kCVPixelBufferPixelFormatTypeKey value on the VT decoder.
This makes VT output its native format, which can be much faster on
some hardware iterations (if the native format does not match with
the requested format, it will be converted, which is slow).
The default still forces nv12.
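Usage then looks roughly like this (a minimal sketch; error handling
trimmed):

    #include <libavcodec/avcodec.h>
    #include <libavcodec/videotoolbox.h>

    static int use_native_vt_format(AVCodecContext *avctx)
    {
        AVVideotoolboxContext *vtctx = av_videotoolbox_alloc_context();
        if (!vtctx)
            return AVERROR(ENOMEM);
        /* 0 = don't set kCVPixelBufferPixelFormatTypeKey, so VT picks
         * its native output format instead of converting to nv12. */
        vtctx->cv_pix_fmt_type = 0;
        return av_videotoolbox_default_init2(avctx, vtctx);
    }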
Allow all struct fields to be accessed directly, as long as they're
public.
Before this change, many fields were "public", but could be accessed via
AVOption only. This meant they were effectively not public, but were
present for documentation purposes, which was incredibly confusing at
best.
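For example, with this change both of the following are legitimate uses
of the public API (rc_max_rate and its "maxrate" option are just one
example of an option-backed field):

    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>

    static void set_max_rate(AVCodecContext *avctx)
    {
        /* The AVOption route, previously the only "public" way: */
        av_opt_set_int(avctx, "maxrate", 2000000, 0);
        /* Direct field access, now explicitly allowed as well: */
        avctx->rc_max_rate = 2000000;
    }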
We can pick the correct slice index directly from the ID3D11VideoDecoderOutputView
cast from data[3].
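That is, something along these lines (a sketch using the C-style COM
macros):

    #define COBJMACROS
    #include <d3d11.h>
    #include <libavutil/frame.h>

    static UINT d3d11va_slice_index(const AVFrame *frame)
    {
        /* For D3D11VA frames, data[3] holds the output view. */
        ID3D11VideoDecoderOutputView *view =
            (ID3D11VideoDecoderOutputView *)frame->data[3];
        D3D11_VIDEO_DECODER_OUTPUT_VIEW_DESC desc;
        ID3D11VideoDecoderOutputView_GetDesc(view, &desc);
        /* The view's description carries the real slice index. */
        return desc.Texture2D.ArraySlice;
    }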
Also added myself as maintainer for DXVA2 and D3D11VA.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This decoder can decode all existing SpeedHQ formats (SHQ0–5, 7, and 9),
including correct decoding of the alpha channel.
1080p is decoded at 142 fps on one core of my i7-4600U (2.1 GHz Haswell),
about evenly split between bitstream reader and IDCT. There is currently
no attempt at slice or frame threading, even though the format trivially
supports both.
NewTek very helpfully provided a full set of SHQ samples, as well as
source code for an SHQ2 encoder (not included) and assistance with
understanding some details of the format.
This is what GIMP, ImageMagick, and FreeImage do, and what the
Adobe Photoshop file format specification suggests.
Fixes a sample from ticket #6045.
Reviewed-by: Martin Vignali
The nvidia 375.xx driver introduces support for P016 output surfaces,
for 10bit and 12bit HEVC content (it's also the first driver to support
hardware decoding of 12bit content).
The cuvid API, as far as I can tell, declares only one such output
format, which the driver strings appear to refer to as P016. Of course,
10bit content in P016 is identical to P010, and it is useful for
compatibility purposes to declare the format to be P010 to work with
other components that only know how to consume P010 (and to avoid
triggering swscale conversions that are lossy when they shouldn't be).
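In code, the mapping boils down to something like this (a hedged
sketch; the actual selection logic in the decoder differs in detail):

    #include <libavutil/pixfmt.h>

    static enum AVPixelFormat cuvid_sw_format_sketch(int bit_depth)
    {
        switch (bit_depth) {
        case 8:
            return AV_PIX_FMT_NV12;
        case 10:
            /* 10-bit samples in a P016 surface live in the high bits
             * of each 16-bit word, exactly as P010 defines them, so
             * advertising P010 is lossless and keeps P010-only
             * consumers working. */
            return AV_PIX_FMT_P010LE;
        default:
            return AV_PIX_FMT_P016LE;
        }
    }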
For simplicity, this change does not maintain the previous ability
to output dithered NV12 for 10/12 bit input video - the user will need
to update their driver to decode such videos.
This is in the same vein as c981b1145a.
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The new decode API allows for m:n decode patterns, which is what
you need to use this hardware in a sane way. There are so many
situations where 1:1 doesn't happen naturally that it's a miracle
I got it working as well as I did.
With this change, we can throw all of the crazy heuristics and
sleeps(!) out, and things work correctly.
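With the new API, the m:n pattern is explicit (a minimal sketch; error
handling trimmed):

    #include <libavcodec/avcodec.h>

    static int decode_packet(AVCodecContext *dec, const AVPacket *pkt,
                             AVFrame *frame)
    {
        /* One packet in; zero, one, or many frames out. */
        int ret = avcodec_send_packet(dec, pkt);
        if (ret < 0)
            return ret;
        while ((ret = avcodec_receive_frame(dec, frame)) >= 0)
            av_frame_unref(frame); /* ...consume the frame here... */
        return ret == AVERROR(EAGAIN) ? 0 : ret;
    }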
The slice index expected by D3D11VA is the one from the texture, not
the one from the app's array of texture slices.
In VLC the slices we provide to the decoder don't start from 0, so
pictures appear in bogus order, with possible crashes and corruption
when an invalid index is used.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
* commit '32c8359093d1ff4f45ed19518b449b3ac3769d27':
lavc: export the timestamps when decoding in AVFrame.pts
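For API users this means the timestamp can now be read straight off the
decoded frame (a trivial sketch):

    #include <libavutil/frame.h>

    static int64_t decoded_pts(const AVFrame *frame)
    {
        /* Decoders now fill AVFrame.pts directly; previously the
         * decoded timestamp was only available via pkt_pts. */
        return frame->pts;
    }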
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>