We cannot deprecate it until the new parser API is in place, because of
the way libavformat works. But most users can already simply replace
it with avcodec_free_context(), which will simplify the transition once
it is finally deprecated.
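For most callers the change is mechanical; a minimal sketch, assuming
the usual lavc allocation pattern (error handling shortened):

    AVCodecContext *avctx = avcodec_alloc_context3(codec);
    if (!avctx)
        return AVERROR(ENOMEM);
    /* ... avcodec_open2(), decoding, ... */
    /* frees the context and everything in it, then sets avctx to NULL */
    avcodec_free_context(&avctx);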
This function is supposed to "reset" a codec context to a clean state so
that it can be opened again. The only reason it exists is to allow using
AVStream.codec as a decoding context (after it was already
opened/used/closed by avformat_find_stream_info()). Since that behaviour
is now deprecated, there is no reason for this function to exist
anymore.
Since AVCodecContext contains a lot of complex state, copying a codec
context is not a well-defined operation. The purpose for which it is
typically used (which is well-defined) is copying the stream parameters
from one codec context to another. That is now possible through the
AVCodecParameters API. Therefore, there is no reason for
avcodec_copy_context() to exist.
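A minimal sketch of that well-defined operation, assuming src and dst
are existing codec contexts (the parameter calls are the public lavc
API):

    AVCodecParameters *par = avcodec_parameters_alloc();
    if (!par)
        return AVERROR(ENOMEM);
    ret = avcodec_parameters_from_context(par, src);
    if (ret >= 0)
        ret = avcodec_parameters_to_context(dst, par);
    avcodec_parameters_free(&par);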
Initialize the bit buffer with the correct size (number of bits that will
be read) instead of relying on the bitstream reader overreading the
correct values.
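As a sketch of the pattern (buffer and payload_bits are illustrative
names, not the ones in the patch):

    GetBitContext gb;
    /* size the reader to the bits that will actually be read instead
     * of the whole buffer, so no overread is required */
    int ret = init_get_bits(&gb, buffer, payload_bits);
    if (ret < 0)
        return ret;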
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Signed-off-by: Diego Biurrun <diego@biurrun.de>
For reasons we are not privy to, nvidia decided that the nvenc encoder
should apply aspect ratio compensation to 'DVD like' content, assuming that
the content is not BT.601 compliant, but needs to be BT.601 compliant. In
this context, that means that they make the following questionable
assumptions:
1) If the input dimensions are 720x480 or 720x576, assume the content has
an active area of 704x480 or 704x576.
2) Assume that whatever the input sample aspect ratio is, it does not account
for the difference between 'physical' and 'active' dimensions.
From these assumptions, they then conclude that they can 'help', by adjusting
the sample aspect ratio by a factor of 45/44. And indeed, if you wanted to
display only the 704 wide active area with the same aspect ratio as the full
720 wide image - this would be the correct adjustment factor, but what if you
don't? And more importantly, what if you're used to lavc not making this kind
of adjustment at encode time - because none of the other encoders do this!
And, what if you had already accounted for BT.601 and your input had the
correct attributes? Well, it's going to apply the compensation anyway!
So, if you take some content, and feed it through nvenc repeatedly, it
will keep scaling the aspect ratio every time, stretching your video out
more and more and more.
So, clearly, regardless of whether you want to apply BT.601 aspect
ratio adjustments or not, this is not the way to do it. With any other
lavc encoder, you would do it as part of defining your input parameters
or do the adjustment at playback time, and there's no reason why nvenc
should be any different.
This change adds some logic to undo the compensation that nvenc would
otherwise do.
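Conceptually this amounts to pre-applying the inverse 44/45 factor for
'DVD like' dimensions so that nvenc's internal stretch cancels out; a
sketch under that assumption (the helper is illustrative, not the
literal nvenc.c code):

    #include "libavutil/rational.h"

    static AVRational undo_dvd_sar_compensation(int w, int h, AVRational sar)
    {
        /* nvenc multiplies the SAR by 45/44 for these dimensions, so
         * hand it the inverse to end up with the value we wanted */
        if (w == 720 && (h == 480 || h == 576))
            return av_mul_q(sar, (AVRational){ 44, 45 });
        return sar;
    }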
nvidia engineers have told us that they will work to make this
compensation mechanism optional in a future release of the nvenc
SDK. At that point, we can adapt accordingly.
Signed-off-by: Philip Langdale <philipl@overt.org>
Reviewed-by: Timo Rothenpieler <timo@rothenpieler.org>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
The code needs only a few definitions from cuda.h, so define them
directly when CUDA is not enabled. CUDA is still required for accepting
HW frames as input.
Based on the code by Timo Rothenpieler <timo@rothenpieler.org>.
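The stand-ins amount to a few typedefs guarded by the configure flag; a
sketch of the idea (the exact set of definitions is illustrative):

    #if CONFIG_CUDA
    #include <cuda.h>
    #else
    /* just enough of cuda.h for the HW-frames input path to compile */
    typedef void *CUcontext;
    typedef unsigned long long CUdeviceptr;
    typedef int CUresult;
    #endif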
For some unknown reason enabling these causes proper CBR padding,
so, as there are no known downsides, just always enable them in CBR mode.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
There is no real advantage to listing some codecs or subsystems
separately simply because they are somehow "hw-accelerated"; on the
contrary, it makes them harder to find than in a plain alphabetically
ordered list.
Use the newly created vlc.h directly where only the VLC structures are
needed, instead of including get_bits.h. The VLC and RL_VLC_ELEM
structures are independent of the bitreader.
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Widen the values from limited to full range and use BT.709 where it
should be used according to the video resolution:
SD is BT.601, HD is BT.709
Default to BT.709 due to most observed HDMV content being HD.
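A minimal sketch of the selection, assuming 1280x720 as the SD/HD
cutoff (the threshold and helper name are illustrative):

    static enum AVColorSpace pick_colorspace(int w, int h)
    {
        if (w && h && w < 1280 && h < 720)
            return AVCOL_SPC_BT470BG; /* SD: BT.601 */
        return AVCOL_SPC_BT709;       /* HD and unknown: BT.709 */
    }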
This uses a new MMAL feature, which limits the number of extra frames
that can be buffered within the decoder. VIDEO_MAX_NUM_CALLBACKS can
be defined as a positive or negative number. Positive numbers are
absolute, and can lead to deadlocks if the user underestimates the
number of required buffers. Negative numbers specify the number of extra
buffers, e.g. -1 means no extra buffer, (-1-N) means N extra buffers.
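A sketch of applying such a limit on the decoder's output port,
assuming headers new enough to provide the enum item (the helper is
illustrative; mmal_port_parameter_set_int32() is the stock MMAL
utility):

    #include <interface/mmal/mmal.h>
    #include <interface/mmal/util/mmal_util_params.h>

    static int limit_extra_buffers(MMAL_PORT_T *port, int extra)
    {
        /* -1 - extra requests "extra" buffers beyond the bare minimum */
        MMAL_STATUS_T status = mmal_port_parameter_set_int32(port,
            MMAL_PARAMETER_VIDEO_MAX_NUM_CALLBACKS, -1 - extra);
        return status == MMAL_SUCCESS ? 0 : -1;
    }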
Set a gratuitous default of -11 (N=10). This is much lower than the
firmware default, which appears to be 96.
This is backwards compatible, but needs a symbol only present in newer
firmware headers. (It's an enum item, so it requires a check in
configure.)
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Slight simplification. The result is the same. Also, change the
wording of the message as requested in patch review.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Fixes apparent mmal_port_disable() freezes in ffmmal_stop_decoder() when
calling ffmmal_decode() with flush semantics a large number of times in
a row.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
The assert in ffmmal_stop_decoder() could sometimes trigger. The
packets_buffered counter was indeed not correctly maintained: packets
were not subtracted from it if they were still in the waiting queue.
For some reason, this happened especially with VC-1.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Register mmaldec as an mpeg2 decoder. Supporting mpeg2 in mmaldec is
just a matter of setting the correct MMAL_ENCODING on the input port.
To ease the addition of further supported mmal codecs, a macro is
introduced to generate the decoder and decoder class structs, as
sketched below.
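A sketch of what such a generator can look like (field list and names
are illustrative, not the exact mmaldec.c definitions):

    #define FFMMAL_DEC(NAME, ID) \
        AVCodec ff_##NAME##_mmal_decoder = { \
            .name   = #NAME "_mmal", \
            .type   = AVMEDIA_TYPE_VIDEO, \
            .id     = ID, \
            .init   = ffmmal_init_decoder, \
            .close  = ffmmal_close_decoder, \
            .decode = ffmmal_decode, \
        };

    FFMMAL_DEC(mpeg2, AV_CODEC_ID_MPEG2VIDEO)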
Signed-off-by: Julian Scheel <julian@jusst.de>
Signed-off-by: wm4 <nfxjfg@googlemail.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
There is no avpriv_atomic_get; instead, avpriv_atomic_int_get is to be
used for integers. This fixes building mmaldec.
Signed-off-by: Julian Scheel <julian@jusst.de>
Reviewed-by: wm4 <nfxjfg@googlemail.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
In such a case, decode the MBs in parallel without the loop filter, then
execute the filter serially.
The ref2frm array was previously moved to H264SliceContext. That was
incorrect, since it applies to all the slices and should properly be in
H264Context (it did not actually break decoding, since this distinction
only becomes relevant with slice threading and deblocking_filter=1,
which was not implemented before this commit). The ref2frm array is thus
moved back to H264Context.
It is unconditionally initialized in decode_postinit() and then
immediately used in one place further below. All the other places where
it is accessed are just useless fluff.
Make the SPS/PPS parsing independent of the H264Context, to allow
decoupling the parser from the decoder. The change is modelled after the
one done earlier for HEVC.
Move the dequant buffers to the PPS to avoid complex checks for
whether they have changed and an expensive copy for frame threads.
Modifying global header extradata in encode_frame is an API violation
and only happens to work currently because mov writes its header
at the end of the file.
Heavily based on a patch from 2012 by Nicolas George.
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Right now they are the first encoders for those codecs in the list, so
they are selected when the caller requests a codec by id.
Since they require special treatment, they should not be selected by
default if other encoders (e.g. libx264/libx265) are available.
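For illustration, this is the lookup distinction in question (libx264
standing in for the "other encoders"):

    /* by-id lookup returns the first encoder registered for the codec;
     * with this change that is a general-purpose one when available */
    const AVCodec *dflt = avcodec_find_encoder(AV_CODEC_ID_H264);
    /* the special-treatment encoders remain reachable by name */
    const AVCodec *named = avcodec_find_encoder_by_name("libx264");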
This can only be used if the input data happens to be laid out
exactly correctly.
This might not be supported on all encoders, so only enable it
with an option, but enable it automatically on the Raspberry Pi,
where it is known to be supported.
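A sketch of the layout constraint this implies for a yuv420p frame (the
check is illustrative of the requirement, not the literal omx.c test):

    /* usable only when the frame already matches the encoder buffer:
     * expected stride, and all three planes contiguous in memory */
    static int frame_matches_buffer(const AVFrame *f, int stride, int height)
    {
        return f->linesize[0] == stride &&
               f->linesize[1] == stride / 2 &&
               f->linesize[2] == stride / 2 &&
               f->data[1] == f->data[0] + stride * height &&
               f->data[2] == f->data[1] + stride * height / 4;
    }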
Signed-off-by: Martin Storsjö <martin@martin.st>
The Raspberry Pi uses the alternative API/ABI for OMX; this makes
such builds incompatible with all the normal OpenMAX implementations.
Since this can't easily be detected at configure time (one can
build for the Raspberry Pi's OMX just fine using the generic, pristine
Khronos OpenMAX IL headers, with no need for their own extensions),
require a separate configure switch for it instead.
The Broadcom host library can't be unloaded once loaded and started;
the deinit function that it provides is a no-op, and once started,
it has background threads running, so dlclose()ing it makes it crash.
Signed-off-by: Martin Storsjö <martin@martin.st>