Both are codec properties and not encoder capabilities. The relevant
AVCodecDescriptor.props flags exist for this purpose.
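For illustration, a minimal sketch of querying such properties through the
descriptor API; the specific flags shown (AV_CODEC_PROP_INTRA_ONLY and
AV_CODEC_PROP_LOSSLESS) are assumptions, since the message does not name
them here:

    #include <libavcodec/avcodec.h>
    #include <stdio.h>

    /* Query intra-only/lossless as properties of the codec itself rather
     * than as capabilities of one particular encoder implementation. */
    static void print_codec_props(enum AVCodecID id)
    {
        const AVCodecDescriptor *desc = avcodec_descriptor_get(id);
        if (!desc)
            return;
        printf("%s: intra_only=%d lossless=%d\n", desc->name,
               !!(desc->props & AV_CODEC_PROP_INTRA_ONLY),
               !!(desc->props & AV_CODEC_PROP_LOSSLESS));
    }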
Signed-off-by: James Almer <jamrial@gmail.com>
Previously, there was no way to flush an encoder such that after
draining, the encoder could be used again. We generally suggested
that clients tear down and replace the encoder instance in these
situations. However, for at least some hardware encoders, the cost of
this teardown/replace cycle is very high, which can get in the way of
some use cases, for example segmented encoding with nvenc.
To help address that use case, we added avcodec_flush_buffers() support
to nvenc and things worked in practice, although it was never clearly
documented whether this should work. There was only one previous example
of an encoder implementing the flush callback (audiotoolboxenc), and it's
unclear whether that was intentional. However, it was clear that calling
avcodec_flush_buffers() on any other encoder would leave the encoder in
an undefined state, and that's not great.
As part of cleaning this up, this change introduces a formal capability
flag for encoders that support flushing and ensures a flush call is a
no-op for any other encoder. This allows client code to check if it is
meaningful to call flush on an encoder before actually doing it.
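For illustration, a minimal sketch of that client-side check, assuming the
new flag is the AV_CODEC_CAP_ENCODER_FLUSH capability bit:

    #include <libavcodec/avcodec.h>

    /* Flush only when the encoder advertises support; on any other
     * encoder the call is now a no-op, so the check simply tells the
     * client whether flushing is meaningful at all. */
    static void maybe_flush_encoder(AVCodecContext *avctx)
    {
        if (av_codec_is_encoder(avctx->codec) &&
            (avctx->codec->capabilities & AV_CODEC_CAP_ENCODER_FLUSH))
            avcodec_flush_buffers(avctx);
    }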
I have not attempted to separate the steps taken inside
avcodec_flush_buffers() because it's not doing anything that's wrong
for an encoder. But I did add a sanity check to reject attempts to
flush a frame-threaded encoder, because I couldn't wrap my head around
whether that code path was actually safe. As this combination doesn't
exist today, we'll deal with it if it ever comes up.
This fixes #6940
Although undocumented, AudioToolbox seems to require the data supplied
by the callback (i.e. ffat_encode_callback) to remain unchanged until
the next time the callback is called. In the old implementation, the
AVBuffer backing the frame is recycled after the frame is freed, and
somebody else (maybe the decoder) will write into the AVBuffer and
change the data. AudioToolbox then encodes some wrong data and noise
is produced. Retaining a frame reference solves this problem.
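As a rough sketch of the idea (the function and frame names are
illustrative, not the actual audiotoolboxenc code), the encoder keeps a
long-lived reference so the frame's buffers cannot be recycled before the
next callback:

    #include <libavutil/frame.h>

    /* encoding_frame is a hypothetical AVFrame allocated once with
     * av_frame_alloc() at init time and kept in the encoder context. */
    static int retain_input_frame(AVFrame *encoding_frame, const AVFrame *frame)
    {
        /* Drop the previous reference only now, after AudioToolbox is done
         * reading the old data, then reference the new frame so its buffer
         * stays valid until the next callback. */
        av_frame_unref(encoding_frame);
        return av_frame_ref(encoding_frame, frame);
    }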
Signed-off-by: James Almer <jamrial@gmail.com>
Explicitly identify decoder/encoder wrappers with a common name. This
saves API users from guessing by the name suffix. For example, they
don't have to guess that "h264_qsv" is the h264 QSV implementation, and
instead they can just check the AVCodec .id and .wrapper_name fields.
Explicitly mark AVCodec entries that are hardware decoders, or most
likely hardware decoders, with new AV_CODEC_CAPs. The purpose is to let
API users list hardware decoders in a more generic way. The proposed
AVCodecHWConfig does not provide this information fully, because it is
concerned with decoder configuration, not with whether hardware is
actually used.
AV_CODEC_CAP_HYBRID exists specifically for QSV, which can fall back to
a software implementation when the hardware is not capable.
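A minimal sketch of the kind of generic listing this enables; the flag
AV_CODEC_CAP_HARDWARE is assumed to be the companion of the
AV_CODEC_CAP_HYBRID flag mentioned above, and the iteration uses the
current av_codec_iterate() API:

    #include <libavcodec/avcodec.h>
    #include <stdio.h>

    /* List decoders that are (or may be) hardware-backed, and the API
     * they wrap, without parsing name suffixes such as "_qsv". */
    static void list_hw_decoders(void)
    {
        void *iter = NULL;
        const AVCodec *c;
        while ((c = av_codec_iterate(&iter))) {
            if (!av_codec_is_decoder(c))
                continue;
            if (c->capabilities & (AV_CODEC_CAP_HARDWARE | AV_CODEC_CAP_HYBRID))
                printf("%s (wraps: %s)\n", c->name,
                       c->wrapper_name ? c->wrapper_name : "none");
        }
    }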
Based on a patch by Philip Langdale <philipl@overt.org>.
Merges Libav commit 47687a2f8a.
AudioConverterFillComplexBuffer() doesn't always call its callback. A frame
queue is used to prevent skipped audio samples.
Signed-off-by: Rick Kern <kernrj@gmail.com>
The build failure here is caused by the enum value not being defined, but
as long as we're on a newer SDK that has it, it's safe to use it even
when our deployment target is older. Setting the property will fail at
run time on older systems, but we don't treat errors there as fatal.
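A minimal sketch of that non-fatal handling, assuming the property is set
through AudioConverterSetProperty() (the actual property involved is not
named here):

    #include <AudioToolbox/AudioToolbox.h>
    #include <stdio.h>

    /* On systems older than the SDK the value came from, setting the
     * property fails at run time; log and carry on instead of aborting. */
    static void set_optional_property(AudioConverterRef converter,
                                      AudioConverterPropertyID prop,
                                      UInt32 value)
    {
        OSStatus status = AudioConverterSetProperty(converter, prop,
                                                    sizeof(value), &value);
        if (status != noErr)
            fprintf(stderr, "setting property 0x%x failed, ignoring\n",
                    (unsigned)prop);
    }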
- size variables were used in a confusing way
- incorrect size var use led to channel layouts not being set properly
- channel layouts were incorrectly mapped for >2-channel AAC
- bitrates not accepted by the encoder were discarded instead of being clamped (see the clamping sketch after this list)
- some minor style/indentation fixes
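For the clamping item above, a rough sketch of the idea; the range values
are assumed to come from the converter (e.g. via the
kAudioConverterApplicableEncodeBitRates property), and the function name
is hypothetical:

    #include <stdint.h>

    /* Clamp a requested bitrate into the supported [min, max] range
     * instead of silently discarding out-of-range values. */
    static int64_t clamp_bitrate(int64_t requested, int64_t min_rate,
                                 int64_t max_rate)
    {
        if (requested < min_rate)
            return min_rate;
        if (requested > max_rate)
            return max_rate;
        return requested;
    }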