Currently, the new decoding API is little more than a wrapper around the
old deprecated one. This is problematic, since it prevents making full
use of the flexibility added by the new API. The old API should also be
removed at some point in the future.
Reorganize the code so that the new send_packet/receive_frame functions
call the actual decoding directly and change the old deprecated
avcodec_decode_* functions into wrappers around the new API.
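In rough terms, the old one-shot call can be expressed on top of the new
API like this (a simplified sketch, not the actual lavc code):

    #include <libavcodec/avcodec.h>

    static int compat_decode(AVCodecContext *avctx, AVFrame *frame,
                             int *got_frame, const AVPacket *pkt)
    {
        int ret;

        *got_frame = 0;

        /* a real wrapper also has to buffer the packet when send_packet()
         * returns AVERROR(EAGAIN); omitted here for brevity */
        ret = avcodec_send_packet(avctx, pkt);
        if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
            return ret;

        ret = avcodec_receive_frame(avctx, frame);
        if (ret >= 0) {
            *got_frame = 1;
            return 0;
        }
        return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
    }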
The internal API for decoders changes as well. Before this commit, it
mirrored the public API, so decoders had to implement both send_packet()
and receive_frame() callbacks. This turned out to require awkward
constructs in both the decoders and the generic code. After this commit,
decoders only implement the receive_frame() callback and call a new
internal function, ff_decode_get_packet(), to obtain input data, in the
same manner as the bitstream filters now work.
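A hypothetical decoder under the new internal API would then look
roughly as follows (names are illustrative and the code builds against
libavcodec's internal headers; only the shape of the callback matters):

    typedef struct FooContext {
        AVPacket pkt;
    } FooContext;

    static int foo_receive_frame(AVCodecContext *avctx, AVFrame *frame)
    {
        FooContext *s = avctx->priv_data;
        int ret;

        if (!s->pkt.data) {
            /* pull the next input packet from the generic code;
             * AVERROR(EAGAIN) means more input has to be sent first */
            ret = ff_decode_get_packet(avctx, &s->pkt);
            if (ret < 0)
                return ret;
        }

        /* ... decode s->pkt into frame; av_packet_unref(&s->pkt) once it
         * is fully consumed; return 0 when a frame is ready ... */
        return AVERROR(EAGAIN);
    }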
avcodec will now always make a reference to the input packet, which means
that non-refcounted input packets will be copied. Keeping the previous
behaviour, where this copy could sometimes be avoided, would make the
code significantly more complex and fragile for only dubious gains,
since packets are typically small and everyone who cares about
performance should use refcounted packets anyway.
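For callers this simply means submitting refcounted packets; a minimal
sketch (assuming avctx is an opened decoder context):

    #include <string.h>
    #include <libavcodec/avcodec.h>

    static int send_refcounted(AVCodecContext *avctx,
                               const uint8_t *data, int size)
    {
        AVPacket *pkt = av_packet_alloc();
        int ret;

        if (!pkt)
            return AVERROR(ENOMEM);

        /* av_new_packet() attaches an AVBuffer-backed, refcounted payload,
         * so avcodec only has to add a reference instead of copying */
        ret = av_new_packet(pkt, size);
        if (ret >= 0) {
            memcpy(pkt->data, data, size);
            ret = avcodec_send_packet(avctx, pkt);
        }

        av_packet_free(&pkt);
        return ret;
    }

Packets obtained from av_read_frame() are already refcounted, so
demuxer-fed code gets this behaviour for free.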
The current code stores a pointer to the packet passed to the decoder,
which is then used during get_buffer() for timestamps and side data
passthrough. However, since this is a pointer to user data which we do
not own, storing it is potentially dangerous. It is also ill-defined for
the new decoding API with split input/output.
Fix this problem by making an explicit internally owned copy of the
packet properties.
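The mechanism amounts to the following (function and field names are
illustrative, not the exact internal layout):

    /* keep an internally owned packet that carries only the properties
     * (timestamps, duration, flags, side data), never the payload */
    static int store_pkt_props(AVPacket *last_pkt_props, const AVPacket *pkt)
    {
        av_packet_unref(last_pkt_props);
        return av_packet_copy_props(last_pkt_props, pkt);
    }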
Currently the decoded frame's timestamp is exported as AVFrame.pkt_pts,
which is also the only use for that field. The reason for this
arrangement is that lavc used to export various codec-specific "timing"
information in AVFrame.pts, which is no longer done.
Since it is confusing for callers to have a separate field that is used
only for decoder timestamps and nothing else, deprecate pkt_pts and use
just AVFrame.pts everywhere.
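On the caller side the change is a straight field swap; a minimal
sketch:

    #include <libavcodec/avcodec.h>

    static int64_t decoded_frame_pts(AVCodecContext *avctx, AVFrame *frame)
    {
        if (avcodec_receive_frame(avctx, frame) < 0)
            return AV_NOPTS_VALUE;
        return frame->pts;          /* previously frame->pkt_pts */
    }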
Until now, the decoding API was restricted to outputting 0 or 1 frames
per input packet, and it enforced a somewhat rigid dataflow in general.
This new API seeks to relax these restrictions by decoupling input and
output. Instead of doing a single call on each decode step, which may
consume the packet and may produce output, the new API requires the user
to send input first, and then ask for output.
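The intended calling pattern looks roughly like this (a minimal sketch):

    #include <libavcodec/avcodec.h>

    static int decode_packet(AVCodecContext *avctx, AVFrame *frame,
                             const AVPacket *pkt)
    {
        /* pkt == NULL enters draining mode and flushes buffered frames */
        int ret = avcodec_send_packet(avctx, pkt);
        if (ret < 0)
            return ret;

        /* a single packet may now yield zero, one or several frames */
        while ((ret = avcodec_receive_frame(avctx, frame)) >= 0) {
            /* ... use frame ... */
            av_frame_unref(frame);
        }
        return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
    }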
For now, there are no codecs supporting this API. The API can work with
codecs using the old API, and most of the code added here exists to make
them interoperate. The reverse is not possible, although for audio it
might be.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
This API is intended to allow passing around codec parameters without
using a full AVCodecContext (which also contains codec options and
encoder/decoder state).
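A minimal sketch of the intended use, copying the parameters out of one
context and applying them to another:

    #include <libavcodec/avcodec.h>

    static int copy_codec_params(AVCodecContext *dst, const AVCodecContext *src)
    {
        AVCodecParameters *par = avcodec_parameters_alloc();
        int ret;

        if (!par)
            return AVERROR(ENOMEM);

        ret = avcodec_parameters_from_context(par, src);
        if (ret >= 0)
            ret = avcodec_parameters_to_context(dst, par);

        avcodec_parameters_free(&par);
        return ret;
    }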
This only matters in the unlikely situation where the user sets
ticks_per_frame and the timebase to values large enough to overflow.
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Use the new fields directly instead of the ones from AVPicture.
This removes a layer of indirection which serves no practical purpose
whatsoever, and will help in removing the AVPicture structure completely
later.
Every subtitle encoder/decoder seamlessly points to the new arrays,
so it is possible to deprecate AVSubtitleRect.pict.
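For callers, the change is a direct field swap; a minimal sketch for a
bitmap subtitle rectangle:

    #include <libavcodec/avcodec.h>

    static const uint8_t *sub_rect_pixels(const AVSubtitleRect *rect,
                                          int *linesize)
    {
        *linesize = rect->linesize[0];   /* previously rect->pict.linesize[0] */
        return rect->data[0];            /* previously rect->pict.data[0] */
    }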
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
The generic code in utils.c sets the AVFrame.pkt_dts field from the
packet the frame was supposedly decoded from. This does not have to hold
for a fully asynchronous decoder like mmaldec, so the field could be
overwritten with an incorrect value. Even if the decoder doesn't
determine the DTS (but sets it to AV_NOPTS_VALUE), it's impossible to
determine a correct value in utils.c.
Decoders can now be marked with FF_CODEC_CAP_SETS_PKT_DTS, in which case
utils.c won't overwrite the field. The decoders are expected to set this
field (even if they only set it to AV_NOPTS_VALUE).
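A sketch of the mechanism, using the internal decoder API of the time
(the decoder and codec names here are illustrative):

    static int async_decode(AVCodecContext *avctx, void *data,
                            int *got_frame, AVPacket *avpkt)
    {
        AVFrame *frame = data;
        /* ... fully asynchronous decoding ... */
        frame->pkt_dts = AV_NOPTS_VALUE;   /* the decoder owns this field now */
        *got_frame = 1;
        return avpkt->size;
    }

    AVCodec ff_example_decoder = {
        /* ... usual AVCodec fields ... */
        .caps_internal = FF_CODEC_CAP_SETS_PKT_DTS,
    };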
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
The rationale is that coded_frame was only used to communicate key_frame,
pict_type and quality to the caller, as well as a few other random fields,
in a non-predictable, let alone consistent, way.
There was agreement that there was no use case for coded_frame, as it is
a full-sized AVFrame container used for just 2-3 int-sized properties,
which should not even belong in the AVCodecContext in the first place.
The appropriate AVPacket flag can be used instead of key_frame, while
quality is exported with the new AVPacketSideData quality factor.
There is no replacement for the other fields as they were unreliable,
mishandled or just not used at all.
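A caller-side sketch of the replacement (the side data type name matches
the one introduced here, and the payload is assumed to be a native int
carrying the quality value):

    #include <string.h>
    #include <libavcodec/avcodec.h>

    static void inspect_packet(AVPacket *pkt)
    {
        int key = !!(pkt->flags & AV_PKT_FLAG_KEY);   /* replaces key_frame */
        int size = 0;
        uint8_t *sd = av_packet_get_side_data(pkt, AV_PKT_DATA_QUALITY_FACTOR,
                                              &size);
        if (sd && size >= (int)sizeof(int)) {
            int quality;
            memcpy(&quality, sd, sizeof(quality));  /* former coded_frame->quality */
            /* ... */
        }
        (void)key;
    }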
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
Allocating coded_frame is what most encoders do anyway, so it makes
sense to always allocate and free it in a single place. Moreover, a lot
of encoders freed the frame with av_freep() instead of the correct API,
av_frame_free().
This brings uniformity to encoder behaviour and prevents applications
from erroneously accessing this field when it is not allocated.
Additionally, it helps isolate the encoders that export information
through coded_frame, and heavily simplifies its deprecation.
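In rough terms, the generic code now does the equivalent of the
following (function names are illustrative):

    static int encoder_open_generic(AVCodecContext *avctx)
    {
        avctx->coded_frame = av_frame_alloc();
        return avctx->coded_frame ? 0 : AVERROR(ENOMEM);
    }

    static void encoder_close_generic(AVCodecContext *avctx)
    {
        /* av_frame_free(), not av_freep(): it also releases the frame's
         * internally allocated buffers and side data */
        av_frame_free(&avctx->coded_frame);
    }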
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>