The FDK decoder can produce a mono or stereo downmix from multichannel
streams. Such streams may contain metadata that controls the downmix
process. To apply the downmix correctly in streams carrying downmix
metadata, the decoder needs an ancillary buffer. The decoder provides no
API to report whether such metadata is present in the stream, so the
ancillary buffer is always allocated whenever a downmix is requested.

When downmixing multichannel streams, the decoder requires the output
buffer passed to the aacDecoder_DecodeFrame call to be large enough to
hold the actual number of channels contained in the stream, not just the
downmixed channels. For example, for a 5.1ch to stereo downmix, the
output buffer must be allocated for 6 channels even though the output is
only two channels.

Due to this requirement, the output buffer is allocated at the maximum
size whenever a downmix is requested (and also during decoder init).
When a downmix is requested, the buffer used for output during init is
also used for as long as the decoder is open. Otherwise, the initial
decoder output buffer is freed and the decoder decodes straight into the
output AVFrame.
Signed-off-by: Martin Storsjö <martin@martin.st>
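As a rough sketch of the buffer handling described above, assuming the
standard libFDK aacDecoder API (the downmix parameter name differs
between FDK versions); MAX_CHANNELS, FRAME_SIZE and the buffer sizes are
illustrative assumptions, not values taken from this patch:

    #include <fdk-aac/aacdecoder_lib.h>

    /* Illustrative sizes, not taken from the patch. */
    #define MAX_CHANNELS 8     /* worst-case channel count of the input stream  */
    #define FRAME_SIZE   2048  /* maximum samples per channel per decoded frame */

    static INT_PCM output[MAX_CHANNELS * FRAME_SIZE]; /* sized for the stream   */
    static UCHAR   anc_buf[128];                      /* holds downmix metadata */

    /* One-time setup: request a stereo downmix and register the ancillary
     * buffer needed when the stream carries downmix metadata. */
    static void setup_stereo_downmix(HANDLE_AACDECODER dec)
    {
        aacDecoder_SetParam(dec, AAC_PCM_MAX_OUTPUT_CHANNELS, 2);
        aacDecoder_AncDataInit(dec, anc_buf, sizeof(anc_buf));
    }

    /* Per frame: the output buffer must hold the channel count of the stream
     * (e.g. 6 for 5.1), even though only 2 channels remain after the downmix. */
    static AAC_DECODER_ERROR decode_one_frame(HANDLE_AACDECODER dec)
    {
        return aacDecoder_DecodeFrame(dec, output,
                                      sizeof(output) / sizeof(output[0]), 0);
    }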
* commit '1ac5a29b2e5ddeae068deb9d6e0e803a91941d4d':
imc: fix order of operations in coefficients read
Merged-by: Michael Niedermayer <michaelni@gmx.at>
The framerate and the time_base are not just inverses of each other.
This should restore the behavior from before the introduction of
framerate.
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
* commit '7ea1b3472a61de4aa4d41b571e99418e4997ad41':
lavc: deprecate the use of AVCodecContext.time_base for decoding
Conflicts:
libavcodec/avcodec.h
libavcodec/h264.c
libavcodec/mpegvideo_parser.c
libavcodec/utils.c
libavcodec/version.h
Merged-by: Michael Niedermayer <michaelni@gmx.at>
* commit 'c1724623ce0433c6a9ee72133b1fd4db75ec7193':
vdpau: have av_vdpau_bind_context() fail on unsupported flag
Merged-by: Michael Niedermayer <michaelni@gmx.at>
When decoding, this field holds the inverse of the framerate that can be
written in the headers for some codecs. Using a field called 'time_base'
for this is very misleading, as there are no timestamps associated with
it. Furthermore, this field is used for a very different purpose during
encoding.
Add a new field, called 'framerate', to replace the use of time_base for
decoding.
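As an illustration of the intended usage change (not part of the commit
itself), code that used to invert time_base to recover the coded frame
rate would now read the new field directly; the helper below is
hypothetical:

    #include <libavcodec/avcodec.h>
    #include <libavutil/rational.h>

    /* Hypothetical helper: recover the coded frame rate from a decoder context. */
    static AVRational coded_framerate(const AVCodecContext *avctx)
    {
        /* Deprecated: "time_base" on the decoding side was really an inverted
         * frame rate, with no timestamps attached to it.
         *
         *     return av_inv_q(avctx->time_base);
         */

        /* New dedicated field added by this commit; it is left unset when the
         * codec headers carry no frame rate. */
        return avctx->framerate;
    }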
Decoding acceleration may work even if the codec level is higher than
the limit advertised by the VDPAU driver, or the user may consider the
problem acceptable. This flag allows skipping the codec level capability
checks and proceeding with decoding.
Applications should obviously not set this flag by default, but only if
the user explicitly requested this behavior (and presumably knows how
to turn it back off if it fails).
Signed-off-by: Anton Khirnov <anton@khirnov.net>
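A minimal sketch of how an application might opt in, assuming the flag
in question is the AV_HWACCEL_FLAG_IGNORE_LEVEL bit accepted by
av_vdpau_bind_context(); device, get_proc_address and ignore_level are
placeholders for values the application obtains elsewhere:

    #include <libavcodec/avcodec.h>
    #include <libavcodec/vdpau.h>

    /* Bind a VDPAU device to a decoder context, skipping the codec level
     * checks only when the user explicitly asked for it. */
    static int bind_vdpau(AVCodecContext *avctx, VdpDevice device,
                          VdpGetProcAddress *get_proc_address, int ignore_level)
    {
        unsigned flags = 0;

        if (ignore_level)  /* only on explicit user request */
            flags |= AV_HWACCEL_FLAG_IGNORE_LEVEL;

        /* With the commit referenced above, passing a flag the hwaccel does
         * not support makes this call fail instead of being silently ignored. */
        return av_vdpau_bind_context(avctx, device, get_proc_address, flags);
    }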
* commit '2df0c32ea12ddfa72ba88309812bfb13b674130f':
lavc: use a separate field for exporting audio encoder padding
Conflicts:
libavcodec/audio_frame_queue.c
libavcodec/avcodec.h
libavcodec/libvorbisenc.c
libavcodec/utils.c
libavcodec/version.h
libavcodec/wmaenc.c
Merged-by: Michael Niedermayer <michaelni@gmx.at>
* commit '1f29e5d7a2b0950f3b6820896e97e2c02e6a10a9':
h263dec: call get_format after setting resolution and profile
Merged-by: Michael Niedermayer <michaelni@gmx.at>
Currently, the amount of padding inserted at the beginning by some audio
encoders is exported through AVCodecContext.delay. However
- the term 'delay' is heavily overloaded and can have multiple different
meanings even in the case of audio encoding.
- this field has entirely different meanings, depending on whether the
codec context is used for encoding or decoding (and has yet another
different meaning for video), preventing generic handling of the codec
context.
Therefore, add a new field -- AVCodecContext.initial_padding. It could
conceivably be used for decoding as well at a later point.
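A hedged sketch of how a consumer might read the new field instead of
the overloaded delay (not code from the patch); enc is assumed to be an
opened audio encoder context:

    #include <libavcodec/avcodec.h>

    /* Hypothetical helper: how many leading samples of the encoder's output a
     * muxer or player should drop.  "enc" is an opened audio encoder context. */
    static int samples_to_skip(const AVCodecContext *enc)
    {
        /* Previously read from the overloaded "delay" field; with this change
         * it lives in the dedicated initial_padding field. */
        return enc->initial_padding;
    }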
Move the code after the existing NULL check.
Fixes: signal_sigsegv_844d59_10_signal_sigsegv_a17bb7_366_mpegts_mpeg2video_mp2_dvbsub_topfield.rec
Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
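The code being moved is not shown in this log; purely as a generic
illustration of the pattern the fix follows, the dereference has to come
after the NULL check rather than before it (ctx and its field are
placeholders):

    /* Generic illustration only; not the actual code touched by this commit. */
    struct ctx { int field; };

    static int read_field(const struct ctx *c)
    {
        /* Buggy ordering dereferences c before checking it, crashing on NULL:
         *
         *     int v = c->field;
         *     if (!c)
         *         return -1;
         *     return v;
         */

        /* Fixed ordering: dereference only after the NULL check. */
        if (!c)
            return -1;
        return c->field;
    }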