This merges commits 8e2ea69135 and
096a8effa3 by Anton Khirnov, with the
following change:
- extract_extradata_check() is added to check whether the codec is supported
by the bsf before trying to initialize it. This behaviour is similar to
the old AVCodecParser.split checks.
The FATE reference changes are due to NAL units being filtered out that the
old AVCodecParser.split implementation left in place.
Decoding is unchanged, as the functions that parse extradata simply
ignored those unnecessary NAL units.
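For illustration, a rough sketch of how a caller could pull extradata out of a
stream's first packet with the new bsf (the helper below and its error handling
are hypothetical; only the av_bsf_* API, the "extract_extradata" filter name
and the AV_PKT_DATA_NEW_EXTRADATA side data type come from the actual change):

    #include <libavcodec/avcodec.h>
    #include <libavutil/mem.h>

    /* Hypothetical helper: feed one packet through the extract_extradata bsf
     * and copy the AV_PKT_DATA_NEW_EXTRADATA side data it attaches.
     * Note that pkt is consumed and replaced by the filtered packet. */
    static int extract_extradata(const AVCodecParameters *par, AVPacket *pkt,
                                 uint8_t **extradata, int *extradata_size)
    {
        const AVBitStreamFilter *f = av_bsf_get_by_name("extract_extradata");
        AVBSFContext *bsf = NULL;
        int ret;

        *extradata      = NULL;
        *extradata_size = 0;

        if (!f)
            return AVERROR_BSF_NOT_FOUND;

        ret = av_bsf_alloc(f, &bsf);
        if (ret < 0)
            return ret;

        ret = avcodec_parameters_copy(bsf->par_in, par);
        if (ret >= 0)
            ret = av_bsf_init(bsf);
        if (ret >= 0)
            ret = av_bsf_send_packet(bsf, pkt);
        if (ret >= 0)
            ret = av_bsf_receive_packet(bsf, pkt);

        if (ret >= 0) {
            int size;
            uint8_t *data = av_packet_get_side_data(pkt, AV_PKT_DATA_NEW_EXTRADATA,
                                                    &size);
            if (data) {
                *extradata = av_memdup(data, size);
                *extradata_size = *extradata ? size : 0;
            }
        }

        av_bsf_free(&bsf);
        return ret;
    }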
Signed-off-by: James Almer <jamrial@gmail.com>
This reverts commit 1c193ac1f9, reversing
changes made to 7ebc9f8df4.
Several FATE tests started failing after this merge, so it's reverted
until it can be properly fixed.
* commit '8e2ea691351c5079cdab245ff7bfa5c0f3e3bfe4':
lavf: use the new bitstream filter for extracting extradata
Merged-by: James Almer <jamrial@gmail.com>
* commit '785c25443b56adb6dbbb78d68cccbd9bd4a42e05':
movenc: Apply offsets on timestamps when peeking into interleaving queues
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
This allows a consumer to run the muxer's init function without actually
writing the header, which is useful in chained muxers that support
automatic bitstream filtering.
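From the consumer's side, the split roughly enables the following pattern (a
sketch assuming the avformat_init_output() entry point and an already
configured output context oc; whether this particular commit exposes the init
step publicly or only internally is not stated here):

    AVDictionary *opts = NULL;
    int ret;

    /* Run the muxer's init step only; nothing is written yet. */
    ret = avformat_init_output(oc, &opts);
    if (ret < 0)
        return ret;

    /* ... let automatically inserted bitstream filters see some packets,
     * e.g. to build global extradata ... */

    /* Now actually write the header. */
    ret = avformat_write_header(oc, &opts);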
Thanks to Mathieu Malaterre <malat@debian.org> for reporting the
Que/Queue typo. (https://bugs.debian.org/839542)
Reviewed-by: Lou Logan <lou@lrcd.com>
Signed-off-by: Andreas Cadhalpun <Andreas.Cadhalpun@googlemail.com>
This also fixes a minor bug introduced in the codecpar conversion, where
the termination condition for extracting the extradata did not match
the actual extradata-setting code. As a result, the packet durations
made up by lavf go back to their values before the codecpar conversion.
That is of little consequence since that code should eventually be
dropped completely.
Add the ff_format_output_open utility function to wrap the io_open
callback of the AVFormatContext structure.
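A sketch of what such a wrapper might look like (the actual signature and body
in lavf's internal.h may differ; only the io_open callback itself is given):

    #include "avformat.h"
    #include "internal.h"

    /* Open the main output IO context through the user-overridable
     * io_open callback instead of calling avio_open2() directly. */
    int ff_format_output_open(AVFormatContext *s, const char *url,
                              AVDictionary **options)
    {
        if (!s->oformat)
            return AVERROR(EINVAL);
        return s->io_open(s, &s->pb, url, AVIO_FLAG_WRITE, options);
    }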
Signed-off-by: Jan Sebechlebsky <sebechlebskyjan@gmail.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
* commit 'e1eb0fc960163402bbb4e630185790488f7d28ed':
movenc: Use packets in interleaving queues for the duration at the end of fragments
Merged-by: Matthieu Bouron <matthieu.bouron@stupeflix.com>
As long as the caller only writes packets using av_interleaved_write_frame
with no manual flushing, this should allow us to always have accurate
durations at the end of fragments, since there should be at least
one queued packet in each stream (except for the stream where the
current packet is being written, but if the muxer itself does the
cutting of fragments, it also has info about the next packet for that
stream).
Signed-off-by: Martin Storsjö <martin@martin.st>
Also, make every addition except for sidedata part of version 1 instead of the
new version 2.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
It will be used by text subtitle demuxers to construct format instructions
straight into extradata. They all currently use a similar function that
accepts an AVCodecContext instead.
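A sketch of the intended pattern, writing a built-up AVBPrint buffer into
codecpar extradata (the helper name is illustrative and error handling is
minimal; the real lavf helper may differ):

    #include <string.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/bprint.h>
    #include <libavutil/mem.h>

    static int bprint_to_extradata(AVCodecParameters *par, AVBPrint *buf)
    {
        char *str;
        int ret = av_bprint_finalize(buf, &str);
        if (ret < 0)
            return ret;

        /* Copy the formatted text, NUL-terminated and padded as required
         * for extradata buffers. */
        par->extradata_size = strlen(str);
        par->extradata = av_mallocz(par->extradata_size +
                                    AV_INPUT_BUFFER_PADDING_SIZE);
        if (!par->extradata) {
            av_free(str);
            par->extradata_size = 0;
            return AVERROR(ENOMEM);
        }
        memcpy(par->extradata, str, par->extradata_size);
        av_free(str);
        return 0;
    }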
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
This can be used for formats which write all format metadata as strings to
files, so that non-standard creation times such as 'now' will be parsed.
The standardized creation time is UTC ISO 8601 with microsecond precision.
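Roughly, the normalization could be done as follows (a sketch only;
av_parse_time() and the dictionary API are real, but the surrounding helper is
illustrative and not the actual lavf code):

    #include <stdio.h>
    #include <time.h>
    #include <libavutil/dict.h>
    #include <libavutil/parseutils.h>

    /* Illustrative: parse whatever string was stored as "creation_time"
     * (including relative values such as "now") and rewrite it as
     * ISO 8601 UTC with microsecond precision. */
    static void standardize_creation_time(AVDictionary **metadata)
    {
        AVDictionaryEntry *e = av_dict_get(*metadata, "creation_time", NULL, 0);
        struct tm tmbuf;
        char buf[64];
        int64_t t;
        time_t sec;

        if (!e || av_parse_time(&t, e->value, 0) < 0)
            return;

        sec = t / 1000000;
        gmtime_r(&sec, &tmbuf);
        snprintf(buf, sizeof(buf), "%04d-%02d-%02dT%02d:%02d:%02d.%06dZ",
                 tmbuf.tm_year + 1900, tmbuf.tm_mon + 1, tmbuf.tm_mday,
                 tmbuf.tm_hour, tmbuf.tm_min, tmbuf.tm_sec,
                 (int)(t % 1000000));
        av_dict_set(metadata, "creation_time", buf, 0);
    }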
Reviewed-by: wm4 <nfxjfg@googlemail.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
Currently, AVStream contains an embedded AVCodecContext instance, which
is used by demuxers to export stream parameters to the caller and by
muxers to receive stream parameters from the caller. It is also used
internally as the codec context that is passed to parsers.
In addition, it is also widely used by the callers as the decoding (when
demuxing) or encoding (when muxing) context, though this has been
officially discouraged since Libav 11.
There are multiple important problems with this approach:
- the fields in AVCodecContext are in general one of
* stream parameters
* codec options
* codec state
However, it's not clear which ones are which. It is consequently
unclear which fields a demuxer is allowed to set or a muxer is allowed to
read. This leads to erratic behaviour depending on whether decoding or
encoding is being performed (and whether it uses the AVStream
embedded codec context).
- various synchronization issues arising from the fact that the same
context is used by several different APIs (muxers/demuxers,
parsers, bitstream filters and encoders/decoders) simultaneously, with
there being no clear rules for who can modify what and the different
processes being typically delayed with respect to each other.
- avformat_find_stream_info() making it necessary to support opening
and closing a single codec context multiple times, thus
complicating the semantics of freeing various allocated objects in the
codec context.
Those problems are resolved by replacing the AVStream embedded codec
context with a newly added AVCodecParameters instance, which stores only
the stream parameters exported by the demuxers or read by the muxers.
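On the caller side, opening a decoder from the new parameters looks roughly
like this (a minimal sketch with error handling trimmed; fmt_ctx and
stream_index stand in for whatever the caller already has):

    AVStream *st = fmt_ctx->streams[stream_index];
    const AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
    AVCodecContext *dec_ctx = avcodec_alloc_context3(dec);

    /* Copy the demuxer-exported stream parameters into a codec context
     * owned by the caller, then open it; AVStream.codec is no longer used. */
    int ret = avcodec_parameters_to_context(dec_ctx, st->codecpar);
    if (ret >= 0)
        ret = avcodec_open2(dec_ctx, dec, NULL);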
This can be made more efficient, but the first and main goal of this change
is to store it at all.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This also deprecates our old duplicated callbacks.
* commit '9f61abc8111c7c43f49ca012e957a108b9cc7610':
lavf: allow custom IO for all files
Merged-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Some (de)muxers open additional files beyond the main IO context.
Currently, they call avio_open() directly, which prevents the caller
from using custom IO for such streams.
This commit adds callbacks to AVFormatContext that default to
avio_open2()/avio_close(), but can be overridden by the caller. All
muxers and demuxers using AVIO are switched to using those callbacks
instead of calling avio_open()/avio_close() directly.
(de)muxers that use the URLProtocol layer directly instead of AVIO
remain unconverted for now. This should be fixed in later commits.
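From the caller's perspective, overriding the callbacks might look like this
(a sketch; the my_io_open/my_io_close names are illustrative and the logging
only marks where custom behaviour would go):

    #include <libavformat/avformat.h>
    #include <libavformat/avio.h>

    static int my_io_open(AVFormatContext *s, AVIOContext **pb,
                          const char *url, int flags, AVDictionary **options)
    {
        av_log(s, AV_LOG_INFO, "opening %s\n", url);
        return avio_open2(pb, url, flags, &s->interrupt_callback, options);
    }

    static void my_io_close(AVFormatContext *s, AVIOContext *pb)
    {
        avio_close(pb);
    }

    /* after allocating the format context:
     *     ctx->io_open  = my_io_open;
     *     ctx->io_close = my_io_close;
     */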
This solves the problem discussed in https://ffmpeg.org/pipermail/ffmpeg-devel/2015-September/179238.html
by allowing the muxer's write_header to be delayed until after packets have been
run through required bitstream filters in order to generate global extradata.
It also provides a mechanism by which a muxer can add a bitstream filter to a
stream automatically, rather than prompting the user to do so.
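Inside a muxer, the mechanism boils down to a check_bitstream callback that
requests a filter for a stream when needed. A sketch loosely modelled on ADTS
AAC handling follows (the function name example_check_bitstream is
illustrative, and the internal ff_stream_add_bitstream_filter() helper is
assumed):

    #include "avformat.h"
    #include "internal.h"
    #include "libavutil/intreadwrite.h"

    static int example_check_bitstream(struct AVFormatContext *s,
                                       const AVPacket *pkt)
    {
        AVStream *st = s->streams[pkt->stream_index];

        /* ADTS AAC in a format that needs raw AAC: ask lavf to insert
         * the aac_adtstoasc bsf on this stream automatically. */
        if (st->codecpar->codec_id == AV_CODEC_ID_AAC &&
            pkt->size >= 2 && (AV_RB16(pkt->data) & 0xfff0) == 0xfff0)
            return ff_stream_add_bitstream_filter(st, "aac_adtstoasc", NULL);

        return 1;
    }

The muxer would point its AVOutputFormat check_bitstream field at such a
function, and lavf takes care of filtering the stream's packets.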
All encoders set pts and dts properly now (and have been doing that for
a while), so there is no good reason to do any timestamp guessing in the
muxer.
The newly added AVStreamInternal will later be used for storing all the
private fields currently living in AVStream.
Move field to internal part of AVStream and struct to internal.h
Reviewed-by: Andreas Cadhalpun <andreas.cadhalpun@googlemail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>