* commit 'e1eb0fc960163402bbb4e630185790488f7d28ed':
movenc: Use packets in interleaving queues for the duration at the end of fragments
Merged-by: Matthieu Bouron <matthieu.bouron@stupeflix.com>
As long as the caller only writes packets using av_interleaved_write_frame
with no manual flushing, this should allow us to always have accurate
durations at the end of fragments, since there should be at least
one queued packet in each stream (except for the stream where the
current packet is being written, but if the muxer itself does the
cutting of fragments, it also has info about the next packet for that
stream).
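
For illustration, a minimal sketch of the caller pattern this relies on
(not part of the commit; it assumes "ic" and "oc" are already opened input
and output contexts with matching streams):

    #include <libavformat/avformat.h>

    static int remux_all(AVFormatContext *ic, AVFormatContext *oc)
    {
        AVPacket *pkt = av_packet_alloc();
        int ret;

        if (!pkt)
            return AVERROR(ENOMEM);

        ret = avformat_write_header(oc, NULL);

        while (ret >= 0 && (ret = av_read_frame(ic, pkt)) >= 0) {
            /* Rescale timestamps into the output stream's time base. */
            av_packet_rescale_ts(pkt,
                                 ic->streams[pkt->stream_index]->time_base,
                                 oc->streams[pkt->stream_index]->time_base);
            /* Queue the packet; no manual flush (NULL packet) until the end. */
            ret = av_interleaved_write_frame(oc, pkt);
        }
        if (ret == AVERROR_EOF)
            ret = 0;
        if (ret >= 0)
            ret = av_write_trailer(oc); /* drains the interleaving queues once */

        av_packet_free(&pkt);
        return ret;
    }
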
Signed-off-by: Martin Storsjö <martin@martin.st>
Also, make every addition except for sidedata part of version 1 instead of the
new version 2.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
It will be used by text subtitle demuxers to construct format instructions
straight into extradata. They all currently use a similar function that accepts
an AVCodecContext instead.
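
Roughly, a hypothetical sketch of what such a codecpar-based helper could
look like (illustrative only, not the actual internal function):

    #include <string.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/bprint.h>
    #include <libavutil/mem.h>

    /* Store format instructions accumulated in an AVBPrint as zero-padded
     * extradata on the stream's AVCodecParameters. */
    static int text_header_to_extradata(AVCodecParameters *par,
                                        const struct AVBPrint *buf)
    {
        if (!av_bprint_is_complete(buf))  /* buffer was truncated (ENOMEM) */
            return AVERROR(ENOMEM);

        par->extradata = av_mallocz(buf->len + AV_INPUT_BUFFER_PADDING_SIZE);
        if (!par->extradata)
            return AVERROR(ENOMEM);

        memcpy(par->extradata, buf->str, buf->len);
        par->extradata_size = buf->len;
        return 0;
    }

A demuxer would fill the AVBPrint with av_bprint_init()/av_bprintf() and
release it with av_bprint_finalize(&buf, NULL) afterwards.
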
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
This can be used for formats which write all format metadata as strings to
files, so that non-standard creation times such as 'now' will be parsed.
The standardized creation time is UTC ISO 8601 with microsecond precision.
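
As an illustrative sketch (not from this commit), a caller could normalize a
user-supplied value such as 'now' with av_parse_time() before storing it;
gmtime_r() is assumed to be available (POSIX):

    #include <stdio.h>
    #include <time.h>
    #include <libavutil/dict.h>
    #include <libavutil/parseutils.h>

    static int set_creation_time(AVDictionary **meta, const char *user_value)
    {
        int64_t us;                  /* microseconds since the Unix epoch */
        int ret = av_parse_time(&us, user_value, 0);
        if (ret < 0)
            return ret;              /* not a recognizable date */

        time_t secs = us / 1000000;
        struct tm utc;
        char buf[64];

        gmtime_r(&secs, &utc);
        /* UTC ISO 8601 with microsecond precision, as described above. */
        snprintf(buf, sizeof(buf), "%04d-%02d-%02dT%02d:%02d:%02d.%06dZ",
                 utc.tm_year + 1900, utc.tm_mon + 1, utc.tm_mday,
                 utc.tm_hour, utc.tm_min, utc.tm_sec, (int)(us % 1000000));

        return av_dict_set(meta, "creation_time", buf, 0);
    }
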
Reviewed-by: wm4 <nfxjfg@googlemail.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
Currently, AVStream contains an embedded AVCodecContext instance, which
is used by demuxers to export stream parameters to the caller and by
muxers to receive stream parameters from the caller. It is also used
internally as the codec context that is passed to parsers.
In addition, it is also widely used by the callers as the decoding (when
demuxing) or encoding (when muxing) context, though this has been
officially discouraged since Libav 11.
There are multiple important problems with this approach:
- the fields in AVCodecContext are in general one of
* stream parameters
* codec options
* codec state
However, it's not clear which ones are which. It is consequently
unclear which fields a demuxer is allowed to set or a muxer is allowed to
read. This leads to erratic behaviour depending on whether decoding or
encoding is being performed (and whether it uses the AVStream
embedded codec context).
- various synchronization issues arising from the fact that the same
context is used by several different APIs (muxers/demuxers,
parsers, bitstream filters and encoders/decoders) simultaneously, with
there being no clear rules for who can modify what and the different
processes being typically delayed with respect to each other.
- avformat_find_stream_info() making it necessary to support opening
and closing a single codec context multiple times, thus
complicating the semantics of freeing various allocated objects in the
codec context.
Those problems are resolved by replacing the AVStream embedded codec
context with a newly added AVCodecParameters instance, which stores only
the stream parameters exported by the demuxers or read by the muxers.
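
For callers, the intended pattern then looks roughly like the following
sketch (illustrative, public API only): stream parameters are read from
AVStream.codecpar and imported into a private codec context.

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    static AVCodecContext *open_decoder_for_stream(const AVStream *st)
    {
        const AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
        if (!dec)
            return NULL;

        AVCodecContext *avctx = avcodec_alloc_context3(dec);
        if (!avctx)
            return NULL;

        /* Import only the stream parameters; codec options and codec state
         * stay private to this context. */
        if (avcodec_parameters_to_context(avctx, st->codecpar) < 0 ||
            avcodec_open2(avctx, dec, NULL) < 0) {
            avcodec_free_context(&avctx);
            return NULL;
        }
        return avctx;
    }

On the muxing side, avcodec_parameters_copy() or
avcodec_parameters_from_context() fills AVStream.codecpar, so no embedded
codec context needs to be touched.
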
This can be made more efficient, but the first and main goal of this change
is to store it at all.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This also deprecates our old duplicated callbacks.
* commit '9f61abc8111c7c43f49ca012e957a108b9cc7610':
lavf: allow custom IO for all files
Merged-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Some (de)muxers open additional files beyond the main IO context.
Currently, they call avio_open() directly, which prevents the caller
from using custom IO for such streams.
This commit adds callbacks to AVFormatContext that default to
avio_open2()/avio_close(), but can be overridden by the caller. All
muxers and demuxers using AVIO are switched to using those callbacks
instead of calling avio_open()/avio_close() directly.
(de)muxers that use the URLProtocol layer directly instead of AVIO
remain unconverted for now. This should be fixed in later commits.
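
As a hedged illustration of the caller side (the callback name is made up,
and the fallback only approximates the default behaviour):

    #include <libavformat/avformat.h>

    /* Intercept every extra file the (de)muxer opens, e.g. HLS/DASH
     * segments, then delegate to avio_open2(). */
    static int log_io_open(AVFormatContext *s, AVIOContext **pb,
                           const char *url, int flags, AVDictionary **options)
    {
        av_log(s, AV_LOG_INFO, "muxer is opening '%s'\n", url);
        return avio_open2(pb, url, flags, &s->interrupt_callback, options);
    }

    /* After avformat_alloc_output_context2(&oc, ...):
     *     oc->io_open = log_io_open;                                      */
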
This solves the problem discussed in https://ffmpeg.org/pipermail/ffmpeg-devel/2015-September/179238.html
by allowing AVOutputFormat::write_header to be delayed until after packets have been
run through required bitstream filters in order to generate global extradata.
It also provides a mechanism by which a muxer can add a bitstream filter to a
stream automatically, rather than prompting the user to do so.
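
For context, roughly what this amounts to in terms of the public av_bsf_*
API (illustrative sketch, not the muxer-internal hook; "aac_adtstoasc" is
just an example filter):

    #include <libavcodec/bsf.h>   /* part of avcodec.h in older releases */
    #include <libavformat/avformat.h>

    /* After av_bsf_init(), the filter's par_out may carry the global
     * extradata the muxer needs before it can write its header. */
    static int open_bsf_for_stream(AVBSFContext **out, const AVStream *st,
                                   const char *name /* e.g. "aac_adtstoasc" */)
    {
        const AVBitStreamFilter *f = av_bsf_get_by_name(name);
        int ret;

        if (!f)
            return AVERROR_BSF_NOT_FOUND;
        if ((ret = av_bsf_alloc(f, out)) < 0)
            return ret;

        ret = avcodec_parameters_copy((*out)->par_in, st->codecpar);
        if (ret >= 0)
            ret = av_bsf_init(*out);
        if (ret < 0)
            av_bsf_free(out);
        return ret;
    }
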
All encoders set pts and dts properly now (and have been doing that for
a while), so there is no good reason to do any timestamp guessing in the
muxer.
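
The caller-side consequence, as a small sketch (illustrative; "enc" is
assumed to be the encoder context that produced the packet):

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* With no timestamp guessing in the muxer, the caller rescales the
     * encoder's pts/dts into the stream time base itself. */
    static int write_encoded(AVFormatContext *oc, AVCodecContext *enc,
                             AVStream *st, AVPacket *pkt)
    {
        av_packet_rescale_ts(pkt, enc->time_base, st->time_base);
        pkt->stream_index = st->index;
        return av_interleaved_write_frame(oc, pkt);
    }
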
The newly added AVStreamInternal will later be used for storing all the
private fields currently living in AVStream.
Move field to internal part of AVStream and struct to internal.h
Reviewed-by: Andreas Cadhalpun <andreas.cadhalpun@googlemail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
* commit '4227e4fe7443733fb906f6fb6c265105e8269c74':
lavf: add a convenience function for adding side data to a stream
Conflicts:
libavformat/internal.h
libavformat/replaygain.c
Merged-by: Michael Niedermayer <michaelni@gmx.at>
This reverts commit b9d08c77a4.
Now that MoveFileEx is in use, we can replace files with renames
on windows as well.
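
For reference, the underlying mechanism is roughly the following
Windows-only sketch (illustrative, not the actual os_support.h wrapper):

    #ifdef _WIN32
    #include <windows.h>

    /* Atomically replace the destination by renaming over it; plain
     * rename() on Windows refuses to overwrite an existing file. */
    static int replace_file_w(const wchar_t *src, const wchar_t *dst)
    {
        if (!MoveFileExW(src, dst, MOVEFILE_REPLACE_EXISTING))
            return -1;   /* details via GetLastError() */
        return 0;
    }
    #endif
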
Signed-off-by: Martin Storsjö <martin@martin.st>
* commit '960aff379da46dcaff61504a57714d4d4e758e41':
lavf: Use wchar functions for filenames on windows for mkdir/rmdir/rename/unlink
Conflicts:
libavformat/os_support.h
Merged-by: Michael Niedermayer <michaelni@gmx.at>
* commit 'b9d08c77a44390b0848c06f20bc0e9e951ba6a3c':
lavf: Don't try to update files atomically with renames on windows
Conflicts:
libavformat/dashenc.c
libavformat/hdsenc.c
libavformat/internal.h
libavformat/smoothstreamingenc.c
Merged-by: Michael Niedermayer <michaelni@gmx.at>