The bitstream filters do not work with merged-in side data.
This leaves the input packet split if it was split.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: James Almer <jamrial@gmail.com>
This reverts commit fba2a8a254.
The changes were right for av_write_frame() but not for av_interleaved_write_frame().
The following commit will fix this in a simpler way.
Signed-off-by: James Almer <jamrial@gmail.com>
Similarly, merge it back before returning.
Fixes ticket #5927.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: James Almer <jamrial@gmail.com>
This allows a consumer to run the muxer's init function without actually
writing the header, which is useful in chained muxers that support
automatic bitstream filtering.
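A minimal caller-side sketch of that split, assuming the public avformat_init_output() entry point described here (prepare_then_write_header() is a hypothetical helper):

    #include <libavformat/avformat.h>

    /* Run the muxer's init step first, write the header separately later. */
    static int prepare_then_write_header(AVFormatContext *oc, AVDictionary **opts)
    {
        int ret = avformat_init_output(oc, opts); /* init only, no header yet */
        if (ret < 0)
            return ret;

        /* ... a wrapping muxer (dashenc, segment, ...) could adjust its
         * output setup here, before any header bytes are emitted ... */

        return avformat_write_header(oc, opts);   /* now emit the header */
    }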
This is mostly useful for muxers that wrap other muxers, such as dashenc
and segment. The actual duplicated bitstream filtering is largely harmless,
but delaying the header can cause problems when the muxer intended the header
to be written to a separate file.
Restore the original timestamps in write_packet() if the
actual write operation was not successful. This allows the
same packet to be passed to a non-blocking muxer repeatedly
without corrupting the timestamps.
Signed-off-by: Jan Sebechlebsky <sebechlebskyjan@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
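A sketch of the retry pattern this enables, assuming a muxer whose write path can fail transiently with AVERROR(EAGAIN) (retry_write() is a hypothetical helper, not lavf API):

    #include <libavformat/avformat.h>
    #include <libavutil/error.h>

    /* Keep handing the *same* packet to the muxer until the write stops
     * failing with EAGAIN. Because write_packet() now restores the packet's
     * timestamps on failure, resubmitting the unchanged packet is safe. */
    static int retry_write(AVFormatContext *oc, AVPacket *pkt)
    {
        int ret;
        do {
            ret = av_write_frame(oc, pkt);
            /* a real caller would poll or sleep here instead of spinning */
        } while (ret == AVERROR(EAGAIN));
        return ret;
    }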
* commit 'e1eb0fc960163402bbb4e630185790488f7d28ed':
movenc: Use packets in interleaving queues for the duration at the end of fragments
Merged-by: Matthieu Bouron <matthieu.bouron@stupeflix.com>
* commit 'db7968bff4851c2be79b15b2cb2ae747424d2fca':
avio: Allow custom IO users to get labels for the output bytestream
Merged-by: Matthieu Bouron <matthieu.bouron@stupeflix.com>
The docs clearly state that av_write_trailer should only be called if
avformat_write_header was successful, so we have to deinit if we return
failure.
Signed-off-by: Marton Balint <cus@passwd.hu>
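The caller-side counterpart of that contract, as a minimal sketch (mux_file() is a hypothetical outline):

    #include <libavformat/avformat.h>

    static int mux_file(AVFormatContext *oc)
    {
        int ret = avformat_write_header(oc, NULL);
        if (ret < 0)
            return ret;  /* header failed: lavf deinits itself and the caller
                            must NOT call av_write_trailer() */

        /* ... write packets ... */

        return av_write_trailer(oc);  /* only after a successful header */
    }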
As long as the caller only writes packets using av_interleaved_write_frame
with no manual flushing, this should allow us to always have accurate
durations at the end of fragments, since there should be at least
one queued packet in each stream (except for the stream where the
current packet is being written, but if the muxer itself does the
cutting of fragments, it also has info about the next packet for that
stream).
Signed-off-by: Martin Storsjö <martin@martin.st>
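A typical caller loop matching that assumption, as a sketch: no manual flushing, i.e. av_interleaved_write_frame() is only called with a NULL packet once at the very end (read_next_packet() is a hypothetical packet source):

    #include <libavformat/avformat.h>

    /* Hypothetical source of stream-indexed, timestamped packets;
     * returns 0 on success, <0 on EOF or error. */
    int read_next_packet(AVPacket *pkt);

    static int mux_all(AVFormatContext *oc)
    {
        AVPacket *pkt = av_packet_alloc();
        int ret = pkt ? 0 : AVERROR(ENOMEM);

        while (ret >= 0 && read_next_packet(pkt) >= 0) {
            /* No manual flush in between, so the interleaving queues keep at
             * least one packet per stream and fragment-end durations stay
             * accurate. */
            ret = av_interleaved_write_frame(oc, pkt);
        }
        if (ret >= 0)
            ret = av_interleaved_write_frame(oc, NULL); /* single final flush */

        av_packet_free(&pkt);
        return ret;
    }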
This allows callers with avio write callbacks to get the bytestream
positions that correspond to keyframes, suitable for live streaming.
In the simplest form, a caller could expect that a header is written
to the bytestream during avformat_write_header, and that the data
output to the avio context during e.g. av_write_frame corresponds
exactly to the current packet passed in.
When combined with av_interleaved_write_frame, and with muxers that
do buffering (most muxers that do some sort of fragmenting or
clustering), the mapping from input data to bytestream positions
is nontrivial.
This allows callers to directly get information about what part
of the bytestream is what, without having to resort to assumptions
about the muxer behaviour.
One keyframe/fragment/block can still be split into multiple calls (if
it is larger than the AVIOContext buffer); in that case the callback is
invoked with e.g. AVIO_DATA_MARKER_SYNC_POINT for the first part,
followed by AVIO_DATA_MARKER_UNKNOWN the second time it is called with
the following data.
Signed-off-by: Martin Storsjö <martin@martin.st>
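A sketch of how a custom-IO caller might consume these labels, assuming the write_data_type field on AVIOContext and the AVIO_DATA_MARKER_* values this change adds (the exact callback signature differs between versions; the buffer is const in newer releases):

    #include <libavformat/avio.h>

    /* on_data() receives each chunk of muxed output together with a label
     * saying which part of the bytestream it is. */
    static int on_data(void *opaque, uint8_t *buf, int buf_size,
                       enum AVIODataMarkerType type, int64_t time)
    {
        switch (type) {
        case AVIO_DATA_MARKER_HEADER:     /* global header bytes */          break;
        case AVIO_DATA_MARKER_SYNC_POINT: /* start of a keyframe/fragment */ break;
        case AVIO_DATA_MARKER_UNKNOWN:    /* continuation, e.g. the second
                                             buffer of an oversized block */ break;
        default:                                                             break;
        }
        /* forward buf/buf_size to the real output, record offsets, ... */
        return buf_size;
    }

    /* Attach it to a write-only custom AVIOContext (error handling omitted);
     * when write_data_type is set it is used in place of write_packet. */
    static AVIOContext *make_labelled_ctx(void *opaque, unsigned char *iobuf,
                                          int iobuf_size)
    {
        AVIOContext *avio = avio_alloc_context(iobuf, iobuf_size, 1, opaque,
                                               NULL, NULL, NULL);
        if (avio)
            avio->write_data_type = on_data;
        return avio;
    }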
Currently, AVStream contains an embedded AVCodecContext instance, which
is used by demuxers to export stream parameters to the caller and by
muxers to receive stream parameters from the caller. It is also used
internally as the codec context that is passed to parsers.
In addition, it is also widely used by the callers as the decoding (when
demuxing) or encoding (when muxing) context, though this has been
officially discouraged since Libav 11.
There are multiple important problems with this approach:
- the fields in AVCodecContext are in general one of
  * stream parameters
  * codec options
  * codec state
However, it is not clear which ones are which. It is consequently
unclear which fields a demuxer is allowed to set or a muxer allowed to
read. This leads to erratic behaviour depending on whether decoding or
encoding is being performed (and whether it uses the AVStream
embedded codec context).
- various synchronization issues arising from the fact that the same
context is used by several different APIs (muxers/demuxers,
parsers, bitstream filters and encoders/decoders) simultaneously, with
no clear rules about who may modify what, and with the different
processes typically delayed with respect to each other.
- avformat_find_stream_info() making it necessary to support opening
and closing a single codec context multiple times, thus
complicating the semantics of freeing various allocated objects in the
codec context.
Those problems are resolved by replacing the AVStream embedded codec
context with a newly added AVCodecParameters instance, which stores only
the stream parameters exported by the demuxers or read by the muxers.
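On the caller side the replacement boils down to reading AVStream.codecpar and copying it into a separately allocated codec context; a minimal decoding-side sketch (open_decoder_for_stream() is hypothetical):

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    static AVCodecContext *open_decoder_for_stream(AVStream *st)
    {
        const AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
        AVCodecContext *avctx = dec ? avcodec_alloc_context3(dec) : NULL;

        if (!avctx)
            return NULL;
        /* Copy only the stream parameters; codec options and codec state stay
         * private to this context, which is the separation described above. */
        if (avcodec_parameters_to_context(avctx, st->codecpar) < 0 ||
            avcodec_open2(avctx, dec, NULL) < 0) {
            avcodec_free_context(&avctx);
            return NULL;
        }
        return avctx;
    }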
commit "avpacket: Deprecate av_dup_packet" broke the use
av_interleaved_write_uncoded_frame as any input uncoded frame has an
invalid packet size that will crash when av_packet_ref tries to allocate
'size' new memory. Since the packet is a temporary created within mux.c
itself it can be used directly without needing a new ref.
Signed-off-by: Matt Oliver <protogonoi@gmail.com>
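For context, a hedged sketch of the affected API: handing an uncoded AVFrame straight to a muxer that supports it (write_raw_frame() is a hypothetical wrapper):

    #include <libavformat/avformat.h>
    #include <libavutil/frame.h>

    static int write_raw_frame(AVFormatContext *oc, int stream_index,
                               AVFrame *frame)
    {
        /* Check whether the muxer accepts uncoded frames on this stream. */
        int ret = av_write_uncoded_frame_query(oc, stream_index);
        if (ret < 0)
            return ret;
        /* mux.c wraps the frame pointer in a temporary packet internally,
         * the packet this fix now uses directly instead of re-referencing. */
        return av_interleaved_write_uncoded_frame(oc, stream_index, frame);
    }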
This solves the problem discussed in https://ffmpeg.org/pipermail/ffmpeg-devel/2015-September/179238.html
by allowing AVOutputFormat::write_header to be delayed until after packets have been
run through required bitstream filters in order to generate global extradata.
It also provides a mechanism by which a muxer can add a bitstream filter to a
stream automatically, rather than prompting the user to do so.
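A sketch of the muxer-side mechanism, modeled on how movenc requests aac_adtstoasc for ADTS input; it assumes the internal ff_stream_add_bitstream_filter() helper, and the function name and exact check are illustrative:

    #include "avformat.h"
    #include "internal.h"
    #include "libavutil/intreadwrite.h"

    /* Called from a muxer's check_bitstream hook: if the incoming packet is
     * not in the form the container needs, ask lavf to insert a bitstream
     * filter on that stream instead of telling the user to add it manually. */
    static int example_check_bitstream(AVFormatContext *s, const AVPacket *pkt)
    {
        AVStream *st = s->streams[pkt->stream_index];

        if (st->codecpar->codec_id == AV_CODEC_ID_AAC &&
            pkt->size >= 2 && (AV_RB16(pkt->data) & 0xfff0) == 0xfff0)
            /* ADTS framing detected: convert to the raw/ASC form. */
            return ff_stream_add_bitstream_filter(st, "aac_adtstoasc", NULL);

        return 1; /* packet is fine as-is */
    }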
All encoders set pts and dts properly now (and have been doing that for
a while), so there is no good reason to do any timestamp guessing in the
muxer.
The newly added AVStreamInternal will later be used for storing all the
private fields currently living in AVStream.
It is well known that fabs and fabsf are at least as fast and sometimes
faster than the FFABS macro, at least on the gcc+glibc combination.
For instance, see the reference:
http://patchwork.sourceware.org/patch/6735/.
This was a patch to glibc in order to remove their usages of a macro.
The reason essentially boils down to fabs using the compiler's
__builtin_fabs, while with FFABS the compiler needs to infer that it can
avoid a branch and simply change the sign bit. Usually the inference works, but sometimes
it does not. This may be easily checked by looking at the asm.
This also has the added benefit of reducing macro usage, which has
problems with side-effects.
Note that avcodec is not handled here, as it is huge and
most things there are integer arithmetic anyway.
Tested with FATE.
Reviewed-by: Clément Bœsch <u@pkh.me>
Signed-off-by: Ganesh Ajjanagadde <gajjanagadde@gmail.com>
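A trivial illustration of the point, with FFABS reproduced as defined in libavutil/common.h:

    #include <math.h>

    #define FFABS(a) ((a) >= 0 ? (a) : (-(a)))  /* as in libavutil/common.h;
                                                   note 'a' is evaluated twice */

    /* The compiler has to prove it can replace the compare/branch below with
     * a plain sign-bit clear; with fabs() the builtin gives that directly. */
    double macro_abs(double x)   { return FFABS(x); }
    double builtin_abs(double x) { return fabs(x); }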
* commit '948f3c19a8bd069768ca411212aaf8c1ed96b10d':
lavc: Make AVPacket.duration int64, and deprecate convergence_duration
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
* commit '01bcc2d5c23fa757d163530abb396fd02f1be7c8':
lavc: Drop deprecated destruct_packet related functions
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>