The spec says
9: Interlaced with bottom field displayed first and top field stored first
14: Interlaced with top field displayed first and bottom field stored first
And avcodec.h states
AV_FIELD_TB, //< Top coded first, bottom displayed first
AV_FIELD_BT, //< Bottom coded first, top displayed first
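For clarity, the resulting mapping (with "stored" meaning "coded")
can be sketched as follows; the case labels are the descriptor values
quoted from the spec above:

    /* Sketch: map the spec's field order values to AVFieldOrder.
     * 9:  top stored (coded) first, bottom displayed first -> AV_FIELD_TB
     * 14: bottom stored (coded) first, top displayed first -> AV_FIELD_BT */
    static enum AVFieldOrder map_field_order(int spec_value)
    {
        switch (spec_value) {
        case 9:  return AV_FIELD_TB;
        case 14: return AV_FIELD_BT;
        default: return AV_FIELD_UNKNOWN;
        }
    }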
Signed-off-by: James Almer <jamrial@gmail.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
As long as the caller only writes packets using av_interleaved_write_frame
with no manual flushing, this should allow us to always have accurate
durations at the end of fragments, since there should be at least
one queued packet in each stream (except for the stream where the
current packet is being written, but if the muxer itself does the
cutting of fragments, it also has info about the next packet for that
stream).
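A minimal sketch of that usage pattern (in_ctx/out_ctx are
placeholder names; timestamp rescaling and error handling omitted):

    AVPacket pkt;
    /* Feed every packet through av_interleaved_write_frame() and never
     * flush manually; the muxer then always has a queued packet per
     * stream from which to derive durations. */
    while (av_read_frame(in_ctx, &pkt) >= 0) {
        if (av_interleaved_write_frame(out_ctx, &pkt) < 0)
            break;
    }
    av_write_trailer(out_ctx); /* flushes the remaining queued packets */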
Signed-off-by: Martin Storsjö <martin@martin.st>
This allows callers with avio write callbacks to get the bytestream
positions that correspond to keyframes, suitable for live streaming.
In the simplest form, a caller could expect that a header is written
to the bytestream during avformat_write_header, and that the data
output to the avio context during e.g. av_write_frame corresponds
exactly to the current packet passed in.
When combined with av_interleaved_write_frame, and with muxers that
do buffering (most muxers that do some sort of fragmenting or
clustering), the mapping from input data to bytestream positions
is nontrivial.
This allows callers to get information directly about which parts
of the bytestream contain what, without having to resort to
assumptions about the muxer's behaviour.
One keyframe/fragment/block can still be split into multiple calls
(if it is larger than the AVIOContext buffer); in that case the
callback would first be called with e.g. AVIO_DATA_MARKER_SYNC_POINT,
followed by AVIO_DATA_MARKER_UNKNOWN the second time it is called
with the following data.
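A rough sketch of such a callback, assigned to the write_data_type
field of an AVIOContext created with avio_alloc_context()
(mark_segment_start and send_to_client are hypothetical helpers;
newer FFmpeg versions declare the buffer parameter const):

    static int write_cb(void *opaque, uint8_t *buf, int buf_size,
                        enum AVIODataMarkerType type, int64_t time)
    {
        /* A new fragment/keyframe starts at this bytestream position */
        if (type == AVIO_DATA_MARKER_SYNC_POINT)
            mark_segment_start(opaque, time);   /* hypothetical helper */
        /* AVIO_DATA_MARKER_UNKNOWN just continues the previous type */
        return send_to_client(opaque, buf, buf_size);
    }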
Signed-off-by: Martin Storsjö <martin@martin.st>
Using this requires setting the rw_timeout option to make it
terminate, or alternatively using the interrupt callback (if used
via the API).
Signed-off-by: Martin Storsjö <martin@martin.st>
If set to a non-zero value, this limits the duration of the
retry_transfer_wrapper() loop, thus affecting ffurl_read*() and
ffurl_write(). As soon as a single byte is successfully
received/transmitted, the timer restarts.
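For example, a caller could set it when opening a network stream
(a sketch; the option value is in microseconds):

    AVIOContext *io    = NULL;
    AVDictionary *opts = NULL;
    av_dict_set(&opts, "rw_timeout", "5000000", 0); /* 5 seconds */
    int ret = avio_open2(&io, "tcp://example.com:1234",
                         AVIO_FLAG_READ, NULL, &opts);
    av_dict_free(&opts);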
This has further changes by Michael Niedermayer and Martin Storsjö.
Signed-off-by: Martin Storsjö <martin@martin.st>
Currently, AVStream contains an embedded AVCodecContext instance, which
is used by demuxers to export stream parameters to the caller and by
muxers to receive stream parameters from the caller. It is also used
internally as the codec context that is passed to parsers.
In addition, it is widely used by callers as the decoding (when
demuxing) or encoding (when muxing) context, though this has been
officially discouraged since Libav 11.
There are multiple important problems with this approach:
- the fields in AVCodecContext are in general one of
* stream parameters
* codec options
* codec state
However, it's not clear which ones are which. It is consequently
unclear which fields a demuxer is allowed to set or a muxer is
allowed to read. This leads to erratic behaviour depending on whether
decoding or encoding is being performed (and whether it uses the
AVStream embedded codec context).
- various synchronization issues arising from the fact that the same
context is used by several different APIs (muxers/demuxers,
parsers, bitstream filters and encoders/decoders) simultaneously, with
there being no clear rules for who can modify what and the different
processes being typically delayed with respect to each other.
- avformat_find_stream_info() making it necessary to support opening
and closing a single codec context multiple times, thus
complicating the semantics of freeing various allocated objects in the
codec context.
Those problems are resolved by replacing the AVStream embedded codec
context with a newly added AVCodecParameters instance, which stores only
the stream parameters exported by the demuxers or read by the muxers.
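On the caller side, the expected pattern then becomes roughly the
following (a sketch using today's API, error handling omitted):

    /* Open a decoder from the demuxed stream parameters instead of
     * using the embedded AVStream codec context directly. */
    AVStream *st = fmt_ctx->streams[stream_index];
    const AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
    AVCodecContext *dec_ctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(dec_ctx, st->codecpar);
    avcodec_open2(dec_ctx, dec, NULL);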
This feature is mostly only used by NLE software; having it enabled
by default is both of dubious value and a possible security risk.
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
All encoders set pts and dts properly now (and have been doing that for
a while), so there is no good reason to do any timestamp guessing in the
muxer.
The newly added AVStreamInternal will later be used for storing all
the private fields currently living in AVStream.
The old one is the result of reverse engineering and guesswork.
The new one has been written following the now-available specification.
This work is part of Outreach Program for Women Summer 2014 activities
for the Libav project.
The fate references had to be changed because the old demuxer
truncates the last frame in some cases, while the new one handles it
properly.
The seek-test reference is changed because seeking works differently
in the new demuxer. When seeking, the packet is not read from the stream
directly, but rather constructed by the demuxer. That is why the
position is now -1 in the reference.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
The current behavior may produce a different sequence of packets
after seeking, compared to demuxing linearly from the beginning.
This is because the MOV demuxer seeks in each stream individually,
based on timestamp, which may leave each stream at a slightly
different position than if the file had been read sequentially.
This makes implementing certain operations, such as segmenting,
quite hard, and slower than it needs to be.
Therefore, add an option which retains the same packet sequence
after seeking, as when a file is demuxed linearly.
Similarly to what has been done for MOV, display XMP metadata only when
users explicitly require it.
The Extensible Metadata Platform tag can contain various kinds of data
which are not strictly related to the video file, such as history of
edits and saves from the project file.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
This delays writing the moov until the first fragment is written, or
until the caller explicitly flushes it. If the first sample in all
streams is available at this point, we can write a proper editlist,
allowing streams to start at something other than dts=0. For AC3 and
DNXHD, a packet is needed in order to write the moov header properly.
This isn't added to the normal behaviour for empty_moov, since it
would change the current behaviour of ftyp+moov being written during
avformat_write_header. Callers that split the output stream into
header+segments (either by flushing manually, with the custom_frag
flag set, or by just differentiating between data written during
avformat_write_header and the rest) will need to be adjusted in
order to make use of this option.
For handling streams that start at something other than dts=0, an
alternative would be to use various heuristics for guessing the
start dts (using AVCodecContext delay or has_b_frames together with
the frame rate), but this is not reliable, doesn't necessarily work
well with stream copy, and wouldn't work for getting the right
initialization data for AC3 or DNXHD either.
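A sketch of enabling it through the muxer's private options (mux_ctx
being the AVFormatContext of the mp4 muxer):

    /* Delay the moov until the first fragment (or an explicit flush
     * via av_write_frame(mux_ctx, NULL)) instead of writing it in
     * avformat_write_header(). */
    av_opt_set(mux_ctx->priv_data, "movflags", "+empty_moov+delay_moov", 0);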
Signed-off-by: Martin Storsjö <martin@martin.st>
Since this structurally is quite different from normal RTP
(multiple streams are muxed into one single mpegts stream,
which is packetized into one single RTP session), it is kept
as a separate muxer.
Since this also behaves structurally differently from normal
RTP, all of the other muxers that do chained RTP muxing
(rtsp, sap, mp4) would need to be updated similarly to handle
this - in particular, creating one single rtp_mpegts muxer
for the whole presentation instead of one rtp muxer per stream.
Signed-off-by: Martin Storsjö <martin@martin.st>
The packetizer only supports splitting at GOB headers - if such
headers aren't available frequently enough, it splits at an arbitrary
byte offset (not at a macroblock boundary either, which would be
allowed by the spec) and sends a payload header pretending that it
starts with a GOB header.
As long as a receiver doesn't try to handle such cases cleverly
but just drops broken frames, this shouldn't matter too much
in practice.
Signed-off-by: Martin Storsjö <martin@martin.st>
The Extensible Metadata Platform tag can contain various kinds of data
which are not strictly related to the video file, such as history of edits
and saves from the project file. So display XMP metadata only when the
user explicitly requires it.
Based on a patch by Marek Fort <marek.fort@chyronhego.com>.
This is mostly to serve as a reference example of how to segment
the output from the mp4 muxer, capable of writing the segment
list in four different ways:
- SegmentTemplate with SegmentTimeline
- SegmentTemplate with implicit segments
- SegmentList with individual files
- SegmentList with one single file per track, and byte ranges
The muxer is able to serve live content (with optional windowing)
or create a static segmented MPD.
In advanced cases, users will probably want to do the segmenting
in their own application code.
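In terms of the muxer's private options, the four variants roughly
correspond to the following combinations (a sketch; opts is an
AVDictionary passed to avformat_write_header):

    /* SegmentTemplate with SegmentTimeline: */
    av_dict_set(&opts, "use_template", "1", 0);
    av_dict_set(&opts, "use_timeline", "1", 0);
    /* SegmentTemplate with implicit segments:
     *   use_template=1, use_timeline=0
     * SegmentList with individual files:
     *   use_template=0
     * SegmentList with one single file per track, using byte ranges:
     *   use_template=0, single_file=1 */
    /* Optional windowing for live content: */
    av_dict_set(&opts, "window_size", "5", 0);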
Signed-off-by: Martin Storsjö <martin@martin.st>
A flag "dash" is added, which enables the necessary flags for
creating DASH compatible fragments.
When this is enabled, one sidx atom is written for each track
before every moof atom.
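For example (a sketch; mux_ctx being the mp4 muxer context):

    /* Enable the fragmenting flags needed for DASH-compatible output */
    av_opt_set(mux_ctx->priv_data, "movflags", "+dash", 0);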
Signed-off-by: Martin Storsjö <martin@martin.st>
Previously we wrote decoding timestamps here, while the specs
say it should be presentation timestamps.
Signed-off-by: Martin Storsjö <martin@martin.st>
This is the same logic as is invoked for AVFMT_TS_NEGATIVE, but it
can now be enabled manually, or enabled in muxers which only need it
in certain conditions.
Also allow using the same mechanism to force streams to start
at 0.
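A sketch of requesting the behaviour manually, assuming the option
value names as used in current FFmpeg:

    /* Shift timestamps so that all streams start at 0, even if the
     * muxer doesn't set AVFMT_TS_NEGATIVE itself. */
    av_opt_set(fmt_ctx, "avoid_negative_ts", "make_zero", 0);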
Signed-off-by: Martin Storsjö <martin@martin.st>
Similarly to the omit_tfhd_offset flag added in e7bf085b, this
avoids writing absolute byte positions to the file, making the
files more easily streamable.
This is a new feature from 14496-12:2012, so application support
isn't necessarily too widespread yet (support for it in libav was
added in 20f95f21f in July 2014).
Signed-off-by: Martin Storsjö <martin@martin.st>
The -hls_allow_cache parameter enables explicitly setting the
EXT-X-ALLOW-CACHE tag in the manifest file. That tag indicates
whether the client MAY or MUST NOT cache downloaded media
segments for later replay.
Valid values are 1 (=YES) or 0 (=NO); the EXT-X-ALLOW-CACHE tag
will not appear in the manifest for other values (or if
-hls_allow_cache is not used).
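For example (a sketch; mux_ctx being the hls muxer context):

    /* Emit EXT-X-ALLOW-CACHE:NO in the generated playlist */
    av_opt_set(mux_ctx->priv_data, "hls_allow_cache", "0", 0);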
Signed-off-by: Martin Storsjö <martin@martin.st>
The only flags, for now, indicate whether metadata was updated; they
are set after each call to av_read_frame(). This comes with the
caveat that, on stream start, the flag might not be set properly, as
packets might be buffered in AVFormatContext.packet_buffer before
being given to the user in av_read_frame().
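A caller could poll for the flag like this (a minimal sketch; the
flag is meant to be cleared by the user once handled):

    AVPacket pkt;
    while (av_read_frame(fmt_ctx, &pkt) >= 0) {
        AVStream *st = fmt_ctx->streams[pkt.stream_index];
        if (st->event_flags & AVSTREAM_EVENT_FLAG_METADATA_UPDATED) {
            st->event_flags &= ~AVSTREAM_EVENT_FLAG_METADATA_UPDATED;
            /* st->metadata has been updated by the demuxer */
        }
        av_packet_unref(&pkt);
    }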
Signed-off-by: Anton Khirnov <anton@khirnov.net>