Currently, AVStream contains an embedded AVCodecContext instance, which
is used by demuxers to export stream parameters to the caller and by
muxers to receive stream parameters from the caller. It is also used
internally as the codec context that is passed to parsers.
In addition, it is also widely used by callers as the decoding (when
demuxing) or encoding (when muxing) context, though this has been
officially discouraged since Libav 11.
There are multiple important problems with this approach:
- the fields in AVCodecContext are in general one of
* stream parameters
* codec options
* codec state
However, it is not clear which fields belong to which category. It is
consequently unclear which fields a demuxer is allowed to set or a
muxer is allowed to read. This leads to erratic behaviour depending
on whether decoding or encoding is being performed (and whether it
uses the AVStream-embedded codec context).
- various synchronization issues arising from the fact that the same
context is used by several different APIs (muxers/demuxers,
parsers, bitstream filters and encoders/decoders) simultaneously,
with no clear rules about who may modify what, and with the
different processes typically being delayed with respect to each
other.
- avformat_find_stream_info() making it necessary to support opening
and closing a single codec context multiple times, thus
complicating the semantics of freeing various allocated objects in the
codec context.
Those problems are resolved by replacing the AVStream embedded codec
context with a newly added AVCodecParameters instance, which stores only
the stream parameters exported by the demuxers or read by the muxers.
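For callers that previously used the embedded codec context as their
decoding context, the intended replacement pattern is to allocate a
separate codec context and fill it from the stream parameters. A
minimal sketch, for illustration only:

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    /* Sketch: open a decoder for a stream using AVCodecParameters
     * instead of the deprecated AVStream.codec context. */
    static AVCodecContext *open_stream_decoder(AVStream *st)
    {
        /* Stream parameters now live in st->codecpar, not st->codec. */
        const AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
        AVCodecContext *avctx;

        if (!dec)
            return NULL;

        /* The caller allocates and owns its own codec context ... */
        avctx = avcodec_alloc_context3(dec);
        if (!avctx)
            return NULL;

        /* ... and copies the demuxer-exported parameters into it. */
        if (avcodec_parameters_to_context(avctx, st->codecpar) < 0 ||
            avcodec_open2(avctx, dec, NULL) < 0) {
            avcodec_free_context(&avctx);
            return NULL;
        }
        return avctx;
    }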
Some (de)muxers open additional files beyond the main IO context.
Currently, they call avio_open() directly, which prevents the caller
from using custom IO for such streams.
This commit adds callbacks to AVFormatContext that default to
avio_open2()/avio_close(), but can be overridden by the caller. All
muxers and demuxers using AVIO are switched to using those callbacks
instead of calling avio_open()/avio_close() directly.
(de)muxers that use the URLProtocol layer directly instead of AVIO
remain unconverted for now. This should be fixed in later commits.
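As an illustration, a caller can override the callbacks to intercept
every nested open while keeping the default behaviour. A minimal
sketch, assuming the callbacks are exposed as
AVFormatContext.io_open/io_close:

    #include <libavformat/avformat.h>
    #include <libavutil/log.h>

    /* Sketch: a custom io_open callback that logs every URL the
     * (de)muxer opens and then falls back to avio_open2(). */
    static int logging_io_open(AVFormatContext *s, AVIOContext **pb,
                               const char *url, int flags,
                               AVDictionary **options)
    {
        av_log(s, AV_LOG_INFO, "opening %s\n", url);
        return avio_open2(pb, url, flags, &s->interrupt_callback, options);
    }

    static void logging_io_close(AVFormatContext *s, AVIOContext *pb)
    {
        avio_close(pb);
    }

    /* Usage: set ctx->io_open/ctx->io_close on the AVFormatContext
     * before avformat_open_input() or avformat_write_header(). */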
The remaining users of av_gettime are the NTP timestamps (for RTCP,
which is specified to send the actual current realtime clock in
RTCP SR packets) and the NUT muxer timestamper, which is documented
as using wallclock time.
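For reference, a wallclock timestamp in microseconds (as returned by
av_gettime()) maps onto the 64-bit NTP timestamp format carried in
RTCP SR packets roughly as in the following sketch (illustration
only, not code from this change):

    #include <stdint.h>

    /* Seconds between the NTP epoch (1900) and the Unix epoch (1970). */
    #define NTP_OFFSET 2208988800ULL

    /* Sketch: convert microseconds since the Unix epoch into the NTP
     * format used in RTCP sender reports: 32 bits of seconds since
     * 1900 followed by 32 bits of fractional seconds. */
    static uint64_t ntp_time_from_unix_usec(int64_t unix_usec)
    {
        uint64_t sec  = (uint64_t)(unix_usec / 1000000) + NTP_OFFSET;
        uint64_t frac = ((uint64_t)(unix_usec % 1000000) << 32) / 1000000;
        return (sec << 32) | frac;
    }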
Signed-off-by: Martin Storsjö <martin@martin.st>
This allows both the main playlist itself and the variant playlists
to handle redirects combined with relative URLs.
Signed-off-by: Martin Storsjö <martin@martin.st>
This allows the chained demuxer (or more precisely, the lavf
utility code) to better fill in timestamps on packets from
these, especially for cases where one stream is a raw ADTS
stream.
Signed-off-by: Martin Storsjö <martin@martin.st>
Also parse segment durations as floating point, which is allowed
since HLS version 3.
This is based on a patch by Zhang Rui.
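A minimal sketch of the implied parsing, using a hypothetical helper
rather than the demuxer's actual code:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <libavutil/avutil.h>

    /* Sketch: read the duration from an "#EXTINF:<duration>,<title>"
     * line. Since HLS version 3 the duration may be a floating-point
     * number of seconds (e.g. 9.009), so it is parsed with strtod()
     * rather than as an integer. Returns the duration in AV_TIME_BASE
     * (microsecond) units, or -1 on mismatch. */
    static int64_t parse_extinf_duration(const char *line)
    {
        static const char prefix[] = "#EXTINF:";
        if (strncmp(line, prefix, sizeof(prefix) - 1))
            return -1;
        return (int64_t)(strtod(line + sizeof(prefix) - 1, NULL) *
                         AV_TIME_BASE);
    }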
Signed-off-by: Martin Storsjö <martin@martin.st>
When first_timestamp was stored as-is, its actual time base
wasn't known later in the seek function.
Additionally, the logic (from 795d9594cf) for scaling it
based on stream_index was flawed: stream_index in the seek
function only specifies which stream the seek timestamp refers
to, and says nothing about which stream first_timestamp
belongs to.
In the cases where stream_index was >= 0 and all streams had the
same time base, this didn't matter in practice.
Taking first_timestamp into account when seeking is problematic
when one variant is mpegts (with real timestamps) and one variant
is raw ADTS (with timestamps only being accumulated packet
durations), since the variants start at totally different timestamps.
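One way to remove the ambiguity, sketched below with a hypothetical
helper, is to rescale the timestamp into a fixed time base such as
AV_TIME_BASE_Q when storing it, so the seek code no longer has to
guess which stream's time base it is in:

    #include <libavutil/avutil.h>
    #include <libavutil/mathematics.h>

    /* Sketch: store the first seen timestamp in a known, fixed time
     * base (AV_TIME_BASE_Q) instead of in the originating stream's
     * time base, so it can be compared against seek timestamps later. */
    static int64_t store_first_timestamp(int64_t ts, AVRational stream_tb)
    {
        if (ts == AV_NOPTS_VALUE)
            return AV_NOPTS_VALUE;
        return av_rescale_q(ts, stream_tb, AV_TIME_BASE_Q);
    }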
Signed-off-by: Martin Storsjö <martin@martin.st>
Without this information, an application may choose audio from one
variant and video from another variant, which leads to fetching two
variants from the network. This enables av_find_best_stream() to find
matching audio and video streams, so that only one variant is fetched.
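With the variant information exposed, a caller can let
av_find_best_stream() pick matching streams by passing the chosen
video stream as related_stream. A minimal sketch, assuming an
already-opened input:

    #include <libavformat/avformat.h>

    /* Sketch: pick audio and video streams so that both come from the
     * same variant. Passing the video stream index as related_stream
     * lets av_find_best_stream() prefer an audio stream belonging to
     * the same variant. */
    static int pick_streams(AVFormatContext *ic, int *video, int *audio)
    {
        *video = av_find_best_stream(ic, AVMEDIA_TYPE_VIDEO, -1, -1,
                                     NULL, 0);
        if (*video < 0)
            return *video;
        *audio = av_find_best_stream(ic, AVMEDIA_TYPE_AUDIO, -1, *video,
                                     NULL, 0);
        return 0;
    }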
Signed-off-by: Martin Storsjö <martin@martin.st>
Also adjust the streams' timestamps according to their start
timestamps when comparing. This helps get correctly interleaved
packets if one stream lacks timestamps (such as a plain ADTS
stream when the other variants are full mpegts) while the others
have timestamps that don't start from zero.
This probably doesn't work properly if such a stream is
temporarily disabled (via the discard flags) and then reenabled,
and such streams are hard to correctly sync against the other
streams as well - but this works better than before at least.
The segment number restriction makes sure all variants advance
roughly at the same pace as well.
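The comparison described above can be sketched roughly as follows
(hypothetical helper, not the demuxer's actual code):

    #include <libavformat/avformat.h>
    #include <libavutil/mathematics.h>

    /* Sketch: decide which of two queued packets should be returned
     * first, comparing their dts after subtracting each stream's start
     * timestamp, so variants whose timestamps don't start at zero still
     * interleave correctly with a variant that counts up from zero. */
    static int pkt_before(const AVPacket *a, const AVStream *sa,
                          int64_t start_a,
                          const AVPacket *b, const AVStream *sb,
                          int64_t start_b)
    {
        int64_t dts_a = a->dts == AV_NOPTS_VALUE ? 0 : a->dts - start_a;
        int64_t dts_b = b->dts == AV_NOPTS_VALUE ? 0 : b->dts - start_b;
        return av_compare_ts(dts_a, sa->time_base,
                             dts_b, sb->time_base) < 0;
    }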
Signed-off-by: Martin Storsjö <martin@martin.st>
If we pass the end of one segment while initializing the
chained demuxer, the parent demuxer's streams aren't set up
yet, so we can't recheck the discard flags.
Signed-off-by: Martin Storsjö <martin@martin.st>
This serves as a safeguard; normally we want to use the dts
comparison to interleave packets from all active variants. If that
dts comparison for some reason doesn't work as intended, make sure
that all packets in all variants for a certain sequence number have
been returned before moving on to the next one.
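A rough sketch of the safeguard, using a hypothetical per-variant
structure rather than the actual demuxer state:

    /* Hypothetical per-variant state, for illustration only. */
    struct variant_state {
        int active;      /* has at least one non-discarded stream */
        int cur_seq_no;  /* sequence number of the segment being read */
    };

    /* Sketch: variant 'idx' may only move on to the next segment once
     * every other active variant has caught up to (at least) its
     * current sequence number. */
    static int may_advance(const struct variant_state *v, int n, int idx)
    {
        int i;
        for (i = 0; i < n; i++)
            if (i != idx && v[i].active &&
                v[i].cur_seq_no < v[idx].cur_seq_no)
                return 0;
        return 1;
    }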
Signed-off-by: Martin Storsjö <martin@martin.st>
Previously, we returned any error code except AVERROR_EOF to the
caller; only if AVERROR_EOF or 0 was returned did we proceed to
the next segment.
With some web server setups, using Connection: close over https
with GnuTLS, we don't get a clean error code at the end of
segments. In those cases, just proceed to the next segment.
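The change in the end-of-segment condition can be sketched with
hypothetical helpers like this:

    #include <libavutil/error.h>

    /* Old behaviour: only a clean end of segment moved on. */
    static int should_open_next_segment_old(int read_ret)
    {
        return read_ret == 0 || read_ret == AVERROR_EOF;
    }

    /* New behaviour: proceed to the next segment regardless of the
     * error code, since some server/TLS combinations don't deliver a
     * clean EOF. */
    static int should_open_next_segment_new(int read_ret)
    {
        (void)read_ret;
        return 1;
    }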
Tested-by: Antti Seppälä <a.seppala@gmail.com>
Signed-off-by: Martin Storsjö <martin@martin.st>
This avoids reading any old data in the AVIOContext buffer after
the seek, and indicates to the mpegts demuxer that we've seeked,
avoiding continuity check errors.
Signed-off-by: Martin Storsjö <martin@martin.st>
Enhance seeking by demuxing until the requested timestamp is
reached within the segment selected by the seek code using the
playlist info.
Some mpegts streams don't have dts set for all packets, though,
so this seeking method doesn't work well in that case.
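A rough sketch of this kind of seek refinement, with hypothetical
names and assuming the playlist-based seek has already positioned
the input at the start of the selected segment:

    #include <libavformat/avformat.h>

    /* Sketch: keep demuxing and discarding packets until one with dts
     * at or past the requested timestamp shows up on the seek stream.
     * Both timestamps are assumed to be in the same time base. */
    static int demux_until_timestamp(AVFormatContext *ctx, int stream_index,
                                     int64_t seek_timestamp, AVPacket *pkt)
    {
        int ret;
        while ((ret = av_read_frame(ctx, pkt)) >= 0) {
            if (pkt->stream_index == stream_index &&
                pkt->dts != AV_NOPTS_VALUE && pkt->dts >= seek_timestamp)
                return 0;         /* reached the target; keep this packet */
            av_packet_unref(pkt); /* not there yet, drop and continue */
        }
        return ret;
    }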
Signed-off-by: Martin Storsjö <martin@martin.st>
When this demuxer was created, there didn't seem to be any
consensus on a common short name for this protocol. Now
the consensus seems to be to call it hls.
Signed-off-by: Martin Storsjö <martin@martin.st>