Allows setting an intended target latency while streaming, which clients can
measure against when operating in low-latency mode.
Signed-off-by: James Almer <jamrial@gmail.com>
In combination with the streaming option it constrains the values of a few elements,
to prevent clients from buffering too much data before starting presentation.
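As a rough usage illustration (not part of the original patches), assuming the two
features above are exposed as the ldash and target_latency muxer options, a
low-latency setup could be configured through the API along these lines:

    #include <libavformat/avformat.h>
    #include <libavutil/opt.h>

    /* Sketch only: option names (streaming, ldash, target_latency) and
     * values are assumptions based on the descriptions above. */
    static int configure_low_latency(AVFormatContext *dash_ctx)
    {
        int ret;
        if ((ret = av_opt_set(dash_ctx->priv_data, "streaming", "1", 0)) < 0)
            return ret;
        /* Constrain manifest elements so clients do not over-buffer. */
        if ((ret = av_opt_set(dash_ctx->priv_data, "ldash", "1", 0)) < 0)
            return ret;
        /* Advertise an intended target latency of 3 seconds to clients. */
        return av_opt_set(dash_ctx->priv_data, "target_latency", "3", 0);
    }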
Signed-off-by: James Almer <jamrial@gmail.com>
Implemented as a frag_duration muxer option and as a key=value entry in the
adaptation_sets muxer option. It has the same syntax as the seg_duration option.
A new frag_type option is also introduced to select the kind of fragmentation.
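For example (a sketch, not from the original patch; the frag_type value name
"duration" is an assumption here), 4-second segments cut into 1-second fragments
could be requested like this:

    #include <libavformat/avformat.h>
    #include <libavutil/opt.h>

    /* Sketch: 4 s segments, split into 1 s fragments by duration. */
    static int configure_fragmentation(AVFormatContext *dash_ctx)
    {
        int ret;
        if ((ret = av_opt_set(dash_ctx->priv_data, "seg_duration",  "4", 0)) < 0 ||
            (ret = av_opt_set(dash_ctx->priv_data, "frag_duration", "1", 0)) < 0)
            return ret;
        /* Start a new fragment each time frag_duration has elapsed. */
        return av_opt_set(dash_ctx->priv_data, "frag_type", "duration", 0);
    }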
Signed-off-by: James Almer <jamrial@gmail.com>
Implemented as a seg_duration key=value entry in the adaptation_sets muxer
option.
It has the same syntax as the global seg_duration option, and has precedence
over it if set.
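A sketch of the per-adaptation-set override, reusing the dash_ctx from the sketch
above (stream indexes, values and the exact key syntax are illustrative):

    /* Sketch: stream 0 overrides the global 4 s segment duration with 2 s;
     * stream 1 keeps the global value. */
    av_opt_set(dash_ctx->priv_data, "seg_duration", "4", 0);
    av_opt_set(dash_ctx->priv_data, "adaptation_sets",
               "id=0,seg_duration=2,streams=0 id=1,streams=1", 0);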
Signed-off-by: James Almer <jamrial@gmail.com>
The DASH muxer uses submuxers, and when such a submuxer has been allocated,
it is initially stored only in a temporary variable. It therefore leaks
if an error happens between the allocation and storing it permanently.
This commit changes this.
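A sketch of one way such a leak is avoided (names are stand-ins, not the actual
dashenc code): store the context in its permanent place right after allocation,
so the normal cleanup path frees it even if a later step fails.

    #include <libavformat/avformat.h>

    /* Illustrative stand-in for dashenc's per-stream state. */
    typedef struct OutputStream {
        AVFormatContext *ctx;
    } OutputStream;

    static int open_submuxer(OutputStream *os, const char *format_name)
    {
        AVFormatContext *ctx = avformat_alloc_context();
        if (!ctx)
            return AVERROR(ENOMEM);
        os->ctx = ctx;                       /* owned by os from here on */

        ctx->oformat = av_guess_format(format_name, NULL, NULL);
        if (!ctx->oformat)
            return AVERROR_MUXER_NOT_FOUND;  /* freed via os, not leaked */
        return 0;
    }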
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Reviewed-by: "Jeyapal, Karthick" <kjeyapal@akamai.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Reviewed-by: Jun Zhao <barryjzhao@tencent.com>
Reviewed-by: Jeyapal, Karthick <kjeyapal@akamai.com>
Signed-off-by: Steven Liu <lq@chinaffmpeg.org>
From https://aomediacodec.github.io/av1-isobmff/#codecsparam, the sample entry
4CC, profile, level, tier, and bitDepth parameters are all mandatory fields.
All the other fields are optional and mutually inclusive (all or none).
Fixes ticket #8049
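For illustration only (the concrete values are made up; the layout follows the
spec linked above):

    /* Mandatory fields only: sample entry 4CC, profile, level, tier, bit depth. */
    static const char codecs_short[] = "av01.0.04M.10";
    /* With all optional fields (monochrome, chroma subsampling, colour primaries,
     * transfer characteristics, matrix coefficients, full-range flag) appended. */
    static const char codecs_full[]  = "av01.0.04M.10.0.112.09.16.09.0";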
Signed-off-by: James Almer <jamrial@gmail.com>
codecpar->extradata is not going to change between packets. New extradata
is instead propagated using packet side data.
Use ff_alloc_extradata() as well.
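A minimal sketch of the side-data lookup involved (ff_alloc_extradata() itself is
an internal libavformat helper and is not shown; the size argument type follows
the current public API):

    #include <libavcodec/avcodec.h>

    /* Updated extradata, if any, arrives as packet side data; codecpar's
     * extradata does not change between packets. */
    static uint8_t *updated_extradata(const AVPacket *pkt, size_t *size)
    {
        return av_packet_get_side_data(pkt, AV_PKT_DATA_NEW_EXTRADATA, size);
    }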
Signed-off-by: James Almer <jamrial@gmail.com>
The return code is 64 bits, so this is more correct, especially in case the file
actually is of such a large size.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Apple doesn't have an official spec for LHLS. Meanwhile the hls.js player folks are
trying to standardize an open LHLS spec. The draft spec is available at https://github.com/video-dev/hlsjs-rfcs/blob/lhls-spec/proposals/0001-lhls.md
This option will also try to comply with the above open spec, until Apple's spec officially supports it.
Applicable only when @var{streaming} and @var{hls_playlist} options are enabled.
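Usage sketch, with dash_ctx being the dash muxer's AVFormatContext and assuming
the option is exposed as lhls:

    /* Sketch: LHLS only takes effect together with streaming and hls_playlist. */
    av_opt_set(dash_ctx->priv_data, "streaming",    "1", 0);
    av_opt_set(dash_ctx->priv_data, "hls_playlist", "1", 0);
    av_opt_set(dash_ctx->priv_data, "lhls",         "1", 0);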
This commit ensures that all (potentially long) filesystem activity is
performed when the user calls av_write_trailer on the DASH libavformat
context, not when freeing the context. Also, this defers media segment
deletion until after the media trailers are written.
When dashenc has to run for a long duration (say a 24x7 live stream), one can enable this option to ignore the I/O failure of a few segments' uploads due to intermittent network issues.
When the network connection recovers, dashenc will continue with the upload of the current segments, leading to the recovery of the stream.
The only native HLS implementation in the wild (Safari browser) doesn't
support WebM. And at least some MSE-based players (e.g. shaka-player)
cannot handle WebM media segments when playing HLS. So just skip non-mp4
streams from HLS manifests. Note that such streams will still be described
by the DASH manifest and therefore consumed by players supporting DASH.
When stream time bases are very fine grained (e.g. nanoseconds), a 32-bit
segment duration may overflow even for rather small segment durations (about
4 seconds long: 4 s in a nanosecond time base is 4,000,000,000 ticks, which
exceeds INT32_MAX). Therefore we use 64-bit values for segment durations.
The file name template options now support a new "$ext$" placeholder,
which is replaced with a filename extension specific for the selected
file format. This is useful for the new "auto" format mode, when
different streams may use different file formats, and it is not
possible to specify the correct file name extension exactly.
Resolves warnings in the log about webm segments not having webm extensions.
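A sketch of how the placeholder might be used, with dash_ctx being the dash
muxer's AVFormatContext (the template strings are illustrative):

    /* Sketch: let the muxer substitute the format-specific extension. */
    av_opt_set(dash_ctx->priv_data, "init_seg_name",
               "init-stream$RepresentationID$.$ext$", 0);
    av_opt_set(dash_ctx->priv_data, "media_seg_name",
               "chunk-stream$RepresentationID$-$Number%05d$.$ext$", 0);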
This commit restores the ability to create DASH streams with codecs
that require different containers that was lost after commit
2efdbf7367. It adds a new "auto" value for
the dash_segment_type option and makes it the default. When in this mode,
the segment format will be chosen based on the codec used in the stream:
webm for Vorbis, Opus, VP8 or VP9, mp4 otherwise.
Internally, the FLAC-in-ISOBMFF draft uses "fLaC" as the identifier for
FLACSampleEntry, and there seems to be no MPEG-DASH specification for the
in-manifest identifier for FLAC.
After testing the browsers' implementations, it seems like all of
the major browser vendors have decided to utilize the MIME type for
FLAC ("audio/flac") as the identifier. This change set leads to
that string being utilized for FLAC streams instead of the sample
entry identifier ("fLaC"), which is the default behavior.
Verified by auri_ on IRC to play with the major browsers.
For HEVC streams, only the FourCC tag is written, without profile, level etc.
This is breaking playout support in native Safari.
Native Safari playout expects the full info in the CODECS tag, or none at all.
Fixes a bug with HTTP DELETE when HTTP persistent connections are ON.
Right now, HTTP persistent connections are supported only for POSTs and PUTs.
HTTP DELETE will still open a new connection every time.
The mediastreamvalidator tool reports the error "Variant media_[N].m3u8 is
missing audio group" for audio streams in the HLS master playlist. As the audio
streams are already listed in the audio group, skip them as variant media
streams in the master playlist.
A SIDX atom being inserted for every MOOF atom increases the muxing overhead.
This behaviour can be disabled for the chunked CMAF format by enabling the global SIDX option of the mov muxer.
For example, bit depth should be printed as 10 instead of 0A. Thanks to Hendrik Leppkes for pointing this out.
Signed-off-by: James Almer <jamrial@gmail.com>
Fixes bug id #7386
The muxer overhead calculation was intended for the HLS playlist, as Apple's mediastreamvalidator tests were failing.
But applying the same fix to the DASH manifest proved counterproductive, as Bandwidth can be used for segment name templates.
Applicable only to the webm output format.
By default all segment filenames end with the .m4s extension.
When someone chooses the webm output format, we recommend they also override the relevant segment name options to end with the .webm extension. This patch will issue a warning for the same.
Right now the segment file format is chosen to be either mp4 or webm based on the codec format.
This patch makes that choice configurable by the user, instead of being decided by the muxer.
Also, with this change a per-stream choice of segment file format (based on codec type) is not possible.
All the output audio and video streams should be in the same file format.
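Usage sketch (the value "webm" is one of the documented choices, "mp4" being the
other):

    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>

    /* Sketch: request WebM segments explicitly instead of relying on the
     * codec-based default. */
    static int write_header_webm(AVFormatContext *dash_ctx)
    {
        AVDictionary *opts = NULL;
        int ret;
        av_dict_set(&opts, "dash_segment_type", "webm", 0);
        ret = avformat_write_header(dash_ctx, &opts);
        av_dict_free(&opts);
        return ret;
    }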
Enables one to test possibly nonstandard formats such as Opus or
FLAC in ISOBMFF, among other things.
This becomes much more useful if output segment format becomes an
option, or if the WebM segment feature gets removed.
It has never been working and has not been validated.
Additionally, mention that the segment file names should be changed
to end with webm instead of m4s, which is utilized for ISOBMFF
fragments.
There is a separate muxer (webmdashenc.c) for supporting VP9+WebM output in DASH.
Hence in this muxer we will focus on supporting VP9 in MP4.
Playout support of VP9+MP4 has been verified in Chrome and Firefox.
The reference HLS spec, draft-pantos-http-live-streaming-20, supports fMP4 files;
it describes version 7 of the HLS protocol.
Suggested-by: Ronak <ronak2121@yahoo.com>
Signed-off-by: Steven Liu <lq@chinaffmpeg.org>
The logic is applicable only when use_template is enabled and use_timeline
is disabled. The logic monitors the flow of segment indexes. If a stream's
segment index value is not at the expected real-time position, then
the logic corrects that index value.
Typically this logic is needed in live streaming use cases. Network
bandwidth fluctuations are common during long-running streams. Each
fluctuation can cause the segment indexes to fall behind the expected real-time
position. Without this logic, players will not be able to consume
the content, even after the encoder's network condition comes back to a
normal state.
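A rough sketch of the idea (not the actual dashenc code):

    #include <stdint.h>

    /* If the next segment index has fallen behind the index expected from
     * the elapsed wallclock time, jump forward to the expected value. */
    static int64_t corrected_segment_index(int64_t next_index,
                                           int64_t elapsed_us,
                                           int64_t seg_duration_us)
    {
        int64_t expected = elapsed_us / seg_duration_us + 1;
        return next_index < expected ? expected : next_index;
    }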
availability time of Nth segment = availabilityStartTime + (N*segment duration) - availabilityTimeOffset.
This field helps to reduce the latency by about a segment duration in streaming mode.
@availabilityStartTime specifies the anchor for the computation of the earliest
availability time (in UTC) for any Segment in the Media Presentation.
As per this requirement, @availabilityStartTime should be set to the
wallclock time at which the first frame of the first segment begins encoding.
But it was getting set only when the first segment was completely ready. This
patch makes the required correction, which is mainly needed to reduce the
latency in live streaming use cases.
Calling 'write_manifest' from 'write_header' was causing creation of the
first MPD with invalid values, e.g. a zero @duration param value. Also,
the manifest files (MPD or M3U8s) should only be created when at least
one media frame is ready for consumption.
When use_template is enabled and use_timeline is disabled, typically
it is required to generate the segments at the configured segment duration
on average. This commit is particularly needed to handle segmentation
when video frame rates are fractional, like 29.97 or 59.94 fps.
There are use cases where an average segment duration needs to be configured
and the muxer is expected to maintain it. Using the name 'min_seg_duration'
would therefore be misleading, so the parameter is renamed to 'seg_duration',
which can mean either the minimum or the average segment duration depending on
the use case. The additional updates needed for this functionality are made in
the subsequent patches of this patch series.
Currently the HTTP end of chunk is signalled implicitly in dashenc_io_open().
This means playlist HTTP writes would have to wait up to a segment duration to signal the end of chunk, causing delays.
This patch fixes that problem and improves performance.
This is required for AV playout from master.m3u8.
Otherwise master.m3u8 lists only video-only and/or audio-only streams.
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
This is to take full advantage of the Common Media Application Format (CMAF).
Now the server can generate one piece of content and serve both HLS and DASH players.
Reviewed-by: Steven Liu <lq@onvideo.cn>