Since this is structurally quite different from normal RTP
(multiple streams are muxed into one single mpegts stream,
which is packetized into one single RTP session), it is kept
as a separate muxer.
Since this also behaves structurally differently from normal
RTP, all of the other muxers that do chained RTP muxing
(rtsp, sap, mp4) would need to be updated similarly to handle
this - in particular, to create one single rtp_mpegts muxer
for the whole presentation instead of one rtp muxer per stream.
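As a rough sketch of what that means for API users (not code from this
commit, and written against the current libavformat API), a single
rtp_mpegts output context shared by all streams of the presentation
could be set up along these lines:

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>

/* Open one rtp_mpegts muxer for the whole presentation and add every
 * input stream to it, instead of one RTP context per stream. The uri
 * would typically be an rtp:// destination; all of this is only
 * illustrative. */
static int open_rtp_mpegts(AVFormatContext **out, AVFormatContext *in,
                           const char *uri)
{
    int ret = avformat_alloc_output_context2(out, NULL, "rtp_mpegts", uri);
    if (ret < 0)
        return ret;
    for (unsigned i = 0; i < in->nb_streams; i++) {
        AVStream *st = avformat_new_stream(*out, NULL);
        if (!st)
            return AVERROR(ENOMEM);
        ret = avcodec_parameters_copy(st->codecpar, in->streams[i]->codecpar);
        if (ret < 0)
            return ret;
        st->time_base = in->streams[i]->time_base;
    }
    if (!((*out)->oformat->flags & AVFMT_NOFILE)) {
        ret = avio_open(&(*out)->pb, uri, AVIO_FLAG_WRITE);
        if (ret < 0)
            return ret;
    }
    return avformat_write_header(*out, NULL);
}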
Signed-off-by: Martin Storsjö <martin@martin.st>
The packetizer only supports splitting at GOB headers - if such
headers aren't available frequently enough, it splits at an
arbitrary byte offset (not at a macroblock boundary either, which
would be allowed by the spec) and sends a payload header pretending
that it starts with a GOB header.
As long as a receiver doesn't try to handle such cases cleverly
but just drops broken frames, this shouldn't matter too much
in practice.
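For reference, a byte-aligned GOB or picture start code begins with
16 zero bits followed by a set bit, so a minimal, illustrative lookup
for such split points (not the actual rtpenc_h263 code) could be:

#include <stddef.h>
#include <stdint.h>

/* Return the first byte-aligned GOB/picture start code in [buf, end),
 * or NULL if there is none, in which case the packetizer has to fall
 * back to splitting at an arbitrary byte offset as described above. */
static const uint8_t *find_gob_header(const uint8_t *buf, const uint8_t *end)
{
    for (const uint8_t *p = buf; p + 2 < end; p++) {
        if (p[0] == 0 && p[1] == 0 && (p[2] & 0x80))
            return p;
    }
    return NULL;
}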
Signed-off-by: Martin Storsjö <martin@martin.st>
* commit 'fccfc22d1f304aef42a0b960e4c1d55ce67107f5':
libavformat: Build hevc.o when building the RTP muxer
Merged-by: Michael Niedermayer <michaelni@gmx.at>
The RTP muxer enables the actual codepaths within sdp.c,
which depend on hevc.o since e5cfc8fd.
This fixes builds with --disable-everything --enable-muxer=rtp.
Signed-off-by: Martin Storsjö <martin@martin.st>
This is mostly meant to serve as a reference example of how to
segment the output from the mp4 muxer. It is capable of writing
the segment list in four different ways:
- SegmentTemplate with SegmentTimeline
- SegmentTemplate with implicit segments
- SegmentList with individual files
- SegmentList with one single file per track, and byte ranges
The muxer is able to serve live content (with optional windowing)
or create a static segmented MPD.
In advanced cases, users will probably want to do the segmenting
in their own application code.
Signed-off-by: Martin Storsjö <martin@martin.st>
This patch adds the ability to generate WebM DASH manifest XML using
ffmpeg. A sample command line would be as follows:
ffmpeg \
-f webm_dash_manifest -i video1.webm \
-f webm_dash_manifest -i video2.webm \
-f webm_dash_manifest -i audio1.webm \
-f webm_dash_manifest -i audio2.webm \
-map 0 -map 1 -map 2 -map 3 \
-c copy \
-f webm_dash_manifest \
-adaptation_sets "id=0,streams=0,1 id=1,streams=2,3" \
manifest.xml
It works by exporting the necessary fields as metadata tags in
matroskadec and using those values to write the appropriate XML fields
as per the WebM DASH Specification [1]. Some ideas are adopted from the
webm-tools project [2].
[1] https://sites.google.com/a/webmproject.org/wiki/adaptive-streaming/webm-dash-specification
[2] https://chromium.googlesource.com/webm/webm-tools/+/master/webm_dash_manifest/
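As a rough illustration of that mechanism (the tag name below is only
an example; the real keys are the ones set in matroskadec.c), the
manifest-writing side can read such per-stream values back with
av_dict_get():

#include <libavformat/avformat.h>
#include <libavutil/dict.h>

/* Fetch a value that the webm_dash_manifest demuxer exported as a
 * stream metadata tag, e.g. get_dash_tag(fmt->streams[0], "cues_start");
 * the key used here is illustrative only. */
static const char *get_dash_tag(const AVStream *st, const char *key)
{
    const AVDictionaryEntry *e = av_dict_get(st->metadata, key, NULL, 0);
    return e ? e->value : NULL;
}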
Signed-off-by: Vignesh Venkatasubramanian <vigneshv@google.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
It has not been properly maintained for years and there is little hope
of that changing in the future.
It appears simpler to write a new replacement from scratch than
to unbreak it.
* commit '2dc265619a2fc9c6f9aff7ac2bcdbcb90e9610cb':
lavf: group dump functions together
Conflicts:
libavformat/utils.c
Merged-by: Michael Niedermayer <michaelni@gmx.at>