Rev #2: Fixed doubled header writing, checked that FATE runs without errors
Rev #3: Fixed coding style
This commit addresses the following scenario:
We are using ffmpeg to transcode or remux mkv (or something else) to mkv. The result is streamed on-the-fly to an HTML5 client (streaming starts while ffmpeg is still running). The problem is that the client cannot detect the duration, because the duration is only written to the mkv at the end of the transcoding/remuxing process: in matroskaenc.c, the duration is only written during mkv_write_trailer but not during mkv_write_header.
The approach:
FFmpeg currently puts quite some effort into estimating the durations of source streams, but in many cases the source stream durations are still left at 0, and these durations are never mapped to or used for output streams. As much as I would have liked to deduce or estimate output durations from input stream durations, I realized that this is a hard task (as Nicolas already mentioned in a previous conversation). It would involve changes to the duration calculation/estimation/deduction for input streams and correctly propagating these durations to the output streams or the output context.
So I looked for a simple and small solution with better chances of getting accepted. In webmdashenc.c I found that a duration is written during write_header, and that duration is taken from the streams' metadata, so I decided on a similar approach.
And here's what it does:
First, it checks the duration of the AVFormatContext. In typical cases this value is not set, but it is set when the user has specified a recording time or an end time via the -t or -to parameters.
Next, it looks for a DURATION field in the metadata of the output context (AVFormatContext::metadata). This only exists when the user has explicitly specified a DURATION metadata value on the command line.
Finally, it iterates over all streams looking for a "DURATION" metadata entry (this works unless the option "-map_metadata -1" has been specified) and determines the maximum value.
The precedence is as follows: 1. Use the duration of the AVFormatContext - 2. Use an explicitly specified metadata duration value - 3. Use the maximum (mapped) metadata duration over all streams.
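For illustration, here is a minimal sketch of that precedence (not the literal patch; the helper name is hypothetical and durations are assumed to be in AV_TIME_BASE units):

/* needs libavformat/avformat.h and libavutil/parseutils.h */
static int64_t get_header_duration(AVFormatContext *s)
{
    const AVDictionaryEntry *e;
    int64_t duration, max = 0;
    int i;

    /* 1. set e.g. when a recording_time/end_time was given via -t/-to */
    if (s->duration > 0)
        return s->duration;

    /* 2. explicit "-metadata DURATION=..." on the output context */
    if ((e = av_dict_get(s->metadata, "DURATION", NULL, 0)) &&
        av_parse_time(&duration, e->value, 1) >= 0)
        return duration;

    /* 3. maximum DURATION found in the mapped per-stream metadata */
    for (i = 0; i < s->nb_streams; i++) {
        e = av_dict_get(s->streams[i]->metadata, "DURATION", NULL, 0);
        if (e && av_parse_time(&duration, e->value, 1) >= 0)
            max = FFMAX(max, duration);
    }
    return max;
}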
To test this:
1. With explicit recording time:
ffmpeg -i file:"src.mkv" -loglevel debug -t 01:38:36.000 -y "dest.mkv"
2. Take duration from metadata specified via command line parameters:
ffmpeg -i file:"src.mkv" -loglevel debug -map_metadata -1 -metadata Duration="01:14:33.00" -y "dest.mkv"
3. Take duration from mapped input metadata:
ffmpeg -i file:"src.mkv" -loglevel debug -y "dest.mkv"
Regression risk:
Very low, IMO, because it only affects the header while ffmpeg is still running. When ffmpeg completes the process, the duration in the header is rewritten with the usual value (same as without this commit).
Signed-off-by: SoftWorkz <softworkz@hotmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The header was never installed and the function is only used in libavformat
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
* commit 'e3453fd44480d903338c663238bf280215dd9a07':
matroska: Write the field order information
Merged-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Adding early support for a subset of the proposed colour elements
according to the latest version of the spec:
https://mailarchive.ietf.org/arch/search/?email_list=cellar&gbt=1&index=hIKLhMdgTMTEwUTeA4ct38h0tmE
Like matroskadec, I've left out elements for pix_fmt related things
as there still seems to be some discussion around these.
The new elements are exposed under strict experimental mode.
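For illustration only, a rough sketch of that gating (the element ID and surrounding context here are hypothetical, not necessarily what the patch uses):

/* emit a colour element only when experimental features were requested */
if (s->strict_std_compliance <= FF_COMPLIANCE_EXPERIMENTAL &&
    codec->color_primaries != AVCOL_PRI_UNSPECIFIED)
    put_ebml_uint(pb, MATROSKA_ID_VIDEOCOLORPRIMARIES, /* hypothetical ID */
                  codec->color_primaries);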
Signed-off-by: Neil Birkbeck <neil.birkbeck@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Currently, AVStream contains an embedded AVCodecContext instance, which
is used by demuxers to export stream parameters to the caller and by
muxers to receive stream parameters from the caller. It is also used
internally as the codec context that is passed to parsers.
In addition, it is also widely used by callers as the decoding (when
demuxing) or encoding (when muxing) context, though this has been
officially discouraged since Libav 11.
There are multiple important problems with this approach:
- the fields in AVCodecContext are in general one of
* stream parameters
* codec options
* codec state
However, it's not clear which ones are which. Consequently, it is
unclear which fields a demuxer is allowed to set or a muxer is allowed
to read. This leads to erratic behaviour depending on whether decoding
or encoding is being performed (and whether it uses the AVStream
embedded codec context).
- various synchronization issues arising from the fact that the same
context is used by several different APIs (muxers/demuxers,
parsers, bitstream filters and encoders/decoders) simultaneously, with
no clear rules for who can modify what and with the different
processes typically delayed with respect to each other.
- avformat_find_stream_info() making it necessary to support opening
and closing a single codec context multiple times, thus
complicating the semantics of freeing various allocated objects in the
codec context.
Those problems are resolved by replacing the AVStream embedded codec
context with a newly added AVCodecParameters instance, which stores only
the stream parameters exported by the demuxers or read by the muxers.
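For illustration, a typical caller-side pattern with the new API (a sketch, not part of this commit; fmt_ctx and stream_index are assumed caller variables): the exported parameters are copied into a freshly allocated decoder context instead of decoding through the AVStream embedded context.

AVStream *st = fmt_ctx->streams[stream_index];
const AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
AVCodecContext *dec_ctx = avcodec_alloc_context3(dec);
int ret;

if (!dec_ctx)
    return AVERROR(ENOMEM);

/* copy the demuxed stream parameters into the decoding context */
if ((ret = avcodec_parameters_to_context(dec_ctx, st->codecpar)) < 0 ||
    (ret = avcodec_open2(dec_ctx, dec, NULL)) < 0) {
    avcodec_free_context(&dec_ctx);
    return ret;
}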
"language" is not an offical matroska tag.
Track languages are specified with the MATROSKA_ID_TRACKLANGUAGE ebml.
Writing the tag overrides the ebml specified language during playback with
libav and some other players.
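A sketch of the intended behaviour (simplified, assuming the existing matroskaenc.c helpers): the language is written into the track entry only, and the "language" key is skipped when writing the Tags element.

tag = av_dict_get(st->metadata, "language", NULL, 0);
put_ebml_string(pb, MATROSKA_ID_TRACKLANGUAGE,
                tag && tag->value ? tag->value : "und");
/* ...and no "language" SimpleTag is emitted for the track */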
Signed-off-by: Anton Khirnov <anton@khirnov.net>
* commit '948f3c19a8bd069768ca411212aaf8c1ed96b10d':
lavc: Make AVPacket.duration int64, and deprecate convergence_duration
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
Note that convergence_duration had another meaning, one which was in
practice never used. The only real use for it was as a 64-bit replacement
for the duration field. It's better to just make duration 64 bits, and
to get rid of convergence_duration.
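Roughly, the resulting public struct looks like this (an abridged sketch, not the exact diff):

typedef struct AVPacket {
    /* ... */
    int64_t duration;                 /* was int; 0 if unknown */
#if FF_API_CONVERGENCE_DURATION
    attribute_deprecated
    int64_t convergence_duration;     /* deprecated, to be removed */
#endif
} AVPacket;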
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
And update the preference for the newer codecs now that the libraries
seem stable and widespread enough.
Bug-Id: 695
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Fixing small leaks that can occur when mkv_write_tracks fails in mkv_write_header
(e.g., if a video track has an unknown codec). Also changing mkv_write_seekhead to take
the MatroskaMuxContext to avoid having dangling pointers.
Signed-off-by: Neil Birkbeck <neil.birkbeck@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Compute individual stream durations in the Matroska muxer.
Write them as string tags in the same format as the mkvmerge tool does.
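For illustration, a sketch of that formatting (a hypothetical helper; the duration is assumed here to be in AV_TIME_BASE microseconds):

static void format_duration_tag(char *buf, size_t size, int64_t duration_us)
{
    int    hours   =  duration_us / (3600LL * AV_TIME_BASE);
    int    minutes = (duration_us / (  60LL * AV_TIME_BASE)) % 60;
    double seconds = (duration_us % (  60LL * AV_TIME_BASE)) / (double)AV_TIME_BASE;

    /* same shape as mkvmerge's DURATION tag, e.g. "01:23:45.678901234" */
    snprintf(buf, size, "%02d:%02d:%012.9f", hours, minutes, seconds);
}

The resulting string is then written as a per-stream DURATION tag in the Tags element.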
Signed-off-by: Sasi Inguva <isasi@google.com>
* commit 'b14086ca38efa1a86cb0f0c6aa147b05f698877b':
mkv: Correctly report the latest packet had been flushed
Merged-by: Michael Niedermayer <michaelni@gmx.at>
Per the Matroska Block Structure [1], the 0th bit of the flags must not
be set for keyframes (unlike SimpleBlocks). For Blocks, a keyframe is
inferred from the absence of a ReferenceBlock element (as done by
matroskadec). This CL writes the flag correctly and inserts the
ReferenceBlock element for non-keyframes. The timestamp inserted is
that of the immediately preceding frame (which is correct for VP8 and VP9,
the only two codecs using the Matroska Block element as of now). It
also considers all non-video frames (audio, subtitles, metadata) to
be keyframes.
[1] http://www.matroska.org/technical/specs/index.html#block_structure
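A simplified sketch of the logic (helper and field names only approximate matroskaenc.c; this is not the literal patch):

if (blockid == MATROSKA_ID_SIMPLEBLOCK) {
    /* SimpleBlock: the keyframe bit lives in the flags byte */
    flags = keyframe ? 0x80 : 0;
} else {
    /* Block: the keyframe bit must stay clear; non-keyframes are marked
     * by a ReferenceBlock carrying the previous frame's timestamp */
    flags = 0;
    if (!keyframe)
        put_ebml_sint(pb, MATROSKA_ID_BLOCKREFERENCE, track->last_timestamp);
}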
Signed-off-by: Vignesh Venkatasubramanian <vigneshv@google.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
Generally, libavformat exports cover art pictures as video streams with
1 packet and AV_DISPOSITION_ATTACHED_PIC set. Only matroskadec exported
it as an attachment with codec_id set to AV_CODEC_ID_MJPEG.
Obviously, this should be consistent, so change the Matroska demuxer to
export an AV_DISPOSITION_ATTACHED_PIC pseudo video stream.
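A sketch of the demuxer-side result (simplified; the exact picture codec id depends on the attachment's image type):

st->disposition      |= AV_DISPOSITION_ATTACHED_PIC;
st->codec->codec_type = AVMEDIA_TYPE_VIDEO;
st->codec->codec_id   = AV_CODEC_ID_MJPEG;
/* the single picture ends up in st->attached_pic rather than in a
 * separate attachment stream */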
Matroska muxing is probably incorrect too. I know that it can create
broken files with an audio track and just 1 video frame when e.g.
remuxing mp3 with APIC to mkv. But for now this commit does not change
anything about muxing, and it also continues to write attachments with
AV_CODEC_ID_MJPEG in case the muxing application has special knowledge
that the Matroska file is broken in this way.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
This patch adds support for WebM live muxing by adding a new WebM
Chunk muxer. It writes out live WebM chunks that can be used for
playback by live DASH clients.
Please see muxers.texi for sample usage.
Signed-off-by: Vignesh Venkatasubramanian <vigneshv@google.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
Generally, libavformat exports cover art pictures as video streams with
1 packet and AV_DISPOSITION_ATTACHED_PIC set. Only matroskadec exported
it as an attachment with codec_id set to AV_CODEC_ID_MJPEG.
Obviously, this should be consistent, so change the Matroska demuxer to
export an AV_DISPOSITION_ATTACHED_PIC pseudo video stream.
Matroska muxing is probably incorrect too. I know that it can create
broken files with an audio track and just 1 video frame when e.g.
remuxing mp3 with APIC to mkv. But for now this commit does not change
anything about muxing, and it also continues to write attachments with
AV_CODEC_ID_MJPEG in case the muxing application has special knowledge
that the Matroska file is broken in this way.
Fixes trac #4423.
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>