c801ab43c3 caused a regression: The stream
number is now parsed with strtoll without a fixed base; as a
consequence, the "010" in a variant stream mapping like "a:010" is now
treated as an octal number (i.e. as eight, not ten). This was not
intended and may break some scripts, so this commit restores the old
behaviour.
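For illustration (not part of the original commit message), the whole
difference is the base argument passed to strtoll():

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* base 0 auto-detects the base, so a leading '0' selects octal */
        printf("%lld\n", strtoll("010", NULL, 0));  /* prints 8  */
        /* a fixed base of 10 restores the old, intended behaviour */
        printf("%lld\n", strtoll("010", NULL, 10)); /* prints 10 */
        return 0;
    }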
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Fixes: 22082/clusterfuzz-testcase-minimized-ffmpeg_DEMUXER_fuzzer-5688619118624768
Fixes: crash from V-codecs/Theora/theora_testsuite_broken/multi2.ogg
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Suggested-by: Lynne on IRC
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Previously, prompeg_write() would only report to the caller that bytes were
written when a FEC packet was actually created. Not all RTP packets are
expected to generate a FEC packet however, so this behavior was causing
avio to retry writing the RTP packet, eventually forcing the FEC state
machine to send a FEC packet erroneously (and so breaking out of the
retry loop).
This was resulting in incorrect FEC data being generated, and far too
many FEC packets to be sent (~100% FEC overhead).
Fixes #7863
Signed-off-by: David Holroyd <david.holroyd@m2amedia.tv>
Fixes: division by zero
Fixes: 23162/clusterfuzz-testcase-minimized-ffmpeg_DEMUXER_fuzzer-4856420817436672
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: signed integer overflow: 9223372036854775807 - -1 cannot be represented in type 'long'
Fixes: 23167/clusterfuzz-testcase-minimized-ffmpeg_DEMUXER_fuzzer-6425051741290496
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Two kinds of errors can happen when working with dynamic buffers:
(Re)allocation errors or truncation errors (one has to truncate the
buffer to a size of INT_MAX because avio_close_dyn_buf() and
avio_get_dyn_buf() both return an int). Right now, avio_get_dyn_buf()
returns an empty buffer in either case. But given that
avio_get_dyn_buf() does not destroy the dynamic buffer, one can return
the buffer in case of truncation and let the user check the error flags
and decide for themselves instead of hardcoding a single way to proceed
in case of truncation.
(This actually restores the behaviour from before commit
163bb9ac0af495a5cb95441bdb5c02170440d28c.)
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
This was originally done in 568e18b15e
as a precaution against integer overflows, but it is actually easy to
support the full range of int without overflows.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
If adding two ints overflows, it doesn't matter whether the result will
be stored in an unsigned or not; and checking afterwards does not make it
retroactively defined.
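A hedged sketch of the kind of pre-check that keeps such an addition defined
(illustrative only, not the exact code touched by this commit):

    #include <limits.h>

    /* store a + b in *sum only if the addition cannot overflow */
    static int add_int_checked(int a, int b, int *sum)
    {
        if ((b > 0 && a > INT_MAX - b) ||
            (b < 0 && a < INT_MIN - b))
            return 0;       /* would overflow: refuse before doing the add */
        *sum = a + b;
        return 1;
    }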
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
This causes regressions in end-to-end timestamps with mp3s and ffmpeg.
The revert is to avoid this regression in the 4.3 release
See: [FFmpeg-devel] [PATCH] Don't adjust start time for MP3 files; packets are not adjusted.
This reverts commit 460132c998.
Fixes: out of array access
Fixes: 22520/clusterfuzz-testcase-minimized-ffmpeg_DEMUXER_fuzzer-5100297658826752
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: regression since e983197cbc
Fixes: out of array read
Fixes: 22185/clusterfuzz-testcase-minimized-ffmpeg_DEMUXER_fuzzer-5662069073641472
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Lynne <dev@lynne.ee>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes regression since 9ad47762c1
Fixes: out of array access
Fixes: 22172/clusterfuzz-testcase-minimized-ffmpeg_DEMUXER_fuzzer-5658535590625280
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Lynne <dev@lynne.ee>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: out of array access
Fixes: 22686/clusterfuzz-testcase-minimized-ffmpeg_DEMUXER_fuzzer-5121369624018944
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Prevent codecpar->codec_id from getting out of sync with the codec instantiated for probing.
Signed-off-by: Samuel Foss <sfoss@google.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
E.g. the command
ffprobe -show_format -i fate-suite/aac/foo.aac -loglevel 99
will dump the following trace messages when start_time is AV_NOPTS_VALUE:
[aac @ 0x55bf8e1f3dc0] stream 0: start_time: -326791809695.818 duration: 2.174
[aac @ 0x55bf8e1f3dc0] format: start_time: -9223372036854.775 duration: 2.174 bitrate=120 kb/s
After this fix, the start_time is dumped as "NOPTS".
Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
Up until now, the HLS muxer uses av_strtok() to split an input string
controlling parameters of the VariantStreams and then duplicates
parts of this string containing parameters such as the language or the
name of the VariantStream. But these parts are proper zero-terminated
strings of their own that are never modified later on, so one can simply
use the substring as-is without creating a copy. This commit implements
this.
The same also happened for the string controlling the closed caption
groups.
Furthermore, add const to indicate that the pointers to these substrings
are not used to modify them and also to indicate that these strings are
not allocated on their own.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Up until now, the HLS muxer duplicated a string for every VariantStream,
although neither the original nor the copies are ever modified. So use
the original directly and stop copying.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The old resync logic had some bugs; for example, the packet size could get stuck
at 192 bytes, because pos47_full was not updated for every packet, and for
unseekable inputs the resync logic simply skipped some 0x47 sync bytes,
therefore the calculated distance between sync bytes was a multiple of 188
bytes.
AVIO only buffers a single packet (for UDP/mpegts, that usually means 1316
bytes), so for every ten consecutive 188-byte MPEGTS packets there was always a
seek failure, and that caused the old code to not find the 188 byte pattern
across 10 consecutive packets.
This patch changes the custom logic to the one which is used when probing to
determine the packet size. This was already proposed as a FIXME a long time
ago...
Uses ff_get_wav_header() in riffdec.c
Signed-off-by: Zane van Iperen <zane@zanevaniperen.com>
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
7546ac2fee made it so that the start_time for mp3 files is
adjusted for skip_samples. However, this appears incorrect because
subsequent packet timestamps are not adjusted and skip_samples are
applied by deleting data from a packet without changing the timestamp.
E.g., we are told the start_time is ~25ms and we get a packet with a
timestamp of 0 that has had the skip_samples discarded from it. As such,
rendering engines may incorrectly discard everything prior to the
25ms, thinking that is where playback should officially start. Since the
samples were deleted without adjusting timestamps though, the true
start_time is still 0.
Other formats like MP4 with edit lists will adjust both the start
time and the timestamps of subsequent packets to avoid this issue.
Signed-off-by: Dale Curtis <dalecurtis@chromium.org>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
It avoids leaving dangling pointers behind in memory.
Also remove redundant checks for whether the URLContext to be closed is
already NULL.
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Some flac muxers write a truncated metadata picture size if the picture
data does not fit in 24 bits. Detect this by truncating the size found inside
the picture block and, if it matches the block size, use it and read the rest
of the picture data.
This workaround is only for flac files and not ogg files with flac
METADATA_BLOCK_PICTURE comments, and it can be disabled with a strict level
above normal. Currently there is a 500 MB limit on the truncated size to protect
against large memory allocations.
The truncation bug in lavf flacenc was fixed in e447a4d112
but based on existing broken files, other unknown flac muxers seem to truncate as well.
Before the fix a broken flac file for reproduction could be generated with:
ffmpeg -f lavfi -i sine -f lavfi -i color=red:size=2400x2400 -map 0:0 -map 1:0 -c:v:0 bmp -disposition:1 attached_pic -t 1 test.flac
Fixes ticket 6333
Signed-off-by: Anton Khirnov <anton@khirnov.net>
ff_id3v2_parse_apic/chapters/priv/priv_dict all had a parameter
extra_meta of type ID3v2ExtraMeta ** as if the functions wanted to make
*extra_meta point to something else. But they don't, so just use an
ID3v2ExtraMeta *.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Up until now, the ID3v2ExtraMeta structure (which is used when parsing
ID3v2 tags containing attached pictures, chapters etc.) contained a
pointer to separately allocated data that depended on the type of the
tag. Yet the difference of the sizes of the largest and the smallest of
these structures is fairly small, so that it is better to simply include
a union of all the possible types of tag-dependent structures in
ID3v2ExtraMeta. This commit implements this.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
If the write_id3v2 option is set, the aiff muxer would write id3v2 tags
if there is global metadata or if there are attached pics to write.
Chapters are ignored in this check that precedes writing id3v2 tags.
Yet 47ac344970 added support for writing
chapters as id3v2 tags, so one should check for the existence of chapters,
too; otherwise the chapters would only be written in case there is
global metadata or an attached pic.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The Xiph foundation never standardized either Daala or its mapping in Ogg,
and all files that were created are undecodable without knowledge of the
git hash.
The description of AVOutputFormat.init contains the statement that "this
method must not write output". Due to this, the webm_chunk muxer defers
opening the AVIOContext for the child muxer until avformat_write_header(),
i.e. there is no AVIOContext when the sub-muxer's avformat_init_output()
is called. But this violates the documentation of said function which
requires the AVFormatContext to have an already opened AVIOContext.
This commit fixes this.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Don't use the functions for searching substrings when all one is
looking for is a char anyway. Given that there is already a standard
library function for "find last occurrence of a char in a string", one can
also remove a custom loop.
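As a hedged illustration of the idea (the string being searched is made up
for the example):

    #include <string.h>

    /* find the file name extension: strrchr() replaces a hand-written
     * "find last occurrence of a char" loop */
    static const char *get_extension(const char *filename)
    {
        const char *dot = strrchr(filename, '.');
        return dot ? dot + 1 : NULL;
    }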
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The current parsing process for adaptation_sets does not guarantee
every adaptation set to contain at least one stream, because the loop
exits immediately as soon as the end of the string has been reached,
without checking whether the currently active adaptation set group is
lacking a stream. This would lead to segfaults later on as the rest of
the code presumed that every adaptation set contains a stream. This
commit fixes this by erroring out when the last adaptation set group
is incomplete.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The WebM DASH manifest muxer uses a loop to parse the adaptation_sets
string (which is given by the user and governs which AVStreams are
mapped to what adaptation set) and the very beginning of this loop is
"if (*p == ' ') continue;". This of course leads to an infinite loop if
the condition is true. It is true if e.g. the string begins with ' ' or
if there is more than one ' ' between different adaptation set groups.
To fix this, the parsing process has been modified to consume the space
if it is at a place where it can legitimately occur, i.e. when a new
adaptation set group is expected. The latter restriction implies that an
error is returned if a space exists where none is allowed to exist.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The syntax of the adaptation_sets string by which the user determines
the mapping of AVStreams to adaptation sets is
"id=x,streams=a,b,c id=y,streams=d,e" (means: the streams with the
indices a, b and c belong to the adaptation set with id x). Yet there
was no check for whether these indices were actual numbers and if there
is a number whether it really extends to the next ',', ' ' or to the
end of the string or not. This commit adds a check for this.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
In order to parse a number from a string, the WebM DASH manifest muxer
would duplicate (via heap-allocation) the part of the string that
contains the number, then read the number via atoi() and then free the
duplicate again. This has been replaced by simply using strtoll() (which
in contrast to atoi() has defined behaviour when the number is not
representable).
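A hedged sketch of the resulting pattern (names are illustrative, not the
muxer's actual ones):

    #include <errno.h>
    #include <stdlib.h>

    /* parse a stream index directly out of the larger string,
     * without strndup()/atoi() and with defined range handling */
    static int parse_index(const char *p, const char **endp, long long *out)
    {
        char *end;
        long long v;
        errno = 0;
        v = strtoll(p, &end, 10);
        if (end == p || errno == ERANGE)    /* no digits, or out of range */
            return -1;
        *out  = v;
        *endp = end;        /* caller continues at the following ',' or ' ' */
        return 0;
    }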
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Since commit c5324d92c5 all custom
interleave_packet() functions always return clean packets (even on
error), so that unreferencing manually can be removed.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
AVStream.request_probe as well as AVStream.mux_ts_offset are below the
separator of public and private fields, so that a further "NOT PART OF
PUBLIC API" is redundant.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Move the copying of the frame to vos_data further up in the function,
so that when writing the actual frame data for the first frame, it's
clear that the stream really is in annex b format, for the cases where
we create extradata from the first frame.
Alternatively - we could invert the checks for bitstream format. If
extradata is missing, we can't pretend that the bitstream is in
mp4 form, because we can't even know the NAL unit length prefix size
in that case.
Also avoid creating extradata for AVC intra. If the track tag is
an AVC intra tag, don't copy the frame into vos_data - this matches
other existing cases of how vos_data and TAG_IS_AVCI interact in
other places.
Signed-off-by: Martin Storsjö <martin@martin.st>
av_stream_get_side_data() tells the caller whether a stream has side
data of a specific type; if present it can also tell the caller the size
of the side data via an optional argument. The Matroska muxer always
used this optional argument, although it doesn't really need the size,
as the relevant side-data are not buffers, but structures. So change
this.
Furthermore, relying on the size also made the code susceptible to
a quirk of av_stream_get_side_data(): It only sets the size argument if
it found side data of the desired type. mkv_write_video_color() checks
for side-data twice with the same variable for the size without resetting
the size in between; if the second type of side-data isn't present, the
size will still be what it was after the first call. This was not
dangerous in practice, as the check for the existence of the second
side-data compared the size with the expected size, so it would only be
problematic if lots of elements were to be added to AVContentLightMetadata.
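A hedged sketch of that quirk (the size parameter is an int * in the
libavformat version targeted here; newer releases changed the signature):

    #include <libavformat/avformat.h>

    static void check_color_side_data(const AVStream *st)
    {
        int size = 0;
        uint8_t *sd;

        sd = av_stream_get_side_data(st, AV_PKT_DATA_MASTERING_DISPLAY_METADATA,
                                     &size);
        /* if the next lookup finds no side data, 'size' silently keeps
         * the value written by the call above */
        sd = av_stream_get_side_data(st, AV_PKT_DATA_CONTENT_LIGHT_LEVEL, &size);
        (void)sd;
    }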
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Some real-world sites use an authorization header with a bearer token; when
combined with lengthy request parameters to identify the video segment,
it's rather trivial these days to have a request body of more than 4k bytes.
MAX_URL_SIZE is hard-coded to 4k bytes in libavformat/internal.h, and
HTTP_HEADERS_SIZE is 4k as well in libavformat/http.h, so this patch increases
the buffer size to 8k, as that is the default request body limit in Apache, and
most other httpds seem to support at least as much, if not more.
Signed-off-by: Marton Balint <cus@passwd.hu>
Fixes: signed integer overflow: -9223372036854775808 - 45000 cannot be represented in type 'long'
Fixes: ticket8187
Found-by: Suhwan
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: signed integer overflow: 30000299 * 256 cannot be represented in type 'int'
Fixes: ticket8184
Found-by: Suhwan
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: out of array access
Fixes: stack-buffer-overflow-READ-0x0831fff1
Found-by: GalyCannon <galycannon@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The TTA muxer writes a seektable in a dynamic buffer as it receives
packets and when writing the trailer, closes the dynamic buffer using
avio_close_dyn_buf(), writes the seektable and frees the buffer. But
the TTA muxer already has a deinit function which unconditionally
calls ffio_free_dyn_buf() on the dynamic buffer, so switching to
avio_get_dyn_buf() means that one can remove the code to free the
buffer; furthermore, it also might save an allocation if the seektable
is so small that it fits into the dynamic buffer's write buffer or if
adding the padding that avio_close_dyn_buf() adds would have necessitated
reallocating the underlying buffer.
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
ff_id3v2_free_extra_meta() takes a ID3V2ExtraMeta ** so that it can
already reset the pointer.
Reviewed-by: Jun Zhao <mypopy@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Sticking a full frame in the extradata works, as the code for writing
the avcC/hvcC extracts the relevant parameter set NAL units - provided
that they actually exist in the frame.
Some encoders don't provide split out extradata directly on init (or
at all). In particular, the MediaFoundation encoder wrapper doesn't
always (depending on the actual encoder device) - this is the case for
Qualcomm's HEVC encoder on SD835, and also for some QSV H264 encoders.
This only works for cases where the moov hasn't already been written
(e.g. when not writing fragmented mp4 with empty_moov, unless using
the delay_moov option).
Signed-off-by: Martin Storsjö <martin@martin.st>
2d8d554f15 added a new error condition
to mov_read_stsz() but forgot to free a temporary buffer when it
occurs.
Signed-off-by: Dale Curtis <dalecurtis@chromium.org>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
because it needs to be checked for success; it should not change the old
behaviour if it fails.
fix ticket: 8674
Signed-off-by: Steven Liu <liuqi05@kuaishou.com>
Each AttachedFile in Matroska can have a FileDescription element that
contains a human-friendly name for the attached file; yet this element
has been ignored up until now. This commit changes this and exports it
as a title tag instead (the Matroska muxer mapped the title tag to the
AttachedFile element since support for Attachments was added).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The Matroska muxer writes the Chapters early when chapters were already
available when writing the header; in this case any tags pertaining to
these chapters get written, too.
Yet if no chapters had been supplied before writing the header, Chapters
can also be written when writing the trailer if any are supplied. Tags
belonging to these chapters were up until now completely ignored.
This commit changes this: Writing the tags belonging to chapters has
been moved to mkv_write_chapters(). If mkv_write_tags() has not been
called yet (i.e. when chapters are written when writing the header),
the AVIOContext for writing the ordinary Tags element is used, but not
output, as this is left to mkv_write_tags() in order to only write one
Tags element. Yet if mkv_write_tags() has already been called,
mkv_write_chapters() will output a Tags element of its own which only
contains the tags for chapters.
When chapters are available initially, the corresponding tags will now
be the first tags in the Tags element; but the ordering of tags in Tags
is irrelevant anyway.
This commit also makes chapter_id_offset local to mkv_write_chapters()
as it is used only there and not reused at all.
Potentially writing a second Tags element means that the maximum number
of SeekHead entries had to be incremented. All the changes to FATE
result from the ensuing increase in the amount of space reserved for the
SeekHead (21 bytes more).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
This is needed so that it can access mkv_write_tag() and mkv_check_tag()
without using forward declarations (which are unnecessary here).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Up until now, the Matroska muxer writes only one Tags level 1 element
and therefore using a certain place to store the dynamic buffer used for
writing it was hardcoded; yet the Matroska specifications allow an
unlimited amount of Tags elements and we have reason to write a second
one: If chapters are provided after writing the header, they are written
when writing the trailer; yet the corresponding tags are ignored. This
can be fixed by writing them in a second Tags element.
Also use a MatroskaMuxContext * instead of an AVFormatContext * as
parameter in mkv_write_tag() and mkv_write_tag_targets() as that is all
these functions use.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Mostly reindentation after the last commit. Also remove a variable that
is always zero; move another variable to a more local scope and don't
assign a value to a local variable immediately before leaving the function.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Mainly reindentation plus some reordering in MatroskaMuxContext;
moreover, use the IS_SEEKABLE() macro throughout the code.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
EBML numbers are variable length numbers: Only seven bits of every byte
are available to encode the number, the other bits encode the length of
the number itself. So an eight byte EBML number can only encode numbers
in the range 0..(2^56 - 1). And when using EBML numbers to encode the
length of an EBML element, the EBML number corresponding to 2^56 - 1 is
actually reserved to mean that the length of the corresponding element
is unknown.
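As a hedged illustration of this size constraint (not FFmpeg's actual helper):

    #include <stdint.h>

    /* bytes needed to store "num" as an EBML number: n bytes carry 7*n
     * usable bits and the all-ones pattern is reserved for "unknown" */
    static int ebml_num_bytes(uint64_t num)
    {
        int bytes = 1;
        while (bytes < 8 && num + 1 >= (1ULL << (7 * bytes)))
            bytes++;
        return bytes;   /* with 8 bytes only values up to 2^56 - 2 fit */
    }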
And therefore put_ebml_length() asserted that the length it should
represent is < 2^56 - 1. Yet there was nothing that actually guaranteed
this to be true for the Segment (the main/root EBML element of a
Matroska file that encompasses nearly the whole file). This commit
changes this by checking in advance how big the length is and only
updating the number if it is representable at all; if not, the unknown
length element is not touched.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The Matroska muxer has a pair of functions designed to write master
elements whose exact length is not known in advance: start_ebml_master()
and end_ebml_master(). The first one of these would write the EBML ID of
the master element that is about to be started, reserve some bytes for
the length field and record the current position as well as how many
bytes were used for the length field. When writing the master's contents
is finished, end_ebml_master() gets the current position (at the end of
the master element), seeks to the length field using the recorded
position, writes the length field and seeks back to the end of the
master element so that one can continue writing other elements.
But if one wants to modify the content of the master element itself,
then the seek back is superfluous. This is the scenario that presents
itself when writing the trailer: One wants to update several elements
contained in the Segment master element (this is the main/root master
element of a Matroska file) that were already written when writing the
header. The current approach is to seek to the beginning of the file
to update the elements, then seek to the end, call end_ebml_master()
which immediately seeks to the beginning to write the length and seeks
back. The seek to the end (which has only been performed because
end_ebml_master() uses the initial position to determine the length
of the master element) and the seek back are of course superfluous.
This commit avoids these seeks by no longer using start/end_ebml_master()
to write the segment's length field. Instead, it is now written
manually. The new approach is: Seek to the beginning to write the length
field, then update the elements (in the order they appear in the file)
and seek back to the end.
This reduces the ordinary amount of seeks of the Matroska muxer to two
(ordinary excludes scenarios where one has big Chapters or Attachments
or where one writes the Cues at the front).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
If the AVIOContext for output was unseekable when writing the header,
no space for Cues would be reserved even if the reserve_index_space
option was used (because it is reasonable to expect that one can't seek
back to the beginning to write the Cues anyway). But if the AVIOContext
was seekable when writing the trailer, it was presumed that space for
the Cues had been reserved when the reserve_index_space option indicated
so even when it was not. As a result, the beginning of the file would be
overwritten.
This commit fixes this: If the reserve_index_space option had been used
and no space has been reserved in advance because of unseekability when
writing the header, then no attempt to write Cues will be performed
when writing the trailer; after all, writing them at the front is
impossible and writing them at the end is probably undesired.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
We won't be able to seek back to write the actual duration anyway.
FATE-tests using the md5pipe command had to be updated due to this change.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The Matroska muxer behaves differently in several ways when it thinks
that it is in unseekable/livestreaming mode: It does not add Cue entries
because they won't be written anyway for a livestream and it writes some
elements only preliminarily (with the intention to overwrite them with
an updated version at the end) when non-livestreaming etc.
There are two ways to set the Matroska muxer into livestreaming mode:
Setting an option or by providing an unseekable AVIOContext. Yet the
actual checks were not consistent:
If the AVIOContext was unseekable and no AAC extradata was available
when writing the header, writing the header failed; but if the AVIOContext
was seekable, it didn't, because the muxer expected to get the extradata
via packet side-data. Here the livestreaming option has not been checked,
although one can't use the updated extradata in case it is a livestream.
If the reserve_index_space option was used, space for writing Cues would
be reserved when writing the header unless the AVIOContext was
unseekable. Yet Cues were only written if the livestreaming option was
not set and the AVIOContext was seekable (when writing the trailer), so
if the AVIOContext was seekable and the livestreaming option set, the
reserved space would never be used at all.
If the AVIOContext was unseekable and the livestreaming option was not
set, it would be attempted to update the main length field at the end.
After all, it might be possible that the file is so short that it fits
into the AVIOContext's buffer in which case the seek back would work.
Yet this is dangerous: It might be that we are not dealing with a
simple output file, but that our output gets split into chunks and that
each of these chunks is actually seekable. In this case some part of the
last chunk (namely the eight bytes that have the same offset as the
length field had in the header) will be overwritten with what the muxer
wrongly believes to be the filesize.
(The livestreaming option has been added to deal with this scenario,
yet its documentation ("Write files assuming it is a live stream.")
doesn't make this clear at all. At least the segment muxer does not
set the option for live and given that the chances of successfully
seeking when the output is actually unseekable are slim, it is best to
not attempt to update the length field in the unseekable case at all.)
All these inconsistencies were fixed by treating the output as seekable
if the livestreaming option is not set and if the AVIOContext is
seekable. A macro has been used to enforce consistency and improve code
readability.
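A hedged sketch of such a macro (the field names follow the description above
and may differ from the actual code):

    /* treat the output as seekable only if the livestreaming option is not
     * set and the AVIOContext itself reports seekability */
    #define IS_SEEKABLE(pb, mkv) \
        (((pb)->seekable & AVIO_SEEKABLE_NORMAL) && !(mkv)->is_live)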
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
If the Matroska muxer's AVIOContext was unseekable when writing the
header, but is seekable when writing the trailer, the code for writing
the trailer presumes that a dynamic buffer exists and tries to update
its content in order to overwrite data that has already been
preliminarily written when writing the header, yet said buffer doesn't
exist as it has been written finally and not preliminarily when writing
the header (because of the unseekability it was presumed that one won't
be able to update the data anyway).
This commit adds a check for this and also for a similar situation
involving updating extradata with new data from packet side-data.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The parsing process of the AVOpt-enabled string controlling the mapping
of input streams to variant streams is roughly as follows: Space and tab
separate variant stream group maps while the entries in each variant
stream group map are separated by ','.
The parsing process of each variant stream group proceeded as follows:
At first the number of occurrences of "a:", "v:" and "s:" in each variant
stream group is calculated so that one can allocate an array of
streams with this number of entries. Then the string is split along ','
and each substring is parsed. If such a substring starts with "a:", "s:"
or "v:" it is treated as stream specifier and (if there is a correct
number after ':') a stream of the variant stream is mapped to one of the
actual input streams.
Nothing actually guarantees that the number of streams allocated initially
equals the number of streams that are mapped to an actual input stream.
These numbers can differ if e.g. the name, the sgroup, agroup or ccgroup
of the variant stream contain "a:", "s:" or "v:".
The problem hereby is that the rest of the code presumes these numbers
to be equal and segfaults if it isn't (because the corresponding input
stream is NULL).
This commit fixes this by modifying the initial counting process to only
count occurrences of "a:", "s:" or "v:" that are at the beginning or that
immediately follow a ','.
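A hedged sketch of the adjusted counting rule (not the actual hlsenc code):

    /* count "a:", "v:" and "s:" only at the start of the group map or
     * directly after a ',', so occurrences inside names are ignored */
    static int count_stream_specifiers(const char *map)
    {
        int nb = 0;
        const char *p;
        for (p = map; *p; p++)
            if ((p == map || p[-1] == ',') &&
                (*p == 'a' || *p == 'v' || *p == 's') && p[1] == ':')
                nb++;
        return nb;
    }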
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
This avoids accessing an old, no longer valid buffer.
Fixes: out of array access
Fixes: crash_audio-2020
Found-by: le wu <shoulewoba@gmail.com>
Reviewed-by: Marton Balint <cus@passwd.hu>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Call it directly from write_packets_common() instead of indirectly
through prepare_input_packet().
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
This commit stops using pkt->stream_index as index in an AVFormatContext's
streams array before actually comparing the value with the count of
streams in said array. 96e5e6abb9 used
pkt->stream_index in prepare_input_packet() before checking and
6406351222 did likewise in
write_packets_common().
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
If you have a file with multiple Metadata Keys, the second time you parse
the keys, you will re-alloc c->meta_keys without freeing the old one.
This change avoids parsing any subsequent Metadata Keys atoms.
Reviewed-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
A temporary heap array currently stores pids from all streams. It is
used to make sure there are no duplicated pids. However, this array is
not needed because the pids from past streams are stored in the
MpegTSWriteStream structs.
Reviewed-by: Marton Balint <cus@passwd.hu>
Signed-off-by: Andriy Gelman <andriy.gelman@gmail.com>
Until now, we would have only attempted to utilize already decrypted
data if it was enough to fill the requested buffer size, which could
very well be up to 32 kilobytes.
With keep-alive connections this would just lead to recv blocking
until rw_timeout had been reached, as the connection would not be
officially closed after each transfer. This would also lead to a
loop, as such a timed-out I/O request would just be attempted again.
By just returning the available decrypted data, keep-alive based
connectivity such as HLS playback is fixed with schannel.
The dec_buf seems to be properly managed between read calls,
and we have no logic to decrypt before attempting socket I/O.
Thus - until now - such data would not be decrypted in case of
connections such as HTTP keep-alive, as the recv call would
always get executed first, block until rw_timeout, and then get
retried by retry_transfer_wrapper.
Thus - if data is received - decrypt all of it right away. This way
it is available for the following requests in case they can be
satisfied with it.
For every variantstream vs, vs->packets_written is set to one, only to be
set to zero a few lines below. Given that the relevant structure has
been zeroed during the allocation, this commit removes both assignments.
A redundant initialization for vs->init_range_length has been removed as
well.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
By the av_strtok() description:
* On the first call to av_strtok(), s should point to the string to
* parse, and the value of saveptr is ignored. In subsequent calls, s
* should be NULL, and saveptr should be unchanged since the previous
* call.
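For illustration, a hedged sketch of that calling pattern (the surrounding
function is made up for the example):

    #include "libavutil/avstring.h"

    static void parse_list(char *buf)
    {
        char *saveptr = NULL;
        char *token   = av_strtok(buf, ",", &saveptr); /* first call: the string */
        while (token) {
            /* ... use token ... */
            token = av_strtok(NULL, ",", &saveptr);    /* later calls: NULL */
        }
    }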
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
The demuxer code assumes the existence of a video stream.
Fixes: assertion failure
Fixes: 21512/clusterfuzz-testcase-minimized-ffmpeg_DEMUXER_fuzzer-5699660783288320
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
When the Ogg muxer writes a page, it has to do three things: It needs to
write a page header, then it has to actually copy the page data and then
it has to calculate and write a CRC checksum of both header as well as
data at a certain position in the page header.
To do this, the muxer used a dynamic buffer for both writing as well as
calculating the checksum via an AVIOContext's feature to automatically
calculate checksums on the data it writes. This entails an allocation of
an AVIOContext, of the opaque specific to dynamic buffers and of the
buffer itself (which may be reallocated multiple times) as well as
memcopying the data (first into the AVIOContext's small write buffer,
then into the dynamic buffer's big buffer).
This commit changes this: The page header is no longer written into a
dynamic buffer any more; instead the (small) page header is written into
a small buffer on the stack. The CRC is then calculated directly via
av_crc() on both the page header as well as the page data. Then both the
page header and the page data are written.
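A hedged sketch of that checksum path (not the verbatim muxer code; the table
choice and the little-endian store at offset 22 follow the Ogg spec):

    #include "libavutil/crc.h"
    #include "libavutil/intreadwrite.h"

    /* CRC over header + payload in one pass, no dynamic buffer needed;
     * the CRC field inside the header must be zero while computing it */
    static void ogg_page_checksum(uint8_t *header, int header_size,
                                  const uint8_t *data, int data_size)
    {
        const AVCRC *table = av_crc_get_table(AV_CRC_32_IEEE);
        uint32_t crc = 0;
        crc = av_crc(table, crc, header, header_size);
        crc = av_crc(table, crc, data, data_size);
        AV_WL32(header + 22, crc);
    }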
Finally, ogg_write_page() can now no longer fail, so it has been
modified to return nothing; this also fixed a bug in the only caller of
this function: It didn't check the return value.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Mainly includes reindentation and returning directly (i.e. without
a goto fail when possible).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
hls_init() would at first allocate the vtt_basename string, then
allocate the vtt_m3u8_name string followed by several operations that
may fail and then open the subtitles' output context. Yet upon freeing,
these strings were only freed when the subtitles' output context
existed, so they would leak if something went wrong between their
allocation and the opening of the subtitles' output context. So drop the
check for whether this output context exists.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
This fixes memleaks in instances such as:
a) When an allocation fails at one of the two places in hls_init() where
the error is returned immediately without goto fail first.
b) When an error happens when writing the header.
c) When an allocation fails at one of the three places in
hls_write_trailer() where the error is returned immediately without goto
fail first.
d) When one decides not to write the trailer at all (e.g. because of
errors when writing packets).
Furthermore, it removes code duplication and makes it possible to return
immediately, without goto fail first.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Several variables which are only used when the HLS_SINGLE_FILE flag is
unset have been set even when this flag is set. This has been changed.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The Matroska specification allows multiple (level 1) Tags elements per
file, yet our demuxer didn't: While it parsed any amount of Tags
elements it found in front of the Clusters (albeit with warnings because
of duplicate elements), it would treat any Tags element only referenced
via a SeekHead entry as already parsed if any Tags element has already
been parsed; therefore this Tags element would not be parsed at all.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
There can be more than one SeekHead in a Matroska file, but most of the
other level 1 elements can only occur once.* Therefore the Matroska
demuxer only allows one entry per ID in its internal list of level 1
elements known to it. The only exception to this are SeekHeads: When one
is encountered
(either directly or in the list of entries read from SeekHeads),
a new entry in the list of known level-1 elements is always added,
even when this entry is actually already known.
This leads to lots of seeks in case of circular SeekHeads: Each time a
SeekHead is parsed, a new entry for a SeekHead will be added to the list
of entries read from SeekHeads. The exception for SeekHeads mentioned
above now implies that this SeekHead will always appear new and unparsed
and parsing will be attempted. This continues until the list of known
level-1 elements is full.
Fixing this is pretty simple: Don't add a new entry for a SeekHead if
its position matches the position of an already known SeekHead.
*: Actually, there can be multiple Tags and several other level 1
elements are "identically recurring" which means they may be resent
multiple times, but each instance must be absolutely identical to the
previous.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
A Seek element in a Matroska SeekHead should contain a SeekID and a
SeekPosition element and upon reading, they should be sanitized:
Given that IDs are restricted to 32 bit, longer SeekIDs should be treated
as invalid. Instead, currently the lower 32 bits have been used.
For SeekPosition, no checks were performed for the element to be
present and if present, whether it was excessively large (i.e. the
absolute file position described by it exceeding INT64_MAX). The
SeekPosition element had a default value of -1 which means that a check
seems to have been intended; but it was not implemented. This commit adds
a check for overflow to the calculation of the absolute file position of
the referenced level 1 elements.
Using -1 (i.e. UINT64_MAX) as default value for SeekPosition implies that
a Seek element without SeekPosition will run afoul of this check.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Generic retime functionality is replaced by a few lines of code directly in the
muxers which used it, which seems a lot easier to understand and this way the
retiming is not dependent on the input durations.
Also remove retimeinterleave, since it is not used by anything anymore.
Signed-off-by: Marton Balint <cus@passwd.hu>
And rename it to retimeinterleave, use the pcm_rechunk bitstream filter for
rechunking.
By separating the two functions we hopefully get cleaner code.
Signed-off-by: Marton Balint <cus@passwd.hu>
Previously only 1:1 bitstream filters were supported, the end of the stream was
not signalled to the bitstream filters and time base changes were ignored.
This change also allows muxers to set up bitstream filters regardless of the
autobsf flag during write_header instead of during check_bitstream and those
bitstream filters will always be executed.
Signed-off-by: Marton Balint <cus@passwd.hu>
avformat_alloc_output_context2() already sets the oformat member, so
that there is no reason to overwrite it again with the value it already
has.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
because the offset should use one byte
Reviewed-by: Zhao Jun <barryjzhao@tencent.com>
Reported-by: Zhao Jun <barryjzhao@tencent.com>
Signed-off-by: Steven Liu <liuqi05@kuaishou.com>
Failures of the allocations that happen under the hood when using dynamic
buffers are usually completely unchecked and the Matroska muxer is no
exception to this.
The API has its part in this, because there is no documented way to
actually check for errors: The return value of both avio_get_dyn_buf()
as well as avio_close_dyn_buf() is only documented as "the length of
the byte buffer", so that using this to return errors would be an API
break.
Therefore this commit uses the only reliable way to check for errors
with avio_get_dyn_buf(): The AVIOContext's error flag. (This is one of
the advantages of avio_get_dyn_buf(): By not destroying the AVIOContext
it is possible to inspect this value.) Checking whether the size or the
pointer vanishes is not enough as it does not check for truncated output
(the dynamic buffer API is int based and so has to truncate the buffer
even when enough memory would be available; its current actual limit is
even way below INT_MAX).
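A hedged sketch of the check described above (variable names are illustrative):

    #include "libavformat/avio.h"

    /* after filling a dynamic buffer, inspect the context's error flag:
     * the returned size alone cannot distinguish truncation from success */
    static int get_checked_dyn_buf(AVIOContext *dyn_bc, uint8_t **buf, int *size)
    {
        *size = avio_get_dyn_buf(dyn_bc, buf);
        if (dyn_bc->error < 0)
            return dyn_bc->error;   /* allocation failure or truncation */
        return 0;
    }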
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
If one already has the contents of a master element in a buffer of
known size, then writing an EBML master element is no different from
writing an EBML binary element. It is overly complicated to use
start/end_ebml_master() as these functions first write an unknown-length
size field of the appropriate length, then write the buffer's contents,
followed by a seek to the length field to overwrite it with the real
size (obtained via avio_tell() although it was already known in
advance), followed by another seek to the previous position. Just use
put_ebml_binary() instead.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
There is a good upper bound for the maximum length of the Colour master
element; it is therefore unnecessary to use a dynamic buffer for it.
A simple buffer on the stack is enough. This commit implements this.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The Matroska muxer updates several header elements when the output is
seekable; if unseekable, the buffer containing the contents of the element
is immediately freed after writing. Before this commit, there were three
places doing exactly the same: Checking whether the output is seekable
and calling the function that writes and frees or the function that
just writes the EBML master. This has been unified; adding SeekHead
entries for these elements has been unified, too.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Up until now, SeekEntries were already added before
start_ebml_master_crc32() was even called and before we were actually
sure that we really write the element the SeekHead references: After
all, we might also error out later; and given that the allocations
implicit in dynamic buffers should be checked, end_ebml_master_crc32()
will eventually have to return errors itself, so that it is the right
place to add SeekHead entries.
The earlier behaviour is of course a remnant of the time in which
start_ebml_master_crc32() really did output something, so that the
position before start_ebml_master_crc32() needed to be recorded.
Erroring out later is also not as dangerous as it seems because in
this case no SeekHead will be written (if it happened when writing
the header, the whole muxing process would abort; if it happened
when writing the trailer (when writing chapters not available initially),
writing the trailer would be aborted and no SeekHead containing the
bogus chapter entry would be written).
This commit does not change the way the SeekEntries are added for those
elements that are output preliminarily; this is so because the SeekHead
is written before those elements are finally output and doing it
otherwise would increase the amount of seeks.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The mapping of streams to the various variant streams to be created by
the HLS muxer is roughly as follows: Space and tab separate variant
stream group maps while the entries in each variant stream group map are
separated by ','.
The parsing process of each variant stream group proceeded as follows:
At first the number of occurrences of "a:", "v:" and "s:" in each variant
stream group is calculated so that one can allocate an array of
streams with this number of entries. Then each entry is checked and the
check for stream numbers was deficient: It did check that there is a
number beginning after the ":", but it did not check that the number
extends until the next "," (or until the end).
This means that an invalid variant stream group like v:0_v:1 will not be
rejected; the problem is that the variant stream in this example is
supposed to have two streams associated with it (because it contains two
"v:"), yet only one stream is actually associated with it (because there
is no ',' to start a second stream specifier). This discrepancy led to
segfaults (null pointer dereferencing) in the rest of the code (when the
nonexistent second stream associated to the variant stream was inspected).
Furthermore, this commit also removes an instance of using atoi() whose
behaviour on a range error is undefined.
Fixes ticket #8652.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
fix ticket: 8651
because the init fragment mp4 file name is missing the base URL, so
modify it to use the full URL, which is spliced together after the init function.
Tested-by: matclayton
Signed-off-by: Steven Liu <liuqi05@kuaishou.com>
This patch adds the possibility to use the 'periodic-rekey' option with
multi-variant streams in the hlsenc muxer. All stream variants
use parameters from the same key_info_file.
There are 2 sets of encryption options that kind of overlap and add
complexity, so I tried to implement this without changing too much code.
There is a little duplication of the key_file, key_uri, iv_string, etc.
in the VariantStream since we copy it from hls to each variant stream,
but generally all the code remains the same to minimise the appearance
of unexpected bugs. Refactoring could then be done as a separate patch as needed.
Signed-off-by: Yaroslav Pogrebnyak <yyyaroslav@gmail.com>
The segment duration was using the vs duration, which is computed from the
frame rate; that cannot handle VFR video streams. So compute the duration
when splitting the segment, and set the segment target duration to the
current packet's pts minus the previous segment's end pts.
Reported-by: Zhao Jun <barryjzhao@tencent.com>
Reviewed-by: Zhao Jun <barryjzhao@tencent.com>
Signed-off-by: Steven Liu <liuqi05@kuaishou.com>
and make it static again.
These functions have been moved from nutenc to aviobuf and internal.h
in f8280ff4c0 in order to use them in a
forthcoming patch in utils.c. Said patch never happened, so this commit
moves them back and makes them static, effectively reverting said
commit as well as f8280ff4c0 (which added
the ff-prefix to these functions).
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
It allows combining several ffio_free_dyn_buf() calls.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
NUT uses variable-length integers for its length fields.
Therefore the NUT muxer often writes data into a dynamic buffer in order
to get its length, then writes the length field using the smallest
number of bytes needed. To do this, a new dynamic buffer was opened,
used and freed for each element, which involves lots of allocations. This
commit changes this: The dynamic buffers are now reset and reused.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
calculate_checksum in put_packet() is always 1.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Since commit 979b5b8959, reverting the
Matroska ContentCompression is no longer done inside
matroska_parse_frame() (the function that creates AVPackets out of the
parsed data (unless we are dealing with certain codecs that need special
handling)), but instead in matroska_parse_block(). As a consequence,
the data that matroska_parse_frame() receives is no longer always owned
by an AVBuffer; it is owned by an AVBuffer iff no ContentCompression needed
to be reversed; otherwise the data is independently allocated and needs
to be freed on error.
Whether the data is owned by an AVBuffer or not is indicated by a variable
buf of type AVBufferRef *: If it is NULL, the data is independently
allocated, if not it is owned by the underlying AVBuffer (and is used to
avoid copying the data when creating the AVPackets).
Because the allocation of the buffer holding the uncompressed data happens
outside of matroska_parse_frame() (if a ContentCompression needs to be
reversed), the data is passed as uint8_t ** in order to not leave any
dangling pointers behind in matroska_parse_block() should the data need to
be freed: In case of errors, said uint8_t ** would be av_freep()'ed in
case buf indicated the data to be independently allocated.
Yet there is a problem with this: Some codecs (namely WavPack and
ProRes) need special handling: Their packets are only stored in
Matroska in a stripped form to save space and the demuxer reconstructs
full packets. This involved allocating a new, enlarged buffer. And if
an error happens when trying to wrap this new buffer into an AVBuffer,
this buffer needs to be freed; yet instead the given uint8_t ** (holding
the uncompressed, yet still stripped form of the data) would be freed
(av_freep()'ed) which certainly leads to a memleak of the new buffer;
even worse, in case the track does not use ContentCompression the given
uint8_t ** must not be freed as the actual data is owned by an AVBuffer
and the data given to matroska_parse_frame() is not the start of the
actual allocated buffer at all.
Both of these issues are fixed by always freeing the current data in
case it is independently allocated. Furthermore, while it would be
possible to track whether the pointer from matroska_parse_block() needs
to be reset or not, there is no gain in doing so, as the pointer is not
used at all afterwards and the semantics are clear: If the data passed
to matroska_parse_frame() is independently allocated, then ownership
of the data passes to matroska_parse_frame(). So don't pass the data
via uint8_t **.
Fixes Coverity ID 1462661 (the issue as described by Coverity is btw
a false positive: It thinks that this error can be triggered by ProRes
with a size of zero after reconstructing the original packets, but the
reconstructed packets can't have a size of zero).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
This has previously only been checked if the chapters were initially
available, but not if they were only written in the trailer.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Up until now ff_vorbiscomment_write() used the bytestream API to write
VorbisComments. Therefore the caller had to provide a sufficiently large
buffer to write the output.
Yet two of the three callers (namely the FLAC and the Matroska muxer)
actually want the output to be written via an AVIOContext; therefore
they allocated buffers of the right size just for this purpose (i.e.
they get freed immediately afterwards). Only the Ogg muxer actually
wants a buffer. But given that it is easy to wrap a buffer into an
AVIOContext this commit changes ff_vorbiscomment_write() to use an
AVIOContext for its output.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
ff_vorbiscomment_write() used an AVDictionary ** parameter for a
dictionary whose contents ought to be written; yet this can be replaced
by AVDictionary * since commit 042ca05f0fdc5f4d56a3e9b94bc9cd67bca9a4bc;
and this in turn can be replaced by const AVDictionary * to indicate
that the dictionary isn't modified; the latter also applies to
ff_vorbiscomment_length().
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
If a FLAC track uses an unconventional channel layout, the Matroska
muxer adds a WAVEFORMATEXTENSIBLE_CHANNEL_MASK VorbisComment to the
CodecPrivate to preserve this information. And given that FLAC uses
24bit length fields, the muxer checks if the length is more than this
and errors out if it is.
Yet this can never happen, because we create the AVDictionary that is
the source for the VorbisComment. It only contains exactly one entry
that can't grow infinitely large (in fact, the length of the
VorbisComment is <= 4 + 33 + 1 + 18 + strlen(LIBAVFORMAT_IDENT)).
So we can simply assert the size to be < (1 << 24) - 4.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Commit 6fd300ac6c added support for WebM
Chunk livestreaming; in this case, both the header as well as each
Cluster is written to a file of its own, so that even if the AVIOContext
seems seekable, the muxer has to behave as if it were not. Yet one of
the added checks makes no sense: It ensures that no SeekHead is written
preliminarily (and hence no SeekHead is written at all) if the option
for livestreaming is set, although one should write the SeekHead in this
case when writing the Header. E.g. the WebM-DASH specification [1]
never forbids writing a SeekHead and in some instances (that don't apply
here) even requires it (if Cues are written after the Clusters).
[1]: https://sites.google.com/a/webmproject.org/wiki/adaptive-streaming/webm-dash-specification
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Since commit 4aa0665f39, the dynamic
buffer destined for the contents of the current Cluster is no longer
constantly allocated, reallocated and then freed after writing the
content; instead it is reset and reused when closing a Cluster.
Yet the code in mkv_write_trailer() still checked for whether a Cluster
is open by checking whether the pointer to the dynamic buffer is NULL or
not (instead of checking whether the position of the current Cluster is
-1 or not). If a Cluster was not open, an empty Cluster would be output.
One usually does not run into this issue, because unless there are
errors, there are only three possibilities to not have an opened Cluster
at the end of writing a packet:
The first is if one sent an audio packet to the muxer. It might trigger
closing and outputting the old Cluster, but because the muxer caches
audio packets internally, it would not be output immediately and
therefore no new Cluster would be opened.
The second is an audio packet that does not contain data (such packets
are sometimes sent for side-data only, e.g. by the FLAC encoder). The
only difference to the first scenario is that such packets are not
cached.
The third is if one explicitly flushes the muxer by sending a NULL
packet via av_write_frame().
If one also allows for errors, then there is also another possibility:
Caching the audio packet may fail in the first scenario.
If one calls av_write_trailer() after the first scenario, the cached
audio packet will be output when writing the trailer, for which
a Cluster is opened and everything is fine; because flushing the muxer
does not currently output the cached audio packet (if one is cached),
the issue also does not exist if an audio packet has been cached before
flushing. The issue only exists in one of the other scenarios.
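A self-contained sketch of the corrected trailer logic (field and
function names are made up for illustration): the reusable dynamic
buffer is always non-NULL, so the Cluster position sentinel has to be
tested instead.

    #include <stdint.h>
    #include <stdio.h>

    typedef struct MuxState {
        uint8_t *cluster_buf;   /* reused between Clusters, never NULL after init */
        int64_t  cluster_pos;   /* -1 while no Cluster is open */
    } MuxState;

    static void write_trailer_sketch(MuxState *mkv)
    {
        if (mkv->cluster_pos != -1) {        /* a Cluster really is open */
            printf("close and write the open Cluster\n");
            mkv->cluster_pos = -1;
        }
        /* testing mkv->cluster_buf here would wrongly emit an empty Cluster */
    }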
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Fixes: out of array write
Fixes: Regression since f619e1ec66
Reviewed-by: Lynne <dev@lynne.ee>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Sequence numbers of segments should be unique. If an encoder is using
segments shorter than one second and it is restarted, then future segments
will reuse already used sequence numbers if the initial sequence number is
based on the number of seconds since the epoch rather than microseconds.
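A hedged sketch of the idea (hlsenc's actual field names are omitted);
av_gettime() from libavutil/time.h returns the wall-clock time in
microseconds:

    #include "libavutil/time.h"

    /* base the initial sequence number on microseconds so that two encoder
     * runs started within the same second do not reuse sequence numbers */
    int64_t start_sequence = av_gettime();      /* microseconds since the epoch */
    /* previously the effect was that of av_gettime() / 1000000 (seconds) */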
Signed-off-by: Marton Balint <cus@passwd.hu>
Reindentation as well as marking several variables used for demuxing
RealAudio as const to clearly see that they don't change during
demuxing.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The Matroska demuxer has three functions for creating packets out of
the data read: One for certain RealAudio codecs (ATRAC3, cook, sipr,
RealAudio 28.8), one for WebVTT (actually, the WebM flavour of it) and
one for all the others. Only the last function supported Matroska's
ContentCompression (e.g. it reversed zlib compression or added the
removed headers to the packets). But in Matroska, all tracks are allowed
to be compressed. This commit adds support for this.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Matroska is built around the principle that a reader does not need to
understand everything in a file in order to be able to make use of it;
it just needs to ignore the data it doesn't know about.
Our demuxer typically follows this principle, but there is one important
instance where it does not: A Block belonging to a TrackEntry with no
associated stream is treated as invalid data (i.e. the demuxer will try
to resync to the next level 1 element because it takes this as a sign
that it has lost sync). Given that we do not create streams if we don't
know or don't support the type of the TrackEntry, this impairs this
demuxer's forward compatibility.
Furthermore, ignoring Blocks belonging to a TrackEntry without
corresponding stream can (in future commits) also be used to ignore
TrackEntries with obviously bogus entries without affecting the other
TrackEntries (by not creating a stream for said TrackEntry).
Finally, given that matroska_find_track_by_num() already emits its own
error message in case there is no TrackEntry with a given TrackNumber,
the error message (with level AV_LOG_INFO) for this can be removed.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
A Block (meaning both a Block in a BlockGroup as well as a SimpleBlock)
must have at least three bytes after the field containing the encoded
TrackNumber. So if there are <= 3 bytes, the Matroska demuxer would
skip this block, believing it to be an empty, but valid Block.
This might discard valid nonempty Blocks, namely if the track uses header
stripping. And certain clearly spec-incompliant Blocks didn't raise
errors: those with two or fewer bytes left after the encoded TrackNumber,
and those with three bytes left but with flags indicating that the Block
uses lacing (as then there has to be further data describing the lacing).
Furthermore, zero-sized packets were still possible because only the
size of the last entry of a lace was checked.
This commit fixes this. All spec-compliant Blocks that contain data
(even if side data only) are now returned to the caller; spec-compliant
Blocks that don't contain anything are not returned.
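A hedged sketch of the tightened validation (the helper and variable
names are hypothetical; size is the number of bytes left after the
encoded TrackNumber, assuming the usual Block layout of a 2-byte relative
timestamp followed by a flags byte whose bits 0x06 select the lacing mode):

    #include <stdint.h>
    #include "libavutil/error.h"

    static int validate_block_sketch(const uint8_t *data, int size)
    {
        if (size < 3)
            return AVERROR_INVALIDDATA;  /* 2 timestamp bytes + 1 flags byte required */
        if (size == 3 && (data[2] & 0x06))
            return AVERROR_INVALIDDATA;  /* lacing signalled, but no lace data follows */
        return 0;   /* possibly an empty (e.g. side-data only) but valid Block */
    }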
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Some conditions which don't change and which can therefore be checked
in read_header() were instead rechecked upon parsing each block. This
has been changed.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The Matroska demuxer splits every sequence of h Matroska Blocks into
h * w / cfs packets of size cfs; here h (sub_packet_h), w (frame_size)
and cfs (coded_framesize) are parameters from the track's CodecPrivate.
It does this by splitting the Block's data into h/2 pieces of size cfs each
and putting them into a buffer at offset m * 2 * w + n * cfs where
m (range 0..(h/2 - 1)) indicates the index of the current piece in the
current Block and n (range 0..(h - 1)) is the index of the current Block
in the current sequence of Blocks. The data in this buffer is then used
for the output packets.
The problem is that there is currently no check to actually guarantee
that no uninitialized data will be output. One instance where this is
trivially so is if h == 1; another is if cfs * h is so small that the
input pieces do not cover everything that is output. In order to
preclude this, rmdec.c checks for h * cfs == 2 * w and h >= 2. The
former requirement certainly makes much sense, as it means that for
every given m the input pieces (corresponding to the h different values
of n) form a nonoverlapping partition of the two adjacent frames of size w
corresponding to m. But precluding h == 1 is not enough, other odd
values can cause problems, too. That is because the assumption behind
the code is that h frames of size w contain data to be output, although
the real number is h/2 * 2. E.g. for h = 3, cfs = 2 and w = 3 the
current code would output four (== h * w / cfs) packets, although only
data for three (== h/2 * h) packets has been read.
(Notice that if h * cfs == 2 * w, h being even is equivalent to
cfs dividing w; the latter condition also seems very reasonable:
It means that the subframes are a partition of the frames.)
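A small worked version of the example above (values taken from the text;
the function is purely illustrative):

    static void worked_example(void)
    {
        int h = 3, cfs = 2, w = 3;               /* passes h * cfs == 2 * w, h >= 2 */
        int bytes_read   = h * (h / 2) * cfs;    /* 3 * 1 * 2 = 6 */
        int bytes_output = (h * w / cfs) * cfs;  /* 4 * 2     = 8 */
        /* 2 bytes of uninitialized data would leak; requiring h to be even
         * (equivalently, given h * cfs == 2 * w: cfs divides w) closes the gap */
        (void)bytes_read; (void)bytes_output;
    }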
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
RealAudio 28.8 (like other RealAudio codecs) uses a special demuxing
mode in which the data of the existing Matroska Blocks is not simply
forwarded as-is. Instead data from several Blocks is recombined
together to output several packets. The parameters governing this
process are parsed from the CodecPrivate: Coded framesize (cfs), frame
size (w) and sub_packet_h (h).
During demuxing, h/2 pieces of data of size cfs each are read from every
Matroska (Simple)Block and put at offset m * 2 * w + n * cfs of a buffer
of size h * w, where m ranges from 0 to h/2 - 1 for each Block while n
is initially zero and incremented after a Block has been parsed until it
is h, at which point the assembled packets are output and n reset.
The highest offset is given by (h/2 - 1) * 2 * w + (h - 1) * cfs + cfs
while the destination buffer's size is given by h * w. For even h, this
leads to a buffer overflow (and potential segfault) if h * cfs > 2 * w;
for odd h, the condition is h * cfs > 3 * w.
This commit adds a check to rule this out.
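A hedged sketch of the added bound (names as in the description above,
the surrounding demuxer code omitted): the last write ends at offset
(h/2 - 1) * 2 * w + h * cfs, which must not exceed the h * w buffer;
rearranging per parity of h gives:

    #include "libavutil/error.h"

    static int check_ra288_layout(int h, int w, int cfs)
    {
        /* even h: overflow iff h * cfs > 2 * w; odd h: iff h * cfs > 3 * w */
        if (h * cfs > ((h & 1) ? 3 : 2) * w)
            return AVERROR_INVALIDDATA;
        return 0;
    }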
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
RealAudio 28.8 does not need or use sub_packet_size for its demuxing
and this field is therefore commonly set to zero. But since 18ca491b
the RealAudio-specific demuxing is no longer applied if sub_packet_size
is zero, because the codepath for cook and ATRAC3 divides by it; this
made these files undecodable.
Furthermore, since 569d18aa (merged in 2c8d876d) sub_packet_size being
zero is used as an indicator for invalid data, so that a file containing
such a track was completely skipped.
This commit fixes this by not checking sub_packet_size for RealAudio
28.8 at all.
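A hedged sketch of the narrowed check (the codec IDs are real, the
surrounding logic is simplified): only the codecs whose demuxing path
divides by sub_packet_size require it to be non-zero.

    #include "libavcodec/avcodec.h"
    #include "libavutil/error.h"

    static int check_sub_packet_size(enum AVCodecID id, int sub_packet_size)
    {
        if ((id == AV_CODEC_ID_COOK || id == AV_CODEC_ID_ATRAC3) && !sub_packet_size)
            return AVERROR_INVALIDDATA;
        return 0;   /* RealAudio 28.8 (AV_CODEC_ID_RA_288) is not checked at all */
    }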
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
They need a special parsing mode and in order to find out whether this
mode is in use, several checks have to be performed. They can all be
combined into one: If the buffer that is only used to assemble their
packets has been allocated, use the RealAudio parsing mode.
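A hedged sketch of the simplification (the struct is a stand-in for the
track's real audio state, not the demuxer's actual type):

    #include <stddef.h>

    struct AudioState { unsigned char *buf; };   /* reassembly buffer */

    static int needs_ra_parsing(const struct AudioState *audio)
    {
        /* the buffer is only ever allocated for the RealAudio codecs that
         * need the special mode, so its presence is the single condition */
        return audio->buf != NULL;
    }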
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Only flavors 0..3 seem to exist. E.g. rmdec.c treats any flavor > 3
as invalid data. Furthermore, we do not know how big the packets to
create ought to be given that for sipr these values are not read from
the bitstream, but from a table.
Furthermore, flavor is only used for sipr, so only check it for sipr;
rmdec.c does the same. (The old check for flavor being < 0 was
always wrong given that flavor is an int that is read via avio_rb16(),
so it has been removed completely.)
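A hedged sketch of the resulting check (AV_CODEC_ID_SIPR is the real
codec ID; the rest is simplified):

    #include "libavcodec/avcodec.h"
    #include "libavutil/error.h"

    static int check_flavor(enum AVCodecID id, int flavor)
    {
        /* flavor indexes a 4-entry sipr table, so it only matters for sipr */
        if (id == AV_CODEC_ID_SIPR && flavor > 3)
            return AVERROR_INVALIDDATA;
        return 0;
    }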
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
This makes decoding far more robust, since the Ogg magic "OggS" is
commonly found at random positions within streams, which previously made
the demuxer think a new stream had started or an existing one had changed.
hdsenc already had an explicit function to free all allocations in case
of an error, but it was not marked as deinit function, so that it was
not called automatically when the AVFormatContext for muxing gets freed.
Using an explicit deinit function also makes the code cleaner by
allowing the code to return immediately without "goto fail".
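A heavily abridged sketch of the pattern (the muxer definition and the
cleanup helper's name are illustrative only):

    #include "libavformat/avformat.h"

    static void hds_free(AVFormatContext *s)
    {
        (void)s;   /* free all allocations held by the muxer context */
    }

    AVOutputFormat ff_hds_muxer_sketch = {
        .name   = "hds",
        .deinit = hds_free,   /* now invoked automatically when the context is freed */
    };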
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Current muxers only use a single bitstream filter, so there is no need to
maintain code which operates on a list of bitstream filters. When multiple
bitstream filters are needed muxers can simply use a list bitstream filter.
If there is a use case in the future when different bitstream filters should be
added at subsequent packets then a new API possibly involving reconfiguring the
list bitstream filter can be added knowing the exact requirements.
Signed-off-by: Marton Balint <cus@passwd.hu>
mux.c was split from utils.c in 55f9037f38
and during this split all headers were simply copied without checking if
they were only needed in the part that stayed in utils.c (or whether
these headers were needed at all). As a result quite a lot of headers
in mux.c are unnecessary. This commit removes them.
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
stdarg.h has been included in 780d7897a9
for ff_url_join(). This header became unnecessary when this function was
moved into a separate file in df9f22d42b.
libavutil/pixdesc.h has been included for av_get_pix_fmt_name() in
603b8bc2a1 and is unused since commit
2fb7501938 that removed the stuff belonging
to FF_API_FORMAT_PARAMETERS. Notice that this file still uses
AV_PIX_FMT_NONE and that therefore the header libavutil/pixfmt.h has
been included (this header is included by pixdesc.h as well as by
libavutil/internal.h, which is also included).
libavutil/time_internal.h has been included for gmtime_r() in commit
e7dd97b5d8cd6ea150446591f37a5946e8ab7cfb; it is unused since commit
b72a7b96f8 which basically moved the code
making use of gmtime_r() to libavutil/dict.c to use in
avpriv_dict_set_timestamp().
audiointerleave.h has been added in c26e58e32c
because of ff_interleave_compare_dts() (at that time the muxing code
was not split from utils.c yet); said function became static in commit
101e1f6ff9, making this header redundant.
metadata.h has been mostly included for what now resides in
libavutil/dict.h. The stuff that now resides in metadata.h has only been
used briefly: From commits ed7694d8cf to
d60a9f52eb.
riff.h has been added in 45da8124a0
because riff.h once contained declarations for (ff_)codec_get_tag().
This was changed in bfe5454cd2.
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Support the dvcC/dvvC box from the spec Dolby Vision Streams Within the
ISO Base Media File Format, Version 2.1.2
(https://www.dolby.com/in/en/technologies/dolby-vision/dolby-vision\
-bitstreams-within-the-iso-base-media-file-format-v2.1.2.pdf)
Export the DOVI information as side data.
Signed-off-by: vacingfang <vacingfang@tencent.com>
Support the DOVI Video Stream Descriptor from Dolby Vision Streams
Within the MPEG-2 Transport Stream Format V1.2
From the spec: https://www.dolby.com/us/en/technologies/\
dolby-vision/dolby-vision-bitstreams-in-mpeg-2-transport-\
stream-multiplex-v1.2.pdf.
Export the DOVI information as side data.
Signed-off-by: vacingfang <vacingfang@tencent.com>
Reindentation, removal of { } if they contain only one statement
and moving the return statement to a line of its own in situations
like "if (ret < 0) return ret;". Moreover, several overlong lines
were made shorter and a camelCase variable received a name in line
with our naming conventions.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Up until now, the Matroska muxer would mark a track as default if it had
the disposition AV_DISPOSITION_DEFAULT or if there was no track with
AV_DISPOSITION_DEFAULT set; in the latter case even more than one track
of a kind (audio, video, subtitles) was marked as default which is not
sensible.
This commit changes the logic used to mark tracks as default. There are
now three modes for this:
a) In the "infer" mode the first track of every type (audio, video,
subtitles) with default disposition set will be marked as default; if
there is no such track (for a given type), then the first track of this
type (if existing) will be marked as default. This behaviour is inspired
by mkvmerge. It ensures that the default flags will be set in a sensible
way even if the input comes from containers that lack the concept of
default flags. This mode is the default mode.
b) The "infer_no_subs" mode is similar to the "infer" mode; the
difference is that if no subtitle track with default disposition exists,
no subtitle track will be marked as default at all.
c) The "passthrough" mode: Here the track will be marked as default if
and only the corresponding input stream had disposition default.
This fixes ticket #8173 (the passthrough mode is ideal for this) as
well as ticket #8416 (the "infer_no_subs" mode leads to the desired
output).
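A hedged usage sketch (the option is assumed to be exposed as
"default_mode"; oc stands for an already-configured muxing
AVFormatContext):

    #include "libavformat/avformat.h"
    #include "libavutil/dict.h"

    static int open_muxer(AVFormatContext *oc)
    {
        AVDictionary *opts = NULL;
        /* alternatives: "infer" (the default) or "passthrough" */
        av_dict_set(&opts, "default_mode", "infer_no_subs", 0);
        int ret = avformat_write_header(oc, &opts);
        av_dict_free(&opts);
        return ret;
    }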
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
At the end of encoding, the FLAC encoder sends a packet whose side data
contains updated extradata (e.g. a correct md5 checksum). The Matroska
muxer uses this to update the CodecPrivate.
In doing so, the stream's codecpar was copied. But given that writing
a FLAC CodecPrivate does not modify the used AVCodecParameters at all,
there is no need to do so and this commit changes this.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Several EBML Master elements for which a good upper bound of the final
length was available were nevertheless written without giving an
upper bound of the final length to start_ebml_master(), so that their
length fields were eight bytes long. This has been changed.
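A hedged sketch of the change (helper names follow matroskaenc's internal
API as far as recalled; expected_size is the computed upper bound and pb
the output AVIOContext):

    /* passing a known upper bound lets start_ebml_master() reserve a length
     * field of the minimal width instead of a fixed eight bytes */
    ebml_master seekhead = start_ebml_master(pb, MATROSKA_ID_SEEKHEAD, expected_size);
    /* ... write the element's children ... */
    end_ebml_master(pb, seekhead);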
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The Matroska muxer does not write every stream as a Matroska track;
some streams are written as AttachedFile. But should no stream be
written as a Matroska track, the Matroska muxer would nevertheless
write a Tracks element without a TrackEntry. This is against the spec.
This commit changes this and only writes a Tracks element if there is at
least one Matroska track.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
As WebM doesn't support Attachments, the Matroska muxer drops them when
in WebM mode. This happened silently until this commit which adds a
warning for this.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
In order to determine whether the current Cluster needs to be closed
because of the limits on clustersize and clustertime,
mkv_write_packet() would first get the size of the current Cluster by
applying avio_tell() on the dynamic buffer holding the current Cluster.
It did this without checking whether there is a dynamic buffer for
writing Clusters open right now.
In this case (which happens when writing the first packet)
avio_tell() returned AVERROR(EINVAL); yet it is not good to rely on
avio_tell() (or actually, avio_seek()) to handle the situation
gracefully.
Fixing this is easy: Only check whether a Cluster needs to be closed
if a Cluster is in fact open.
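A hedged sketch of the fix (the struct is an illustrative stand-in for
the muxer's state, not its actual type):

    #include "libavformat/avio.h"

    typedef struct ClusterState {
        AVIOContext *cluster_bc;   /* dynamic buffer for the current Cluster */
        int64_t      cluster_pos;  /* -1 while no Cluster is open */
    } ClusterState;

    static void maybe_close_cluster(ClusterState *mkv)
    {
        if (mkv->cluster_pos == -1)
            return;                              /* nothing open, nothing to check */
        int64_t cluster_size = avio_tell(mkv->cluster_bc);
        (void)cluster_size;  /* compare against the clustersize/clustertime limits */
    }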
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
When creating DASH streams, the TrackNumber is externally prescribed
and not derived from the number of streams in the AVFormatContext, so
if the number of tracks for a file using an explicit TrackNumber was
more than one, the resulting file would be broken (it would be impossible
to tell to which track a Block belongs if different tracks share the
same TrackNumber). So disallow this.
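A hedged sketch of the new restriction (names simplified; the real check
lives in the muxer's init/header path):

    #include "libavformat/avformat.h"
    #include "libavutil/error.h"
    #include "libavutil/log.h"

    static int check_dash_single_track(AVFormatContext *s, int is_dash)
    {
        if (is_dash && s->nb_streams != 1) {
            av_log(s, AV_LOG_ERROR,
                   "DASH-mode Matroska files must contain exactly one track\n");
            return AVERROR(EINVAL);
        }
        return 0;
    }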
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The Matroska muxer currently only adds CuePoints in three cases:
a) For video keyframes. b) For the first audio frame in a new Cluster if
in DASH-mode. c) For subtitles. This means that ordinary Matroska audio
files won't have any Cues, which impedes seeking.
This commit changes this. For every track in a file without video track
it is checked and tracked whether a Cue entry has already been added
for said track for the current Cluster. This is used to add a Cue entry
for each first packet of each track in each Cluster.
Implements #3149.
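A hedged sketch of the bookkeeping (types and field names are
illustrative; it applies to files without a video track):

    typedef struct TrackState { int has_cue; } TrackState;

    static void on_packet(TrackState *track, int *want_cuepoint)
    {
        if (!track->has_cue) {
            *want_cuepoint = 1;   /* add a CuePoint for this packet's position */
            track->has_cue = 1;
        }
    }

    static void on_new_cluster(TrackState *tracks, int nb_tracks)
    {
        for (int i = 0; i < nb_tracks; i++)
            tracks[i].has_cue = 0;   /* reset once per Cluster */
    }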
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>