Since c6a63e1109, the parameter sets modified as the content of PPS
units have been references shared with the CodedBitstreamH264Context,
so modifying them alters the parsing of future access units; this meant
that frames often got discarded because invalid values were parsed.
This patch makes h264_redundant_pps compatible with the reality of
reference-counted parameter sets.
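A minimal sketch of the approach, assuming the CBS helper for making
unit content writable; the helper name, context handling and example
modification below are illustrative, not the actual patch:

    /* Hypothetical helper: modify the PPS carried by *unit without
     * touching the copy that the parser still references. */
    static int fixup_pps(CodedBitstreamContext *cbc, CodedBitstreamUnit *unit)
    {
        H264RawPPS *pps;
        int err;

        /* Get a private, writable copy of the unit content first. */
        err = ff_cbs_make_unit_writable(cbc, unit);
        if (err < 0)
            return err;

        pps = unit->content;
        pps->pic_init_qp_minus26 = 0;   /* example modification only */
        return 0;
    }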
Fixes #7807.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: Mark Thompson <sw@jkqxz.net>
Use the unit type table to determine what we need to do to clone the
internals of the unit content when making copies for refcounting or
writeability. (This will still fail for units with complex content
if they do not have a defined clone function.)
Setup and naming from a patch by Andreas Rheinhardt
<andreas.rheinhardt@gmail.com>, but with the implementation changed
to use the unit type information if possible rather than requiring a
codec-specific function.
Unit types are split into three categories, depending on how their
content is managed:
* POD structure - these require no special treatment.
* Structure containing references to refcounted buffers - these can use
a common free function when the offsets of all the internal references
are known.
* More complex structures - these still require ad-hoc treatment.
For each codec we can then maintain a table of descriptors for each set of
equivalent unit types, defining the mechanism needed to allocate/free that
unit content. This is not required to be used immediately - a new alloc
function supports this, but does not replace the old one which works without
referring to these tables.
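A rough sketch of what such a descriptor table could look like; all
names and fields below are illustrative rather than the exact cbs
internals:

    typedef enum {
        CONTENT_POD,            /* plain struct, no internal allocations       */
        CONTENT_INTERNAL_REFS,  /* holds AVBufferRef pointers at known offsets */
        CONTENT_COMPLEX,        /* needs ad-hoc clone/free functions           */
    } UnitContentType;

    typedef struct UnitTypeDescriptor {
        CodedBitstreamUnitType unit_type;
        UnitContentType        content_type;
        size_t                 content_size;
        int                    nb_ref_offsets;
        size_t                 ref_offsets[8];  /* offsets of AVBufferRef* members */
        void                 (*content_free)(void *opaque, uint8_t *data);
    } UnitTypeDescriptor;

    /* Hypothetical H.264 entry: a slice keeps one reference to the
     * refcounted buffer holding its bitstream data. */
    static const UnitTypeDescriptor h264_unit_types[] = {
        { H264_NAL_SLICE, CONTENT_INTERNAL_REFS, sizeof(H264RawSlice),
          1, { offsetof(H264RawSlice, data_ref) } },
    };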
Fixes memleaks with some encoders that don't unref the packet before
returning.
This is consistent with the behavior of AVCodec.encode()
implementations in encode_simple_internal().
Found-by: mkver
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: James Almer <jamrial@gmail.com>
The lengths of the VLC codes are implicitly contained in the VLC tables
themselves; apart from that, they are not used later on. So it is
unnecessary to store them, and the very same array can be reused to
parse the Huffman table for the next plane.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The code already checks that exactly the expected number of entries is
read and set. Ergo it is unnecessary to zero them at the beginning.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Up until now, there were three comparison functions depending upon
bitness. But they are actually all the same, namely a lexical ordering:
entry a > entry b iff a.len > b.len, or a.len == b.len and a.sym < b.sym.
So they can easily be unified.
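A minimal sketch of such a unified comparator, assuming the HuffEntry
fields are named len and sym:

    static int huff_cmp_len(const void *a, const void *b)
    {
        const HuffEntry *aa = a, *bb = b;

        if (aa->len != bb->len)
            return aa->len - bb->len;   /* shorter codes sort first */
        return bb->sym - aa->sym;       /* a > b iff a.sym < b.sym  */
    }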
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
When parsing Huffman tables, an array of HuffEntries (a struct
containing a code's bit length, its bits and its symbol) is used as an
intermediate table in order to sort the entries (the order depends both
on the length of the entries and on their symbols). After sorting them,
the symbol and len components are copied into other arrays (the
HuffEntries' code field is never set or used, despite taking up quite a
lot of stack space) and the codes are generated. Afterwards, the VLC is
created.
Yet ff_init_vlc_sparse() can handle non-contiguous arrays as input;
there is no need to copy the entries at all. This commit implements
that.
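A sketch of the idea, passing the sorted HuffEntry array directly to
ff_init_vlc_sparse() with sizeof(HuffEntry) as the wrap (stride) of
every component; variable and field names are assumptions, and the
codes are presumed to have been generated into the entries' code field
beforehand:

    ret = ff_init_vlc_sparse(&vlc, nb_bits, nb_elem,
                             &he[0].len,  sizeof(he[0]), sizeof(he[0].len),
                             &he[0].code, sizeof(he[0]), sizeof(he[0].code),
                             &he[0].sym,  sizeof(he[0]), sizeof(he[0].sym), 0);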
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
When the MagicYUV decoder builds Huffman tables from an array of code
lengths, it proceeds as follows: First it copies the entries of the
array of lengths into an array of HuffEntries (a struct which contains
a length and a symbol field); it also sets the symbol field in
descending order from nb_elem - 1 to 0, where nb_elem is the common
number of elements of the length and HuffEntry arrays. Then the
HuffEntry array is sorted lexicographically: a > b iff a.len > b.len,
or a.len == b.len and a.sym > b.sym. Afterwards, the symbols of the
array sorted this way are inverted again (i.e. each symbol sym is
replaced by nb_elem - sym).
Yet the inversion can easily be avoided altogether: Just modify the
order so that smaller symbols correspond to bigger HuffEntries. This
leads to the same permutation as the current code, and given that the
two inversions just cancel each other out, the result is the same.
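A minimal sketch of the simplification; he, nb_elem, len_table and the
comparator huff_cmp_len are assumed names, following the sketches
above:

    for (int i = 0; i < nb_elem; i++) {
        he[i].sym = i;               /* natural order, no nb_elem - 1 - i */
        he[i].len = len_table[i];
    }
    /* The comparator ranks smaller symbols as "bigger" among equal code
     * lengths, which yields the same permutation as assigning inverted
     * symbols, sorting with the opposite tie-break and inverting back. */
    qsort(he, nb_elem, sizeof(*he), huff_cmp_len);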
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
av_new_packet() already sets the size. And if the packet is not
allocated by av_new_packet() (which seems to be impossible atm), both
pkt->size and size are 0, so setting it again is unnecessary in this
scenario, too.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The comment referred to the INIT_VLC_USE_STATIC flag, which was removed
in 2009 in 595324e143b57a52e2329eb47b84395c70f93087; the function it
referred to was removed even earlier, in commit 83422c1940 in 2008.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Ticket #8754 contains discussion surrounding the error message that is
printed upon a mismatched aspect ratio in derived encodings. This
should make it clearer to the user what issue they are experiencing.
Reviewed-by: "Jeyapal, Karthick" <kjeyapal@akamai.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The V4L2 driver does not actually have an associated DRM device at all, so
users work around the requirement by giving libva an unrelated display-only
device instead (which is fine, because it doesn't actually do anything with
that device). This was broken by bc9b6358fb
forcing a render node, because the display-only device did not have an
associated render node to use. Fix that by just passing through the
original non-render DRM fd if we can't find a render node.
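A minimal sketch of the fallback logic, assuming libdrm; the helper
name and the exact control flow in hwcontext_vaapi.c are illustrative:

    #include <fcntl.h>
    #include <stdlib.h>
    #include <xf86drm.h>

    /* Return an fd for the render node associated with source_fd, or
     * source_fd itself if no render node can be found or opened. */
    static int get_render_fd(int source_fd)
    {
        char *render_node = drmGetRenderDeviceNameFromFd(source_fd);
        int fd;

        if (!render_node)
            return source_fd;   /* display-only device: pass it through */

        fd = open(render_node, O_RDWR);
        free(render_node);
        return fd < 0 ? source_fd : fd;
    }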
Reported-by: Paul Kocialkowski <paul.kocialkowski@bootlin.com>
Tested-by: Paul Kocialkowski <paul.kocialkowski@bootlin.com>
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
This avoids keeping potentially dangling pointers in the context,
beautifies the code (by replacing "&ri->gb" by gb for every access to
the GetByteContext) and also highlights the GetByteContext's short-lived
nature better.
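A minimal sketch of the resulting pattern; the function and variable
names are hypothetical:

    static int checksum_run(const uint8_t *buf, int size)
    {
        GetByteContext gb;   /* lives only for the duration of this call */
        int sum = 0;

        bytestream2_init(&gb, buf, size);
        while (bytestream2_get_bytes_left(&gb) > 0)
            sum += bytestream2_get_byte(&gb);
        return sum;
    }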
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The implementation of the tag tree did not
set the correct reset value for the encoder.
This led to inefficient tag trees being encoded.
This patch fixes the implementation of the
ff_tag_tree_zero() function.
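A rough sketch of the corrected reset; the actual Jpeg2000TgtNode
layout and function signature in jpeg2000.c may differ:

    void ff_tag_tree_zero(Jpeg2000TgtNode *t, int w, int h, int val)
    {
        int siz = tag_tree_size(w, h);

        for (int i = 0; i < siz; i++) {
            t[i].val = val;   /* reset to the caller-supplied value, not 0 */
            t[i].vis = 0;
        }
    }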
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This patch allows setting a compression ratio and
multiple layers. The user has to input a compression
ratio for each layer.
The per-layer compression ratios can be set as follows:
-layer_rates "r1,r2,...rn"
to create 'n' layers.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The implementation of tag tree encoding was incorrect.
However, this error was not visible as the current j2k
encoder encodes only 1 layer.
This patch fixes tag tree coding for JPEG2000 so that
tag tree coding works for multi-layer encoding.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This patch makes the tag_tree_zero() and tag_tree_size()
functions non-static and callable from other files.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>