This function is simple enough that it is preferable to make it inline
rather than deal with all the complications arising from it being an
exported symbol.
Keep avpriv_align_put_bits() around until the next major bump to
preserve ABI compatibility.
flush_put_bits() already fills the bitstream with zeroes, so it is
unnecessary to align the bitstream beforehand.
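For reference, a minimal sketch of what such an inline replacement can
look like, built on FFmpeg's PutBitContext API (the exact body in
put_bits.h may differ):

    #include "libavcodec/put_bits.h"

    /* Pad with zero bits up to the next byte boundary; simple enough
     * that keeping it an exported symbol is not worth the trouble. */
    static inline void align_put_bits(PutBitContext *s)
    {
        put_bits(s, s->bit_left & 7, 0);
    }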
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Wavelet types with large amounts of overreading/overwriting, like 9_7,
would write into the padding at high wavelet depths; the stale data
would then be read by the next frame's transform and quickly cause
artifacts to appear in subsequent frames.
This fix affects all frames encoded with a non-power-of-two width,
with the artifacts varying from unnoticeable to very noticeable
depending on encoder settings, so reencoding is advisable.
On Windows, "unsigned long" is still 32 bits, so the UL suffix does
not guarantee 64-bit arithmetic.
The only parts that need 64 bits are (1ULL << (m + 32)) and
(t*qf + qf). Hence, use the proper ULL suffix for the former
and just increase the type of the qf constant for the latter.
No overflows can happen as long as these are done in 64 bits and
the quantization table doesn't change.
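A minimal sketch of the promotion issue (m, t and the qf value below
are hypothetical stand-ins for the encoder's actual values):

    #include <stdint.h>

    /* On LLP64 targets such as 64-bit Windows, "unsigned long" is 32
     * bits, so a UL suffix does not widen the arithmetic. */
    static uint64_t scale(int m, uint32_t t)
    {
        const uint64_t qf = 235;          /* 64-bit type widens t*qf + qf */
        uint64_t base = 1ULL << (m + 32); /* ULL: at least 64 bits        */
        return base + t * qf + qf;        /* all evaluated in 64 bits;
                                           * the combination here is
                                           * illustrative only            */
    }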
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
This commit replaces the huge and impractical LUT that mapped
coefficients and a quantizer to the bits to encode. Instead, the
division is replaced by a standard multiplication and a shift, and the
values are coded using the regular Golomb coding functions.
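The underlying trick, roughly (names and the exact rounding are
assumptions, not the encoder's literal code):

    #include <stdint.h>

    #define QSHIFT 32

    /* Precompute a fixed-point reciprocal of the quantizer q once. */
    static uint64_t quant_recip(uint32_t q)
    {
        return ((1ULL << QSHIFT) + q - 1) / q; /* ceil(2^32 / q) */
    }

    /* x / q becomes a multiply and a shift; exact for the coefficient
     * magnitudes the encoder deals with, not for arbitrary 32-bit x. */
    static uint32_t quant_div(uint32_t x, uint64_t recip)
    {
        return (uint32_t)((x * recip) >> QSHIFT);
    }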
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
The rounding caused by the size scaler wasn't compensated for, so the
slice sizes grew beyond what is allowed per frame.
Signed-off-by: Rostislav Pehlivanov <rpehlivanov@obe.tv>
Since non-Haar wavelets need to look at pixels outside the frame, we
need to pad the buffer. The old factor of two seemed to be a workaround
for that fact and only padded to the left and bottom. This correctly
pads by the slice size, and as such reduces memory usage and the
potential for exploits.
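A sketch of the idea, with hypothetical names (FFALIGN is FFmpeg's
round-up macro):

    #include "libavutil/macros.h" /* FFALIGN */

    /* Round each plane dimension up to a whole number of slices so
     * out-of-frame reads/writes by non-Haar wavelets stay inside the
     * buffer, instead of doubling the plane. */
    static void pad_plane(int w, int h, int slice_w, int slice_h,
                          int *padded_w, int *padded_h)
    {
        *padded_w = FFALIGN(w, slice_w);
        *padded_h = FFALIGN(h, slice_h);
    }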
Reported by Liu Bingchang.
Ideally, there should be no temporary buffer but the encoder is designed
to deinterleave the coefficients into the classical wavelet structure
with the lower frequency values in the top left corner.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
Given how incredibly limited the official specifications are (they
restrict all use to only the most common broadcasting formats), permit
all supported inputs by default. This makes the encoder more useful.
Avoids having random magic values in the decoder and a separate macro
in the encoder.
Signed-off-by: Rostislav Pehlivanov <rpehlivanov@obe.tv>
The slice prefix is 0 in the reference encoder and the decoder ignores it.
Writing 0 there seems like the best temporary solution.
The padding could have contained uninitialized data, but reference VC2
encoders put 0xFF there, hence the memset value.
Overall this allows producing bitstreams with no random data, for use
by FATE.
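Roughly, per slice (helper name and buffer handling are hypothetical,
for illustration only):

    #include <string.h>
    #include "libavcodec/put_bits.h"

    /* The two places a slice could leak nondeterministic bytes, and
     * what gets written instead. */
    static void make_slice_deterministic(PutBitContext *pb, uint8_t *buf,
                                         int used, int total)
    {
        put_bits(pb, 8, 0); /* slice prefix: reference encoder uses 0 */
        /* ... qindex and coefficients are coded in between ... */
        memset(buf + used, 0xFF, total - used); /* 0xFF padding, like
                                                 * reference VC2 encoders */
    }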
Until now, formats which were in the spec but not in the encoder's
list of supported formats required the -strict -1 flag. This enables
support for all video formats which are specified, all the way from
QSIF525 to 8K.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
In some cases this caused the slice size rounding to generate invalid
slice sizes and overwrite some slices.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
This was a leftover from before the slices were encoded in parallel.
Since the put_bits context is initialized per slice, aligning it
afterwards is pointless.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
This commit solves most of the crashes and issues with the encoder and
the bitrate setting. The encoder now always allocates the absolute
lowest amount of memory regardless of what the bitrate has been set to.
Therefore, if a user inputs a very low bitrate, the encoder will use
the maximum possible quantization (basically zeroing out all
coefficients), allocate a packet and encode it. There is no longer any
coupling between the bitrate and the allocation size, and thus no
crashes because the buffer isn't large enough.
The maximum quantizer index was raised to the size of the table, both
to keep the overshoot at ridiculous bitrates low and to improve quality
at higher bit depths (since the coefficients grow larger per transform,
quantizing them to the same relative level requires larger quantization
indices).
Since the quantization index search starts from the previous
quantization index for that slice, the quantization step was reduced to
a static 1 to improve performance. Previously, with quant/5, the step
was usually set to 0 at the start (and was later clipped to 1), so this
isn't a big change. As the step size increases, so does the amount of
bits left over, and thus the redistribution algorithm has to iterate
more and waste more time.
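A sketch of the search loop this describes (the struct and the bit-cost
probe are hypothetical):

    /* Hypothetical slice context and bit-cost probe. */
    typedef struct {
        int bits_budget;
    } Slice;

    int slice_cost_bits(const Slice *s, int q); /* bits used at index q */

    static int pick_quant_idx(const Slice *s, int prev_q, int max_q)
    {
        int q = prev_q;  /* start from the previous quantization index */
        while (q < max_q && slice_cost_bits(s, q) > s->bits_budget)
            q++;         /* static step of 1; larger steps leave more
                          * bits for the redistribution pass to soak up */
        return q;
    }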
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
This was a regression introduced by commit e7345abe05, which enabled
full use of the allocated packet; due to the overhead of field coding,
the buffer was too small and triggered warnings and crashes.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
The fact that all quantization index costs are now cached justifies
storing 20 more integers in a structure that is already allocated on
the heap.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
This commit redistributes the leftover bytes amongst the 150 largest
slices, in the hope that they are the most bitrate-starved ones.
A more perceptual method would probably need to cut bits off from slices
which don't need much, but that'll be implemented later.
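A sketch of the redistribution pass (struct and field names are
hypothetical):

    #include <stdlib.h>

    #define TOP_SLICES 150

    typedef struct {
        int size;         /* current coded size in bytes */
        int budget_bytes; /* bytes this slice may spend  */
    } Slice;

    static int cmp_size_desc(const void *a, const void *b)
    {
        return ((const Slice *)b)->size - ((const Slice *)a)->size;
    }

    /* Hand the leftover bytes to the 150 largest slices, round-robin,
     * assuming the biggest slices are the most bitrate-starved. */
    static void redistribute(Slice *slices, int nb, int leftover)
    {
        int top = nb < TOP_SLICES ? nb : TOP_SLICES;
        if (top <= 0)
            return;
        qsort(slices, nb, sizeof(*slices), cmp_size_desc);
        for (int i = 0; leftover > 0; i = (i + 1) % top) {
            slices[i].budget_bytes++;
            leftover--;
        }
    }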
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
Previously, a global average was used. Using the previous quantizer
resulted in a fairly significant speedup, as slice size selection
settled down quicker.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
16 bits were definitely not enough and caused artifacts to appear on
barely compressed images.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
This commit moves the minimum bits per slice calculations outside of the
rate control function as it is identical for every slice.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
Since positive and negative coefficients differ only in their last bit
when written to the bitstream, it was possible to remove the sign from
the tables, thus halving them. All quantization is now done in the
unsigned domain, as the sign is handled completely separately, which
gets rid of the need to do quantization on 32-bit signed integers.
Overall, this slightly speeds up the encoder, depending on the machine.
The commit still generates files bit-identical to those produced before
it.
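In outline (the quantizer and Golomb helpers are hypothetical; the
encoder uses its own routines):

    #include "libavcodec/put_bits.h"

    uint32_t quantize_mag(uint32_t mag, int q);         /* hypothetical */
    void     put_golomb(PutBitContext *pb, uint32_t v); /* hypothetical */

    /* Quantize the magnitude only, then emit the sign as the trailing
     * bit, so the tables never need signed entries. */
    static void code_coef(PutBitContext *pb, int32_t coef, int q)
    {
        uint32_t mag  = coef < 0 ? -(uint32_t)coef : (uint32_t)coef;
        uint32_t qval = quantize_mag(mag, q); /* unsigned domain */
        put_golomb(pb, qval);
        if (qval)
            put_bits(pb, 1, coef < 0); /* +x and -x differ only here */
    }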
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
The reference encoder limits it to 64, but testing revealed that
indices above 50 make absolutely no difference in the amount of zeroed
coefficients.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
This commit adds support for the Haar wavelet transforms (simple and
allowed by the spec, but inferior in quality).
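The per-pair lifting steps are tiny, which is the appeal; a sketch of
the analysis step in lifting form (the spec also defines a variant that
pre-shifts the samples by one bit for extra precision):

    #include <stdint.h>

    /* One Haar analysis step on an adjacent sample pair. */
    static inline void haar_analysis(int32_t *lo, int32_t *hi)
    {
        *hi -= *lo;            /* high-pass: difference     */
        *lo += (*hi + 1) >> 1; /* low-pass: rounded average */
    }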
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
Similar to how the AAC encoder does it.
0 means the video has been compressed losslessly or almost losslessly
throughout. Generally, the higher the value, the worse the quality.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
This commit adds a new encoder capable of creating BBC/SMPTE Dirac/VC-2 HQ
profile files.
Dirac is a wavelet-based codec created by the BBC a little more than 10
years ago. Since then, wavelets have mostly gone out of style as they
did not provide adequate coding gains at lower bitrates. Dirac was a
fully featured video codec with perceptual masking, support for most
popular pixel formats, interlacing, overlapped-block motion
compensation, and other features. It found new life after being
stripped of various features and standardized by the SMPTE as the VC-2
codec, with one extra profile added: the HQ profile that this encoder
supports.
The HQ profile is based on the Low Delay profile that previously
existed in Dirac. The profile forbids DC prediction and arithmetic
coding in order to focus on high performance and low delay at higher
bitrates.
The standard bitrates for this profile vary, but generally 1:4
compression is expected (~525 Mbps vs. ~2200 Mbps for uncompressed
1080p50). The codec only supports I-frames, hence the high bitrates.
The structure of this encoder is simple: do a DWT on the entire image,
split it into multiple slices (as specified by the user) and encode
them in parallel. All of the slices are the same size, making rate
control and threading trivial. Although written in plain C, this
encoder is capable of 30 frames per second on a 4-core, 8-thread Ivy
Bridge.
A lookup table is used to encode most of the coefficients.
No code was used from the GSoC encoder from 2007 except for the 2
transform functions in diracenc_transforms.c. All other code was written
from scratch.
This encoder outperforms any other existing encoder in quality,
usability and features. Other existing implementations do not support
4-level transforms or 64x64 blocks (slices), both of which greatly
increase compression.
As previously said, the codec is meant for broadcasting, hence support
for non-broadcasting image widths, heights, bit depths, aspect ratios,
etc. is limited by the "level". Although this encoder supports a few
chroma subsamplings (420, 422, 444), signalling them is generally
outside the specification of the level used (3), and the reference
decoder will outright refuse to read any image with such a flag
signalled (it only supports 1920x1080 yuv422p10). However, most
implementations will happily read files with alternate dimensions,
framerates and formats signalled.
Therefore, in order to encode files other than 1080p50 yuv422p10le,
you need to pass "-strict -2" on the command line. The FFmpeg decoder
will happily read any files made with non-standard parameters,
dimensions and subsamplings, and so will other implementations. IMO
this should be "-strict -1", but I'll leave that up for discussion.
There is still plenty left to implement; for instance, 5 more wavelet
transforms are in the specs and supported by the decoder. The encoder
can be lossless, given a high enough bitrate.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>