This disables everything that was deprecated at least 18 months ago.
Readjust the minimum API version as needed, postponing any
API-incompatible changes until the next bump.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
Having a mismatch between the number of channels in the stream and those
in the channel map will lead to a segfault or worse.
Bug-Id: 1016
CC: libav-stable@libav.org
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
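A minimal sketch of the kind of validation this implies, with
illustrative names rather than the actual decoder's:

    #include <stddef.h>

    /* Reject a channel map that does not describe exactly as many
     * channels as the stream; indexing it with the stream's channel
     * count would otherwise read out of bounds. */
    static int validate_channel_map(size_t map_len, int stream_channels)
    {
        return (stream_channels >= 0 &&
                (size_t)stream_channels == map_len) ? 0 : -1;
    }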
This supports retrieving the device from a provided hw_frames_ctx, and
automatically creating a hw_frames_ctx if hw_device_ctx is set.
The old API is not deprecated yet. The user can still use
av_vdpau_bind_context() (with or without setting hw_frames_ctx), or use
the even older API by allocating and setting hwaccel_context manually.
This "reuses" the flags introduced for the av_vdpau_bind_context() API
function, and makes them available to all hwaccels. This does not affect
the current vdpau API, as av_vdpau_bind_context() should obviously
override the AVCodecContext.hwaccel_flags flags for the sake of
compatibility.
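As a hedged sketch of the new path from the API user's side (error
handling trimmed; setup_vdpau is an illustrative name):

    #include <libavcodec/avcodec.h>
    #include <libavutil/hwcontext.h>

    /* With hw_device_ctx set, the decoder derives the device from it
     * and creates a hw_frames_ctx automatically. */
    static int setup_vdpau(AVCodecContext *avctx)
    {
        return av_hwdevice_ctx_create(&avctx->hw_device_ctx,
                                      AV_HWDEVICE_TYPE_VDPAU,
                                      NULL, NULL, 0);
    }

The old paths keep working unchanged: call av_vdpau_bind_context(), or
allocate and set hwaccel_context manually.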
Align the second/third operands as they usually are.
Due to the wildly varying sizes of the written out operands
in aarch64 assembly, the column alignment is usually not as clear
as in arm assembly.
Signed-off-by: Martin Storsjö <martin@martin.st>
Also do some small cosmetic changes: drop the pointless _MMX suffix
from the ABSD2 macro name, drop the pointless check for MMX support
(we always assume MMX is available in our SIMD code), and fix spelling.
Section 9.2.3.2 of the spec implies that run_before must not be larger
than zeros_left.
Fixes invalid reads with corrupted files.
CC: libav-stable@libav.org
Bug-Id: 1000
Found-By: Kamil Frankowicz
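A minimal sketch of the added validation, with illustrative names:

    /* Per section 9.2.3.2, run_before may never exceed the number of
     * zeros still left to distribute; treat larger values as corrupt
     * input instead of letting the scan position go out of range. */
    static int check_run_before(int run_before, int zeros_left)
    {
        return run_before > zeros_left ? -1 : 0;
    }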
The code does some nontrivial jumping around in the buffer, so it is
safer to use a checked API rather than do everything manually.
Fixes a bug in nalff parsing, where the length field is currently not
counted in the buffer size check, resulting in possible overreads with
invalid files.
CC: libav-stable@libav.org
Bug-Id: 1002
Found-By: Kamil Frankowicz
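An illustrative sketch of the checked-read pattern (not the actual
parser; the 2-byte length prefix is an assumption for the example):

    #include <stdint.h>
    #include <stddef.h>

    /* Count the length field itself against the remaining buffer, then
     * bound the payload by what is actually left, so length + payload
     * can never overrun the input. Expects *pos <= size on entry. */
    static int read_nal(const uint8_t *buf, size_t size, size_t *pos,
                        const uint8_t **nal, size_t *nal_len)
    {
        if (size - *pos < 2)       /* room for the length field? */
            return -1;
        size_t len = ((size_t)buf[*pos] << 8) | buf[*pos + 1];
        *pos += 2;
        if (len > size - *pos)     /* room for the payload? */
            return -1;
        *nal     = buf + *pos;
        *nal_len = len;
        *pos    += len;
        return 0;
    }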
In the half/quarter cases where we don't use the min_eob array, defer
loading the pointer until we know it will be needed.
Signed-off-by: Martin Storsjö <martin@martin.st>
This reduces the number of lines and the amount of duplication.
Also simplify the eob check for the half case.
If we are in the half case, we know we will at least need to do the
first three slices; we only need to check eob for the fourth one,
so we can hardcode the value to check against instead of loading it
from the min_eob array.
Since at most one slice can be skipped in the first pass, we can
unroll the loop for filling zeros completely, as it was done for
the quarter case before.
This allows skipping loading the min_eob pointer when using the
quarter/half cases.
Signed-off-by: Martin Storsjö <martin@martin.st>
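In C terms, the specialization amounts to roughly the following; the
constant and helper names are hypothetical, the real code is NEON
assembly:

    #include <stdint.h>
    #include <string.h>

    #define SLICE_LEN      (4 * 16)  /* illustrative slice size */
    #define MIN_EOB_SLICE3 135       /* hypothetical hardcoded threshold */

    static void transform_slice(int16_t *coeffs, int slice)
    {
        (void)coeffs; (void)slice;   /* first-pass transform stub */
    }

    /* Half case: slices 0-2 are always needed; only slice 3 depends on
     * eob, compared against a hardcoded threshold instead of a value
     * loaded from the min_eob array. At most this one slice can be
     * skipped, so the zero fill needs no loop. */
    static void idct_half_pass1(int16_t *coeffs, int eob)
    {
        for (int i = 0; i < 3; i++)
            transform_slice(coeffs, i);
        if (eob > MIN_EOB_SLICE3)
            transform_slice(coeffs, 3);
        else
            memset(coeffs + 3 * SLICE_LEN, 0,
                   SLICE_LEN * sizeof(*coeffs));
    }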
Decodes YUV 4:2:2 10-bit and RGB 12-bit files.
Older files with more subbands, skips, Bayer, or alpha are not supported.
Further fixes and refactorings by Anton Khirnov <anton@khirnov.net>,
Diego Biurrun <diego@biurrun.de>, Vittorio Giovara <vittorio.giovara@gmail.com>
Signed-off-by: Diego Biurrun <diego@biurrun.de>
Make it clear that there is no timing-dependent behavior. In particular,
there is no state in which both input and output are denied, and where
you have to wait for a while yourself to make progress (apparently some
hardware decoders like to do this).
Avoid wording that refers to time. It shouldn't be mistaken for some
kind of asynchronous API (like POSIX read(), which can return EAGAIN if
there is no new input yet). It's a state machine, so try to use
appropriate terms.
Signed-off-by: Diego Biurrun <diego@biurrun.de>
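A sketch of the loop this guarantees; avcodec_send_packet() and
avcodec_receive_frame() are the real calls, next_packet() is a
hypothetical input source:

    #include <libavcodec/avcodec.h>

    AVPacket *next_packet(void);  /* hypothetical demuxer wrapper */

    /* No state denies both input and output, so this loop always makes
     * progress and never needs to sleep or poll: whenever receiving
     * reports EAGAIN, sending is guaranteed to accept a packet. */
    static int run_decoder(AVCodecContext *avctx, AVFrame *frame)
    {
        for (;;) {
            int ret = avcodec_receive_frame(avctx, frame);
            if (ret >= 0) {
                /* consume the frame */
                av_frame_unref(frame);
                continue;
            }
            if (ret != AVERROR(EAGAIN))
                return ret;        /* AVERROR_EOF or a real error */
            ret = avcodec_send_packet(avctx, next_packet());
            if (ret < 0)
                return ret;
        }
    }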
The constants used in the decoder were computed in floating point,
and this caused different values to be generated on different
architectures. Additionally, on big-endian machines the fate test
would output bytes in native order, which differs from the order
hardcoded in the test.
So, eradicate floating-point numbers and use fixed-point (32.32)
arithmetic everywhere, replacing constants with precomputed integer
values, and force the pixel format output to be the same in the fate
test.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
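An illustrative sketch of the 32.32 representation; these are not the
decoder's actual macros:

    #include <stdint.h>

    /* A real value x is stored as the precomputed integer
     * round(x * 2^32); e.g. 1.25 becomes 0x140000000. */
    typedef int64_t fix32_32;

    /* Multiply an integer sample by a 32.32 constant using integer
     * arithmetic only; splitting the constant into its integer and
     * fractional halves avoids needing a 128-bit product. */
    static int32_t mul_fix(int32_t v, fix32_32 c)
    {
        int64_t hi = (int64_t)v * (c >> 32);
        int64_t lo = ((int64_t)v * (uint32_t)c) >> 32;
        return (int32_t)(hi + lo);
    }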
qmin and qmax are not necessary for nvenc vbr.
Also fix the use of 2-pass vbr mode for the slow preset through the
ctx->flag NVENC_TWO_PASSES.
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
The map is a sparse array and does not need an empty element to
terminate it.
The empty element is stored after the last one inserted in the list,
overwriting whichever element was next with zeros.
Bug-Id: 1029
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
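A hypothetical sketch of the failure mode; the structure and names are
illustrative, not the actual code:

    #include <string.h>

    struct entry { int key; int value; };

    /* Buggy pattern: the sparse array needs no terminator, yet a zeroed
     * element was written after the last insertion, clobbering whichever
     * valid entry happened to sit in the next slot. */
    static void insert(struct entry *map, int pos, int key, int value)
    {
        map[pos].key   = key;
        map[pos].value = value;
        memset(&map[pos + 1], 0, sizeof(*map));  /* the overwrite */
    }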
Currently it incorrectly compares bits with bytes.
Also, move the check to just before the point where it's relevant, so
that the correct number of remaining bits is used.
CC: libav-stable@libav.org
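A sketch of the corrected comparison, with approximate names:

    /* The bit reader reports the remaining amount in bits, so a size
     * expressed in bytes must be scaled before comparing. */
    static int have_room(int bits_left, int needed_bytes)
    {
        return bits_left >= needed_bytes * 8;  /* was: >= needed_bytes */
    }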
This matches the order used in the 16 bpp version.
There, the coefficients are kept in this order to make sure we access
them in the same order they are declared, easing loading only half of
the coefficients at a time.
This makes the 8 bpp version match the 16 bpp version better.
Signed-off-by: Martin Storsjö <martin@martin.st>
This matches the order used in the 16 bpp version.
There, the coefficients are kept in this order to make sure we access
them in the same order they are declared, easing loading only half of
the coefficients at a time.
This makes the 8 bpp version match the 16 bpp version better.
Signed-off-by: Martin Storsjö <martin@martin.st>
All elements are used pairwise, except for the first one.
Previously, the 16th element was unused. Move the unused element
to the second slot, so that the later element pairs are not split
across registers.
This simplifies loading only parts of the coefficients,
reducing the difference to the 16 bpp version.
Signed-off-by: Martin Storsjö <martin@martin.st>
All elements are used pairwise, except for the first one.
Previously, the 16th element was unused. Move the unused element
to the second slot, so that the later element pairs are not split
across registers.
This simplifies loading only parts of the coefficients,
reducing the difference to the 16 bpp version.
Signed-off-by: Martin Storsjö <martin@martin.st>
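Schematically, with placeholder values rather than the real
coefficients:

    #include <stdint.h>

    /* Element 0 is used alone, the rest pairwise. With the unused entry
     * last, the pairs (1,2), (3,4), ... straddle register halves; with
     * it in slot 1, the pairs (2,3), (4,5), ... line up, so half of the
     * table can be loaded without splitting any pair. */
    static const int16_t coeffs_before[16] = {
        1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 0 /* unused */
    };
    static const int16_t coeffs_after[16] = {
        1, 0 /* unused */, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15
    };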
The idct32x32 function actually pushed d8-d15 onto the stack even
though it didn't clobber them; there are plenty of registers that
can be used to allow keeping all the idct coefficients in registers
without having to reload different subsets of them at different
stages in the transform.
After this, we can still skip pushing d12-d15.
Before:
vp9_inv_dct_dct_32x32_sub32_add_neon: 8128.3
After:
vp9_inv_dct_dct_32x32_sub32_add_neon: 8053.3
Signed-off-by: Martin Storsjö <martin@martin.st>
The idct32x32 function actually pushed q4-q7 onto the stack even
though it didn't clobber them; there are plenty of registers that
can be used to allow keeping all the idct coefficients in registers
without having to reload different subsets of them at different
stages in the transform.
Since the idct16 core transform avoids clobbering q4-q7 (but clobbers
q2-q3 instead, to avoid needing to back up and restore q4-q7 at all
in the idct16 function), and the lanewise vmul needs a register in
the q0-q3 range, we move the stored coefficients from q2-q3 into q4-q5
while doing idct16.
While keeping these coefficients in registers, we can still skip
pushing q7.
Before:                              Cortex A7       A8       A9      A53
vp9_inv_dct_dct_32x32_sub32_add_neon:  18553.8  17182.7  14303.3  12089.7
After:
vp9_inv_dct_dct_32x32_sub32_add_neon:  18470.3  16717.7  14173.6  11860.8
Signed-off-by: Martin Storsjö <martin@martin.st>