The old code did not properly handle several edge cases where one of the
streams terminates early, and it also failed to report EOF back to the
first input.
This fixes at least one case where the filter could stall and stop
producing output:
ffmpeg -f lavfi -i "color=blue:d=10" -f lavfi -i "color=aqua:d=0" -filter_complex "[0][1]xfade=duration=2:offset=0:transition=wiperight" -f null -
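As a rough sketch of the EOF forwarding involved, assuming libavfilter's
internal activate API (ff_outlink_get_status(), ff_inlink_set_status());
this is illustrative only, not the actual xfade code:

    static int forward_eof(AVFilterContext *ctx)
    {
        AVFilterLink *outlink = ctx->outputs[0];

        /* If the output has been closed, close both inputs as well so
         * upstream filters stop feeding frames and the graph can drain. */
        if (ff_outlink_get_status(outlink)) {
            ff_inlink_set_status(ctx->inputs[0], AVERROR_EOF);
            ff_inlink_set_status(ctx->inputs[1], AVERROR_EOF);
            return 0;
        }
        return FFERROR_NOT_READY;
    }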
Wrapped frames contain pointers, so they need specific code to apply
noise to them; the generic code would lead to segfaults.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
For now, there's not much value in this, since Clang doesn't support
enabling the dotprod or i8mm features with either .arch_extension
or .arch (they have to be enabled by the base arch flags passed to
the compiler). But it may be supported in the future.
Signed-off-by: Martin Storsjö <martin@martin.st>
These are available since ARMv8.4-a and ARMv8.6-a respectively,
but can also be available optionally since ARMv8.2-a.
Check if ".arch armv8.2-a" and ".arch_extension {dotprod,i8mm}" are
supported, and check if the instructions can be assembled.
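For illustration, the kind of probe such a check could try to build looks
roughly like the following; this is an assumed sketch, not configure's
actual test code:

    void probe_dotprod_i8mm(void)
    {
        __asm__ volatile(
            ".arch armv8.2-a\n"
            ".arch_extension dotprod\n"
            "udot   v0.4s, v0.16b, v0.16b\n"
            ".arch_extension i8mm\n"
            "usmmla v0.4s, v0.16b, v0.16b\n"
            ::: "v0");
    }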
Current clang versions fail to support the dotprod and i8mm
features in the .arch_extension directive, but do support them
if enabled with -march=armv8.4-a on the command line. (Curiously,
lowering the arch level with ".arch armv8.2-a" doesn't make the
extensions unavailable if they were enabled with -march; if that
changes, Clang should also learn to support these extensions via
.arch_extension for them to remain usable here.)
Signed-off-by: Martin Storsjö <martin@martin.st>
Animated JPEG XL files require a separate demuxer from image2, because
the timebase information is set by the demuxer. Should the timebase of
an animated JPEG XL file be incompatible with the timebase set by the
image2pipe demuxer (usually 1/25 unless set otherwise), rescaling will
fail. Adding a separate demuxer for animated JPEG XL files allows the
timebase to be set correctly.
Signed-off-by: Leo Izen <leo.izen@gmail.com>
Migrate the libjxl decoder wrapper from the decode_frame method to the
receive_frame method, which allows sending more than one frame from a
single packet. This allows the libjxl decoder to decode JPEG XL files
that are animated, and emit every frame of the animation. Now, clients
that feed the libjxl decoder with an animated JPEG XL file will be able
to receive the full animation.
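From the caller's side this can be pictured with the public decode API
(dec_ctx, pkt and frame are placeholder variables, not part of this
change): one packet may now yield several frames.

    /* Illustrative decode loop: a single animated JPEG XL packet may
     * produce more than one frame. */
    int ret = avcodec_send_packet(dec_ctx, pkt);
    while (ret >= 0) {
        ret = avcodec_receive_frame(dec_ctx, frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            break;
        if (ret < 0)
            return ret;
        /* ... use one frame of the animation ... */
        av_frame_unref(frame);
    }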
Signed-off-by: Leo Izen <leo.izen@gmail.com>
Currently of_output_packet() reuses the input packet, which requires its
callers to submit blank packets even on EOF and makes the code more
complex.
It is set by the muxing code, which will not be synchronized with the
encoding code after upcoming threading changes. Use an encoder-private
variable instead.
Packets submitted to the muxer now have their timebase attached to them,
so the muxer can do conversion to muxing timebase and avoid exposing it
to callers.
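The conversion itself amounts to a timestamp rescale along these lines
(illustrative only; ost is a placeholder for the muxer-side stream, not
the actual fftools code):

    /* Rescale packet timestamps from the timebase attached to the packet
     * into the muxing (stream) timebase. */
    av_packet_rescale_ts(pkt, pkt->time_base, ost->st->time_base);
    pkt->time_base = ost->st->time_base;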
There is no reason to postpone it until opening the encoder. Also, abort
when the input stream is unknown, rather than disregard an explicit
request from the user.
The code will currently add a small offset to avoid exact midpoints, but
this can cause inexact results when a float timestamp is exactly
representable as an integer.
Fixes off-by-one in the first frame duration in multiple FATE tests.
Fixes: signed integer overflow: 9079256848778919936 - -288230376151711746 cannot be represented in type 'long'
Fixes: 58248/clusterfuzz-testcase-minimized-ffmpeg_dem_OGG_fuzzer-6326851353313280
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: signed integer overflow: -182838 * 32768 cannot be represented in type 'int'
Fixes: 58179/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_RKA_fuzzer-5333265899978752
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: left shift of 538976288 by 13 places cannot be represented in type 'int'
Fixes: 56148/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_RKA_fuzzer-6257370708967424
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
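A generic illustration of how this class of overflow is usually avoided
(not necessarily the exact fixes applied in the RKA decoder): widen to
64 bits before the operation so the intermediate value cannot overflow
an int.

    int     a = -182838;
    int     b = 538976288;
    int64_t product = (int64_t)a * 32768; /* about -5.99e9, fits in 64 bits */
    int64_t shifted = (int64_t)b << 13;   /* about  4.4e12, fits in 64 bits */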
In certain use cases, controlling the maximum frame size is critical. An
example is when transmitting AAC packets over Bluetooth A2DP.
While the spec allows the packets to be fragmented (but marks this as not
recommended), in practice most headsets neither recognize nor reassemble
such packets.
In this patch, we allow setting `bit_rate_tolerance` to 0 to indicate
that the specified bit rate should be treated as an upper bound at the
frame level.
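A minimal usage sketch with the public API (the values are arbitrary and
only meant as an assumption for illustration):

    const AVCodec *codec  = avcodec_find_encoder(AV_CODEC_ID_AAC);
    AVCodecContext *avctx = avcodec_alloc_context3(codec);
    avctx->bit_rate           = 128000; /* target average bit rate      */
    avctx->bit_rate_tolerance = 0;      /* treat bit_rate as a hard cap */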
Signed-off-by: Jeremy Wu <jrwu@chromium.org>
Today, cuda is not able to import multiplane images, and cuda requires
images to be imported whether you are trying to import to cuda or export
from cuda (in the latter case, the image is imported and then copied
on the cuda side). So any interop between cuda and vulkan requires
that multiplane be disabled.
The existing option for this is not sufficient, because when deriving
devices it is not possible to specify any options.
And, it is necessary to derive the Vulkan device, because any pipeline
that involves uploading from cuda to vulkan and then back to cuda must
use the same cuda context on both sides, and the only way to propagate
the cuda context all the way through is to derive the device at each
stage.
i.e.:
-vf hwupload=derive_device=vulkan,<filters>,hwupload=derive_device=cuda
Added support for more VideoToolbox encoder options:
- qmin and qmax options are now used
- max_slice_bytes: Max number of bytes per H.264 slice
- max_ref_frames: Limit the number of reference frames
- Disable open GOP when the cgop flag is set
- power_efficient: Enable power-efficient mode
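A hedged sketch of setting these through the public option API (the
option names come from this change, the values are arbitrary):

    avctx->qmin   = 20;
    avctx->qmax   = 40;
    avctx->flags |= AV_CODEC_FLAG_CLOSED_GOP; /* disables open GOP */
    av_opt_set_int(avctx->priv_data, "max_slice_bytes", 1400, 0);
    av_opt_set_int(avctx->priv_data, "max_ref_frames",  1,    0);
    av_opt_set_int(avctx->priv_data, "power_efficient", 1,    0);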
Signed-off-by: Rick Kern <kernrj@gmail.com>