The new logic should be easier to follow.
It also uses ff_inlink_consume_frame() for all simple passthrough operations,
making a custom get_audio_buffer callback unnecessary.
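For reference, the simple passthrough pattern with ff_inlink_consume_frame()
looks roughly like this (a simplified sketch of an activate() callback; the
real filter also drives the crossfade state):

    static int activate(AVFilterContext *ctx)
    {
        AVFilterLink *inlink  = ctx->inputs[0];
        AVFilterLink *outlink = ctx->outputs[0];
        AVFrame *frame;
        int ret;

        FF_FILTER_FORWARD_STATUS_BACK(outlink, inlink);

        /* outside the crossfade, frames are consumed as-is and forwarded
         * without repacketizing them into fixed-size chunks */
        ret = ff_inlink_consume_frame(inlink, &frame);
        if (ret < 0)
            return ret;
        if (ret > 0)
            return ff_filter_frame(outlink, frame);

        FF_FILTER_FORWARD_STATUS(inlink, outlink);
        FF_FILTER_FORWARD_WANTED(outlink, inlink);

        return FFERROR_NOT_READY;
    }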
FATE changes are because the new logic does not repacketize the input audio up
until the crossfade; the content is the same.
Signed-off-by: Marton Balint <cus@passwd.hu>
Prior to this patch, kth_order_egk_decode could read arbitrarily
large values which then overflowed and caused various issues.
The patch fixes this by making kth_order_egk_decode fallible,
requiring the caller to specify an upper bound and returning an
error if the read value would exceed that bound.
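For illustration, the bounded-read pattern looks roughly like this (a sketch
over a plain bit reader; the real kth_order_egk_decode operates on the CABAC
bypass engine and its exact signature differs):

    static int bounded_egk_decode(GetBitContext *gb, int k,
                                  uint32_t max, uint32_t *out)
    {
        uint32_t val = 0;

        /* unary prefix: each 1 bit adds 2^k and bumps the order */
        while (get_bits1(gb)) {
            val += 1U << k;
            if (++k > 30 || val > max)
                return AVERROR_INVALIDDATA;
        }

        /* fixed-length suffix of k bits */
        val += get_bits_long(gb, k);
        if (val > max)
            return AVERROR_INVALIDDATA;

        *out = val;
        return 0;
    }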
This patch resolves the same issue as
eb52251c0a, but I think this is the proper
fix as it also addresses issues with syntax elements besides
ff_vvc_num_signalled_palette_entries.
Signed-off-by: Frank Plowman <post@frankplowman.com>
The end_ prefix is confusing and may have contributed to the mixup
fixed in the previous commit.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: reading the duration from before the start of the allocated buffer.
Regression since: 36ec9217e6
Fixes: BIGSLEEP-433513232/test
Found-by: Google Big Sleep
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Also ensure that the dst buffers are not too big
(they had the right size for >8 bit depths and were therefore
too big for 8 bit, letting potential buffer overflows
in the 8-bit version go undetected).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The only official profile I could find (ALS simple profile) has a maximum of 15,
while the bitstream allows 1023, which is very slow to decode.
We do have a FATE sample with 1023.
Fixes: Timeout
Fixes: 429645375/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_ALS_fuzzer-5377900448907264
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This linear search has a complexity of O(n) in the number of segments. When ffmpeg attempts to parse a playlist containing approximately 100,000 segments, this effectively hangs for several minutes.
This patch limits the number of entries scanned for duplicates to a reasonable value. Parsing now takes between 0.5 seconds and a few seconds (tested on different devices) instead of several minutes.
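The idea, roughly (struct and field names here are approximate, and
MAX_DUP_SEARCH_WINDOW is an illustrative name and value, not the exact
hls.c hunk):

    #define MAX_DUP_SEARCH_WINDOW 1000

    static int segment_already_present(struct playlist *pls, const char *url)
    {
        /* only scan the tail of the (possibly 100,000-entry) list */
        int start = FFMAX(0, pls->n_segments - MAX_DUP_SEARCH_WINDOW);

        for (int i = pls->n_segments - 1; i >= start; i--)
            if (!strcmp(pls->segments[i]->url, url))
                return 1;
        return 0;
    }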
Signed-off-by: Artem Smorodin <artem.smorodin@dacast.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
HEVC fmp4 HLS video produced by ffmpeg is currently unplayable on Apple
software (Safari, QuickTime, AVFoundation).
This is caused by an empty sdtp atom being erroneously written to the
fmp4 init segment. The `has_disposable` flag can be set for a track
with B-frames, but the init segment contains no actual frames
(track->entry == 0). Writing an sdtp atom in this case is incorrect
and causes Apple's parsers to reject the file.
This patch fixes the issue by ensuring the sdtp atom is only written
if track->entry is non-zero.
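A minimal sketch of the guard (illustrative; the exact condition and helper
name in movenc.c may differ):

    /* only emit the sdtp box when the fragment actually carries samples */
    if (track->entry && track->has_disposable)
        mov_write_sdtp_tag(pb, track);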
A similar patch was proposed in November 2023 by Jay Zhang,
but it was never merged.
Link: https://lists.ffmpeg.org/pipermail/ffmpeg-devel/2023-November/317173.html
Co-authored-by: Jay Zhang <wangyoucao577@gmail.com>
Signed-off-by: David McElroy <david@mcelroy.online>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This gives vastly improved blending results compared to blending directly in
the desired output colorspace. It can be overridden with the existing
"disable_linear" option.
This is functionally similar to combining multiple "libplacebo" filters,
but does not rely on the existence of a Vulkan filter link, so it can be used
without performance penalty in all circumstances. It's also enabled by
default, without requiring special action from the user.
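For example, the linear-light processing can be turned off again with the
existing option (illustrative command line):

    ffmpeg -i input.mkv -vf "libplacebo=disable_linear=true" output.mkv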
The previous formula was introduced without justification in 6e713841e8,
and the only thing Paul had to say about it over IRC was that it was copied
from an unspecified source on the internet.
I decided to do some testing and came to the conclusion that this term not
only produces "illegal" files, but also lowers the PSNR score compared to the
naive implementation without this extra term.
Here are the results of a round-trip test, using allrgb/allyuv (respectively)
as the input, and fade=alpha=yes:n=256 to cycle through every possible alpha
value, comparing the round-trip output against the input:
Before patch:
PSNR r:26.677431 g:26.677431 b:26.677431 a:inf average:27.926818 min:6.012093 max:55.400791
PSNR y:26.677431 u:21.101981 v:21.101981 a:inf average:23.548981 min:9.013835 max:53.182303 (full)
PSNR y:27.348055 u:21.101981 v:21.101981 a:inf average:23.625238 min:9.554991 max:45.652221 (limited)
After patch:
PSNR r:27.321996 g:27.321996 b:27.321996 a:inf average:28.571384 min:6.012093 max:52.424553
PSNR y:27.321996 u:23.187879 v:23.187879 a:inf average:25.431773 min:9.013835 max:50.199232 (full)
PSNR y:27.868544 u:23.187879 v:23.187879 a:inf average:25.515660 min:9.554991 max:45.078298 (limited)
It's worth pointing out that the previous version sometimes artificially
inflates PSNR by producing values that are too high (i.e. RGB > A), such as
for the input pair (R = 255, A = 2), which should give R = 2 but actually
gives R = 3 under the old logic.
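For concreteness, here is that input pair under one naive full-range 8-bit
premultiply (a sketch of the computation with a common integer rounding, not
the exact filter code):

    R_premul = (R * A + 127) / 255
             = (255 * 2 + 127) / 255
             = 637 / 255
             = 2

whereas the extra term in the old formula bumps this to 3, i.e. RGB > A,
which is not a valid premultiplied sample.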
As a second evaluation without this shortcoming, here is a comparison against
the reference values computed in a floating-point format:
Before patch:
PSNR r:53.600599 g:53.957833 b:53.540948 a:inf average:54.945316 min:50.508901 max:inf (premul only)
PSNR r:30.734183 g:30.734183 b:30.734183 a:inf average:31.983570 min:12.058264 max:inf (round-trip)
After patch:
PSNR r:61.751104 g:65.239091 b:61.339191 a:inf average:63.710714 min:55.441130 max:inf (premul only)
PSNR r:32.611851 g:32.611851 b:32.611851 a:inf average:33.861238 min:12.058264 max:inf (round-trip)
Instead of scanning backwards for the end of the RPU payload, parse it and
report an error if we did not land on the terminator byte.
The previous expectation was that only additional zero bytes can follow the
RPU payload, and these were skipped to find the payload end. That's not
always the case, so loosen this requirement.
This fixes files where there is additional non-zeroed padding after the
end of the RPU in the NALU.
Signed-off-by: Kacper Michajłow <kasper93@gmail.com>
Fixes: signed integer overflow: 9223372036854737920 + 1649410 cannot be
represented in type 'int64_t'
Fixes OSS-Fuzz: 410100610
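An illustrative shape of the fix (variable names are placeholders, not the
actual hunk):

    /* guard the addition instead of letting int64_t wrap around */
    if (total_duration > INT64_MAX - pkt_duration)
        return AVERROR_INVALIDDATA;
    total_duration += pkt_duration;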
Signed-off-by: Kacper Michajłow <kasper93@gmail.com>
The assumption is that DCE will remove references to those functions.
However, some compilers with certain instrumentation enabled don't DCE
them at all, resulting in a linking failure. Tested with cl.exe -RTCu -RTCs.
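The typical shape of such a change (ff_foo_init_x86() is a hypothetical
placeholder; the actual fix may differ in detail):

    /* before: the call is only dropped if the compiler performs DCE */
    if (ARCH_X86)
        ff_foo_init_x86(ctx);

    /* after: no reference to the symbol survives preprocessing */
    #if ARCH_X86
        ff_foo_init_x86(ctx);
    #endif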
Signed-off-by: Kacper Michajłow <kasper93@gmail.com>
The assumption is that DCE will remove references to those functions.
However, some compilers with certain instrumentation enabled don't DCE
them at all, resulting in a linking failure. Tested with cl.exe -RTCu -RTCs.
Signed-off-by: Kacper Michajłow <kasper93@gmail.com>
The assumption is that DCE will remove references to those functions.
However, some compilers with certain instrumentation enabled don't DCE
them at all, resulting in a linking failure. Tested with cl.exe -RTCu -RTCs.
Signed-off-by: Kacper Michajłow <kasper93@gmail.com>
When codec->write_sequence_header is not defined, bit_len was left undefined,
and while the data buffer was zeroed, we could still overread it. Do nothing
when we don't have anything to write.
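Roughly the shape of the fix (the callback signature and variable names here
are approximate, not a verbatim hunk):

    if (!codec->write_sequence_header)
        return 0;                      /* nothing to write */

    bit_len = 8 * sizeof(data);
    err = codec->write_sequence_header(avctx, data, &bit_len);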
Signed-off-by: Kacper Michajłow <kasper93@gmail.com>
If a frame size is absolutely massive, this can spin the parser as it
attempts to decode a permuted TOC. We add a sanity check here, limiting an
internal frame to eight times the size of the image, to prevent malicious
bitstreams from slowing the parser down to a crawl.
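An illustrative form of the check (field names and error handling are
approximate):

    /* reject internal frames claiming more than 8x the image size */
    if (frame_size > 8 * (uint64_t)width * height)
        return AVERROR_INVALIDDATA;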
Signed-off-by: Leo Izen <leo.izen@gmail.com>
Reported-by: Kacper Michajłow <kasper93@gmail.com>