Videos of "StarWars - Making Magic" consist of 640x480 codec3 frames,
which establish a background, and a 320x240 codec48 video overlaid on
top of it at random left/top offsets.
To support this, add a new default buffer "fbuf" that holds the final
image to be presented, since codec37/47/48 need their 2/3 buffers
to be private to themselves. The decoded result is then copied to
fbuf, honoring the left/top offsets where required.
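A minimal sketch of the compositing step described above, assuming a
simple byte-per-pixel copy; the buffer and parameter names are
illustrative, not the decoder's actual fields:

#include <stdint.h>

/* Illustrative only: copy a decoded sub-frame into the final
 * presentation buffer at the given left/top offset, clipping
 * against the buffer bounds. */
static void blit_to_fbuf(uint8_t *fbuf, int fbuf_w, int fbuf_h,
                         const uint8_t *decoded, int w, int h,
                         int left, int top)
{
    for (int y = 0; y < h; y++) {
        int dy = top + y;
        if (dy < 0 || dy >= fbuf_h)
            continue;
        for (int x = 0; x < w; x++) {
            int dx = left + x;
            if (dx < 0 || dx >= fbuf_w)
                continue;
            fbuf[dy * fbuf_w + dx] = decoded[y * w + x];
        }
    }
}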
Signed-off-by: Manuel Lauss <manuel.lauss@gmail.com>
Change the size detection a bit to recognize common video sizes,
as the dimensions reported by the FOBJ codecs >= 37 cannot always
be trusted, since those videos can be embedded in a larger frame.
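A minimal sketch of such a whitelist check, assuming a table of common
geometries; the table contents and helper name are illustrative only:

#include <stddef.h>

/* Sketch only: treat a reported geometry as trustworthy only if it
 * matches a known-common video size. */
static const struct { int w, h; } common_sizes[] = {
    { 320, 200 }, { 320, 240 }, { 640, 480 },
};

static int is_common_size(int w, int h)
{
    for (size_t i = 0; i < sizeof(common_sizes) / sizeof(common_sizes[0]); i++)
        if (common_sizes[i].w == w && common_sizes[i].h == h)
            return 1;
    return 0;
}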
Signed-off-by: Manuel Lauss <manuel.lauss@gmail.com>
Some videos of "StarWars - Making Magic" have this subcompression
type: the data consists of just the 16-byte codec48 header; the DOS
player and the c48 decoder in the "Mysteries of the Sith" game engine
ignore it.
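A minimal sketch of the corresponding check, with a hypothetical
helper name; a chunk no larger than the 16-byte header simply carries
no frame data and is skipped:

/* Sketch only: a codec48 chunk that is nothing but its 16-byte
 * header contains no frame data and can be ignored, matching the
 * behaviour of the original players. */
static int c48_chunk_has_data(int chunk_size)
{
    return chunk_size > 16;
}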
Signed-off-by: Manuel Lauss <manuel.lauss@gmail.com>
liboapv will seemingly encode correct 4:4:4 output, but report
profile_idc 33, which is specifically the profile value for 4:2:2
10-bit.
Signed-off-by: James Almer <jamrial@gmail.com>
In the call to vkGetPhysicalDeviceImageFormatProperties2(), we were
previously requesting the properties of the first fallback format (e.g.
VK_FORMAT_R8_UNORM for VK_FORMAT_G8_B8R8_2PLANE_420_UNORM) instead of
the actual format in use.
We don't do anything with it afterwards, but there is no reason to
keep querying the wrong format.
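A minimal sketch of the corrected query shape, using the 2-plane
4:2:0 format as an example; this is illustrative and not the
hwcontext's actual code:

#include <vulkan/vulkan.h>

/* Sketch only: query the image format properties for the format that
 * is really in use, not for one of its single-plane fallbacks. */
static VkResult query_actual_format(VkPhysicalDevice phys_dev)
{
    VkPhysicalDeviceImageFormatInfo2 fmt_info = {
        .sType  = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_IMAGE_FORMAT_INFO_2,
        .format = VK_FORMAT_G8_B8R8_2PLANE_420_UNORM, /* not VK_FORMAT_R8_UNORM */
        .type   = VK_IMAGE_TYPE_2D,
        .tiling = VK_IMAGE_TILING_OPTIMAL,
        .usage  = VK_IMAGE_USAGE_SAMPLED_BIT,
    };
    VkImageFormatProperties2 fmt_props = {
        .sType = VK_STRUCTURE_TYPE_IMAGE_FORMAT_PROPERTIES_2,
    };

    return vkGetPhysicalDeviceImageFormatProperties2(phys_dev, &fmt_info,
                                                     &fmt_props);
}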
Fix chroma_location being cleared by setrange and setfield filters.
This was forgotten in 201f1cba15.
Signed-off-by: Tobias Rapp <t.rapp@noa-archive.com>
It was possible for the buffer pointers for the last tile to go past
the end of the unit buffer, leading to a read overflow during decode
of the macroblock layer. Check all tile component sizes to prevent
this case and also catch related tile size mismatch errors earlier.
Halt tile component decoding at the first entropy error (this will be
a desync and is not recoverable). If any tile component contains
errors, discard the frame unless the output-corrupt flag is set.
Also fixes CID 1646764, which is the error case where the tile component
is too large for get_bits to handle.
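A minimal sketch of the kind of size validation described above; the
names are hypothetical, not the decoder's actual fields:

#include <stddef.h>
#include <stdint.h>

/* Sketch only: reject a tile whose declared component sizes would run
 * past the end of the unit buffer, so the macroblock-layer decoder
 * never reads beyond it. */
static int check_tile_component_sizes(const uint32_t *comp_size,
                                      int nb_comps, size_t bytes_left)
{
    for (int i = 0; i < nb_comps; i++) {
        if (comp_size[i] > bytes_left)
            return -1;              /* would overrun the unit buffer */
        bytes_left -= comp_size[i];
    }
    return 0;
}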
No reason to build the exact same table once per decoding thread.
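A minimal sketch of the once-per-process pattern meant here, shown
with plain pthread_once(); names and table contents are illustrative,
and the actual change presumably uses the library's own threading
wrappers:

#include <pthread.h>

/* Sketch only: build the shared table exactly once per process
 * instead of once per decoding thread. */
static pthread_once_t table_once = PTHREAD_ONCE_INIT;
static int shared_table[256];

static void init_shared_table(void)
{
    for (int i = 0; i < 256; i++)
        shared_table[i] = i * i;    /* stand-in for the real contents */
}

static void decoder_init(void)
{
    pthread_once(&table_once, init_shared_table);
}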
Reviewed-by: Mark Thompson <sw@jkqxz.net>
Signed-off-by: James Almer <jamrial@gmail.com>
Abort as soon as we're done reading the slice header instead of running extra checks
that assume slice data may follow.
Signed-off-by: James Almer <jamrial@gmail.com>
Prevents printing bogus errors about the value being 0, when in fact we
overread the available slice buffer.
Signed-off-by: James Almer <jamrial@gmail.com>
Since GCC 10 and llvm.org Clang 11, -fno-common is the default.
However, Apple's Xcode Clang hasn't followed suit yet, and still
defaults to -fcommon.
Compiling with -fcommon causes uninitialized global variables to
be treated as "common" symbols (which allows multiple object files
to contain tentative definitions of the same variable).
Common variables seem to have the issue that their intended alignment
isn't signaled, so the linker assumes that they may need alignment
according to their full size.
With large global tables, this can lead to linker warnings like
this, with Xcode 16.3:
ld: warning: reducing alignment of section __DATA,__common from 0x8000 to 0x4000 because it exceeds segment maximum alignment
This can be reproduced with a small snippet like this:
char table[16385];
int main(int argc, char* argv[]) { return 0; }
Compiling with -fno-common avoids this issue and warning, and
matches the default behaviour of other compilers. (Compiling with
-fno-common also avoids the risk of accidentally accepting
duplicate definitions of global variables, as long as they are
uninitialized.)
Signed-off-by: Martin Storsjö <martin@martin.st>
This allows the user to set only the one that is needed, either to
ALL or to a specific "wrong" extension like html.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
An identical call exists in ffmpeg.c.
With POSIX/glibc, stderr is already unbuffered (or line-buffered when
a terminal is connected), but this is not the case with MSVCRT.
Explicitly calling setvbuf(), as done in this commit, makes the
Windows runtime behave like POSIX, giving the same "print
immediately" behavior.
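A minimal sketch of the standard call in question; the commit's exact
placement and arguments are not shown here:

#include <stdio.h>

int main(void)
{
    /* Disable buffering on stderr so messages appear immediately,
     * also with MSVCRT. */
    setvbuf(stderr, NULL, _IONBF, 0);
    fprintf(stderr, "printed right away\n");
    return 0;
}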
Signed-off-by: softworkz <softworkz@hotmail.com>
When viewing logs, there are situations where it is not entirely
clear whether the ffmpeg CLI has exited gracefully. The two primary
cases are:
- A crash/segfault has occurred
  (Windows, for example, doesn't output any message to the calling shell)
- The process has been terminated (e.g. killed externally)
Printing a message on exit provides a reliable indication that the
process has exited normally.
Printing the exit code is useful as it usually remains invisible
and unnoticed by users running FFmpeg from a shell.
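A minimal sketch of the kind of final message meant here, using
av_log(); the commit's exact wording and placement may differ:

#include <libavutil/log.h>

/* Sketch only: report normal termination together with the exit code
 * right before the process returns. */
static void report_exit(int exit_code)
{
    av_log(NULL, AV_LOG_INFO, "Exiting normally, exit code %d\n",
           exit_code);
}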
Signed-off-by: softworkz <softworkz@hotmail.com>
Commit c8140fe732 had introduced
a check for value_len > UINT16_MAX.
As a consequence, attached images of sizes larger than UINT16_MAX
could no longer be read.
This is a minimal fix of the regression, avoiding the controversies
of my earlier submission regarding int type handling in asfdec.
Signed-off-by: softworkz <softworkz@hotmail.com>
- Change precision to 6 digits to align with other printed times
- Change label to just "Start"
- Add 's' unit to format 'start' value for consistency
Signed-off-by: softworkz <softworkz@hotmail.com>
This is a raw demuxer; it should not read codec-level information and
export it as container-level information.
The generic demux code will use the recently introduced parser to take care of
that.
Signed-off-by: James Almer <jamrial@gmail.com>
This demuxer reads encapsulation bytes before reading codec data into
the output packets, so take that offset into consideration.
Signed-off-by: James Almer <jamrial@gmail.com>