There are already several places in the codebase that match desc->name
against "xyz", and many downstream clients replicate this behavior.
I have no idea why this is not just a flag.
This is motivated by my desire to add yet another check for XYZ to the
codebase; I'd rather not keep copy/pasting a string comparison hack.
They are intended as replacements for avcodec_enum_to_chroma_pos()
and avcodec_chroma_pos_to_enum().
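For reference, a minimal usage sketch of the new pair, assuming the
declarations as added to libavutil/pixdesc.h (like the old avcodec
functions, positions are expressed with luma (0,0) at the origin and
luma (1,1) at (256,256)):

    #include <stdio.h>
    #include <libavutil/pixdesc.h>
    #include <libavutil/pixfmt.h>

    int main(void)
    {
        int xpos, ypos;

        /* enum -> swscale-style chroma position */
        if (av_chroma_location_enum_to_pos(&xpos, &ypos,
                                           AVCHROMA_LOC_LEFT) >= 0)
            printf("left: x=%d y=%d\n", xpos, ypos); /* 0, 128 */

        /* position -> nearest enum value */
        printf("enum: %d\n", av_chroma_location_pos_to_enum(0, 128));
        return 0;
    }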
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Since introducing the various packed formats used by VAAPI (and p012),
we've noticed that there's actually a gap in how
av_find_best_pix_fmt_of_2 works. It doesn't actually assign any value
to having the same bit depth as the source format, when comparing
against formats with a higher bit depth. This usually doesn't matter,
because av_get_padded_bits_per_pixel() will account for it.
However, as many of these formats use padding internally, we find that
av_get_padded_bits_per_pixel() actually returns the same value for the
10 bit, 12 bit, 16 bit flavours, etc. In these tied situations, we end
up just picking the first of the two provided formats, even if the
second one should be preferred because it matches the actual bit depth.
This bug already existed if you tried to compare yuv420p10 against p016
and p010, for example, but it simply hadn't come up before so we never
noticed.
But now we have an actual case in the VAAPI VP9 decoder: it offers
both p010 and p012 because Profile 3 content can be either depth, and
we end up picking p012 for 10 bit content due to the order in which
the formats are tested.
In addition, in the process of testing the fix, I realised we have the
same gap when it comes to chroma subsampling: we do not favour a format
with exactly the same subsampling over one with less subsampling when
all else is equal.
To fix this, I'm introducing a small score penalty if the bit depth or
subsampling doesn't exactly match the source format. This will break
the tie in favour of the format with the exact match, but not offset
any of the other scoring penalties we already have.
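Conceptually, the tie-break looks like the following sketch
(illustrative only; the helper name is made up and the real scoring
code in libavutil/pixdesc.c is structured differently):

    #include <libavutil/pixdesc.h>

    /* Subtract a small penalty when the candidate's bit depth or
     * chroma subsampling is not an exact match for the source, so
     * that it only breaks ties and never outweighs the existing
     * scoring penalties. */
    static int apply_exact_match_penalty(int score,
                                         const AVPixFmtDescriptor *src,
                                         const AVPixFmtDescriptor *dst)
    {
        if (src->comp[0].depth != dst->comp[0].depth)
            score -= 1;
        if (src->log2_chroma_w != dst->log2_chroma_w ||
            src->log2_chroma_h != dst->log2_chroma_h)
            score -= 1;
        return score;
    }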
I have added a set of tests around these formats which will fail
without this fix.
These were mostly added because of FF_API_* macros; yet when those
were removed, removing the headers was forgotten.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This is needed because of 32-bit float formats (which are difficult to
store in 16 bits).
This also fixes undefined behavior found by FATE.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
PSEUDOPAL pixel formats are not paletted, but carried a palette with the
intention of allowing code to treat unpaletted formats as paletted. The
palette simply mapped the byte values to the resulting RGB values,
making it some sort of LUT for RGB conversion.
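As an illustration, the GRAY8 pseudo palette amounted to an identity
LUT over full-range gray (a sketch; the function name is hypothetical
and the actual initialization lives in FFmpeg's internal helpers):

    #include <stdint.h>

    /* Fill a 256-entry AARRGGBB palette that maps each GRAY8 byte
     * value to the corresponding full-range gray RGB value. */
    static void fill_gray8_pseudo_pal(uint32_t pal[256])
    {
        for (int i = 0; i < 256; i++)
            pal[i] = 0xFF000000u | (unsigned)i << 16 | i << 8 | i;
    }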
It was used for 1 byte formats only: RGB4_BYTE, BGR4_BYTE, RGB8, BGR8,
GRAY8. The first 4 are awfully obscure, used only by some ancient bitmap
formats. The last one, GRAY8, is more common, but its treatment is
grossly incorrect. It considers full range GRAY8 only, so GRAY8 coming
from typical Y video planes was not mapped to the correct RGB values.
This cannot be fixed, because AVFrame.color_range can be freely changed
at runtime, and there is nothing to ensure the pseudo palette is
updated.
Also, nothing actually used the PSEUDOPAL palette data, except xwdenc
(trivially changed in the previous commit). All other code had to treat
it as a special case, just to ignore or to propagate palette data.
In conclusion, this was just a very strange old mechanism that has no
real justification to exist anymore (although it may have been nice and
useful in the past). Now it's an artifact that makes the API harder to
use: API users who allocate their own pixel data have to be aware that
they need to allocate the palette, or FFmpeg will crash on them in
_some_ situations. On top of this, there was no API to allocate the
pseudo palette outside of av_frame_get_buffer().
This patch not only deprecates AV_PIX_FMT_FLAG_PSEUDOPAL, but also makes
the pseudo palette optional. Nothing accesses it anymore, though if it's
set, it's propagated. It's still allocated and initialized for
compatibility with API users that rely on this feature. But new API
users do not need to allocate it. This was an explicit goal of this
patch.
Most changes replace AV_PIX_FMT_FLAG_PSEUDOPAL with FF_PSEUDOPAL. I
first tried #ifdefing all code, but it was a mess. The FF_PSEUDOPAL
macro reduces the mess, and still allows defining FF_API_PSEUDOPAL to 0.
Passes FATE with FF_API_PSEUDOPAL enabled and disabled. In addition,
FATE passes with FF_API_PSEUDOPAL set to 1, but with the allocation
functions manually changed to not allocate a palette.
* commit '2268db2cd052674fde55c7d48b7a5098ce89b4ba':
lavu: Drop the {minus,plus}1 suffix from AVComponentDescriptor fields
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
* commit '6b3ef7f080293956b2e5212b83135c6b051212e9':
lavu: Remove bit packing from AVComponentDescriptor
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
* commit 'b8b5d8274471129f122858bc74ad09284dae6ab7':
lavu: extend size of the AVPixFmtDescriptor.flags field
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
There is no practical benefit to having this structure's elements
bit-packed, given the size of the structure and its usage.
Change the types from uint16_t (packed) to plain int in order to
simplify modifying the structure and accessing its fields.
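Schematically, the change looks like this (a before/after sketch, not
a verbatim diff; at the time of this commit the fields still carried
their minus1/plus1 suffixes):

    /* before: bit-packed into a uint16_t */
    typedef struct AVComponentDescriptor {
        uint16_t plane        : 2;
        uint16_t step_minus1  : 3;
        uint16_t offset_plus1 : 3;
        uint16_t shift        : 3;
        uint16_t depth_minus1 : 4;
    } AVComponentDescriptor;

    /* after: plain ints, simpler to modify and access */
    typedef struct AVComponentDescriptor {
        int plane;
        int step_minus1;
        int offset_plus1;
        int shift;
        int depth_minus1;
    } AVComponentDescriptor;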
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
* commit '2f8cbbc962dfc0dc1dd0a90b2cd6c21266380f51':
lavu: Drop deprecated external access to AVPixFmtDescriptor table
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
* commit '183db02a51a422568084b113a7571c845ca68622':
lavu: Drop deprecated old_pix_fmt.h and related code
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>