Note that this changes the code to work the same way as other protocols,
where a URL parameter can override an AVOption.
Signed-off-by: Marton Balint <cus@passwd.hu>
This allows choosing whether the `fit_mode` merely controls the placement
of the image within the output resolution, or whether the output resolution
is also adjusted according to the given `fit_mode`.
The semantics of these keywords are well-defined by the CSS 'object-fit'
property. This is arguably more user-friendly and less obtuse than the
existing `normalize_sar` and `pad_crop_ratio` options. Additionally, this
comes with two new (useful) behaviors, `none` and `scale_down`, neither of
which maps elegantly onto the existing options.
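As a rough sketch of the underlying math (all names here are illustrative,
not the actual option constants), assuming an input of w×h placed into a
box of box_w×box_h:

```c
#include <math.h>

/* Hypothetical illustration of the CSS object-fit scaling rules: `fill`
 * stretches to the box; all other modes preserve the aspect ratio. */
enum FitMode { FIT_FILL, FIT_CONTAIN, FIT_COVER, FIT_NONE, FIT_SCALE_DOWN };

static void fit_size(enum FitMode mode, double w, double h,
                     double box_w, double box_h,
                     double *out_w, double *out_h)
{
    double s, sx = box_w / w, sy = box_h / h;

    switch (mode) {
    case FIT_FILL:       *out_w = box_w; *out_h = box_h; return;
    case FIT_CONTAIN:    s = fmin(sx, sy);            break; /* letterbox */
    case FIT_COVER:      s = fmax(sx, sy);            break; /* crop overflow */
    case FIT_NONE:       s = 1.0;                     break; /* keep input size */
    case FIT_SCALE_DOWN: s = fmin(1.0, fmin(sx, sy)); break; /* contain, but never upscale */
    }
    *out_w = w * s;
    *out_h = h * s;
}
```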
One additional benefit of this option is that, unlike `normalize_sar`, it
does *not* also imply `reset_sar`, meaning that users can now choose to
have an anamorphic base layer and still have the overlay images scaled to fit
on top of it according to the chosen strategy.
See-Also: https://drafts.csswg.org/css-images/#the-object-fit
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
I have redacted the exact location of the FFmpeg server, as writing
that in public seems like a bad idea.
Add a new flag to the vf_colorspace filter which provides the user an
option to clamp the linear and delinear transfer characteristics LUT
values to the [0, 1] represented range. This helps constrain the
potential value range when converting between colorspaces.
Certain colors, when going through the conversion, can end up out of
gamut after the rotation. The colorspace filter allows that via the
extended range. The added clamping simply keeps the colors within the
[0, 1) range rather than using that extended range. I'm not enough of a
color scientist to say which is correct, but there are certain
situations where we would prefer to keep the colors in gamut.
The example I have is:
A solid color image of 8-bit YUV: Y=157, U=164, V=98.
Specify the input as:
Input range: MPEG
In color matrix: BT470BG
In color primaries: BT470M
In color transfer characteristics: Gamma 28
Output as:
Out color range: JPEG
Out color matrix: BT.709
Out color primaries: BT.709
Out color transfer characteristics: BT.709
During the calculation you get:
Input YUV: y=157, u=164, v=98
Post-yuv2rgb BT.470BG: r=0.456055, g=0.684152, b=0.928606
Post-apply gamma28 linear LUT: r=0.110979, g=0.345494, b=0.812709
Post-color rotation BT.470M to BT.709: r=-0.04161, g=0.384626, b=0.852400
Post-apply Rec.709 delinear LUT: r=-0.16382, g=0.615932, b=0.923793
Post-rgb2yuv Rec.709 matrix: y=120, u=190, v=25
With this change, the negative delinear LUT output would be clamped to 0,
so the result would be:
r=0.000000, g=0.612390, b=0.918807 and a final output of
y=129, u=185, v=46
As for the `long` and `av_clip64`: this was just because lrint returns a
long, so I left it at that and then used av_clip64 to clamp to the [0,1)
range to avoid overflow. But re-reading, it looks like av_clip_int16 would
downcast that long to int anyway, so the possibility of overflow already
existed there. I've put it back to int just to match the existing
behavior.
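As a minimal sketch of the clamping step (the function name, the simplified
power-law transfer, and the fixed-point scale are all illustrative, not the
filter's actual LUT-building code):

```c
#include <math.h>

/* Build one delinear LUT entry from a linear value; with clamping
 * enabled, out-of-gamut values are pinned to [0,1] before quantization. */
static int delinear_entry(double linear, double gamma, int clamp)
{
    double v;

    if (clamp)
        linear = fmin(fmax(linear, 0.0), 1.0); /* keep within gamut */

    /* simplified power-law transfer; the real curves are piecewise */
    v = copysign(pow(fabs(linear), 1.0 / gamma), linear);

    return (int)lrint(v * 16384.0); /* illustrative fixed-point scale */
}
```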
`enc_pkt->size` is 0 after `av_packet_unref()`, which makes the check invalid.
Fix regression from 3e4bfff2.
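For illustration, a hedged sketch of the broken pattern (the surrounding
code is hypothetical, not the actual source):

```c
#include <libavcodec/packet.h>

static void broken(AVPacket *enc_pkt)
{
    av_packet_unref(enc_pkt); /* zeroes the packet, including size */
    if (enc_pkt->size) {
        /* never reached: the size check must happen before the unref */
    }
}
```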
Co-Authored-by: Jin Bo <jinbo@loongson.cn>
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Instead of just 2 files, generalize this filter to support crossfading
arbitrarily many files. This makes the filter essentially operate similarly
to the `concat` filter, chaining multiple files one after another.
Aside from just adding more input pads, this requires rewriting the
activate function. Instead of a finite state machine, we keep track of the
currently active input index, advancing it only once the current input is
fully exhausted.
This results in arguably simpler logic overall.
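A hedged sketch of the control flow, with hypothetical helper names
standing in for the real inlink/outlink API:

```c
typedef struct CrossfadeContext {
    int active_idx; /* index of the input currently being drained */
    int nb_inputs;
} CrossfadeContext;

/* Hypothetical helpers standing in for the real inlink/outlink API: */
static int input_has_frame(CrossfadeContext *s, int idx);
static int input_is_eof(CrossfadeContext *s, int idx);
static int output_or_crossfade(CrossfadeContext *s, int idx);
static int request_more_input(CrossfadeContext *s, int idx);
static int signal_eof(CrossfadeContext *s);

/* Serve frames from the active input; once it is exhausted, advance to
 * the next input, or signal EOF after the last one. */
static int activate(CrossfadeContext *s)
{
    if (input_has_frame(s, s->active_idx))
        return output_or_crossfade(s, s->active_idx);

    if (input_is_eof(s, s->active_idx)) {
        if (++s->active_idx == s->nb_inputs)
            return signal_eof(s);
        return 0; /* continue with the next input on the next activation */
    }
    return request_more_input(s, s->active_idx);
}
```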
This patch adds support for the texture array feature used by AMD boards
in the D3D12 HEVC encoder. In texture array mode, a single texture array
is shared for all reference and reconstructed pictures, using different
subresources. The implementation ensures compatibility and has been
successfully tested on AMD, Intel, and NVIDIA GPUs.
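A rough sketch of the resource setup this implies (field values are
illustrative; the encoder's actual descriptor differs):

```c
#include <d3d12.h>

/* One texture array sized for the whole DPB, so every reference and
 * reconstructed picture occupies its own array slice/subresource. */
static D3D12_RESOURCE_DESC dpb_texture_array_desc(UINT64 width, UINT height,
                                                  UINT16 dpb_size)
{
    D3D12_RESOURCE_DESC desc = {
        .Dimension        = D3D12_RESOURCE_DIMENSION_TEXTURE2D,
        .Width            = width,
        .Height           = height,
        .DepthOrArraySize = dpb_size, /* one slice per DPB slot */
        .MipLevels        = 1,
        .Format           = DXGI_FORMAT_NV12,
        .SampleDesc       = { .Count = 1 },
        .Layout           = D3D12_TEXTURE_LAYOUT_UNKNOWN,
    };
    return desc;
}
```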
Chooses the desired output alpha mode. Note that this depends on
an upstream version of libplacebo new enough to respect the corresponding
AVFrame field in pl_map_avframe_ex.
Following in the footsteps of the previous commit, this commit adds the
new field to AVCodecContext so we can start properly setting it on codecs,
as well as limiting the list of supported options in order to detect a
format mismatch during encode.
This commit also sets up the necessary infrastructure to start using the
newly added field in all codecs.
We need a filter that can premultiply and unpremultiply the alpha channel
dynamically, on demand, in response to the negotiated alpha mode, analogous
to how vf_scale operates. Introduce a new filter "vf_premultiply_dynamic"
that accomplishes this.
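The per-pixel math is simple; a minimal 8-bit sketch (the real filter
covers many pixel formats and bit depths):

```c
#include <stdint.h>

/* Straight -> premultiplied: scale the color channel by alpha. */
static uint8_t premultiply8(uint8_t color, uint8_t alpha)
{
    return (uint8_t)((color * alpha + 127) / 255);
}

/* Premultiplied -> straight: divide the color channel by alpha, saturating. */
static uint8_t unpremultiply8(uint8_t color, uint8_t alpha)
{
    int v;
    if (!alpha)
        return 0; /* fully transparent: the color is undefined */
    v = (color * 255 + alpha / 2) / alpha;
    return (uint8_t)(v > 255 ? 255 : v);
}
```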
FFmpeg currently handles alpha in a quasi-arbitrary way. Some filters/codecs
assume alpha is premultiplied, others assume it is independent. If there is
to be any hope for order in this chaos, we need to start by defining an enum
for the possible range of values.
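Something along these lines, as a hedged sketch (the actual enum and value
names in the tree may differ):

```c
/* Hypothetical sketch of such an enum: */
enum AVAlphaMode {
    AV_ALPHA_MODE_UNSPECIFIED = 0, /* unknown: today's ambiguous behavior */
    AV_ALPHA_MODE_PREMULTIPLIED,   /* color channels already scaled by alpha */
    AV_ALPHA_MODE_STRAIGHT,        /* alpha stored independently of color */
};
```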
Give users and developers a way to opt in to the new format conversion code,
and more code from the swscale rewrite in general, even while development is
still ongoing.
"glob_sequence" was deprecated since 2012. This also changes the default pattern
to "sequence", because "glob_sequence" was also the default.
Signed-off-by: Marton Balint <cus@passwd.hu>
This makes the functions extensible, as future behavior change flags can be
introduced.
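For illustration, a hypothetical before/after of such a signature change
(all names here are made up):

```c
/* Before: a boolean parameter that cannot grow new behaviors. */
int process_tree(void *ctx, int recursive);

/* After: a flags bitfield, extensible without another signature change. */
#define PROCESS_TREE_FLAG_RECURSIVE (1 << 0)
int process_tree2(void *ctx, unsigned int flags);
```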
This is, strictly speaking, not an API break. Only if a user was setting
recursive to anything other than 1 would it now behave differently, and given
these functions have been in the tree for only a few days, the chances of that
are practically zero.
Signed-off-by: James Almer <jamrial@gmail.com>
It makes sense to treat the presence of a frame duration and the presence
of frame rate metadata identically, because both convey effectively the same
amount of information.
In f121d95 and fa110c3 respectively, this information was stripped by default,
originally to work around bugs when changing the PTS information of a stream
being fed to some encoders. (See https://trac.ffmpeg.org/ticket/10886)
Later, commit 959b799c restored the ability to preserve the frame rate
metadata via the `strip_fps` option, but this option did not extend to also
include the frame duration.
This commit resolves the situation by handling `frame_rate` and `duration`
in a consistent manner, so that the frame rate information is generally
preserved unless explicitly stripped by the user.
While it does regress the exact invocation presented in the trac ticket unless
using `strip_fps=yes`, I consider this an acceptable trade-off, especially in
light of the fact that the `fps` filter also exists and is arguably the better
tool for the task at hand.
Many of these additions are in separate commits in one set, so in the
interest of clarity, the API changes are all documented in one commit
here.
Signed-off-by: Leo Izen <leo.izen@gmail.com>
It can be useful to know whether the alpha plane consists of fully opaque
pixels or not, in which case it can, e.g., be safely stripped.
This only requires a very minor modification to the AVX2 routines: an extra
AND of the alpha values read against the reference alpha value, and a
single extra cheap test per line.
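A scalar sketch of the idea for the 8-bit full-range case (the function
name is hypothetical; the real routines are templated over depth and range,
and the limited-range variant compares against a different reference value):

```c
#include <stddef.h>
#include <stdint.h>

/* AND all alpha samples of a line together; the plane is fully opaque
 * iff every line's accumulator is still 0xFF afterwards. */
static int alpha_all_opaque_8_full(const uint8_t *alpha, ptrdiff_t stride,
                                   int w, int h)
{
    for (int y = 0; y < h; y++) {
        uint8_t acc = 0xFF;
        for (int x = 0; x < w; x++)
            acc &= alpha[x]; /* any cleared bit means a non-opaque pixel */
        if (acc != 0xFF)     /* one cheap test per line */
            return 0;
        alpha += stride;
    }
    return 1;
}
```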
detect_alpha_8_full_c: 2849.1 ( 1.00x)
detect_alpha_8_full_avx2: 260.3 (10.95x)
detect_alpha_8_full_avx512icl: 130.2 (21.87x)
detect_alpha_8_limited_c: 8349.2 ( 1.00x)
detect_alpha_8_limited_avx2: 756.6 (11.04x)
detect_alpha_8_limited_avx512icl: 364.2 (22.93x)
detect_alpha_16_full_c: 1652.8 ( 1.00x)
detect_alpha_16_full_avx2: 236.5 ( 6.99x)
detect_alpha_16_full_avx512icl: 134.6 (12.28x)
detect_alpha_16_limited_c: 5263.1 ( 1.00x)
detect_alpha_16_limited_avx2: 797.4 ( 6.60x)
detect_alpha_16_limited_avx512icl: 400.3 (13.15x)