This reduces sibilance distortion when sibilance and bass are
present at the same time. Bringing the protection of high
frequencies up to about the same level as for low frequencies
should also make the quality less dependent on the frequency
balance of the playback system.
Signed-off-by: Jason Jang <jcj83429@gmail.com>
Ignore more samples that are near the edge of the block. The reason
is that the filtering tends to cause these samples to go above the
window more than the samples near the middle. If these samples are
included in the unwindowed peak estimation, the peak can be
overestimated. Because the block is windowed again before
overlapping, overshoots near the edge of the block are not very
important.
0.1 is the value from the version originally contributed to calf.
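A minimal sketch of the idea described above (illustrative only; the function, the margin parameter and its value are placeholders, not the filter's actual code):

    #include <math.h>

    /* Estimate the unwindowed peak of a block while ignoring `margin`
     * samples at each edge, where filtering-induced overshoot is largest
     * and will be attenuated by the synthesis window anyway. */
    static float block_peak(const float *block, int n, int margin)
    {
        float peak = 0.0f;
        for (int i = margin; i < n - margin; i++) {
            float a = fabsf(block[i]);
            if (a > peak)
                peak = a;
        }
        return peak;
    }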
Signed-off-by: Jason Jang <jcj83429@gmail.com>
With a complex FFT instead of real FFT, the negative frequencies
are not dropped from the spectrum output, so they need to be scaled
when the positive frequencies are scaled. The location of the top
bin is also different.
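Roughly, the spectrum layout involved looks like this (a sketch with made-up names; bin N/2 is the top bin of an N-point complex FFT):

    #include <complex.h>

    /* With a complex FFT of size N, bin 0 is DC, bin N/2 is the top bin,
     * and bin N - k mirrors bin k.  Scaling a positive-frequency bin must
     * therefore also scale its negative-frequency partner. */
    static void scale_bin(float complex *spec, int N, int k, float gain)
    {
        spec[k] *= gain;
        if (k > 0 && k < N / 2)
            spec[N - k] *= gain;   /* the mirrored negative frequency */
    }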
Signed-off-by: Jason Jang <jcj83429@gmail.com>
This fills VAProcPipelineParameterBuffer correctly and makes the
pipeline work.
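For reference, a rough sketch of the fields involved (illustrative only, not the actual patch; only a subset of VAProcPipelineParameterBuffer is shown, and the surface/size variables are placeholders):

    VAProcPipelineParameterBuffer params = { 0 };
    VARectangle src_rect = { 0, 0, in_width,  in_height  };
    VARectangle dst_rect = { 0, 0, out_width, out_height };

    params.surface        = input_surface;   /* VASurfaceID of the input */
    params.surface_region = &src_rect;       /* region read from the input */
    params.output_region  = &dst_rect;       /* placement on the output */
    params.filters        = filter_buffers;  /* VABufferID array, if any */
    params.num_filters    = num_filter_buffers;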
Reviewed-by: Soft Works <softworkz@hotmail.com>
Signed-off-by: Fei Wang <fei.w.wang@intel.com>
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
Overlay one video on top of another.
It takes two inputs and has one output. The first input is the "main" video on
which the second input is overlaid. This filter requires the same memory layout
for all the inputs.
An example command using this filter to overlay overlay.mp4 at the top-left
corner of main.mp4:
ffmpeg -init_hw_device vaapi=foo:/dev/dri/renderD128 \
-hwaccel vaapi -hwaccel_device foo -hwaccel_output_format vaapi -c:v h264 -i main.mp4 \
-hwaccel vaapi -hwaccel_device foo -hwaccel_output_format vaapi -c:v h264 -i overlay.mp4 \
-filter_complex "[0:v][1:v]overlay_vaapi=0:0:100:100:0.5[t1]" \
-map "[t1]" -an -c:v h264_vaapi -y out_vaapi.mp4
Signed-off-by: U. Artie Eoff <ullysses.a.eoff@intel.com>
Signed-off-by: Xinpeng Sun <xinpeng.sun@intel.com>
Signed-off-by: Zachary Zhou <zachary.zhou@intel.com>
Signed-off-by: Fei Wang <fei.w.wang@intel.com>
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
To trigger this bug, use `paletteuse=dither=bayer:bayer_scale=0`; you will see
that adjacent pixel lines will use the same dither pattern, instead of being
shifted from each other by 32 units (0x20).
One way to demonstrate the bug is:
$ convert -size 64x256 gradient:black-white -rotate 270 grad.png
$ echo 'P2 2 1 255 0 255' > bw.pnm
$ ffmpeg -i grad.png -filter_complex 'movie=bw.pnm,scale=256x1[bw]; [0:v][bw]paletteuse=dither=bayer:bayer_scale=0' gradbw.png
Previously: https://www.rm.cloudns.org/img/uploaded/0bd152c11b9cd99e5945115534b1bdde.png
Now: https://www.rm.cloudns.org/img/uploaded/89caaa5e36c38bc2c01755b30811f969.png
This was caused by passing inconsistent color vs. (a,r,g,b) parameters to
color_get(), and by NBITS being 5, which means hitting the same cache node
actually does happen in this case, but ONLY if bayer_scale is zero.
The fix is passing the correct color value to color_get().
Also added a previously-failing FATE test; image comparison of the first frame:
Previously: https://www.rm.cloudns.org/img/uploaded/d0ff9db8d8a7d8a3b8b88bbe92bf5fed.png
Now: https://www.rm.cloudns.org/img/uploaded/a72389707e719b5cd1c58916a9e79ca8.png
(on this less synthetic test image, the bug basically causes noise from cache
hits vs misses)
Tested: FATE passes, which exercises this filter but at the default bayer_scale.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
This resulted in dimmed tonemapping due to a bad calculation of the
resulting luma.
Found by: Derek Buitenhuis
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
For DeinterlacingBob mode with rate=field, the number of output frames
should equal 2x the input total, since only intra deinterlacing is used.
Currently, for "backward_ref = 0, rate = field", an extra_delay is
introduced. Due to the asynchronous processing without a flush, the number
of output frames is [expected_number - 2].
Specifically, if the input has only 1 frame, the output will be empty.
Add deint_vaapi_request_frame for deinterlace_vaapi and send a NULL frame
to flush the queued frame.
For a 1-frame input in Bob mode with rate=field:
before the patch: 0 frames;
after the patch: 2 frames.
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128
-hwaccel_output_format vaapi -i input.h264 -an -vf
deinterlace_vaapi=mode=bob:rate=field -f null -
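A simplified sketch of the flush-on-EOF idea (not the actual patch; error handling is omitted and the filter_frame callback name is assumed):

    static int deint_vaapi_request_frame(AVFilterLink *outlink)
    {
        AVFilterContext *avctx = outlink->src;
        int ret = ff_request_frame(avctx->inputs[0]);

        if (ret == AVERROR_EOF) {
            /* push a NULL frame through filter_frame() so the queued
             * frame is emitted before EOF propagates downstream */
            deint_vaapi_filter_frame(avctx->inputs[0], NULL);
        }
        return ret;
    }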
Tested-by: Mark Thompson <sw@jkqxz.net>
Reviewed-by: Mark Thompson <sw@jkqxz.net>
Signed-off-by: Linjie Fu <linjie.fu@intel.com>
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
This was accidentally comparing s->colorspace against out->colorspace,
which is wrong - the intent was to compare in->colorspace against
out->colorspace.
We also forgot to strip mastering metadata. Finally, the order is sort
of wrong - we should strip this side data *before* process_frames,
because otherwise it may end up being seen and used by libplacebo.
Signed-off-by: Niklas Haas <git@haasn.dev>
Commit 8b83dad825 introduced a regression such that scaling via vpp_qsv
no longer works for devices with an MSDK runtime version lower than 1.19.
This is true for older CPUs, which are stuck at 1.11.
The commit added checks for the compile-time SDK version, but it didn't
test for the runtime version.
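A sketch of the kind of runtime check that is needed (illustrative only; the session variable is a placeholder):

    mfxVersion ver;
    if (MFXQueryVersion(session, &ver) == MFX_ERR_NONE &&
        (ver.Major > 1 || (ver.Major == 1 && ver.Minor >= 19))) {
        /* runtime implements API 1.19+, the scaling options can be used */
    } else {
        /* old runtime (e.g. 1.11): skip the scaling configuration */
    }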
Signed-off-by: softworkz <softworkz@hotmail.com>
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
This commit adds a blend_vulkan filter with a normal blend mode, and
leaves room for introducing more blend modes in the future.
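For reference, the normal mode is conceptually just the top layer weighted by its opacity (plain C for illustration; the filter does this in a compute shader):

    /* normal blend: top covers bottom, weighted by an opacity in [0,1] */
    static inline float blend_normal(float bottom, float top, float opacity)
    {
        return top * opacity + bottom * (1.0f - opacity);
    }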
Use the commands below to test (see: https://trac.ffmpeg.org/wiki/Blend):
I. Make an image for testing
ffmpeg -f lavfi -i color=s=256x256,geq=r='H-1-Y':g='H-1-Y':b='H-1-Y' -frames 1 \
-y -pix_fmt yuv420p test.jpg
II. Blend in software
ffmpeg -i test.jpg -vf "split[a][b];[b]transpose[b];[a][b]blend=all_mode=normal,\
pseudocolor=preset=turbo" -y normal_sw.jpg
III. Blend in Vulkan
ffmpeg -init_hw_device vulkan -i test.jpg -vf "split[a][b];[b]transpose[b];\
[a]hwupload[a];[b]hwupload[b];[a][b]blend_vulkan=all_mode=normal,hwdownload,\
format=yuv420p,pseudocolor=preset=turbo" -y normal_vulkan.jpg
Signed-off-by: Wu Jianhua <jianhua.wu@intel.com>
libplacebo supports automatic dolby vision application, but it requires
us to switch to a new API. Also add some logic to strip the dolby vision
metadata from the output frames in any case where we end up changing the
colorimetry.
The libplacebo dependency bump is justified because neither 184 nor 192
are part of any stable libplacebo release, so users have to build from
git anyways for this filter to exist.
Signed-off-by: Niklas Haas <git@haasn.dev>
- No longer mixes u8 and u16 component accesses (this was UB)
- De-duplicated 8->16 conversion
- De-duplicated component -> plane+offset conversion
- De-duplicated planar + packed RGB
- No longer calls ff_fill_rgba_map
- Removed redundant comp_mask data member
- RGB0 and related formats no longer write an alpha value to the 0 byte
- Non-planar YA formats now work correctly
- High-bit-depth semi-planar YUV now works correctly
Same outputs, but computed instead of statically known, so new formats will be
supported more easily. Asserts in place to ensure we update this if we add
anything incompatible with its logic.
This allows removing the spurious dependencies of mpegvideo encoders
on error_resilience; some other components that do not use mpegvideo
to its fullest turned out not to need it either.
Adding a new CONFIG_EXTRA needs a reconfigure to take effect.
In order to force this, a few unnecessary headers from lavfi/allfilters.c
have been removed.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Fix warning caused by this field changing from uint64_t to uint16_t.
Signed-off-by: Niklas Haas <git@haasn.dev>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This is done a second time for 5.0 because master was
merged into 5.0 so that it contains the recent DOVI additions.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
In case of shared builds, some object files containing tables
are currently duplicated into other libraries: log2_tab.c,
golomb.c, reverse.c. The check for whether this is duplicated
is simply whether CONFIG_SHARED is true. Yet this is crude:
E.g. libavdevice includes reverse.c for shared builds, but only
needs it for the decklink input device, which, given that decklink
is not enabled by default, will be unused in most libavdevice.so.
This commit changes this by making it more explicit about what
to duplicate from other libraries. To do this, two new Makefile
variables were added: SHLIBOBJS and STLIBOBJS. SHLIBOBJS contains
the objects that are duplicated from other libraries in case of
shared builds; STLIBOBJS contains stuff that a library has to
provide for other libraries in case of static builds. These new
variables provide a way to enable/disable with a finer granularity
than just whether shared builds are enabled or not. E.g. lavd's
Makefile now contains: SHLIBOBJS-$(CONFIG_DECKLINK_INDEV) += reverse.o
Another example is provided by the golomb tables. These are provided
by lavc for static builds, even if one uses a build configuration
that makes only lavf use them. Therefore lavc's Makefile contains
STLIBOBJS-$(CONFIG_MXF_MUXER) += golomb.o, whereas lavf's Makefile
has a corresponding SHLIBOBJS-$(CONFIG_MXF_MUXER) += golomb_tab.o.
E.g. in case the MXF muxer is the only component needing these tables,
only libavformat.so will contain them for shared builds; currently
libavcodec.so does so, too.
(There is currently a CONFIG_EXTRA group for golomb. But actually
one would need two groups (golomb_avcodec and golomb_avformat) in
order to know when and where to include these tables. Therefore
this commit uses a Makefile-based approach for this and stops
using these groups for the users in libavformat.)
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
On a 64-bit operating system, sizeof(ScaleVulkanContext) is
reduced from 2400 to 2392 bytes on Linux and
from 2416 to 2408 bytes on Windows.
Signed-off-by: Wu Jianhua <jianhua.wu@intel.com>
The following command shows how to apply the passthrough option:
ffmpeg -init_hw_device vulkan -i input.264 -vf hwupload=extra_hw_frames=16,transpose_vulkan=passthrough=landscape,hwdownload,format=yuv420p output.264
Signed-off-by: Wu Jianhua <jianhua.wu@intel.com>
- Ensure the yadif .metal compiles when targeting any Metal runtime version
- Use some preprocessor awkwardness to ensure Core Video's Metal-specific
functionality is exposed regardless of our deployment target (this works
around what seems to be an SDK header bug, filed as FB9816002)
- Ensure all direct references to Metal functions and classes are gated
behind runtime version checks (this satisfies clang's deployment-target
violation warnings provided by -Wunguarded-availability).
They test libavfilter internal API, so they should be libavfilter
test programs (which implies: linked statically to libavfilter
to access internal APIs and linked normally (statically or dynamically
depending upon the build configuration) against all the other libs).
Right now, they are always linked statically against all libs,
which is a significant size waste compared to shared libs as all
of libavcodec has been pulled in despite not being really used.
This also leads to linking failures on systems for which av_export_avutil
is intended: libavcodec does not expect to be linked statically
against the library providing avpriv_(cga|vga16)_font in this case.
This is fixed by this commit.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Deinterlaces CVPixelBuffers, i.e. AV_PIX_FMT_VIDEOTOOLBOX frames.
For example, an interlaced mpeg2 video can be decoded by avcodec,
uploaded into a CVPixelBuffer, deinterlaced by Metal, and then
encoded to h264 by VideoToolbox as follows:
ffmpeg \
-init_hw_device videotoolbox \
-i interlaced.ts \
-vf hwupload,yadif_videotoolbox \
-c:v h264_videotoolbox \
-b:v 2000k \
-c:a copy \
-y progressive.ts
(note that uploading AVFrame into CVPixelBuffer via hwupload
requires 504c60660d)
This work is sponsored by Fancy Bits LLC.
Reviewed-by: Ridley Combs <rcombs@rcombs.me>
Reviewed-by: Philip Langdale <philipl@overt.org>
Signed-off-by: Aman Karmani <aman@tmm1.net>
This was renamed upstream quite a while ago (v3.112.0). Rename the
option here as well for consistency (and expand the description just
slightly).
Signed-off-by: Niklas Haas <git@haasn.dev>
Support for mapping/unmapping hardware frames has been added into
libplacebo itself, so we can scrap this code in favor of using the new
functions. This has the additional benefit of being forwards-compatible
as support for more complicated frame-related state management is added
to libplacebo (e.g. mapping dolby vision metadata).
It's worth pointing out that, technically, this would also allow
`vf_libplacebo` to accept, practically unmodified, other frame types
(e.g. vaapi or drm), or even software input formats. (Although we still
need a vulkan *device* to be available)
To keep things simple, though, retain the current restriction to vulkan
frames. It's possible we could rethink this in a future commit, but for
now I don't want to introduce any more potentially breaking changes.
The following command shows how to apply the cclock option:
ffmpeg -init_hw_device vulkan -i input.264 -vf \
hwupload=extra_hw_frames=16,transpose_vulkan=dir=cclock,hwdownload,format=yuv420p \
output.264
The following command shows how to apply the clock_flip option:
ffmpeg -init_hw_device vulkan -i input.264 -vf \
hwupload=extra_hw_frames=16,transpose_vulkan=dir=clock_flip,hwdownload,format=yuv420p \
output.264
The following command shows how to apply the clock option:
ffmpeg -init_hw_device vulkan -i input.264 -vf \
hwupload=extra_hw_frames=16,transpose_vulkan=dir=clock,hwdownload,format=yuv420p \
output.264
Signed-off-by: Wu Jianhua <jianhua.wu@intel.com>
The following command shows how to apply the transpose_vulkan filter:
ffmpeg -init_hw_device vulkan -i input.264 -vf \
hwupload=extra_hw_frames=16,transpose_vulkan,hwdownload,format=yuv420p output.264
Signed-off-by: Wu Jianhua <jianhua.wu@intel.com>
These lists have size fields since
e48ded8551.
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This filter flips the input video both horizontally and vertically
in one compute pipeline, so there is no need to chain two pipelines as
hflip_vulkan,vflip_vulkan anymore.
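Conceptually it performs the following mapping (scalar C for illustration; W, H, src and dst are placeholders, and the filter does this in a Vulkan compute shader):

    /* flipping both axes in one pass: output pixel (x, y) is read from
     * the source at (W - 1 - x, H - 1 - y) */
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            dst[y * W + x] = src[(H - 1 - y) * W + (W - 1 - x)];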
Signed-off-by: Wu Jianhua <jianhua.wu@intel.com>
Fixes a mistake in dea673d0d5.
GCC emitted a -Wint-in-bool-context warning because of that.
Reviewed-by: Soft Works <softworkz@hotmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The following command shows how to apply the vflip_vulkan filter:
ffmpeg -init_hw_device vulkan -i input.264 -vf hwupload=extra_hw_frames=16,vflip_vulkan,hwdownload,format=yuv420p output.264
Signed-off-by: Wu Jianhua <jianhua.wu@intel.com>
The following command shows how to apply the hflip_vulkan filter:
ffmpeg -init_hw_device vulkan -i input.264 -vf hwupload=extra_hw_frames=16,hflip_vulkan,hwdownload,format=yuv420p output.264
Signed-off-by: Wu Jianhua <jianhua.wu@intel.com>
The issue is that libavfilter depends on libavcodec, and when doing a
static build, if libavcodec also includes "libavfilter/vulkan.c", then
at link time, building programs will fail as there would be multiple
definitions of the same symbols in both libavfilter's and libavcodec's
object files.
Linkers are, however, more permissive if both files that include
a common file used as a template are one-to-one identical.
Hence, to make both files identical in the future, export all
avfilter-specific functions to a separate file.
There is some work in progress to make templated files like this be
compiled only once, so this is not a long-term solution.
This also removes a macro that could be used to toggle SPIRV compilation
capability at #include time, as this could cause the files to differ.
Whether it failed or not, the block of code labeled "fail" should
always be run to ensure that av_free(kernel_def) is executed.
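In other words, the success path should fall through to the cleanup label as well. A generic sketch of the intended control flow (the helper names are made up, not the actual code):

    static int run_with_kernel_def(void)
    {
        char *kernel_def = av_strdup("__kernel void dummy(void) { }");
        int err;

        if (!kernel_def)
            return AVERROR(ENOMEM);

        err = do_first_step(kernel_def);     /* hypothetical step */
        if (err < 0)
            goto fail;

        err = do_second_step(kernel_def);    /* hypothetical step */
    fail:
        av_free(kernel_def);   /* runs whether we failed or succeeded */
        return err;
    }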
Signed-off-by: Wu Jianhua <jianhua.wu@intel.com>
This commit adds a powerful and customizable gblur Vulkan filter,
which supports a Gaussian kernel size of up to 127x127.
The size can be adjusted according to quality or performance requirements.
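For reference, a 127x127 Gaussian is separable, so the weights can be built from a 127-tap 1-D kernel along these lines (plain C sketch; the names are illustrative, not the filter's code):

    #include <math.h>

    /* build a (2*radius + 1)-tap 1-D Gaussian kernel (radius <= 63 gives
     * up to 127 taps), normalized so the weights sum to 1 */
    static void build_gaussian_kernel(float *kernel, int radius, float sigma)
    {
        float sum = 0.0f;
        for (int i = -radius; i <= radius; i++) {
            kernel[i + radius] = expf(-(float)(i * i) / (2.0f * sigma * sigma));
            sum += kernel[i + radius];
        }
        for (int i = 0; i <= 2 * radius; i++)
            kernel[i] /= sum;
    }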
The following command shows how to apply the gblur_vulkan filter:
ffmpeg -init_hw_device vulkan -i input.264 -vf hwupload=extra_hw_frames=16,gblur_vulkan,hwdownload,format=yuv420p output.264
Signed-off-by: Wu Jianhua <jianhua.wu@intel.com>
There is no reason to wrap them in #ifndef guards; they should only be
defined here and nowhere else. The guards just add the possibility of
accidentally using the same FF_API name in different libraries.
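I.e. a deprecation macro should simply be defined unconditionally in its own library's version header (FF_API_EXAMPLE and the version number are made up for illustration):

    /* before: guarded, so another library could silently pre-define it */
    #ifndef FF_API_EXAMPLE
    #define FF_API_EXAMPLE (LIBAVFILTER_VERSION_MAJOR < 9)
    #endif

    /* after: defined here and nowhere else */
    #define FF_API_EXAMPLE (LIBAVFILTER_VERSION_MAJOR < 9)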
Unfortunately, pad_len and pad_dur behaved differently if 0 was specified:
pad_dur handled a 0 duration as infinity, while for pad_len, infinity was -1.
Let's make the behaviour consistent by handling a 0 duration for pad_dur and
whole_dur as indeed 0 duration. This somewhat changes the behaviour of the
filter if 0 was explicitly specified, but deprecating the old option and adding
a new one for the corrected behaviour seemed a bit overkill. So let's document
the change instead.
Signed-off-by: Marton Balint <cus@passwd.hu>
In particular, allows users to go all the way up to PL_LOG_TRACE if
desired. (While also avoiding some potentially unnecessary callbacks for
filtered messages, including e.g. the CPU cost of printing out shader
sources)
Respond to runtime log level changes by updating it once per
filter_frame(), which should hopefully be often enough.
This filter conceptually maps the libplacebo `pl_renderer` API, a
high-level image rendering API designed to work internally with an RGB
pipeline, into libavfilter. As such, there's no way to avoid e.g.
chroma interpolation with this filter, although new versions of
libplacebo support outputting back to subsampled YCbCr after processing
is done.
That being said, `pl_renderer` supports automatic integration of the
majority of libplacebo's shaders, ranging from debanding to tone
mapping, and also supports loading custom mpv-style user shaders, making
this API a natural candidate for getting a lot of functionality out of
relatively little code.
In the future, I may approach this problem either by rewriting this
filter to also support a non-renderer codepath, or by upgrading
libplacebo's renderer to support a full YCbCr pipeline.
This unfortunately requires a very new version of libplacebo (unreleased
at time of writing) for timeline semaphore support. But the amount of
boilerplate needed to hack in backwards compatibility would have been
very unreasonable.
Finally, this is as close to usable as it gets for glslang.
Much faster to compile as well, and eliminates the need for a C++
compiler, which is great.
Also, changes to the resource limits won't break users, as we
can use designated initializers in C99.
This makes queue family picking simpler and more robust.
The requirements on the device context are relaxed; they made no sense
in the first place.
The video encode/decode extension is still in beta, at least on paper,
but I really doubt they'd change the requirement for a separate queue family.
Summary of the adjustments:
1. Remove the extra "," in ",}":
...{0.2004,0.3001,0.4008,0.5005,0.6002,0.7009,0.8006,0.9013,}
to
...{0.2004,0.3001,0.4008,0.5005,0.6002,0.7009,0.8006,0.9013}
2. Add "," between "}" and the next field:
} fraction_bright_pixels
to
}, fraction_bright_pixels
3. Remove the extra space in "} }":
...{0.2004,0.3001,0.4008,0.5005,0.6002,0.7009,0.8006,0.9013,} }
to
...{0.2004,0.3001,0.4008,0.5005,0.6002,0.7009,0.8006,0.9013,}}
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
Since "rb" is at most "len - pos - 1", "pos" can reach at most
"len - 1", so the condition (pos < len) is always true and the rest
of the OpenCL program code would never be read.
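A sketch of a read loop that actually grows the buffer until EOF (illustrative, not the exact fix; `file` is an already-opened FILE * and one byte is reserved for the terminating NUL):

    size_t pos = 0, len = 4096, rb;
    char *src = av_malloc(len);

    while (src && (rb = fread(src + pos, 1, len - pos - 1, file)) > 0) {
        pos += rb;
        if (pos == len - 1) {                 /* buffer full: grow it */
            len *= 2;
            if (av_reallocp(&src, len) < 0)
                break;                        /* src is freed and NULL now */
        }
    }
    if (src)
        src[pos] = 0;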
Fixes: trac.ffmpeg.org/ticket/9217