The fixed-point AAC decoder is the only user of the fixed-point sinewin
tables, and it only uses a few of them (about 10% when counting by
size). This means that guarding the initialization of these tables with
an AVOnce (as done in 3719122065) is
unnecessary for them. Furthermore, the array of pointers to the
individual arrays is also unneeded.
Therefore this commit moves these tables directly into aacdec_fixed.c;
this is done by ridding the original sinewin.h and sinewin_tablegen.h
headers completely of any fixed-point code at the cost of a bit of
duplicated code (the alternative is an ugly ifdef-mess).
This saves about 58KB from the binary when using hardcoded tables (as
these tables are hardcoded in this scenario); when not using hardcoded
tables, most of these savings only affect the .bss segment, but the rest
(< 1KB) contains relocations (i.e. savings in .data.rel.ro).
Reviewed-by: Lynne <dev@lynne.ee>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Setting the defaults for $arch happens only later, so
the current code would not set AS correctly if --arch
was not specified on the command line.
Fix it by adding an explicit fallback to $arch_default.
Signed-off-by: Josh Dekker <josh@itanimul.li>
avcodec has no facilities to generate timestamps properly from
output frame numbers (and it would be wrong for VFR anyway),
so pass through the timestamps using rav1e's opaque user data
feature, which was added in v0.4.0.
This bumps the minimum librav1e version to 0.4.0.
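As an illustration of the pass-through pattern this describes, here is
a minimal sketch; the struct and helper names are hypothetical, and the
actual rav1e C-API calls for attaching and retrieving the opaque
pointer are not reproduced here:

#include <stdint.h>
#include <stdlib.h>

/* Hypothetical per-frame metadata carried through the encoder as opaque
 * user data; rav1e hands the pointer back with the matching encoded
 * packet. */
typedef struct FrameData {
    int64_t pts;
    int64_t duration;
} FrameData;

static FrameData *frame_data_alloc(int64_t pts, int64_t duration)
{
    FrameData *fd = malloc(sizeof(*fd));
    if (fd) {
        fd->pts      = pts;
        fd->duration = duration;
    }
    return fd;
}

/* Registered as the free callback when attaching the opaque pointer, so
 * frames dropped by the encoder do not leak their metadata. */
static void frame_data_free(void *opaque)
{
    free(opaque);
}

On the receive side, the wrapper would copy pts/duration from the
returned pointer into the output packet and then free it.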
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
This was introduced in version 4.6 and may not exist at all without an
optional package. So, to avoid a hard dependency on the Linux kernel
headers to compile, make this optional.
Also ignore the status of the ioctl, since it may fail on older kernels
which don't support it. It's okay to ignore because it's not fatal, and
any serious errors will be caught later by the mmap call.
SMVJPEG stores frames as slices of a big JPEG image. The decoder is
implemented as a wrapper that instantiates a full internal MJPEG
decoder, then forwards the decoded frames with offset data pointers.
This is unnecessarily complex and fragile, not supporting useful decoder
capabilities like direct rendering.
Re-implement the decoder inside the MJPEG decoder, which is accomplished
by returning each decoded frame multiple times, setting cropping
information appropriately on each instance.
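A minimal sketch of the idea (not the actual mjpegdec code): the big
decoded image is returned once per slice, and only the crop rectangle
changes between the returned instances:

#include <libavutil/frame.h>

/* Return a new reference to the full decoded image, cropped to slice
 * number 'slice_index' out of equally tall slices of 'slice_height'. */
static AVFrame *smv_slice_view(const AVFrame *full, int slice_height,
                               int slice_index)
{
    AVFrame *out = av_frame_clone(full); /* refcounted, no pixel copy */
    if (!out)
        return NULL;
    out->crop_top    = (size_t)slice_index * slice_height;
    out->crop_bottom = full->height - out->crop_top - slice_height;
    return out;
}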
One peculiar aspect of the previous design is that since
- the smvjpeg decoder returns one frame per input packet
- there are multiple frames in each packet (the aforementioned slices)
the demuxer needs to return each packet multiple times.
This is now also eliminated - the demuxer now returns each packet
exactly once, with the duration set to the number of frames it decodes
to.
This also removes one of the last remaining internal uses of the old
video decoding API.
The manual states that "there is virtually no reason to use that
encoder." It supports fewer sample formats than the native encoder, is
less efficient than the native encoder, is also slower, and pretty much
remains untested.
libwavpack also isn't being fuzzed, which is concerning given that we
pass the parameters through without sanitizing them.
The AV1 decoder is supported on Tiger Lake+ platforms since libmfx 1.34.
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
Signed-off-by: Zhong Li <zhongli_dev@126.com>
Currently, every filter needs to provide code to transfer data from
AVFrame* to the model input (DNNData*), and also from the model output
(DNNData*) to AVFrame*. Such transfers can instead be implemented
within the DNN module, so that each filter can focus on its own
business logic.
The DNN module also exports the function pointers pre_proc and
post_proc in struct DNNModel, in case a filter has special logic for
transferring data between AVFrame* and DNNData*. The default
implementation within the DNN module is used if the filter does not set
pre_proc/post_proc.
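A rough sketch of the interface shape being described; pre_proc and
post_proc are the names used by this commit, while the parameter lists
and the default helpers below are assumptions for illustration only:

typedef struct AVFrame AVFrame;   /* opaque here; defined in libavutil */
typedef struct DNNData DNNData;   /* model input/output description */

typedef struct DNNModel {
    void *model;                  /* backend-specific handle */
    /* Optional filter-provided conversions; NULL selects the defaults
     * implemented inside the DNN module. */
    int (*pre_proc)(AVFrame *frame_in, DNNData *model_input, void *filter_ctx);
    int (*post_proc)(AVFrame *frame_out, DNNData *model_output, void *filter_ctx);
} DNNModel;

/* Hypothetical default converters provided by the DNN module. */
int dnn_default_pre_proc(AVFrame *frame_in, DNNData *model_input);
int dnn_default_post_proc(AVFrame *frame_out, DNNData *model_output);

static int run_pre_proc(DNNModel *model, AVFrame *in, DNNData *input,
                        void *filter_ctx)
{
    if (model->pre_proc)                    /* filter-specific logic */
        return model->pre_proc(in, input, filter_ctx);
    return dnn_default_pre_proc(in, input); /* generic AVFrame -> DNNData */
}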
This AV1 decoder is currently only used for hardware accelerated decoding.
It can be extended into a native decoder in the future, so set its name to
"av1" and temporarily give it the lowest priority in the codec list.
Signed-off-by: Fei Wang <fei.w.wang@intel.com>
Signed-off-by: James Almer <jamrial@gmail.com>
There are compiler and runtime checks for MSA and MMI.
Remove the redundant setting of MSA and MMI for cores specified by "--cpu".
Signed-off-by: Shiyou Yin <yinshiyou-hf@loongson.cn>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The latest builds of glslang introduce new libraries that need to be
linked for all symbols to be fully resolved.
This change will break building against older installations of
glslang, and it's very hard to tell them apart, as the library change
upstream was not accompanied by any version bump and no official
release has been made that includes this change - just lots of people
packaging up git snapshots. So, apologies in advance.
The most useful feature here is the ability to automatically extract the
framebuffer format and modifiers. It also makes support for multi-plane
framebuffers possible, though none are added to the format table in this
patch.
This requires libdrm 2.4.101 (from April 2020) to build, so it includes a
configure check to allow compatibility with existing distributions. Even
with libdrm support, it still won't do anything at runtime if you are
running Linux < 5.7 (before June 2020).
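As a rough illustration of what the getfb2 path provides (struct and
field names as found in libdrm >= 2.4.101, to the best of my knowledge;
this is not the actual kmsgrab code):

#include <inttypes.h>
#include <stdio.h>
#include <xf86drmMode.h>

/* Query a framebuffer's pixel format, modifier and per-plane layout. */
static int probe_framebuffer(int drm_fd, uint32_t fb_id)
{
    drmModeFB2 *fb = drmModeGetFB2(drm_fd, fb_id);
    if (!fb)
        return -1;

    printf("fb %u: %ux%u fourcc 0x%08x modifier 0x%016" PRIx64 "\n",
           fb->fb_id, fb->width, fb->height, fb->pixel_format, fb->modifier);

    /* Multi-plane framebuffers fill one handle/pitch/offset per plane. */
    for (int i = 0; i < 4 && fb->handles[i]; i++)
        printf("  plane %d: pitch %u offset %u\n",
               i, fb->pitches[i], fb->offsets[i]);

    drmModeFreeFB2(fb);
    return 0;
}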
To enable runtime detection for MIPS, we need to refine the ffbuild
part to support building these features together.
First, we fix configure: let it probe the native ability of the
toolchain to decide whether a feature can be enabled, and clearly mark
the conflicts between loongson2 & loongson3 and between Release 6 & the
rest.
Second, we compile MMI and MSA C sources with their own flags to ensure
their flags won't pollute the whole program and generate illegal code.
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Shiyou Yin <yinshiyou-hf@loongson.cn>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
OpenVINO is a deep learning deployment toolkit available at
https://github.com/openvinotoolkit/openvino; it supports CPU, GPU
and heterogeneous plugins to accelerate deep learning inference.
Please refer to https://github.com/openvinotoolkit/openvino/blob/master/build-instruction.md
to build OpenVINO (the C library is built at the same time). Please add
the option -DENABLE_MKL_DNN=ON for cmake to enable the CPU path. The
header files and libraries are installed to
/usr/local/deployment_tools/inference_engine/ with default options on
my system.
To build FFmpeg with OpenVINO, taking my system as an example, run:
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/deployment_tools/inference_engine/lib/intel64/:/usr/local/deployment_tools/inference_engine/external/tbb/lib/
$ ../ffmpeg/configure --enable-libopenvino --extra-cflags=-I/usr/local/deployment_tools/inference_engine/include/ --extra-ldflags=-L/usr/local/deployment_tools/inference_engine/lib/intel64
$ make
Here are the features provided by the OpenVINO inference engine:
- support for more DNN model formats
It supports TensorFlow, Caffe, ONNX, MXNet and Kaldi by converting them
into the OpenVINO format with a Python script, and a Torch model
can first be converted into ONNX and then to the OpenVINO format.
See the script at https://github.com/openvinotoolkit/openvino/tree/master/model-optimizer/mo.py
which also does some optimization at the model level.
- optimization at the inference stage
It optimizes for x86 CPUs with SSE, AVX etc.
It also optimizes based on OpenCL for Intel GPUs.
(Only Intel GPUs are supported, because the Intel OpenCL extension is
used for the optimization.)
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
This contains encoder wrappers for H264, HEVC, AAC, AC3 and MP3.
This is based on top of an original patch by wm4
<nfxjfg@googlemail.com>. The original patch supported both encoding
and decoding, but this patch only includes encoding.
The patch contains further changes by Paweł Wegner
<pawel.wegner95@gmail.com> (primarily for splitting out the encoding
parts of the original patch) and further cleanup, build compatibility
fixes and tweaks for use with Qualcomm encoders by Martin Storsjö.
Signed-off-by: Martin Storsjö <martin@martin.st>
And rename it to retimeinterleave, use the pcm_rechunk bitstream filter for
rechunking.
By separating the two functions we hopefully get cleaner code.
Signed-off-by: Marton Balint <cus@passwd.hu>
Add support for building FFmpeg with HW-accelerated decode (nvdec) and
encode on the aarch64 architecture.
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
The webm_chunk muxer requires the WebM muxer, yet it does not directly
require anything from libavformat/matroska.c (it does not even include
the corresponding header). So remove the dependency from the Makefile
and add a _select to configure.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
vf_dnn_processing.c recently changed to use swscale to transfer data
between AVFrame and the DNN model.
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Linjie Fu <linjie.fu@intel.com>
Using a compiler with a different host triplet is considered
cross-compiling, even when it is for the same architecture as the
build system. With such a cross-compiler, it is still valid to
optimize builds with --cpu=host. Make the condition that aborts in
this case into a warning instead, since a cross-compiler for an
incompatible architecture will fail with -mtune=native anyway.
Signed-off-by: David Michael <fedora.dm0@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
They use non-public functions, which is unacceptable for a public API
example. Rename the example back to avio_list_dir.
This effectively reverts c84d208c27 and
767d780ec0.
Supports connecting to a RabbitMQ broker via AMQP version 0-9-1.
Signed-off-by: Andriy Gelman <andriy.gelman@gmail.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
The check_x86asm() checks would force enable these variables on success,
bypassing any --disable-* command line option.
This is important in the case of AVX512, where the relevant define is used
to choose between different values for memory alignment and strides in
some allocations.
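A sketch of the kind of compile-time decision involved; the specific
macro values here are an assumption for illustration, not a quote of
the actual allocator:

#include <stdlib.h>

#if HAVE_AVX512
#   define INTERNAL_ALIGN 64
#elif HAVE_AVX
#   define INTERNAL_ALIGN 32
#else
#   define INTERNAL_ALIGN 16
#endif

/* Buffers (and padded strides derived from them) are laid out for the
 * widest vector unit the build was configured for, so force-enabling
 * AVX-512 despite --disable-avx512 would change the memory layout. */
static void *aligned_buffer(size_t size)
{
    /* round up: aligned_alloc() wants a size that is a multiple of the
     * alignment (C11) */
    size_t padded = (size + INTERNAL_ALIGN - 1) & ~(size_t)(INTERNAL_ALIGN - 1);
    return aligned_alloc(INTERNAL_ALIGN, padded);
}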
Signed-off-by: James Almer <jamrial@gmail.com>
This commit adds a chromatic aberration filter for Vulkan that attempts to
emulate a lens chromatic aberration effect.
For a YUV frame it will instead shift the chroma channels, providing a
simple approximation.
This commit adds a Vulkan filtering infrastructure for libavfilter.
It attempts to abstract as much as possible of the Vulkan API from filters.
The way the hwcontext and the framework are designed permits parallel,
non-CPU-blocking filtering throughout, with the exception of up/downloading
and mapping.
This commit adds the necessary code to initialize and use a Vulkan device
within the hwcontext libavutil framework.
Currently direct mapping to VAAPI and DRM frames is functional, and
transfers to CUDA and native frames are supported.
Let's hope the future Vulkan video decode extension fits well within this
framework.
SetConsoleTextAttribute used to be unavailable for Windows Store apps,
but is available to them now. GetStdHandle, however, is still
unavailable, so make sure to check for both functions before using code
that requires both.
Signed-off-by: Martin Storsjö <martin@martin.st>
libx265.c references a member x265_picture.quantOffsets (for ROI
support) which was added in X265_BUILD 70. Increase the minimum libx265
version to fix compilation.
Signed-off-by: Andriy Gelman <andriy.gelman@gmail.com>
Reviewed-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
"VAProcFilterParameterBufferHDRToneMapping" was defined in libva 2.4.1, which will lead to
build failure for the filter tonemap_vaapi for libva 2.3.0 with current check. This patch
is to fix this build error.
Signed-off-by: Xinpeng Sun <xinpeng.sun@intel.com>
When testing on a memory-limited system, these tests consume a
significant amount of memory and can often fail when the tests are run
as multiple processes in parallel.
Signed-off-by: Martin Storsjö <martin@martin.st>
It performs HDR (High Dynamic Range) to SDR (Standard Dynamic Range)
conversion with tone-mapping. For now it only supports HDR10 as input.
An example command to use this filter with vaapi codecs:
ffmpeg -hwaccel vaapi -vaapi_device /dev/dri/renderD128 -hwaccel_output_format vaapi \
-i INPUT -vf 'tonemap_vaapi=format=p010' -c:v hevc_vaapi -profile 2 OUTPUT
Signed-off-by: Xinpeng Sun <xinpeng.sun@intel.com>
Signed-off-by: Zachary Zhou <zachary.zhou@intel.com>
Signed-off-by: Ruiling Song <ruiling.song@intel.com>
These functions aren't available when building for the restricted
UWP/WinRT/WinStore API subsets.
Normally when building in this mode, one is probably only building
the libraries, but being able to build ffmpeg.exe is still useful
(and an ffmpeg.exe targeting these API subsets can still be run,
e.g. in wine, for testing).
Signed-off-by: Martin Storsjö <martin@martin.st>
Fix --enable-openssl failing when pkg-config fails and OpenSSL > 1.1.0
is used; the root cause is that check_lib can't find SSL_library_init().
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: macweng <macweng@tencent.com>
This BSF takes Temporal Units split across different AVPackets and merges them
by looking for Temporal Delimiter OBUs.
Signed-off-by: James Almer <jamrial@gmail.com>
This filter accepts all DNN networks which do image processing.
Currently, frames with the rgb24 and bgr24 formats are supported. Other
formats such as gray and YUV will be supported next. The DNN network
can accept data in float32 or uint8 format, and the DNN network can
change the frame size.
The following is a Python script that halves the value of the first
channel of each pixel. It demonstrates how to set up and execute a DNN
model with Python + TensorFlow. It also generates the .pb file which
will be used by ffmpeg.
import tensorflow as tf
import numpy as np
import imageio
in_img = imageio.imread('in.bmp')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
filter_data = np.array([0.5, 0, 0, 0, 1., 0, 0, 0, 1.]).reshape(1,1,3,3).astype(np.float32)
filter = tf.Variable(filter_data)
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
y = tf.nn.conv2d(x, filter, strides=[1, 1, 1, 1], padding='VALID', name='dnn_out')
sess=tf.Session()
sess.run(tf.global_variables_initializer())
output = sess.run(y, feed_dict={x: in_data})
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'halve_first_channel.pb', as_text=False)
output = output * 255.0
output = output.astype(np.uint8)
imageio.imsave("out.bmp", np.squeeze(output))
To do the same thing with ffmpeg:
- generate halve_first_channel.pb with the above script
- generate halve_first_channel.model with tools/python/convert.py
- try the following commands:
./ffmpeg -i input.jpg -vf dnn_processing=model=halve_first_channel.model:input=dnn_in:output=dnn_out:fmt=rgb24:dnn_backend=native -y out.native.png
./ffmpeg -i input.jpg -vf dnn_processing=model=halve_first_channel.pb:input=dnn_in:output=dnn_out:fmt=rgb24:dnn_backend=tensorflow -y out.tf.png
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
1. low_power mode must be enabled, since only VDENC is supported by the
iHD driver right now.
2. Coding option1 and extra_data are not supported by MSDK.
3. An IVF header will be inserted by MSDK by default, but it is not
needed for FFmpeg, so disable it.
Signed-off-by: Zhong Li <zhongli_dev@126.com>
Support for VDPAU-accelerated VP9 decoding was added with libvdpau-1.3.
Support for the same in ffmpeg is added with this patch. Profiles
related to VDPAU VP9 can be found in the latest vdpau.h present in
libvdpau-1.3. DRC clips are not supported yet due to
http://trac.ffmpeg.org/ticket/8068
Add VP9 VDPAU to the list of hwaccels and supported formats.
Add the file vdpau_vp9.c and modify configure to add VDPAU VP9 support.
Map VP9 profiles to VDPAU VP9 profiles, and populate the codec-specific
params that need to be passed to VDPAU.
Signed-off-by: Philip Langdale <philipl@overt.org>
Due to the recent addition of Vulkan support to AMF, we require more
recent headers that include the new structures, which have been
available since AMF 1.4.9 released in September 2018.
Fixes Ticket #8125
Test by running ./configure with and without --disable-v4l2-m2m.
Reviewed-by: Aman Gupta <aman@tmm1.net>
Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
When compiling natively on an RPI where libomxil-bellagio-dev
was also installed, `check_headers OMX_Core.h` succeeded and
the -isystem compiler flag was never added to the build.
For non-native builds, the error message now mentions the
raspberrypi/firmware repository where the RPI-specific
headers are available.
Signed-off-by: Aman Gupta <aman@tmm1.net>
When ffmpeg was streaming, multiple clients were only supported by using a
multicast destination address. An alternative was to stream to a server which
re-distributes the content. This commit adds ZeroMQ as a protocol, which allows
multiple clients to connect to a single ffmpeg instance.
Signed-off-by: Marton Balint <cus@passwd.hu>
The current code in libavfilter/af_sofalizer.c requires
mysofa_neighborhood_init_withstepdefine function, which only appeared
in libmysofa 0.7. Use this function in the configure script to bail
out early if a too-old libmysofa is found on the system, instead of
failing at compile time.
Used a technique similar to lavc/tdsc.c for invoking the MJPEG decoder.
This commit adds support for:
- DNG tiles
- DNG tile Huffman lossless JPEG decoding
- DNG 8-bpp ("packed" as dcraw calls it) decoding
- DNG color scaling [1]
- LinearizationTable tag
- BlackLevel tag
[1]: As specified in the DNG Specification - Chapter 5
Signed-off-by: Nick Renieris <velocityra@gmail.com>
Many ffmpeg + rpi compilation guides on the internet recommend
using `./configure --enable-omx --enable-omx-rpi`. This fails
to find the IL OMX headers on the device because the omx require_headers
check happens before the add_cflags in omx_rpi.
A workaround is to use `./configure --enable-omx-rpi` only, since
omx_rpi already implies omx. But because many users expect to use
existing scripts and commands, we swap the order here so omx_rpi
special cases are applied first.
In the past this wasn't an issue because users noticed the missing
OMX_Core.h error and installed libomxil-bellagio-dev. But since
76c82843cc, the RPI-specific headers from /opt/vc/include/IL
are required.
Signed-off-by: Aman Gupta <aman@tmm1.net>
MSYS2 converts paths passed to MinGW-based applications from Unix to
pseudo-Windows paths at execution time.
Since there was no space between '-include' and the path, MSYS2 did not
detect the path properly.
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
This avoids using the CUDA SDK at all; instead, we provide a minimal
reimplementation of the basic functionality that lavfi actually uses.
It generates very similar code to what NVCC produces.
The header contains no implementation code derived from the SDK.
The function and type declarations are derived from the SDK only to the
extent required to build a compatible implementation. This is generally
accepted to qualify as fair use.
Because this option does not require the proprietary SDK, it does not require
the "--enable-nonfree" flag in configure.
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
Loongson 3A4000 and 2K1000 support MSA2.0.
This patch optimizes SAD_UB2_UH, UNPCK_R_SH_SW, UNPCK_SB_SH and
UNPCK_SH_SW with MSA2.0 instructions.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Get rid of the pr dependency and write the columns strictly
alphabetically without page size considerations (POSIX
specifies 66 lines as the default).
Setting the page size via pr's -l option was considered,
but as there is issue #5680, which wants to avoid pr
mainly because it's not in busybox, we chose to replace
pr instead.
Before, pr would attempt to write pages, so if a page
boundary was reached the output looked confusing, as one
couldn't see there was a new page and the alphabetical
order was disrupted when scanning down one of the columns.
This change is based on a shell implementation previously
submitted by Yejun.
Possible differences to the current version using pr:
1. pr implementations should truncate items to not overflow columns;
depending on how it's done, not truncating may be better IMHO.
2. pr implementations might balance columns differently;
we use the minimum number of lines and might end up not
using all columns, or might have fewer entries in the
last column(s).
3. we use spaces only for padding the columns; at least the GNU pr
version on my system also by default stuffs in tabs in addition
to a single space between columns. I don't see that this
behaviour is demanded by POSIX, though I might very well be
overlooking things. Anyway, for our use case I can't see a need
for the additional tabs, or why it would be better compared
to padding with spaces only.
Fixes output for sizes with width < column width, too.
Fixes remaining part of ticket #5680
Contributor: Guo, Yejun <yejun.guo@intel.com>
This patch is based on https://trac.ffmpeg.org/ticket/5680, provided by
Kylie McClain <somasis@exherbo.org> on Wed, 29 Jun 2016 16:37:20 -0400,
with some changes.
contributor: Kylie McClain <somasis@exherbo.org>
contributor: avih <avihpit@yahoo.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Add support for building FFmpeg with HW-accelerated decode and encode
on the PPC64 little-endian architecture.
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
* commit 'c0bd865ad60da31282c5d8e1000c98366249c31e':
configure: Add -D_POSIX_C_SOURCE=200112 -D_XOPEN_SOURCE=600 for mingw as well
Merged-by: James Almer <jamrial@gmail.com>
MinGW headers have inline implementations of localtime_r
and gmtime_r, but they are only visible if certain POSIX thread-safe
functions have been requested.
This is a preparatory step for improving the detection of those
functions.
Signed-off-by: Martin Storsjö <martin@martin.st>
Currently only the standard header path can be found;
check_type/struct will fail if VAAPI is installed somewhere else.
Move them after "check_pkg_config".
Reviewed-by: Mark Thompson <sw@jkqxz.net>
Reviewed-by: Timo Rothenpieler <timo@rothenpieler.org>
Signed-off-by: Zhong Li <zhong.li@intel.com>
Bump the minimum required version to the first one with the logger API callback.
Reviewed-by: Vittorio Giovara <vittorio.giovara@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
libvpx can be compiled with the VP8 decoder and encoder disabled, and
there's no reason to force their presence if the user only wants VP9.
Signed-off-by: James Almer <jamrial@gmail.com>
autorotate is enabled by default in ffmpeg, so the rotation filters
are required and their insertion will be attempted without the user's
knowledge if an input stream has rotation side data.
With all of our existing users of cuda_sdk switched over to ffnvcodec,
we could remove cuda_sdk completely and say that we should no longer
add code that requires the full sdk, and rather insist that such code
only use ffnvcodec.
As discussed previously, the use of nvcc from the sdk is still
supported with a distinct option.
Signed-off-by: Philip Langdale <philipl@overt.org>
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
This change switches the vf_thumbnail_cuda filter from using the
full cuda sdk to using the ffnvcodec headers and loader.
Most of the change is a direct mapping, but I also switched from
using texture references to using texture objects. This is supposed
to be the preferred way of using textures, and the texture object API
is the one I added to ffnvcodec.
Signed-off-by: Philip Langdale <philipl@overt.org>
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
This change switches the vf_scale_cuda filter from using the
full cuda sdk to using the ffnvcodec headers and loader.
Most of the change is a direct mapping, but I also switched from
using texture references to using texture objects. This is supposed
to be the preferred way of using textures, and the texture object API
is the one I added to ffnvcodec.
Signed-off-by: Philip Langdale <philipl@overt.org>
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
This change switches the vf_thumbnail_cuda filter from using the
full cuda sdk to using the ffnvcodec headers and loader.
Signed-off-by: Philip Langdale <philipl@overt.org>
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
The use of nvcc to compile cuda kernels is distinct from the use of
cuda sdk libraries and linking against those libraries. We have
previously not bothered to distinguish these two cases because all
the filters that used cuda kernels also used the sdk. In the following
changes, I'm going to remove the sdk dependency from those filters,
but we need a way to ensure that nvcc is present and functioning, and
also a way to explicitly disable its use so that the filters are not
built.
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
If we enable a component but a dependent library is disabled, then the
enabled component gets silently disabled. Warning about explicitly
enabled components that get disabled allows configure to show the
missing dependencies, and if --fatal-warnings is used it can also fail
if the user wants it to.
For example, if libdav1d is not available, ./configure
--enable-decoder=libdav1d succeeds but the libdav1d decoder is not
enabled. After the patch configure will warn about this:
WARNING: Disabled libdav1d_decoder because not all dependencies are satisfied: libdav1d
Signed-off-by: Marton Balint <cus@passwd.hu>
* Outputs ASS lines with basic coloring and font scaling for each
given region.
* Sets the default style to the resolution of the subtitle plane
(for example, 960x540 / 36pt font for profile A).
* Has options to:
  * Disable ruby text (which is coded in libaribb24 as regions with
    half-height text).
    Enabled by default, since without positioning, ruby text only
    causes confusion, as it is usually coded at the beginning of the
    decoded subtitle line.
  * Set the working directory, from which libaribb24 will read its
    configuration and into which it may save broadcast extra
    symbols as PNGs.
    Unset by default.
The unconventional library check can be explained by the library's
current master branch being licensed as LGPLv3, but at the time of
writing the latest official release is still licensed under GPLv3.
Thus, one either has to wait for the following release, or enable
GPLv3.
DXVA2 may be enabled even when every relevant module is disabled,
which would result in the dependency generator not adding its
extralibs to avcodec.
Fixes ticket #7642.
Signed-off-by: James Almer <jamrial@gmail.com>
The color fields were moved to another struct, and a way to propagate
timestamps and other input metadata was introduced, so the packet
fifo can be removed.
Add support for 12bit streams, an option to disable film grain, and
read the profile from the sequence header referenced by the output
picture instead of guessing based on output pix_fmt.
Signed-off-by: James Almer <jamrial@gmail.com>
Also add SIMD which works on lines, because it is faster than
calculating it on 8x8 blocks using pixelutils.
Signed-off-by: Marton Balint <cus@passwd.hu>
This is a cuda implementation of yadif, which gives us a way to
do deinterlacing when using the nvdec hwaccel. In that scenario
we don't have access to the nvidia deinterlacer.
Simple parser to set keyframes, frame type, structure, width, height, and pixel
format, plus stream profile and level.
Reviewed-by: Mark Thompson <sw@jkqxz.net>
Signed-off-by: James Almer <jamrial@gmail.com>
Also bump the API version requirement to 10.9.5, because on older
versions there were some reports of crashes using the undocumented, yet
BMDDeckLinkDeviceHandle.
Signed-off-by: Marton Balint <cus@passwd.hu>
Set the minimum version to 0.35.0 (libva 1.3.0) and remove redundant
configure tests. This also allows the proprietary libmfx fork of libva,
which always shows the version number 0.99.0 (independent of the actual
version).
Hook in libklvanc and use it for output of EIA-708 captions over
SDI. The bulk of this patch is just general support for ancillary
data for the Decklink SDI module - the real work for construction
of the EIA-708 CDP and VANC line construction is done by libklvanc.
Libklvanc can be found at: https://github.com/stoth68000/libklvanc
Updated to reflect feedback from Marton Balint <cus@passwd.hu>,
Carl Eugen Hoyos <ceffmpeg@gmail.com>, Aaron Levinson
<alevinsn_dev@levland.net>, and Moritz Barsnick <barsnick@gmx.net>
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
* commit '39f3b6f3fc2b46b405b680cce3599f1b370e342d':
configure: Move add_fooflags() helper functions into canonical order
Merged-by: James Almer <jamrial@gmail.com>
* commit '5691c746cf62e69806aae1baf0a6e8252d519444':
configure: Group toolchain parameter mangling functions together
Merged-by: James Almer <jamrial@gmail.com>
* commit '25c2a27c9ec0150210d75ee5ac8ed1bfa14c1a56':
configure: Make require_cc() and require_cpp_condition() functions consistent
Merged-by: James Almer <jamrial@gmail.com>
Also make sure we set the URL context max packet size accordingly.
Based on a patch by Tudor Suciu <tudor.suciu@gmail.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
- Allow adding deps in any order rather than "in linking order".
- Expand deps chains as required rather than just once.
- Validate that there are no cycles.
- Validate that [after expansion] deps are limited to other fflibs.
- Remove expectation for a specific output order of unique().
Previously when adding items to <fflib>_deps, developers were
required to add them in linking order. This can be awkward and
bug-prone, especially when a list is not empty, e.g. when adding
conditional deps.
It also implicitly expected unique() to keep the last instance of
recurring items such that these lists maintain their linking order
after removing duplicate items.
This patch mainly allows adding deps in any order by keeping just
one master list in linking order, and then reordering all the
<fflib>_deps lists to align with the master list order.
This master list is LIBRARY_LIST itself, where otherwise its order
doesn't matter.
The patch also removes a limit where these deps lists were expanded
only once. This could have resulted in incompletely expanded lists,
or forced devs to add already-deducible deps to avoid this issue.
Note: it is possible to deduce the master list order automatically
from the deps lists, but in this case it's probably not worth the
added complexity, even if minor. Maintaining one list should be OK.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
x4 - x25 faster.
check_deps() recursively enables/disables components, and its loop is
iterated nearly 6000 times. It's particularly slow in bash - currently
consuming more than 50% of configure runtime, and about 20% with other
shells.
This commit applies a few local optimizations, most effective first:
- Use $1 $2 ... instead of pushvar/popvar, and do the same in enable_deep*
- Abort early in one notable case - empty deps - to avoid a costly no-op.
- Smaller changes which do add up:
- Handle ${cfg}_checking locally instead of via enable[d]/disable
- ${cfg}_checking: test done before inprogress - x2 faster in 50%+
- One eval instead of several on the empty-deps early-abort path.
- The "actual work" part is unmodified - just its surroundings.
Biggest speedups (relative and absolute) are observed with bash.
Tested-by: Michael Niedermayer <michael@niedermayer.cc>
Tested-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
Tested-by: Dave Yeo <daveryeo@telus.net>
Tested-by: Reino Wijnsma <rwijnsma@xs4all.nl>
Signed-off-by: James Almer <jamrial@gmail.com>
x4 - x10 faster.
Inside print_enabled_components, the filter_list case invokes sed
about 350 times to parse the same source file and extract different
info for each arg. This is never instant, and on systems where fork is
slow (notably MSYS2/Cygwin on Windows) it takes many seconds.
Change it to use sed once on the source file and set env vars with the
parse results, then use these results inside the loop.
Additionally, the cases of indev_list and outdev_list are very
infrequent, but nevertheless they're faster, and arguably cleaner, with
shell parameter substitutions than with command substitutions.
Tested-by: Michael Niedermayer <michael@niedermayer.cc>
Tested-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
Tested-by: Dave Yeo <daveryeo@telus.net>
Tested-by: Reino Wijnsma <rwijnsma@xs4all.nl>
Signed-off-by: James Almer <jamrial@gmail.com>
x50 - x200 faster.
Currently configure spends 50-70% of its runtime inside a single
function: flatten_extralibs[_wrapper] - which does string processing.
During its run, nearly 20K command substitutions (subshells) are used,
including its callees unique() and resolve(), which is the reason
for its lengthy run.
This commit avoids all subshells during its execution, speeding it up
by about two orders of magnitude, and reducing the overall configure
runtime by 50-70%.
resolve() is rewritten to avoid subshells, and in unique() and
flatten_extralibs() we "inline" the filter[_out] functionality.
Note that logically, "unique" functionality has more than one possible
output (depending on which of the recurring items is kept). As it
turns out, other parts expect the last recurring item to be kept
(which was the original behavior of uniqie()). This patch preservs
its output order.
Tested-by: Michael Niedermayer <michael@niedermayer.cc>
Tested-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
Tested-by: Dave Yeo <daveryeo@telus.net>
Tested-by: Reino Wijnsma <rwijnsma@xs4all.nl>
Signed-off-by: James Almer <jamrial@gmail.com>
Some containers, like Matroska, may propagate key frames with no Sequence
Header OBU since it's provided in extradata instead.
With this change, the Sequence Header will be appended to the packet data
before calling aom_codec_decode().
Signed-off-by: James Almer <jamrial@gmail.com>
Also remove the superfluous aandcttables dependency from all the modules
that only need it because of mpegvideoenc
Fixes ticket #7333
Signed-off-by: James Almer <jamrial@gmail.com>
And add it to the CONFIGURABLE_COMPONENTS list in Makefile. This way, changes
to the new file will be tracked and the usual warning to suggest re-running
configure will be shown.
Reviewed-by: Rostislav Pehlivanov <atomnuker@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
aom_codec_get_global_headers() is not implemented as of libaom 1.0.0 for AV1, so
we're forced to extract the relevant header OBUs from the first packet and propagate
them as packet side data.
Signed-off-by: James Almer <jamrial@gmail.com>
Lensfun is a library that applies lens correction to an image using a
database of cameras/lenses (you provide the camera and lens models, and
it uses the corresponding database entry's parameters to apply lens
correction). It is licensed under LGPL3.
The lensfun filter utilizes the lensfun library to apply lens
correction to videos as well as images.
This filter was created out of necessity since I wanted to apply lens
correction to a video and the lenscorrection filter did not work for me.
While this filter requires little info from the user to apply lens
correction, the flaw is that lensfun is intended to be used on
individual images. When used on a video, parameters such as the focal
length are constant, so lens correction may fail on videos where the
camera's focal length changes (zooming in or out via a zoom lens). To
use this filter
correctly on videos where such parameters change, timeline editing may
be used since this filter supports it.
Note that valgrind shows a small memory leak which is not from this
filter but from the lensfun library (memory is allocated when loading
the lensfun database but it somehow isn't deallocated even during
cleanup; it is briefly created in the init function of the filter, and
destroyed before the init function returns). This may have been fixed by
the latest commit in the lensfun repository; the most recent release
of lensfun is almost 3 years old.
Bilinear interpolation is used by default, as Lanczos interpolation
shows more artifacts in the corrected image in my tests.
The Lanczos interpolation is derived from lenstool's implementation of
Lanczos interpolation. Lenstool is an app within the lensfun repository
which is licensed under GPL3.
v2 of this patch fixes license notice in libavfilter/vf_lensfun.c
v3 of this patch fixes code style and the dependency on GPLv3 (thanks to
Paul B Mahol for pointing out the mentioned issues).
v4 of this patch fixes more code style issues that were missed in
v3.
v5 of this patch adds line breaks to some of the documentation in
doc/filters.texi (thanks to Gyan Doshi for pointing out the issue).
v6 of this patch fixes more problems (thanks to Moritz Barsnick for
pointing them out).
v7 of this patch fixes use of sqrt() (changed to sqrtf(); thanks to
Moritz Barsnick for pointing this out). It is also rebased on top of
the latest master branch commits at this point.
Signed-off-by: Stephen Seo <seo.disparate@gmail.com>
This commit implements a full ATRAC9 decoder, a simple low-delay codec
developed by Sony and used in most PSVita games, some PS3 games and some
PS4 games. Its similar to AAC in that it uses Huffman coded scalefactors
but instead of vector quantization it just Huffman codes the spectral
coefficients (in a way similar to how Opus splits band energy coding
into coarse and fine precision). It opts to write rather large Huffman
codes by packing several small coefficients into one Huffman coded
symbol, though I don't believe this increases efficiency at all.
Band extension implements SBC in a simple way: first it mirrors the
lower spectrum onto the higher frequencies and then it uses one of 5
filters to shape it. Noise substitution is implemented via 2 of them.
Unlike previous ATRAC codecs, there's no QMF, this is a standard MDCT
codec.
Based on the reverse-engineering work of Alex Barney.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
opencl_vaapi_intel_media doesn't depend on libmfx; OpenCL™ Drivers
and Runtimes for Intel® Architecture is a standalone release. More
information can be found at the link:
https://software.intel.com/en-us/articles/opencl-drivers.
Signed-off-by: Jun Zhao <mypopydev@gmail.com>
This filter does HDR (HDR10/HLG) to SDR conversion with tone-mapping.
An example command to use this filter with vaapi codecs:
ffmpeg -init_hw_device vaapi=va:/dev/dri/renderD128 -init_hw_device \
opencl=ocl@va -hwaccel vaapi -hwaccel_device va -hwaccel_output_format \
vaapi -i INPUT -filter_hw_device ocl -filter_complex \
'[0:v]hwmap,tonemap_opencl=t=bt2020:tonemap=linear:format=p010[x1]; \
[x1]hwmap=derive_device=vaapi:reverse=1' -c:v hevc_vaapi -profile 2 OUTPUT
Signed-off-by: Ruiling Song <ruiling.song@intel.com>