It is unnecessary as the number of static inputs and outputs can now
be directly read via AVFilter.nb_(in|out)puts.
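For illustration, a minimal sketch of the direct read (the filter name
is just an example):

    const AVFilter *f = avfilter_get_by_name("scale");
    if (f) {
        unsigned nb_in  = f->nb_inputs;  /* static input pads  */
        unsigned nb_out = f->nb_outputs; /* static output pads */
    }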
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It is intended as a replacement for avfilter_pad_count(). In contrast to
the latter, it avoids a loop.
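A minimal usage sketch of the new function (the second argument selects
output pads when nonzero; the filter name is just an example):

    const AVFilter *f = avfilter_get_by_name("overlay");
    unsigned nb_in  = avfilter_filter_pad_count(f, 0); /* input pads  */
    unsigned nb_out = avfilter_filter_pad_count(f, 1); /* output pads */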
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
In particular, document that av_opt_copy() always disentangles
allocated options even on error; this guarantee is needed e.g. to
properly free duplicated thread contexts in libavcodec on error.
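A sketch of the error path this guarantee enables (dst and src stand
for any AVOptions-enabled structs):

    int ret = av_opt_copy(dst, src);
    if (ret < 0) {
        /* Even on failure, allocated options in dst no longer alias
         * those in src, so freeing dst cannot double-free src's. */
        av_opt_free(dst);
    }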
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The generic av_image_copy_uc_from() doesn't really fit the Vulkan case
because some planes may be copied via other methods (such as mapping
GPU memory), and for planes that don't satisfy the strict alignment
requirements, a GPU image -> GPU buffer -> CPU RAM copy is performed.
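For reference, a sketch of the generic whole-frame call (src and dst
are assumed to be AVFrames of matching format and dimensions, and the
strict alignment requirements apply):

    ptrdiff_t src_ls[4], dst_ls[4];
    for (int i = 0; i < 4; i++) {
        src_ls[i] = src->linesize[i];
        dst_ls[i] = dst->linesize[i];
    }
    /* Copy all planes from uncacheable (e.g. mapped GPU) memory. */
    av_image_copy_uc_from(dst->data, dst_ls,
                          (const uint8_t **)src->data, src_ls,
                          dst->format, dst->width, dst->height);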
We need this for hwcontext_vulkan, and I think this will also be
useful to API users like libplacebo who would rather not write
a custom SIMD memcpy.
common.h currently contains several things: math macros, UTF-8 macros
and other fundamental macros; it also contains miscellaneous math
functions and (directly and indirectly) includes lots of other
headers.
This commit moves the "other fundamental macros" to macros.h which is
a more fitting place.
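After this change, code needing only these macros can include the
lighter header directly; a small sketch (assuming e.g. MKTAG and
FFALIGN are among the moved macros):

    #include "libavutil/macros.h"

    int aligned  = FFALIGN(1920, 64);        /* already aligned: 1920 */
    unsigned tag = MKTAG('a', 'v', 'c', '1');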
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
In dd770883e9, support for expressions was added. Among the constants
added were labels for qntsc, qpal, sntsc & spal.
These were added in ba2a8cb40b to represent parameter permutations where
only the resolution is different. They are not in common use and do not
represent any industry standard or convention in terms of framerate.
These bits are reserved in earlier versions of the H.264 spec, and
some poor hardware decoders require them to be zero. Thus, it is useful
to be able to zero these on streams that may have them set. The result
is still a valid H.264 bitstream.
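Assuming this is exposed as an h264_metadata bitstream filter option
(the option name below is an assumption; check the bsf documentation
for the final spelling), usage would look like:

    # zero the reserved constraint_set flags while stream-copying
    ffmpeg -i in.mp4 -c:v copy \
           -bsf:v h264_metadata=zero_new_constraint_set_flags=1 out.mp4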
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Support single input for the guided filter by adding a guidance mode.
If the guidance mode is off, a single input is required and
edge-preserving smoothing is performed. If the mode is on, two
inputs are needed; the second input serves as the guidance. In
this mode, more tasks are supported, such as detail enhancement,
dehazing and so on.
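Usage sketches for both modes (input files and stream labels are
examples):

    # guidance=off: single input, edge-preserving smoothing
    ffmpeg -i in.png -vf guided=guidance=off out.png
    # guidance=on: the second input serves as the guidance
    ffmpeg -i in.png -i guide.png \
           -filter_complex "[0:v][1:v]guided=guidance=on" out.png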
Signed-off-by: Xuewei Meng <xwmeng96@gmail.com>
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
HDR10+ metadata is stored in the bitstream for HEVC. The situation is
different for VP9, which cannot store the metadata in the bitstream;
for VP9, HDR10+ has to be passed as packet side data and stored in the
container (mkv).
This change takes HDR10+ from the AVFrame side data in libvpxenc and
passes it to the AVPacket side data.
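Conceptually, the hand-over looks like the following sketch (the
actual patch additionally has to match each packet to its source
frame):

    /* inside the encoder's frame->packet path */
    AVFrameSideData *sd =
        av_frame_get_side_data(frame, AV_FRAME_DATA_DYNAMIC_HDR_PLUS);
    if (sd) {
        uint8_t *dst = av_packet_new_side_data(
            pkt, AV_PKT_DATA_DYNAMIC_HDR10_PLUS, sd->size);
        if (dst)
            memcpy(dst, sd->data, sd->size);
    }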
Reviewed-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Signed-off-by: James Zern <jzern@google.com>
Broken in 753930bc73, as the path to
Doxyfile passed to doxy-wrapper.sh is relative to the build dir, while
the recipe cd's to the source dir before invoking the wrapper.
It can be useful to library users, and it is currently being used by ffmpeg.c.
Suggested-by: Hendrik Leppkes <h.leppkes@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
The user should not rely on all options always being recognized
(in particular not on error).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This feature can be used with dnn detection by setting vf_drawtext's option
text_source=side_data_detection_bboxes, for example:
./ffmpeg -i face.jpeg -vf dnn_detect=dnn_backend=openvino:model=face-detection-adas-0001.xml:\
input=data:output=detection_out:labels=face-detection-adas-0001.label,drawbox=box_source=
side_data_detection_bboxes,drawtext=text_source=side_data_detection_bboxes:fontcolor=green:\
fontsize=40 -y face_detect.jpeg
Please note that the default fontsize of vf_drawtext is 12, which may
be too small to be seen clearly.
Signed-off-by: Ting Fu <ting.fu@intel.com>
This feature can be used with dnn detection by setting vf_drawbox's
option box_source=side_data_detection_bboxes, for example:
./ffmpeg -i face.jpeg -vf dnn_detect=dnn_backend=openvino:model=face-detection-adas-0001.xml:\
input=data:output=detection_out:labels=face-detection-adas-0001.label,\
drawbox=box_source=side_data_detection_bboxes -y face_detect.jpeg
Signed-off-by: Ting Fu <ting.fu@intel.com>
With some minor changes by Marton Balint:
- removed trailing whitespace
- fixed network_descriptors_length
- fixed reserved_future_use flag in the start of the section
- removed unused program variable
- emit first NIT after PAT
- some other cosmetics
Signed-off-by: Ubaldo Porcheddu <ubaldo@eja.it>
Signed-off-by: Marton Balint <cus@passwd.hu>
Two modes are supported in the guided filter: basic mode and fast mode.
Basic mode is the guided filter as initially pushed, without optimization.
Fast mode is implemented on top of the basic one via a sub-sampling method.
The sub-sampling ratio, which can be set by users, controls the
algorithm complexity: the larger the sub-sampling ratio, the lower
the algorithm complexity.
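For example (assuming the ratio is exposed as the sub option):

    # fast mode with sub-sampling ratio 4
    ffmpeg -i in.png -vf guided=mode=fast:sub=4 out.png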
Signed-off-by: Xuewei Meng <xwmeng96@gmail.com>
Reviewed-by: Steven Liu <liuqi05@kuaishou.com>
Commit 95b854dd06 ("rename sum option to normalize") missed the
commands part of the docs.
Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
Reviewed-by: Gyan Doshi <ffmpeg@gyani.pro>
Add examples on how to use this filter, and improve the code style.
Implement slice-level parallelism for the guided filter.
Add the basic version of guided filter.
Signed-off-by: Xuewei Meng <xwmeng96@gmail.com>
Reviewed-by: Steven Liu <liuqi05@kuaishou.com>
Classification is done on every detection bounding box in the frame's
side data; these boxes are the results of object detection (filter
dnn_detect).
Please refer to the commit log of dnn_detect for the detection
material, and see below for classification.
- download material for classification:
wget https://github.com/guoyejun/ffmpeg_dnn/raw/main/models/openvino/2021.1/emotions-recognition-retail-0003.bin
wget https://github.com/guoyejun/ffmpeg_dnn/raw/main/models/openvino/2021.1/emotions-recognition-retail-0003.xml
wget https://github.com/guoyejun/ffmpeg_dnn/raw/main/models/openvino/2021.1/emotions-recognition-retail-0003.label
- run command as:
./ffmpeg -i cici.jpg -vf dnn_detect=dnn_backend=openvino:model=face-detection-adas-0001.xml:\
input=data:output=detection_out:confidence=0.6:labels=face-detection-adas-0001.label,\
dnn_classify=dnn_backend=openvino:model=emotions-recognition-retail-0003.xml:input=data:\
output=prob_emotion:confidence=0.3:labels=emotions-recognition-retail-0003.label:\
target=face,showinfo -f null -
We'll see the detection & classification results as below:
[Parsed_showinfo_2 @ 0x55b7d25e77c0] side data - detection bounding boxes:
[Parsed_showinfo_2 @ 0x55b7d25e77c0] source: face-detection-adas-0001.xml, emotions-recognition-retail-0003.xml
[Parsed_showinfo_2 @ 0x55b7d25e77c0] index: 0, region: (1005, 813) -> (1086, 905), label: face, confidence: 10000/10000.
[Parsed_showinfo_2 @ 0x55b7d25e77c0] classify: label: happy, confidence: 6757/10000.
[Parsed_showinfo_2 @ 0x55b7d25e77c0] index: 1, region: (888, 839) -> (967, 926), label: face, confidence: 6917/10000.
[Parsed_showinfo_2 @ 0x55b7d25e77c0] classify: label: anger, confidence: 4320/10000.
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>