Previously, when streaming with ffmpeg, multiple clients were only supported by
using a multicast destination address. An alternative was to stream to a server which
re-distributes the content. This commit adds ZeroMQ as a protocol, which allows
multiple clients to connect to a single ffmpeg instance.
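A rough usage sketch (assuming a build with --enable-libzmq and that the protocol is
exposed under the zmq: scheme; the exact URL syntax is an assumption):

  ffmpeg -re -i input.mp4 -c copy -f mpegts zmq:tcp://127.0.0.1:5555
  ffplay -f mpegts zmq:tcp://127.0.0.1:5555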
Signed-off-by: Marton Balint <cus@passwd.hu>
This is an alias for JEDEC P22.
The name associated with the value is also changed
from jedec-p22 to ebu3213 to match ITU-T H.273.
Signed-off-by: Raphaël Zumer <rzumer@tebako.net>
Signed-off-by: James Almer <jamrial@gmail.com>
Add Linux support for the AMF encoder through Vulkan.
To use the H.264 (AMD VCE) encoder on Linux, amdgpu-pro version 19.20+ and the
amf-amdgpu-pro package are required (the amdgpu-pro archive contains it, but
does not install it automatically).
The driver can be installed using the amdgpu-pro-install script in the
official AMD driver archive.
Initialization of the AMF encoder occurs in this order:
1) try to initialize through DX11 (Windows only)
2) try to initialize through DX9 (Windows only)
3) try to initialize through Vulkan
Only Vulkan initialization is available on Linux.
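A hedged usage example once the driver and package are installed (h264_amf is the
existing encoder name):

  ffmpeg -i input.mp4 -c:v h264_amf -b:v 5M output.mp4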
Add support for the dehaze filter to the existing derain filter source
code. As the processing procedure in FFmpeg is the same for the current
derain and dehaze filters, we reuse the derain filter source code. The
model training and generation scripts are in the repository at
https://github.com/XueweiMeng/derain_filter.git
Reviewed-by: Steven Liu <lq@onvideo.cn>
Signed-off-by: Xuewei Meng <xwmeng96@gmail.com>
The packet-counting-based approach caused excessive SDT/PAT/PMT emission for VBR, so
let's use a timestamp-based approach instead, similar to how we emit PCRs.
The SDT/PAT/PMT period should be consistent for both VBR and CBR from now on.
Also change the type of sdt_period and pat_period to AV_OPT_TYPE_DURATION so no
floating point math is necessary.
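With AV_OPT_TYPE_DURATION the periods can be given as times, e.g. (a sketch using the
existing sdt_period/pat_period muxer options):

  ffmpeg -i input.mp4 -c copy -f mpegts -pat_period 0.1 -sdt_period 0.5 output.ts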
Fixes ticket #3714.
Signed-off-by: Marton Balint <cus@passwd.hu>
Add the usage of a TensorFlow model in the derain filter. Training scripts
as well as scripts for tf/native model generation are provided in the
repository at https://github.com/XueweiMeng/derain_filter.git.
Reviewed-by: Steven Liu <lq@onvideo.cn>
Signed-off-by: Xuewei Meng <xwmeng96@gmail.com>
These functions can be used to print a variable number of strings consecutively
to the IO context. Unlike av_bprintf, no temporary buffer is necessary.
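A minimal sketch of the intended use; the helper names avio_print_string_array() and
the avio_print() convenience macro are assumptions, since they are not spelled out
above:

  #include <libavformat/avio.h>

  static void write_header_line(AVIOContext *pb, const char *name, const char *value)
  {
      /* Write the strings one after another straight to the IO context,
       * with no intermediate buffer (unlike av_bprintf()). */
      avio_print(pb, name, ": ", value, "\r\n");
  }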
Signed-off-by: Marton Balint <cus@passwd.hu>
This patch adds a new option to the scale filter which ensures that the
output resolution is divisible by the given integer when used together
with `force_original_aspect_ratio`. This works similarly to using `-n` in
the `w` and `h` options.
This option respects the value set for `force_original_aspect_ratio`,
increasing or decreasing the resolution accordingly.
The use case for this is to set a fixed target resolution using `w` and
`h`, to use the `force_original_aspect_ratio` option to make sure that
the video always fits in the defined bounding box regardless of aspect
ratio, but to also make sure that the calculated output resolution is
divisible by n so it can be encoded with certain encoders/options if
that is required.
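A usage sketch, assuming the new option is called force_divisible_by (fit into a
1280x720 bounding box while keeping both dimensions divisible by 2):

  ffmpeg -i input.mp4 -vf "scale=1280:720:force_original_aspect_ratio=decrease:force_divisible_by=2" output.mp4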
Signed-off-by: Lars Kiesow <lkiesow@uos.de>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The answer which most people will seek is put first.
Reviewed-by: Thilo Borgmann <thilo.borgmann@mail.de>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Only add the sequence end code for MPEG-1/MPEG-2 video; otherwise, when using
the libx264 or libx265 encoder in this sample, decoding the output file will
produce an unknown NALU type error.
Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
Add the http_seekable option for HTTP partial requests. When the
EXT-X-BYTERANGE tag indicates that a Media Segment is a sub-range
of the resource identified by its URI, we can use HTTP partial
requests to get the Media Segment.
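A usage sketch (assuming http_seekable is exposed as an HLS demuxer option, with 1
forcing HTTP partial requests):

  ffmpeg -http_seekable 1 -i https://example.com/media/playlist.m3u8 -c copy output.mp4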
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
Simply moves and templates the actual transforms to support an
additional data type.
Unlike the float version, which is equal to or better than libfftw3f, the
double precision output is bit-identical to libfftw3.
The earlier version had three deficits:
1. It allowed setting the stream to RGB although this is not allowed when
the profile is 0 or 2.
2. If it set the stream to RGB, it did not automatically set the
range to full range; the result was that one got a warning every time a
frame with a color_config element was processed if the frame originally
had TV range and the user didn't explicitly choose PC range. Now one
gets only one warning in such a situation.
3. Intra-only frames in profile 0 are automatically BT.601, but if the
user wished for another color space, they were not informed that this wish
could not be fulfilled.
The commit also improves the documentation about this.
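A usage sketch, assuming the bitstream filter keeps its color_space and color_range
options:

  ffmpeg -i input.webm -c:v copy -bsf:v vp9_metadata=color_space=bt709:color_range=pc output.webm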
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The dump_extra bitstream filter currently simply adds the extradata to
the packets indicated by the user without checking whether said
extradata already exists in the packets. Besides wasting space,
duplicated extradata in the same packet/access unit is also forbidden
for some codecs, e.g. MPEG-2.
This check has been added to be able to use the mpeg2_qsv encoder (which
only adds the sequence headers to the first packet) in broadcast
scenarios where repeating sequence headers are required.
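The broadcast use case mentioned above looks roughly like this (a sketch):

  ffmpeg -i input.ts -c:v mpeg2_qsv -bsf:v dump_extra -f mpegts output.ts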
The check used here is not perfect: e.g. dump_extra would still add the
extradata to an H.264 access unit consisting of an access unit delimiter,
SPS, PPS and slices.
Fixes #8007.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Changes to vf_drawtext.c written by
Calvin Walton <calvin.walton@kepstin.ca>
Changes to filters.texi written by
greg Luce <electron.rotoscope@gmail.com>
with lots of help from Moritz Barsnick and Gyan
Fixes #7947.
Currently, master playlist and subtitle playlist creation does not use
temporary files even when the temp_file flag is set. In most use cases
this is not a problem because master playlist creation happens once at the
beginning of the whole process. But if the master playlist is periodically
re-created because master_pl_refresh_rate is set, non-atomic playlist
creation may cause problems in case of live streaming. This patch
corrects this behavior by adding this functionality.
Reviewed-by: Steven Liu <lq@onvideo.cn>
Signed-off-by: Bela Bodecs <bodecsb@vivanet.hu>
When multiple variant streams are specified by the var_stream_map option,
%v is expected either in the filename or in the last sub-directory name,
but only in one of them. When both of them contain the %v string, the
current error message only states half of the truth.
Moreover, %v may appear several times inside the last sub-directory name
or in the filename pattern.
This patch clarifies this in the log message and in the documentation as well.
Signed-off-by: Bela Bodecs <bodecsb@vivanet.hu>
When multiple variant streams are specified by the var_stream_map option, the
%v placeholder in various names ensures that each variant has its own unique
names. Currently %v is substituted by the variant's index value (0, 1, 2,
etc.). In some use cases it would be handy to specify names for the variants
instead of numerical indexes. This patch makes it possible to use names
instead of the default indexes. In the var_stream_map option, each or some of
the variant streams may use an optional name attribute (e.g.
-var_stream_map "v:0,a:0,name:sd v:1,a:1,name:720p"). If a name is
specified for a variant, then this name value will be used as the
substitution value of %v instead of the default index value.
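A fuller sketch (per-variant encoding settings omitted; master_pl_name is the existing
hlsenc option for the master playlist; %v in the playlist/segment names is replaced by
"sd" and "720p"):

  ffmpeg -i input.mp4 -map 0:v -map 0:a -map 0:v -map 0:a \
    -var_stream_map "v:0,a:0,name:sd v:1,a:1,name:720p" \
    -master_pl_name master.m3u8 -f hls out_%v.m3u8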
Signed-off-by: Bela Bodecs <bodecsb@vivanet.hu>
Signed-off-by: Steven Liu <lq@onvideo.cn>
FF_DECODE_ERROR_CONCEALMENT_ACTIVE is set when the decoded frame has errors but the value returned by
avcodec_receive_frame() is zero, i.e. the errors were concealed.
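A minimal sketch of how a caller might check the flag (decode_error_flags is the
existing AVFrame field):

  #include <libavutil/frame.h>

  /* Call after avcodec_receive_frame() has returned 0. */
  static int frame_was_concealed(const AVFrame *frame)
  {
      return !!(frame->decode_error_flags & FF_DECODE_ERROR_CONCEALMENT_ACTIVE);
  }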
Signed-off-by: Amir Pauker <amir@livelyvideo.tv>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Remove rain from the input image/video by applying derain
methods based on convolutional neural networks. Training scripts
as well as scripts for model generation are provided in the
repository at https://github.com/XueweiMeng/derain_filter.git.
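A usage sketch, assuming the filter exposes the usual DNN options dnn_backend and
model, with the model file generated by the scripts in the repository above:

  ffmpeg -i input.mp4 -vf derain=dnn_backend=native:model=derain.model output.mp4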
Signed-off-by: Xuewei Meng <xwmeng96@gmail.com>
The default is to dump extradata to keyframes, not all frames.
Also improve the description of the relevant AVOption.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Explain that the default Lanczos filter parameter is 3 and that it can be
changed by the param0 option.
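For example (a sketch; param0 is the existing scaler-parameter pass-through of the
scale filter):

  ffmpeg -i input.mp4 -vf scale=1280:720:flags=lanczos:param0=2 output.mp4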
Signed-off-by: Werner Robitza <werner.robitza@gmail.com>
ff_filter_get_nb_threads() respects both AVFilterContext.nb_threads and
graph->nb_threads; in most cases we prefer this API over using
ctx->graph->nb_threads directly.
Reviewed-by: Steven Liu <lq@onvideo.cn>
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
This fixes the description of the values for src_range and dst_range to
include the possible values and their meanings.
Signed-off-by: Werner Robitza <werner.robitza@gmail.com>
Signed-off-by: Gyan Doshi <ffmpeg@gyani.pro>
Because communication with the original repository's author has been lost,
add a new sr filter script repository.
Thanks to Gyan Doshi for the suggestion.
Signed-Off-By: Steven Liu <lq@chinaffmpeg.org>
Directed to the AviSynth+ entry on the AviSynth Wiki rather than to
the GitHub repository, since the wiki page is both more informative
and has the relevant Git/download links. The GitHub releases page
is little more than a changelog.
Clean up "applehttp" as a demuxer name. When using the command
"ffmpeg -formats", one gets confusing information like:
"
E hls Apple HTTP Live Streaming
D hls,applehttp Apple HTTP Live Streaming
"
We don't usually use applehttp as the demuxer/muxer name, so
clean up applehttp and update the documentation.
After the change, "ffmpeg -formats" shows:
"
DE hls Apple HTTP Live Streaming
"
Reviewed-by: Steven Liu <lq@onvideo.cn>
Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
Fix ticket #7297.
The current setting for the send-expect-100 option is either
enabled if applicable or force-enabled; there is no option to force-disable
the header. This change expands the option settings to provide more
flexibility, which is useful for the RTSP case.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The lensfun filter wraps the lensfun library which performs
transformations on videos to correct for lens distortion. Often this
results in areas in the input being mapped to areas that fall outside
the boundaries of the output. The library has a parameter called scale
which is a scale factor applied to the output video. By decreasing it, it
is possible to regain the areas of the video which would otherwise have
been lost. There is a special value of 0 which indicates that the
library should automatically determine a scale factor that results in
the output frame being filled (i.e. little or no black/unmapped areas).
This patch adds a corresponding scale option to the lensfun filter which
is passed through to the library. The existing behaviour of using the
automatic value of 0 is retained as the default behaviour, while other
values will be passed through to the library.
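A usage sketch with placeholder camera and lens names (make, model and lens_model are
the filter's lens-selection options; replace the uppercase placeholders with real
values):

  ffmpeg -i input.mp4 -vf "lensfun=make=CAMERA_MAKE:model=CAMERA_MODEL:lens_model=LENS_MODEL:scale=0.95" output.mp4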
Signed-off-by: Daniel Playfair Cal <daniel.playfair.cal@gmail.com>
Nobody is going to check the queue anymore, so users must now subscribe to
send messages to ffmpeg-devel. This will prevent orphaned/ignored messages
from rotting in the abandoned queue. This matches the behavior of ffmpeg-user
and libav-user.
Also, this addresses some other nits.
Signed-off-by: Lou Logan <lou@lrcd.com>
set_metadata with many entries is not very efficient, and with small audio
frames the performance loss is noticeable. Also, with this change, very simple
calculations (like peak) can be optimized even further.
Unfortunately there are some small differences in the metadata and av_log info
output, so factorizing the calculations and output might not be worth the hassle.
Signed-off-by: Marton Balint <cus@passwd.hu>
This removes lots of code duplication and also allows more complex specifiers,
for example you can use p:204:a:m:language:eng to select the English language
audio stream from program 204.
Signed-off-by: Marton Balint <cus@passwd.hu>
* Outputs ASS lines with basic coloring and font scaling for each
given region.
* Sets the default style to the resolution of the subtitle plane
(for example, 960x540 / 36pt font for profile A).
* Has options to:
* Disable ruby text (which is coded as regions which have
half-height text in libaribb24).
Enabled by default, as without positioning the ruby text is only
confusing, since it is usually coded at the beginning of the decoded
subtitle line.
* Set the working directory, in which libaribb24 will read
configuration as well as into which it may save broadcast extra
symbols as PNG.
Unset by default.
The unconventional library check can be explained by the library's
current master branch being licensed as LGPLv3, but at the time of
writing the latest official release is still licensed under GPLv3.
Thus, one either has to wait for the following release, or enable
GPLv3.
This attaches the logic of picking the mode of the next picture to
the output, which simplifies some choices by removing the concept of
the picture for which input is not yet available. At the same time,
we allow more complex reference structures and track more reference
metadata (particularly the contents of the DPB) for use in the
codec-specific code.
It also adds flags to explicitly track the available features of the
different codecs. The new structure also allows open-GOP support, so
that is now available for codecs which can do it.
Encoders such as libx264 support different QP offsets for different MBs,
which makes ROI-based encoding possible. It makes sense to add support
within ffmpeg to generate/accept ROI info and pass it into encoders.
Typical usage: after an AVFrame is decoded, an ffmpeg filter or the user's
code generates ROI info for that frame, and the encoder finally does the
ROI-based encoding.
The ROI info is maintained as side data of the AVFrame.
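A minimal sketch of attaching one ROI to a decoded frame before it is sent to the
encoder (AVRegionOfInterest / AV_FRAME_DATA_REGIONS_OF_INTEREST are the side-data
types referred to above; the exact field usage here is an assumption):

  #include <libavutil/frame.h>

  /* Mark the top-left quarter of the frame for better quality. */
  static int add_roi(AVFrame *frame)
  {
      AVFrameSideData *sd = av_frame_new_side_data(frame,
          AV_FRAME_DATA_REGIONS_OF_INTEREST, sizeof(AVRegionOfInterest));
      AVRegionOfInterest *roi;

      if (!sd)
          return AVERROR(ENOMEM);
      roi = (AVRegionOfInterest *)sd->data;
      roi->self_size = sizeof(*roi);
      roi->top       = 0;
      roi->bottom    = frame->height / 2;
      roi->left      = 0;
      roi->right     = frame->width / 2;
      roi->qoffset   = (AVRational){ -1, 10 }; /* negative offset = lower QP */
      return 0;
  }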
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
This commit adds configuration options to libvpxenc.c that can be used to
tune the sharpness parameter for VP8 and VP9.
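For example (a sketch, assuming the option is exposed as -sharpness on the libvpx
wrappers):

  ffmpeg -i input.mp4 -c:v libvpx-vp9 -b:v 2M -sharpness 4 output.webm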
Signed-off-by: Rene Claus <rclaus@google.com>
Signed-off-by: James Zern <jzern@google.com>
Apple doesn't have an official spec for LHLS. Meanwhile, the hls.js player folks are
trying to standardize an open LHLS spec. The draft spec is available at https://github.com/video-dev/hlsjs-rfcs/blob/lhls-spec/proposals/0001-lhls.md
This option will also try to comply with the above open spec, until Apple's spec officially supports it.
Applicable only when the @var{streaming} and @var{hls_playlist} options are enabled.
When dashenc has to run for a long duration (say a 24x7 live stream), one can enable this option to ignore the I/O failure of a few segments' uploads due to intermittent network issues.
When the network connection recovers, dashenc will continue with the upload of the current segments, leading to the recovery of the stream.
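A long-running live setup using both of these dashenc options might look roughly like
this (the option names lhls and ignore_io_errors are assumptions inferred from the
descriptions above):

  ffmpeg -re -i input -c:v libx264 -c:a aac -f dash -streaming 1 -hls_playlist 1 \
    -lhls 1 -ignore_io_errors 1 http://example.com/live/out.mpd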
The file name template options now support a new "$ext$" placeholder,
which is replaced with a filename extension specific to the selected
file format. This is useful for the new "auto" format mode, when
different streams may use different file formats, and it is not
possible to specify the correct file name extension exactly.
Resolves warnings in the log about webm segments not having webm extensions.
This commit restores the ability to create DASH streams with codecs
that require different containers, which was lost after commit
2efdbf7367. It adds a new "auto" value for
the dash_segment_type option and makes it the default. When in this mode,
the segment format will be chosen based on the codec used in the stream:
webm for Vorbis, Opus, VP8 or VP9, mp4 otherwise.
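A sketch combining the two features; init_seg_name/media_seg_name are existing dashenc
options, and with dash_segment_type auto the "$ext$" placeholder expands to a
container-appropriate extension per stream:

  ffmpeg -i input -map 0:v -map 0:a -c:v libvpx-vp9 -c:a aac -f dash \
    -dash_segment_type auto \
    -init_seg_name 'init-$RepresentationID$.$ext$' \
    -media_seg_name 'seg-$RepresentationID$-$Number$.$ext$' out.mpd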
This commit adds configuration options to libvpxenc.c that can be used to
enable VP8 temporal scalability. It also adds a way to programmatically set the
per-frame encoding flags which can be used to control usage and updates of
reference frames while encoding with temporal scalability enabled.
Signed-off-by: James Zern <jzern@google.com>
This is a cuda implementation of yadif, which gives us a way to
do deinterlacing when using the nvdec hwaccel. In that scenario
we don't have access to the nvidia deinterlacer.
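A usage sketch for that scenario (decode with the cuda/nvdec hwaccel, keep frames on
the GPU, deinterlace, then encode with nvenc):

  ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.ts \
    -vf yadif_cuda -c:v h264_nvenc output.mp4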
This fixes the grammar of two HLS option descriptions and makes them less
ambiguous.
Signed-off-by: Werner Robitza <werner.robitza@gmail.com>
Signed-off-by: Lou Logan <lou@lrcd.com>
This is needed because of 32-bit float formats (which are difficult to
store in 16 bits).
This also fixes undefined behavior found by FATE.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This adds common code to query driver support and set appropriate
address/size information for each slice. It only supports rectangular
slices for now, since that is the most common use-case.
Several SRT options are missing. Since pkg-config requires libsrt v1.3.0 and above, it should be possible to support the options added in libsrt v1.3.0 and below.
This commit adds 8 SRT options:
sndbuf, rcvbuf, lossmaxttl, minversion, streamid, smoother, messageapi and transtype.
The option keys are equivalent to those of stransmit.
https://github.com/Haivision/srt/blob/v1.3.0/apps/socketoptions.hpp#L196-L223
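For example, the new options can be passed like any other protocol options (a sketch):

  ffmpeg -re -i input.mp4 -c copy -f mpegts \
    -transtype live -streamid mystream srt://127.0.0.1:9000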
Signed-off-by: Marton Balint <cus@passwd.hu>
Allows arrangement of multiple windows such as:
ffmpeg -re -f lavfi -i mandelbrot -f sdl -window_x 1 -window_y 1 mandelbrot -vf waveform,format=yuv420p -f sdl -window_x 641 -window_y 1 waveform -vf vectorscope,format=yuv420p -f sdl -window_x 1 -window_y 481 vectorscop
Some changes by Marton Balint:
- allow negative position (partially or fully out-of-screen positions seem to
be sanitized automatically by SDL (or my WM?), so no special handling is
needed)
- only show window after the position is set
- do not use resizable and borderless flags at the same time, that caused
issues in ffplay
- add docs
Signed-off-by: Marton Balint <cus@passwd.hu>
The existing av_mediacodec_release_buffer allows the user to render
or discard the Surface-backed frame. This new method allows the user
to control exactly when the frame will be rendered to its SurfaceView.
Available since Android API 21.
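A minimal sketch, assuming the new function is av_mediacodec_render_buffer_at_time()
(the Surface-backed AVMediaCodecBuffer travels in frame->data[3] for
AV_PIX_FMT_MEDIACODEC frames):

  #include <libavcodec/mediacodec.h>
  #include <libavutil/frame.h>

  /* Render the frame at an absolute time, in nanoseconds, instead of
   * releasing it immediately. */
  static int render_at(AVFrame *frame, int64_t render_time_ns)
  {
      AVMediaCodecBuffer *buffer = (AVMediaCodecBuffer *)frame->data[3];
      return av_mediacodec_render_buffer_at_time(buffer, render_time_ns);
  }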
Signed-off-by: Aman Gupta <aman@tmm1.net>
This allows switching between absolute (LUFS) and relative (LU) display
in the status line.
Signed-off-by: Daniel Molkentin <daniel@molkentin.de>
Signed-off-by: Conrad Zelck <c.zelck@imail.de>
This eases meeting the target level during live mixing.
Signed-off-by: Daniel Molkentin <daniel@molkentin.de>
Signed-off-by: Conrad Zelck <c.zelck@imail.de>
Allow showing short-term instead of momentary loudness in the gauge. Useful for
monitoring whilst live mixing.
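A sketch combining this with the LU/LUFS display change above (assuming the options
are named gauge and scale):

  ffplay -f lavfi -i "amovie=input.wav,ebur128=video=1:gauge=shortterm:scale=relative [out0][out1]"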
Signed-off-by: Daniel Molkentin <daniel@molkentin.de>
Signed-off-by: Conrad Zelck <c.zelck@imail.de>
This allows getting data only from a specific source IP. This is useful not
only for unicast but for multicast as well because multicast source
subscriptions do not act as source filters for the incoming packets.
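For example (a sketch; sources is the existing udp protocol option):

  ffmpeg -i "udp://239.1.1.1:5000?sources=192.168.1.10" -c copy output.ts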
Signed-off-by: Marton Balint <cus@passwd.hu>
This option is useful for maintaining input synchronization across N
different hardware devices deployed for 'N-way' redundancy.
The system time of different hardware devices should be synchronized
with protocols such as NTP or PTP, before using this option.
Signed-off-by: Marton Balint <cus@passwd.hu>