This removes lots of code duplication and also allows more complex specifiers,
for example you can use p:204:a:m:language:eng to select the English language
audio stream from program 204.
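For instance, mapping that stream (a hedged sketch; file names are
hypothetical):

    ffmpeg -i input.ts -map 0:p:204:a:m:language:eng -c copy out.mka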
Signed-off-by: Marton Balint <cus@passwd.hu>
* Outputs ASS lines with basic coloring and font scaling for each
given region.
* Sets the default style to the resolution of the subtitle plane
(for example, 960x540 / 36pt font for profile A).
* Has options to:
* Disable ruby text (which is coded as regions with
half-height text in libaribb24).
Enabled by default, since without positioning the ruby text is
only confusing, as it is usually coded at the beginning of the
decoded subtitle line.
* Set the working directory, from which libaribb24 will read
configuration and into which it may save broadcast extra
symbols as PNG.
Unset by default.
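A usage sketch (assuming the options are exposed as
aribb24-skip-ruby-text and aribb24-base-path; file names are
hypothetical):

    ffmpeg -aribb24-skip-ruby-text 0 -aribb24-base-path /tmp/aribb24 \
        -i input.ts -map 0:s:0 -c:s ass out.ass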
The unconventional library check can be explained by the library's
current master branch being licensed as LGPLv3, but at the time of
writing the latest official release is still licensed under GPLv3.
Thus, one either has to wait for the following release, or enable
GPLv3.
This attaches the logic of picking the mode for the next picture to
the output, which simplifies some choices by removing the concept of
the picture for which input is not yet available. At the same time,
we allow more complex reference structures and track more reference
metadata (particularly the contents of the DPB) for use in the
codec-specific code.
It also adds flags to explicitly track the available features of the
different codecs. The new structure also allows open-GOP support, so
that is now available for codecs which can do it.
Encoders such as libx264 support different QP offsets for different MBs,
which makes ROI-based encoding possible. It makes sense to add support
within ffmpeg to generate/accept ROI info and pass it into encoders.
Typical usage: after an AVFrame is decoded, an ffmpeg filter or the user's code
generates ROI info for that frame, and the encoder finally does the
ROI-based encoding.
The ROI info is maintained as side data of AVFrame.
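A minimal sketch of attaching ROI info to a decoded frame (the region
coordinates and qoffset value are illustrative):

    #include <libavutil/frame.h>

    /* Mark the top-left quadrant of a decoded frame for higher quality;
       a negative qoffset lowers the QP used for the region. */
    static void add_roi(AVFrame *frame)
    {
        AVFrameSideData *sd = av_frame_new_side_data(
            frame, AV_FRAME_DATA_REGIONS_OF_INTEREST,
            sizeof(AVRegionOfInterest));
        if (!sd)
            return;
        AVRegionOfInterest *roi = (AVRegionOfInterest *)sd->data;
        roi->self_size = sizeof(*roi);
        roi->top       = 0;
        roi->bottom    = frame->height / 2;
        roi->left      = 0;
        roi->right     = frame->width / 2;
        roi->qoffset   = (AVRational){ -1, 10 };
    }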
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
This commit adds configuration options to libvpxenc.c that can be used to
tune the sharpness parameter for VP8 and VP9.
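For example (a sketch; file names are hypothetical):

    ffmpeg -i input.mp4 -c:v libvpx-vp9 -sharpness 4 output.webm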
Signed-off-by: Rene Claus <rclaus@google.com>
Signed-off-by: James Zern <jzern@google.com>
Apple doesn't have an official spec for LHLS. Meanwhile, the hls.js player folks are
trying to standardize an open LHLS spec. The draft spec is available at https://github.com/video-dev/hlsjs-rfcs/blob/lhls-spec/proposals/0001-lhls.md
This option will also try to comply with the above open spec, until Apple's spec officially supports it.
Applicable only when @var{streaming} and @var{hls_playlist} options are enabled.
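A possible invocation (a sketch assuming the option is named lhls; file
names are hypothetical):

    ffmpeg -re -i input.mp4 -f dash -streaming 1 -hls_playlist 1 -lhls 1 out.mpd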
When dashenc has to run for a long duration (say a 24x7 live stream), one can enable this option to ignore the I/O failure of a few segments' upload due to intermittent network issues.
When the network connection recovers, dashenc will continue with the upload of the current segments, leading to the recovery of the stream.
The file name template options now support a new "$ext$" placeholder,
which is replaced with a filename extension specific to the selected
file format. This is useful for the new "auto" format mode, when
different streams may use different file formats, and it is not
possible to specify the correct file name extension exactly.
Resolves warnings in the log about webm segments not having webm extensions.
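For example, used in the existing segment name templates (a sketch; file
names are hypothetical):

    ffmpeg -i input.mkv -f dash \
        -init_seg_name 'init-$RepresentationID$.$ext$' \
        -media_seg_name 'chunk-$RepresentationID$-$Number%05d$.$ext$' out.mpd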
This commit restores the ability to create DASH streams with codecs
that require different containers that was lost after commit
2efdbf7367. It adds a new "auto" value for
the dash_segment_type option and makes it the default. When in this mode,
the segment format will be chosen based on the codec used in the stream:
webm for Vorbis, Opus, VP8 or VP9, mp4 otherwise.
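Since "auto" is now the default, no flag is needed for the mixed-codec
case; a single container can still be forced explicitly, e.g.:

    ffmpeg -i input.mkv -c:v libvpx-vp9 -c:a libopus -f dash \
        -dash_segment_type webm out.mpd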
This commit adds configuration options to libvpxenc.c that can be used to
enable VP8 temporal scalability. It also adds a way to programmatically set the
per-frame encoding flags which can be used to control usage and updates of
reference frames while encoding with temporal scalability enabled.
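A possible invocation (a sketch assuming the option is exposed as
ts-parameters; the layer values are illustrative):

    ffmpeg -i input.y4m -c:v libvpx -ts-parameters \
        ts_number_layers=2:ts_target_bitrate=150,250:ts_rate_decimator=2,1:ts_periodicity=2:ts_layer_id=0,1 \
        output.webm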
Signed-off-by: James Zern <jzern@google.com>
This is a cuda implementation of yadif, which gives us a way to
do deinterlacing when using the nvdec hwaccel. In that scenario
we don't have access to the nvidia deinterlacer.
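For example (a sketch; file names are hypothetical):

    ffmpeg -hwaccel nvdec -hwaccel_output_format cuda -i input.ts \
        -vf yadif_cuda -c:v h264_nvenc output.mp4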
This fixes the grammar of two HLS option descriptions and makes them less
ambiguous.
Signed-off-by: Werner Robitza <werner.robitza@gmail.com>
Signed-off-by: Lou Logan <lou@lrcd.com>
This is needed because of 32-bit float formats (which are difficult to
store in 16 bits).
This also fixes undefined behavior found by fate.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This adds common code to query driver support and set appropriate
address/size information for each slice. It only supports rectangular
slices for now, since that is the most common use-case.
Several SRT options are missing. Since the pkg-config check requires libsrt v1.3.0 or above, options added in libsrt v1.3.0 and below can safely be supported.
This commit adds 8 SRT options:
sndbuf, rcvbuf, lossmaxttl, minversion, streamid, smoother, messageapi and transtype.
The option keys are equivalent to those of stransmit.
https://github.com/Haivision/srt/blob/v1.3.0/apps/socketoptions.hpp#L196-L223
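For instance, the new keys can be passed as URL parameters (host, port
and stream ID are illustrative):

    ffmpeg -re -i input.ts -c copy -f mpegts \
        'srt://192.168.1.10:9000?streamid=live&transtype=live'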
Signed-off-by: Marton Balint <cus@passwd.hu>
Allows arrangement of multiple windows such as:
ffmpeg -re -f lavfi -i mandelbrot -f sdl -window_x 1 -window_y 1 mandelbrot -vf waveform,format=yuv420p -f sdl -window_x 641 -window_y 1 waveform -vf vectorscope,format=yuv420p -f sdl -window_x 1 -window_y 481 vectorscope
Some changes by Marton Balint:
- allow negative position (partially or fully out-of-screen positions seem to
be sanitized automatically by SDL (or my WM?), so no special handling is
needed)
- only show window after the position is set
- do not use resizable and borderless flags at the same time, that caused
issues in ffplay
- add docs
Signed-off-by: Marton Balint <cus@passwd.hu>
The existing av_mediacodec_release_buffer allows the user to render
or discard the Surface-backed frame. This new method allows the user
to control exactly when the frame will be rendered to its SurfaceView.
Available since Android API 21.
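A minimal usage sketch (computing the target display time is up to the
caller):

    #include <libavcodec/mediacodec.h>
    #include <libavutil/frame.h>

    /* Render a Surface-backed frame at the given system nanotime. */
    static void render_at(AVFrame *frame, int64_t render_at_ns)
    {
        AVMediaCodecBuffer *buffer = (AVMediaCodecBuffer *)frame->data[3];
        av_mediacodec_render_buffer_at_time(buffer, render_at_ns);
    }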
Signed-off-by: Aman Gupta <aman@tmm1.net>
This allows switching between absolute (LUFS) and relative (LU) display
in the status line.
Signed-off-by: Daniel Molkentin <daniel@molkentin.de>
Signed-off-by: Conrad Zelck <c.zelck@imail.de>
This eases meeting the target level during live mixing.
Signed-off-by: Daniel Molkentin <daniel@molkentin.de>
Signed-off-by: Conrad Zelck <c.zelck@imail.de>
Allow showing short-term instead of momentary loudness in the gauge. Useful for monitoring
whilst live mixing.
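For example, combining the new options (a sketch; the input file is
hypothetical):

    ffplay -f lavfi \
        'amovie=input.wav,ebur128=video=1:target=-16:gauge=shortterm:scale=relative[out0][out1]'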
Signed-off-by: Daniel Molkentin <daniel@molkentin.de>
Signed-off-by: Conrad Zelck <c.zelck@imail.de>
This allows getting data only from a specific source IP. This is useful not
only for unicast but for multicast as well because multicast source
subscriptions do not act as source filters for the incoming packets.
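For example (addresses are illustrative):

    ffplay 'udp://239.0.0.1:1234?sources=192.168.1.50'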
Signed-off-by: Marton Balint <cus@passwd.hu>
This option is useful for maintaining input synchronization across N
different hardware devices deployed for 'N-way' redundancy.
The system time of different hardware devices should be synchronized
with protocols such as NTP or PTP, before using this option.
Signed-off-by: Marton Balint <cus@passwd.hu>
Also bump the API version requirement to 10.9.5, because on older versions
there were some reports of crashes using the undocumented, yet available
BMDDeckLinkDeviceHandle.
Signed-off-by: Marton Balint <cus@passwd.hu>
Sets the level based on the stream properties if it is not explicitly
set by the user. Also adds a tier option to set general_tier_flag, since
that affects the level choice.
This was added in libva 2.1.0 (VAAPI 1.1.0). Use AVCodecContext.qmax,
matching the existing behaviour for qmin, and clean up the defaults so
that we only pass min/max when explicitly set.
Query which modes are supported and select between VBR and CBR based
on that - this removes all of the codec-specific rate control mode
selection code.
Previously there was one fixed choice for each codec (e.g. H.265 -> Main
profile), and using anything else then required an explicit option from
the user. This changes to selecting the profile based on the input format
and the set of profiles actually supported by the driver (e.g. P010 input
will choose Main 10 profile for H.265 if the driver supports it).
The entrypoint and render target format are also chosen dynamically in the
same way, removing those explicit selections from the per-codec code.
Create a new AVPacket side data type for Active Format Description,
which mirrors the side data type found in AVFrame. The primary
use case for this is ensuring AFD gets preserved in the V210
encoder, so that the decklink libavdevice can output AFD.
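A minimal sketch of attaching AFD to a packet with the new type (the AFD
value is illustrative):

    #include <libavcodec/avcodec.h>

    /* Attach an AFD value to a packet (0x08 = full frame). */
    static int attach_afd(AVPacket *pkt, uint8_t afd)
    {
        uint8_t *side = av_packet_new_side_data(pkt, AV_PKT_DATA_AFD, 1);
        if (!side)
            return AVERROR(ENOMEM);
        side[0] = afd;
        return 0;
    }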
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
Also make sure we set the URL context max packet size accordingly.
Based on a patch by Tudor Suciu <tudor.suciu@gmail.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
AV_CODEC_FLAG_GLOBAL_HEADER should be set before calling avcodec_open2() to have any effect.
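For reference, the usual pattern looks like this (a generic sketch):

    if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
        enc_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    ret = avcodec_open2(enc_ctx, codec, NULL);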
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
ISMV lacks any sort of edit list support, and tfxd is
effectively the PTS of the fragment for most intents and purposes.
Thus, if b-frames are requested without negative CTS offsets you
end up with N frames' worth of delay (tfxd PTS plus the CTS offset
of the first sample). Negative CTS offsets enable the first sample
to have CTS=DTS, and thus a/v desync due to b-frame reorder delay
is avoided.
The existing link is broken.
This patch updates it with a working one.
Signed-off-by: Mina <minasamy_@hotmail.com>
Signed-off-by: Gyan Doshi <ffmpeg@gyani.pro>
Lensfun is a library that applies lens correction to an image using a
database of cameras/lenses (you provide the camera and lens models, and
it uses the corresponding database entry's parameters to apply lens
correction). It is licensed under LGPL3.
The lensfun filter utilizes the lensfun library to apply lens
correction to videos as well as images.
This filter was created out of necessity since I wanted to apply lens
correction to a video and the lenscorrection filter did not work for me.
While this filter requires little info from the user to apply lens
correction, the flaw is that lensfun is intended to be used on individual
images. When used on a video, parameters such as the focal length are
constant, so lens correction may fail on videos where the camera's focal
length changes (zooming in or out via a zoom lens). To use this filter
correctly on videos where such parameters change, timeline editing may
be used, since this filter supports it.
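A usage sketch (the camera and lens names are illustrative and must
match entries in the lensfun database):

    ffmpeg -i input.mov -vf \
        "lensfun=make=Canon:model='Canon EOS 100D':lens_model='Canon EF-S 18-55mm f/3.5-5.6 IS STM':focal_length=18:aperture=8" \
        output.mov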
Note that valgrind shows a small memory leak which is not from this
filter but from the lensfun library (memory is allocated when loading
the lensfun database but it somehow isn't deallocated even during
cleanup; it is briefly created in the init function of the filter, and
destroyed before the init function returns). This may have been fixed by
the latest commit in the lensfun repository; the most recent release
of lensfun is almost 3 years old.
Bilinear interpolation is used by default, as Lanczos interpolation
shows more artifacts in the corrected image in my tests.
The lanczos interpolation is derived from lenstool's implementation of
lanczos interpolation. Lenstool is an app within the lensfun repository
which is licensed under GPL3.
v2 of this patch fixes license notice in libavfilter/vf_lensfun.c
v3 of this patch fixes code style and dependency to gplv3 (thanks to
Paul B Mahol for pointing out the mentioned issues).
v4 of this patch fixes more code style issues that were missed in
v3.
v5 of this patch adds line breaks to some of the documentation in
doc/filters.texi (thanks to Gyan Doshi for pointing out the issue).
v6 of this patch fixes more problems (thanks to Moritz Barsnick for
pointing them out).
v7 of this patch fixes use of sqrt() (changed to sqrtf(); thanks to
Moritz Barsnick for pointing this out). Also should be rebased off of
latest master branch commits at this point.
Signed-off-by: Stephen Seo <seo.disparate@gmail.com>
Set make variable KEEP to a non-zero value to preserve temp files
when a test has passed.
Helpful in diagnosing failed tests when the test outfile is some type of
single hash and does not reveal differences in the processed output.
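For example (the test name is illustrative):

    make fate-h264 KEEP=1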
A generic lavf flag for AAC LATM packetization for the RTP muxer was
added in ef409645f0 and then made inert 20 days later in 0832122880
when a private muxer option was added and the generic flag was no longer
read.
If the user provides a valid timecode_format, look for timecode of that
format in the capture and, if found, store it in the video AVStream's
metadata.
Slightly modified by Marton Balint to capture per-frame timecode as well.
Signed-off-by: Marton Balint <cus@passwd.hu>
HMS is formatted as HH:MM:SS.mmm, but the HH part is not limited to
24 hours. For example, the drawn text may look like this:
243029:20:30.342. To present the timestamp in a more readable and
user-friendly format, this patch provides an additional option
to limit the hour part to the range 0-23.
Note: the above required format can actually be obtained with the
format options 'localtime' and 'gmtime', but the milliseconds part
is not supported in those formats.
The producer reference time box supplies relative wall-clock times
at which movie fragments, or files containing movie fragments
(such as segments) were produced.
The box is mainly useful in live streaming use cases. A media player
can parse the box and utilize the time fields to measure and improve
the latency during real time playout.
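A possible invocation (a sketch assuming the box is enabled via a
write_prft muxer option):

    ffmpeg -re -i input.mp4 -c copy -f dash -write_prft 1 out.mpd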
Right now the segment file format is chosen to be either mp4 or webm based on the codec format.
This patch makes that choice configurable by the user, instead of it being decided by the muxer.
Also, with this change a per-stream choice of segment file format (based on codec type) is not possible.
All the output audio and video streams should be in the same file format.
This new optional flag makes it easier to deal with mpegts
samples where the PMT is updated and elementary streams move
to different PIDs in the middle of playback.
Previously, new AVStreams were created per PID, and it was up
to the user to figure out which streams had migrated to a new PID
(by iterating over the list of AVProgram and making guesses), and
switch seamlessly to the new AVStream during playback.
Transcoding or remuxing these streams with ffmpeg on the CLI was
also quite painful, and the user would need to extract each set
of PIDs into a separate file and then stitch them back together.
With this new option, the mpegts demuxer will automatically detect
PMT changes and feed data from the new PID to the original AVStream
that was created for the original PID. For mpegts samples with
stream_identifier_descriptor available, the unique ID is used to
merge PIDs together. If the stream id is not available, the demuxer
attempts to map PIDs based on their position within the PMT.
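For example (assuming the new demuxer option is named
merge_pmt_versions):

    ffmpeg -merge_pmt_versions 1 -i pmt-version-change.ts -c copy out.ts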
With this change, I am able to playback and transcode/remux these
two samples which previously caused issues:
https://tmm1.s3.amazonaws.com/pmt-version-change.ts
https://kuroko.fushizen.eu/videos/pid_switch_sample.ts
I also have another longer sample in which the PMT changes
repeatedly and ES streams move to different pids three times
during playback:
https://tmm1.s3.amazonaws.com/multiple-pmt-change.ts
Demuxing this sample with the new option shows several new log
messages as the PMT changes are handled:
[mpegts] detected PMT change (program=1, version=3/6, pcr_pid=0xf98/0xfb7)
[mpegts] re-using existing video stream 0 (pid=0xf98) for new pid=0xfb7
[mpegts] re-using existing audio stream 1 (pid=0xf99) for new pid=0xfb8
[mpegts] re-using existing audio stream 2 (pid=0xf9a) for new pid=0xfb9
[mpegts] detected PMT change (program=1, version=6/3, pcr_pid=0xfb7/0xf98)
[mpegts] detected PMT change (program=1, version=3/4, pcr_pid=0xf98/0xf9b)
[mpegts] re-using existing video stream 0 (pid=0xf98) for new pid=0xf9b
[mpegts] re-using existing audio stream 1 (pid=0xf99) for new pid=0xf9c
[mpegts] re-using existing audio stream 2 (pid=0xf9a) for new pid=0xf9d
[mpegts] detected PMT change (program=1, version=4/5, pcr_pid=0xf9b/0xfa9)
[mpegts] re-using existing video stream 0 (pid=0xf98) for new pid=0xfa9
[mpegts] re-using existing audio stream 1 (pid=0xf99) for new pid=0xfaa
[mpegts] re-using existing audio stream 2 (pid=0xf9a) for new pid=0xfab
[mpegts] detected PMT change (program=1, version=5/6, pcr_pid=0xfa9/0xfb7)
Signed-off-by: Aman Gupta <aman@tmm1.net>
These fields will allow the mpegts demuxer to expose details about
the PMT/program which created the AVProgram and its AVStreams.
In mpegts, a PMT which advertises streams has a version number
which can be incremented at any time. When the version changes,
the pids which correspond to each of its streams can also change.
Since ffmpeg creates a new AVStream per pid by default, an API user
needs the ability to (a) detect when the PMT changed, and (b) tell
which AVStreams were added to replace earlier streams.
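A sketch of how an API user might consume the new fields (the
last_pmt_version bookkeeping is illustrative):

    AVProgram *prog = fmt_ctx->programs[0];
    if (prog->pmt_version != last_pmt_version) {
        last_pmt_version = prog->pmt_version;
        /* PMT changed: re-examine prog->stream_index[0..nb_stream_indexes)
           to see which AVStreams now carry this program's data. */
    }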
This has been a long-standing issue with ffmpeg's handling of mpegts
streams with PMT changes, and I found two related patches in the wild
that attempt to solve the same problem:
The first is in MythTV's ffmpeg fork, where they added a
void (*streams_changed)(void*); to AVFormatContext and call it from
their fork of the mpegts demuxer whenever the PMT changes.
The second was proposed by XBMC in
https://ffmpeg.org/pipermail/ffmpeg-devel/2012-December/135036.html,
where they created a new AVMEDIA_TYPE_DATA stream with id=0 and
attempted to send packets to it whenever the PMT changed.
Signed-off-by: Aman Gupta <aman@tmm1.net>
Some filtered mpegts streams may erroneously include PMTs for
programs that are not advertised in the PAT. This confuses ffmpeg
and most players because multiple audio/video streams are created
and it is unclear which ones actually contain data.
See for example https://tmm1.s3.amazonaws.com/unknown-pmts.ts
In this sample, the PAT advertises exactly one program. But the
pid it points to for the program's PMT contains PMTs for other
programs as well. This is because the broadcaster decided to
re-use the same pid for multiple program PMTs.
The hardware that filtered the original multi-program stream
into a single-program stream did so by rewriting the PAT to
contain only the program that was requested. But since it just
passed through the PMT pid referenced in the PAT, multiple PMTs
are still present for the other programs.
Before:
Input #0, mpegts, from 'unknown-pmts.ts':
Duration: 00:00:10.11, start: 80741.189700, bitrate: 9655 kb/s
Program 4
Stream #0:2[0x41]: Video: mpeg2video (Main) ([2][0][0][0] / 0x0002), yuv420p(tv, bt709, progressive), 1280x720 [SAR 1:1 DAR 16:9], Closed Captions, 11063 kb/s, 59.94 fps, 59.94 tbr, 90k tbn, 119.88 tbc
Stream #0:3[0x44](eng): Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, 5.1(side), fltp, 384 kb/s
Stream #0:4[0x45](spa): Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo, fltp, 128 kb/s
No Program
Stream #0:0[0x31]: Video: mpeg2video ([2][0][0][0] / 0x0002), none(tv), 90k tbr, 90k tbn, 90k tbc
Stream #0:1[0x34](eng): Audio: ac3 (AC-3 / 0x332D4341), 0 channels, fltp
Stream #0:5[0x51]: Video: mpeg2video ([2][0][0][0] / 0x0002), none, 90k tbr, 90k tbn
Stream #0:6[0x54](eng): Audio: ac3 (AC-3 / 0x332D4341), 0 channels
With skip_unknown_pmt=1:
Input #0, mpegts, from 'unknown-pmts.ts':
Duration: 00:00:10.11, start: 80741.189700, bitrate: 9655 kb/s
Program 4
Stream #0:0[0x41]: Video: mpeg2video (Main) ([2][0][0][0] / 0x0002), yuv420p(tv, bt709, progressive), 1280x720 [SAR 1:1 DAR 16:9], Closed Captions, 11063 kb/s, 59.94 fps, 59.94 tbr, 90k tbn, 119.88 tbc
Stream #0:1[0x44](eng): Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, 5.1(side), fltp, 384 kb/s
Stream #0:2[0x45](spa): Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo, fltp, 128 kb/s
Signed-off-by: Aman Gupta <aman@tmm1.net>
Generates color bar test patterns based on EBU PAL recommendations.
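For example (a sketch; the output file name is hypothetical):

    ffmpeg -f lavfi -i pal100bars=size=720x576:rate=25 -t 5 bars.mkv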
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Tobias Rapp <t.rapp@noa-archive.com>
MSVC versions older than 2013 are untested in FATE
and apparently no longer supported.
This commit makes the configure process error out in case an older version
is used, and suggests using a supported version of MSVC to compile.
It also changes the documentation to reflect this.
As discussed on IRC:
2018-05-12 19:45:16 jamrial then again, most of those were for old msvc, and i think we're not supporting versions older than 2013 (first one c99 compliant) anymore
2018-05-12 19:45:43 +JEEB yea, I think 2013 update 2 is needed
22:53 <@atomnuker> nevcairiel: which commit broke/unsupported support for msvc 2013?
23:23 <@atomnuker> okay, it was JEEB
23:25 <+JEEB> which was for 2012 and older
23:25 <+JEEB> and IIRC we no longer test those in FATE so that was my assumption
23:26 <+JEEB> 2013 is when MS got trolled enough to actually update their C part
23:26 <+JEEB> aand actually advertised FFmpeg support
23:26 <+JEEB> (although it was semi-failing until VS2013 update 1 or 2)
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
Parses the video_stream_descriptor (H.222 2.6.2) to look
for the still_picture_flag. This is exposed to the user
via a new AV_DISPOSITION_STILL_IMAGE.
See for example https://tmm1.s3.amazonaws.com/music-choice.ts,
whose video stream only updates every ~6 seconds.
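An API user can then check the disposition (a generic sketch):

    if (stream->disposition & AV_DISPOSITION_STILL_IMAGE) {
        /* the video updates only occasionally; adjust buffering
           or timeout logic accordingly */
    }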
Signed-off-by: Aman Gupta <aman@tmm1.net>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Most decoders (pgssubdec, ccaption_dec) use -1 or UINT32_MAX for a
subtitle event which should be cleared at the next event.
Signed-off-by: Marton Balint <cus@passwd.hu>