When dashenc has to run for a long duration (say, a 24x7 live stream), one can enable this option to ignore the I/O failures of a few segments' uploads due to intermittent network issues.
When the network connection recovers, dashenc will continue with the upload of the current segments, leading to the recovery of the stream.
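A minimal API sketch of enabling this, assuming the option is exposed on the dash muxer as "ignore_io_errors" (treat the name as an assumption):

    #include <libavformat/avformat.h>

    /* Write the DASH header with tolerant I/O enabled, so transient segment
     * upload failures do not abort a long-running stream. */
    static int write_dash_header_tolerant(AVFormatContext *oc)
    {
        AVDictionary *opts = NULL;
        int ret;

        av_dict_set(&opts, "ignore_io_errors", "1", 0); /* assumed option name */
        ret = avformat_write_header(oc, &opts);
        av_dict_free(&opts);
        return ret;
    }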
The file name template options now support a new "$ext$" placeholder,
which is replaced with a filename extension specific to the selected
file format. This is useful for the new "auto" format mode, in which
different streams may use different file formats and it is not
possible to specify the correct file name extension in advance.
This resolves warnings in the log about webm segments not having webm extensions.
This commit restores the ability to create DASH streams with codecs
that require different containers, which was lost after commit
2efdbf7367. It adds a new "auto" value for
the dash_segment_type option and makes it the default. In this mode,
the segment format is chosen based on the codec used in the stream:
webm for Vorbis, Opus, VP8 or VP9, and mp4 otherwise.
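A minimal API sketch combining this with the "$ext$" placeholder described above; the init_seg_name/media_seg_name option names and the template strings are assumptions used for illustration:

    #include <libavformat/avformat.h>

    /* Let each stream's segment container follow its codec (webm or mp4) and
     * let "$ext$" pick the matching file name extension. */
    static int write_dash_header_auto(AVFormatContext *oc)
    {
        AVDictionary *opts = NULL;
        int ret;

        av_dict_set(&opts, "dash_segment_type", "auto", 0);
        av_dict_set(&opts, "init_seg_name",  "init-stream$RepresentationID$.$ext$", 0);
        av_dict_set(&opts, "media_seg_name", "chunk-stream$RepresentationID$-$Number%05d$.$ext$", 0);
        ret = avformat_write_header(oc, &opts);
        av_dict_free(&opts);
        return ret;
    }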
This commit adds configuration options to libvpxenc.c that can be used to
enable VP8 temporal scalability. It also adds a way to programmatically set the
per-frame encoding flags which can be used to control usage and updates of
reference frames while encoding with temporal scalability enabled.
Signed-off-by: James Zern <jzern@google.com>
This is a CUDA implementation of yadif, which gives us a way to
do deinterlacing when using the nvdec hwaccel. In that scenario
we don't have access to the NVIDIA deinterlacer.
This fixes the grammar of two HLS option descriptions and makes them less
ambiguous.
Signed-off-by: Werner Robitza <werner.robitza@gmail.com>
Signed-off-by: Lou Logan <lou@lrcd.com>
This is needed because of 32-bit float formats (which are difficult to
store in 16 bits).
This also fixes undefined behavior found by FATE.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This adds common code to query driver support and set appropriate
address/size information for each slice. It only supports rectangular
slices for now, since that is the most common use-case.
Several SRT options are missing. Since the pkg-config check requires libsrt v1.3.0 or later, options available as of libsrt v1.3.0 can be supported safely.
This commit adds 8 SRT options:
sndbuf, rcvbuf, lossmaxttl, minversion, streamid, smoother, messageapi and transtype.
The option keys are equivalent to those of stransmit.
https://github.com/Haivision/srt/blob/v1.3.0/apps/socketoptions.hpp#L196-L223
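A minimal sketch of passing a few of these options when opening an srt:// URL from the API; the value strings are placeholders:

    #include <libavformat/avio.h>
    #include <libavutil/dict.h>

    /* Open an SRT input with some of the new options set. */
    static int open_srt_input(AVIOContext **pio, const char *url)
    {
        AVDictionary *opts = NULL;
        int ret;

        av_dict_set(&opts, "streamid",  "live/stream-01", 0); /* placeholder id */
        av_dict_set(&opts, "transtype", "live",            0);
        av_dict_set(&opts, "rcvbuf",    "8388608",         0); /* bytes */
        ret = avio_open2(pio, url, AVIO_FLAG_READ, NULL, &opts);
        av_dict_free(&opts);
        return ret;
    }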
Signed-off-by: Marton Balint <cus@passwd.hu>
Allows arrangement of multiple windows such as:
ffmpeg -re -f lavfi -i mandelbrot -f sdl -window_x 1 -window_y 1 mandelbrot -vf waveform,format=yuv420p -f sdl -window_x 641 -window_y 1 waveform -vf vectorscope,format=yuv420p -f sdl -window_x 1 -window_y 481 vectorscop
Some changes by Marton Balint:
- allow negative position (partially or fully out-of-screen positions seem to
be sanitized automatically by SDL (or my WM?), so no special handling is
needed)
- only show window after the position is set
- do not use resizable and borderless flags at the same time, that caused
issues in ffplay
- add docs
Signed-off-by: Marton Balint <cus@passwd.hu>
The existing av_mediacodec_release_buffer allows the user to render
or discard the Surface-backed frame. This new method allows the user
to control exactly when the frame will be rendered to its SurfaceView.
Available since Android API 21.
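A minimal sketch of how this could be used, assuming the new call is av_mediacodec_render_output_buffer() and that a Surface-backed frame carries its AVMediaCodecBuffer in data[3]:

    #include <libavcodec/mediacodec.h>
    #include <libavutil/frame.h>

    /* Queue a Surface-backed frame for display at a chosen time instead of
     * rendering it immediately; render_time_ns is a system timestamp in
     * nanoseconds (assumed to be the same clock MediaCodec uses for
     * releaseOutputBuffer with a timestamp). */
    static int render_frame_at(AVFrame *frame, int64_t render_time_ns)
    {
        AVMediaCodecBuffer *buffer = (AVMediaCodecBuffer *)frame->data[3];
        return av_mediacodec_render_output_buffer(buffer, render_time_ns);
    }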
Signed-off-by: Aman Gupta <aman@tmm1.net>
This allows switching between absolute (LUFS) and relative (LU) display
in the status line.
Signed-off-by: Daniel Molkentin <daniel@molkentin.de>
Signed-off-by: Conrad Zelck <c.zelck@imail.de>
This eases meeting the target level during live mixing.
Signed-off-by: Daniel Molkentin <daniel@molkentin.de>
Signed-off-by: Conrad Zelck <c.zelck@imail.de>
Allow showing short-term instead of momentary loudness in the gauge. Useful for monitoring
whilst live mixing.
Signed-off-by: Daniel Molkentin <daniel@molkentin.de>
Signed-off-by: Conrad Zelck <c.zelck@imail.de>
This allows getting data only from a specific source IP. This is useful not
only for unicast but for multicast as well because multicast source
subscriptions do not act as source filters for the incoming packets.
Signed-off-by: Marton Balint <cus@passwd.hu>
This option is useful for maintaining input synchronization across N
different hardware devices deployed for 'N-way' redundancy.
The system time of different hardware devices should be synchronized
with protocols such as NTP or PTP, before using this option.
Signed-off-by: Marton Balint <cus@passwd.hu>
Also bump the API version requirement to 10.9.5, because on older versions
there were some reports of crashes using the undocumented, yet available,
BMDDeckLinkDeviceHandle.
Signed-off-by: Marton Balint <cus@passwd.hu>
Sets the level based on the stream properties if it is not explicitly
set by the user. Also adds a tier option to set general_tier_flag, since
that affects the level choice.
This was added in libva 2.1.0 (VAAPI 1.1.0). Use AVCodecContext.qmax,
matching the existing behaviour for qmin, and clean up the defaults so
that we only pass min/max when explicitly set.
Query which modes are supported and select between VBR and CBR based
on that - this removes all of the codec-specific rate control mode
selection code.
Previously there was one fixed choice for each codec (e.g. H.265 -> Main
profile), and using anything else then required an explicit option from
the user. This changes to selecting the profile based on the input format
and the set of profiles actually supported by the driver (e.g. P010 input
will choose Main 10 profile for H.265 if the driver supports it).
The entrypoint and render target format are also chosen dynamically in the
same way, removing those explicit selections from the per-codec code.
Create a new AVPacket side data type for Active Format Description,
which mirrors the side data type found in AVFrame. The primary
use case for this is ensuring AFD gets preserved in the V210
encoder, so that the decklink libavdevice can output AFD.
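A minimal sketch of reading the new side data from a packet; the single-byte payload layout is assumed to mirror the AVFrame variant:

    #include <libavcodec/avcodec.h>

    /* Return the AFD code carried by the packet, or a negative value if the
     * side data is absent. */
    static int get_packet_afd(AVPacket *pkt)
    {
        int size = 0;
        uint8_t *sd = av_packet_get_side_data(pkt, AV_PKT_DATA_AFD, &size);
        if (!sd || size < 1)
            return -1;
        return sd[0];
    }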
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
Also make sure we set the URL context max packet size accordingly.
Based on a patch by Tudor Suciu <tudor.suciu@gmail.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
AV_CODEC_FLAG_GLOBAL_HEADER should be set before calling avcodec_open2() to have any effect.
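For reference, the usual pattern looks roughly like this (a generic sketch, not code from this patch):

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* Request global headers from the encoder when the output container needs
     * them, and do so before opening the encoder. */
    static int open_encoder_for_muxer(AVFormatContext *oc, AVCodecContext *enc,
                                      const AVCodec *codec)
    {
        if (oc->oformat->flags & AVFMT_GLOBALHEADER)
            enc->flags |= AV_CODEC_FLAG_GLOBAL_HEADER; /* before avcodec_open2() */
        return avcodec_open2(enc, codec, NULL);
    }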
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
ISMV lacks any sort of edit list support, and tfxd is
effectively the PTS of the fragment for most intents and purposes.
Thus, if b-frames are requested without negative CTS offsets, you
end up with N frames' worth of delay (tfxd PTS plus the CTS offset
of the first sample). Negative CTS offsets enable the first sample
to have CTS=DTS, and thus a/v desync due to b-frame reorder delay
is avoided.
The existing link is broken.
This patch updates it with a working URL.
Signed-off-by: Mina <minasamy_@hotmail.com>
Signed-off-by: Gyan Doshi <ffmpeg@gyani.pro>
Lensfun is a library that applies lens correction to an image using a
database of cameras/lenses (you provide the camera and lens models, and
it uses the corresponding database entry's parameters to apply lens
correction). It is licensed under LGPL3.
The lensfun filter utilizes the lensfun library to apply lens
correction to videos as well as images.
This filter was created out of necessity since I wanted to apply lens
correction to a video and the lenscorrection filter did not work for me.
While this filter requires little info from the user to apply lens
correction, the flaw is that lensfun is intended to be used on individual
images. When used on a video, parameters such as focal length are
constant, so lens correction may fail on videos where the camera's focal
length changes (zooming in or out via a zoom lens). To use this filter
correctly on videos where such parameters change, timeline editing may
be used, since this filter supports it.
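A minimal sketch of instantiating the filter from the API; the camera and lens strings are made-up examples, and the option names (make, model, lens_model) are assumptions based on the filter's documentation:

    #include <libavfilter/avfilter.h>

    /* Create a lensfun filter instance inside an existing filter graph. */
    static int create_lensfun_filter(AVFilterGraph *graph, AVFilterContext **pctx)
    {
        const AVFilter *f = avfilter_get_by_name("lensfun");
        if (!f)
            return AVERROR_FILTER_NOT_FOUND;
        return avfilter_graph_create_filter(pctx, f, "lenscorr",
                   "make=Canon:model=Canon EOS 100D:lens_model=Canon EF-S 18-55mm f3.5-5.6 IS STM",
                   NULL, graph);
    }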
Note that valgrind shows a small memory leak which is not from this
filter but from the lensfun library (memory is allocated when loading
the lensfun database but it somehow isn't deallocated even during
cleanup; it is briefly created in the init function of the filter, and
destroyed before the init function returns). This may have been fixed by
the latest commit in the lensfun repository; the current latest release
of lensfun is almost 3 years old.
Bilinear interpolation is used by default, as lanczos interpolation
shows more artifacts in the corrected image in my tests.
The lanczos interpolation is derived from lenstool's implementation of
lanczos interpolation. Lenstool is an app within the lensfun repository
which is licensed under GPL3.
v2 of this patch fixes the license notice in libavfilter/vf_lensfun.c.
v3 of this patch fixes code style and the dependency on GPLv3 (thanks to
Paul B Mahol for pointing out the mentioned issues).
v4 of this patch fixes more code style issues that were missed in
v3.
v5 of this patch adds line breaks to some of the documentation in
doc/filters.texi (thanks to Gyan Doshi for pointing out the issue).
v6 of this patch fixes more problems (thanks to Moritz Barsnick for
pointing them out).
v7 of this patch fixes use of sqrt() (changed to sqrtf(); thanks to
Moritz Barsnick for pointing this out). Also should be rebased off of
latest master branch commits at this point.
Signed-off-by: Stephen Seo <seo.disparate@gmail.com>
Set the make variable KEEP to a non-zero value to preserve temp files
when a test has passed.
Helpful in diagnosing failed tests when the test outfile is some type of
single hash and does not reveal differences in the processed output.
A generic lavf flag for AAC LATM packetization for the RTP muxer was
added in ef409645f0 and then made inert 20 days later in 0832122880,
when a private muxer option was added and the generic flag was no
longer read.
If the user provides a valid timecode_format, look for a timecode of that
format in the capture and, if found, store it in the video AVStream's
metadata.
Slightly modified by Marton Balint to capture per-frame timecode as well.
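A minimal sketch of requesting this from the API, assuming the decklink option is "timecode_format" and that "rp188any" is one of its accepted values:

    #include <libavdevice/avdevice.h>
    #include <libavformat/avformat.h>

    /* Open a DeckLink capture device and ask for RP188 timecodes; if one is
     * found it should appear in the video stream's metadata. */
    static int open_decklink_with_timecode(AVFormatContext **pic, const char *device)
    {
        const AVInputFormat *fmt;
        AVDictionary *opts = NULL;
        int ret;

        avdevice_register_all();
        fmt = av_find_input_format("decklink");
        if (!fmt)
            return AVERROR_DEMUXER_NOT_FOUND;
        av_dict_set(&opts, "timecode_format", "rp188any", 0); /* assumed value */
        ret = avformat_open_input(pic, device, fmt, &opts);
        av_dict_free(&opts);
        return ret;
    }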
Signed-off-by: Marton Balint <cus@passwd.hu>
HMS is formatted as HH:MM:SS.mmm, but the HH part is not limited to
24 hours. For example, the drawn text may look like this:
243029:20:30.342. To present the timestamp in a more readable and
user-friendly format, this patch provides an additional option
to limit the hour part to the range 0-23.
Note: the required format described above can actually be obtained with
the format options 'localtime' and 'gmtime', but the milliseconds part
is not supported in those formats.
The producer reference time box supplies relative wall-clock times
at which movie fragments, or files containing movie fragments
(such as segments) were produced.
The box is mainly useful in live streaming use cases. A media player
can parse the box and utilize the time fields to measure and improve
the latency during real time playout.
Right now the segment file format is chosen to be either mp4 or webm based on the codec format.
This patch makes that choice configurable by the user, instead of being decided by the muxer.
Also, with this change a per-stream segment file format choice (based on codec type) is not possible.
All the output audio and video streams should be in the same file format.
This new optional flag makes it easier to deal with mpegts
samples where the PMT is updated and elementary streams move
to different PIDs in the middle of playback.
Previously, new AVStreams were created per PID, and it was up
to the user to figure out which streams had migrated to a new PID
(by iterating over the list of AVProgram and making guesses), and
switch seamlessly to the new AVStream during playback.
Transcoding or remuxing these streams with ffmpeg on the CLI was
also quite painful, and the user would need to extract each set
of PIDs into a separate file and then stitch them back together.
With this new option, the mpegts demuxer will automatically detect
PMT changes and feed data from the new PID to the original AVStream
that was created for the original PID. For mpegts samples with
stream_identifier_descriptor available, the unique ID is used to
merge PIDs together. If the stream id is not available, the demuxer
attempts to map PIDs based on their position within the PMT.
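A minimal sketch of enabling this from the API, assuming the new mpegts demuxer option is named "merge_pmt_versions":

    #include <libavformat/avformat.h>

    /* Open an mpegts input and keep feeding the original AVStreams when a PMT
     * update moves the elementary streams to new pids. */
    static int open_ts_merge_pmt(AVFormatContext **pic, const char *url)
    {
        AVDictionary *opts = NULL;
        int ret;

        av_dict_set(&opts, "merge_pmt_versions", "1", 0); /* assumed option name */
        ret = avformat_open_input(pic, url, NULL, &opts);
        av_dict_free(&opts);
        return ret;
    }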
With this change, I am able to playback and transcode/remux these
two samples which previously caused issues:
https://tmm1.s3.amazonaws.com/pmt-version-change.ts
https://kuroko.fushizen.eu/videos/pid_switch_sample.ts
I also have another longer sample in which the PMT changes
repeatedly and ES streams move to different pids three times
during playback:
https://tmm1.s3.amazonaws.com/multiple-pmt-change.ts
Demuxing this sample with the new option shows several new log
messages as the PMT changes are handled:
[mpegts] detected PMT change (program=1, version=3/6, pcr_pid=0xf98/0xfb7)
[mpegts] re-using existing video stream 0 (pid=0xf98) for new pid=0xfb7
[mpegts] re-using existing audio stream 1 (pid=0xf99) for new pid=0xfb8
[mpegts] re-using existing audio stream 2 (pid=0xf9a) for new pid=0xfb9
[mpegts] detected PMT change (program=1, version=6/3, pcr_pid=0xfb7/0xf98)
[mpegts] detected PMT change (program=1, version=3/4, pcr_pid=0xf98/0xf9b)
[mpegts] re-using existing video stream 0 (pid=0xf98) for new pid=0xf9b
[mpegts] re-using existing audio stream 1 (pid=0xf99) for new pid=0xf9c
[mpegts] re-using existing audio stream 2 (pid=0xf9a) for new pid=0xf9d
[mpegts] detected PMT change (program=1, version=4/5, pcr_pid=0xf9b/0xfa9)
[mpegts] re-using existing video stream 0 (pid=0xf98) for new pid=0xfa9
[mpegts] re-using existing audio stream 1 (pid=0xf99) for new pid=0xfaa
[mpegts] re-using existing audio stream 2 (pid=0xf9a) for new pid=0xfab
[mpegts] detected PMT change (program=1, version=5/6, pcr_pid=0xfa9/0xfb7)
Signed-off-by: Aman Gupta <aman@tmm1.net>
These fields will allow the mpegts demuxer to expose details about
the PMT/program which created the AVProgram and its AVStreams.
In mpegts, a PMT which advertises streams has a version number
which can be incremented at any time. When the version changes,
the pids which correspond to each of its streams can also change.
Since ffmpeg creates a new AVStream per pid by default, an API user
needs the ability to (a) detect when the PMT changed, and (b) tell
which AVStreams were added to replace earlier streams.
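A minimal sketch of reading these details after opening an input, assuming the exposed AVProgram fields include pmt_pid and pmt_version:

    #include <stdio.h>
    #include <libavformat/avformat.h>

    /* Print PMT details for every program in the input. */
    static void dump_program_pmt_info(const AVFormatContext *ic)
    {
        for (unsigned i = 0; i < ic->nb_programs; i++) {
            const AVProgram *p = ic->programs[i];
            printf("program %d: pmt_pid=%d pmt_version=%d streams=%u\n",
                   p->id, p->pmt_pid, p->pmt_version, p->nb_stream_indexes);
        }
    }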
This has been a long-standing issue with ffmpeg's handling of mpegts
streams with PMT changes, and I found two related patches in the wild
that attempt to solve the same problem:
The first is in MythTV's ffmpeg fork, where they added a
void (*streams_changed)(void*); to AVFormatContext and call it from
their fork of the mpegts demuxer whenever the PMT changes.
The second was proposed by XBMC in
https://ffmpeg.org/pipermail/ffmpeg-devel/2012-December/135036.html,
where they created a new AVMEDIA_TYPE_DATA stream with id=0 and
attempted to send packets to it whenever the PMT changed.
Signed-off-by: Aman Gupta <aman@tmm1.net>
Some filtered mpegts streams may erroneously include PMTs for
programs that are not advertised in the PAT. This confuses ffmpeg
and most players because multiple audio/video streams are created
and it is unclear which ones actually contain data.
See for example https://tmm1.s3.amazonaws.com/unknown-pmts.ts
In this sample, the PAT advertises exactly one program. But the
pid it points to for the program's PMT contains PMTs for other
programs as well. This is because the broadcaster decided to
re-use the same pid for multiple program PMTs.
The hardware that filtered the original multi-program stream
into a single-program stream did so by rewriting the PAT to
contain only the program that was requested. But since it just
passed through the PMT pid referenced in the PAT, multiple PMTs
are still present for the other programs.
Before:
Input #0, mpegts, from 'unknown-pmts.ts':
Duration: 00:00:10.11, start: 80741.189700, bitrate: 9655 kb/s
Program 4
Stream #0:2[0x41]: Video: mpeg2video (Main) ([2][0][0][0] / 0x0002), yuv420p(tv, bt709, progressive), 1280x720 [SAR 1:1 DAR 16:9], Closed Captions, 11063 kb/s, 59.94 fps, 59.94 tbr, 90k tbn, 119.88 tbc
Stream #0:3[0x44](eng): Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, 5.1(side), fltp, 384 kb/s
Stream #0:4[0x45](spa): Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo, fltp, 128 kb/s
No Program
Stream #0:0[0x31]: Video: mpeg2video ([2][0][0][0] / 0x0002), none(tv), 90k tbr, 90k tbn, 90k tbc
Stream #0:1[0x34](eng): Audio: ac3 (AC-3 / 0x332D4341), 0 channels, fltp
Stream #0:5[0x51]: Video: mpeg2video ([2][0][0][0] / 0x0002), none, 90k tbr, 90k tbn
Stream #0:6[0x54](eng): Audio: ac3 (AC-3 / 0x332D4341), 0 channels
With skip_unknown_pmt=1:
Input #0, mpegts, from 'unknown-pmts.ts':
Duration: 00:00:10.11, start: 80741.189700, bitrate: 9655 kb/s
Program 4
Stream #0:0[0x41]: Video: mpeg2video (Main) ([2][0][0][0] / 0x0002), yuv420p(tv, bt709, progressive), 1280x720 [SAR 1:1 DAR 16:9], Closed Captions, 11063 kb/s, 59.94 fps, 59.94 tbr, 90k tbn, 119.88 tbc
Stream #0:1[0x44](eng): Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, 5.1(side), fltp, 384 kb/s
Stream #0:2[0x45](spa): Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo, fltp, 128 kb/s
Signed-off-by: Aman Gupta <aman@tmm1.net>
Generates color bar test patterns based on EBU PAL recommendations.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Tobias Rapp <t.rapp@noa-archive.com>
MSVC versions older than 2013 are untested in FATE
and apparently no longer supported.
This commit makes the configure process error out in case an older version
is used, and suggests using a supported version of MSVC to compile.
This also changes the documentation to reflect this.
As discussed on IRC:
2018-05-12 19:45:16 jamrial then again, most of those were for old msvc, and i think we're not supporting versions older than 2013 (first one c99 compliant) anymore
2018-05-12 19:45:43 +JEEB yea, I think 2013 update 2 is needed
22:53 <@atomnuker> nevcairiel: which commit broke/unsupported support for msvc 2013?
23:23 <@atomnuker> okay, it was JEEB
23:25 <+JEEB> which was for 2012 and older
23:25 <+JEEB> and IIRC we no longer test those in FATE so that was my assumption
23:26 <+JEEB> 2013 is when MS got trolled enough to actually update their C part
23:26 <+JEEB> aand actually advertised FFmpeg support
23:26 <+JEEB> (although it was semi-failing until VS2013 update 1 or 2)
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
Parses the video_stream_descriptor (H.222 2.6.2) to look
for the still_picture_flag. This is exposed to the user
via a new AV_DISPOSITION_STILL_IMAGE.
See for example https://tmm1.s3.amazonaws.com/music-choice.ts,
whose video stream only updates every ~6 seconds.
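A minimal sketch of checking the new disposition from the API:

    #include <libavformat/avformat.h>

    /* Count the streams that the demuxer flagged as still images. */
    static int count_still_image_streams(const AVFormatContext *ic)
    {
        int n = 0;
        for (unsigned i = 0; i < ic->nb_streams; i++)
            if (ic->streams[i]->disposition & AV_DISPOSITION_STILL_IMAGE)
                n++;
        return n;
    }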
Signed-off-by: Aman Gupta <aman@tmm1.net>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Most decoders (pgssubdec, ccaption_dec) are using -1 or UINT32_MAX for a
subtitle event which should be cleared at the next event.
Signed-off-by: Marton Balint <cus@passwd.hu>
When using the new decode APIs (avcodec_send_packet()/avcodec_receive_frame()),
there is no need to set the deprecated field refcounted_frames.
Reviewed-by: wm4 <nfxjfg@googlemail.com>
Signed-off-by: Jun Zhao <mypopydev@gmail.com>
When using the new decode APIs (avcodec_send_packet()/avcodec_receive_frame()),
there is no need to set the deprecated field refcounted_frames.
Reviewed-by: wm4 <nfxjfg@googlemail.com>
Signed-off-by: Jun Zhao <mypopydev@gmail.com>
When using the new decode APIs (avcodec_send_packet()/avcodec_receive_frame()),
there is no need to set the deprecated field refcounted_frames.
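For reference, a minimal send/receive decode loop (a generic sketch, not code from this patch); the returned frames are always reference counted and must be unreffed by the caller:

    #include <libavcodec/avcodec.h>

    /* Feed one packet to the decoder and drain all frames it produces. */
    static int decode_packet(AVCodecContext *dec, const AVPacket *pkt, AVFrame *frame)
    {
        int ret = avcodec_send_packet(dec, pkt);
        if (ret < 0)
            return ret;
        while (ret >= 0) {
            ret = avcodec_receive_frame(dec, frame);
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                return 0;
            if (ret < 0)
                return ret;
            /* ... use the frame ... */
            av_frame_unref(frame);
        }
        return 0;
    }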
Reviewed-by: wm4 <nfxjfg@googlemail.com>
Signed-off-by: Jun Zhao <mypopydev@gmail.com>
The logic is applicable only when use_template is enabled and use_timeline
is disabled. The logic monitors the flow of segment indexes. If a stream's
segment index value is not at the expected real-time position, then
the logic corrects that index value.
Typically this logic is needed in live streaming use cases. Network
bandwidth fluctuations are common during long-running streaming. Each
fluctuation can cause the segment indexes to fall behind the expected
real-time position. Without this logic, players will not be able to consume
the content, even after the encoder's network condition comes back to
a normal state.
When use_template is enabled and use_timeline is disabled, typically
it is required to generate the segments at the configured segment duration
rate on average. This commit is particularly needed to handle
segmentation when video frame rates are fractional, like 29.97 or 59.94 fps.
There are use cases where an average segment duration needs to be configured
and the muxer is expected to maintain that average. Using
the name 'min_seg_duration' would therefore be misleading, so the parameter is
renamed to 'seg_duration', where it can be the minimum segment duration or the
average segment duration based on the use case. The additional updates needed for
this functionality are made in the subsequent patches of this patch series.
In some cases, mainly when working with multiprogram mpeg-ts containers as
input, it would be handy to select the sub-streams of a specific program by
their metadata.
This patch makes it possible to narrow the stream selection among
streams of the specified program by stream metadata.
Examples:
p:601:m:language:hun will select all sub-streams of the program with id 601
that have a metadata key named 'language' with value 'hun'.
p:602:m:guide will select all sub-streams of the program with id 602 that
have a metadata key named 'guide'.
Signed-off-by: Bela Bodecs <bodecsb@vivanet.hu>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
When using hls_list_size with hls_flags delete_segments, currently
roughly hls_list_size * 2 segments remain on disk. With this new option,
the amount of disk space used can be controlled by the user.
Fixes ticket #7131.
Signed-off-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Aman Gupta <aman@tmm1.net>
Currently when specifying the program id you can only decide to select
all streams of the specified program (e.g. p:103 will select all streams
of program 103) or narrow the selection to a specific stream sub-index
(e.g. p:145:1 will select the 2nd stream of program 145). But you cannot
specify, for example, all audio streams of program 145 or the 3rd video
stream of program 311.
In some cases, mainly when working with multiprogram mpeg-ts containers as
input, this feature would be handy.
This patch makes it possible to narrow the stream selection among
streams of the specified program by stream type and optionally its
index. Handled types: a, v, s, d.
Examples: p:601:a will select all audio streams of program 601,
p:603:a:1 will select the 2nd audio stream of program 603,
p:604:v:0 will select the first video stream of program 604.
Signed-off-by: Bela Bodecs <bodecsb@vivanet.hu>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This seems to confuse GitHub users into thinking that we may accept pull
requests. We do not accept pull requests.
Sending patches to the ffmpeg-devel mailing list is our preferred method
for users to contribute code.
Signed-off-by: Lou Logan <lou@lrcd.com>
Add info about the default value of the bind_address option and its abbreviated
version (b). The example is modified to use a named filter instance and to show
its use.
Signed-off-by: Bela Bodecs <bodecsb@vivanet.hu>
Signed-off-by: Lou Logan <lou@lrcd.com>
PSEUDOPAL pixel formats are not paletted, but carried a palette with the
intention of allowing code to treat unpaletted formats as paletted. The
palette simply mapped the byte values to the resulting RGB values,
making it some sort of LUT for RGB conversion.
It was used for 1 byte formats only: RGB4_BYTE, BGR4_BYTE, RGB8, BGR8,
GRAY8. The first 4 are awfully obscure, used only by some ancient bitmap
formats. The last one, GRAY8, is more common, but its treatment is
grossly incorrect. It considers full range GRAY8 only, so GRAY8 coming
from typical Y video planes was not mapped to the correct RGB values.
This cannot be fixed, because AVFrame.color_range can be freely changed
at runtime, and there is nothing to ensure the pseudo palette is
updated.
Also, nothing actually used the PSEUDOPAL palette data, except xwdenc
(trivially changed in the previous commit). All other code had to treat
it as a special case, just to ignore or to propagate palette data.
In conclusion, this was just a very strange old mechanism that has no
real justification to exist anymore (although it may have been nice and
useful in the past). Now it's an artifact that makes the API harder to
use: API users who allocate their own pixel data have to be aware that
they need to allocate the palette, or FFmpeg will crash on them in
_some_ situations. On top of this, there was no API to allocate the
pseudo palette outside of av_frame_get_buffer().
This patch not only deprecates AV_PIX_FMT_FLAG_PSEUDOPAL, but also makes
the pseudo palette optional. Nothing accesses it anymore, though if it's
set, it's propagated. It's still allocated and initialized for
compatibility with API users that rely on this feature. But new API
users do not need to allocate it. This was an explicit goal of this
patch.
Most changes replace AV_PIX_FMT_FLAG_PSEUDOPAL with FF_PSEUDOPAL. I
first tried #ifdefing all code, but it was a mess. The FF_PSEUDOPAL
macro reduces the mess, and still allows defining FF_API_PSEUDOPAL to 0.
Passes FATE with FF_API_PSEUDOPAL enabled and disabled. In addition,
FATE passes with FF_API_PSEUDOPAL set to 1, but with allocation
functions manually changed to not allocating a palette.
It works as a drop-in replacement for the deprecated av_dup_packet(),
to ensure a packet is reference counted.
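A minimal usage sketch, assuming the new helper is av_packet_make_refcounted():

    #include <libavcodec/avcodec.h>

    /* Make sure the packet owns refcounted data before it is stored for later
     * use, then hand ownership to dst. */
    static int stash_packet(AVPacket *dst, AVPacket *src)
    {
        int ret = av_packet_make_refcounted(src);
        if (ret < 0)
            return ret;
        av_packet_move_ref(dst, src);
        return 0;
    }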
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: James Almer <jamrial@gmail.com>
avdevice_register_all() is still required to register devices into
lavf (this is required due to lavd being somewhat of a hack).
Signed-off-by: Josh de Kock <josh@itanimul.li>
This reverts commit 0fd475704e.
Revert "lavd: fix iterating of input and output devices"
This reverts commit ce1d77a5e7.
Signed-off-by: Josh de Kock <josh@itanimul.li>