Currently, the code doing this is spread over several places and may
behave in unexpected ways. E.g. automatic 'default' marking is only done
for streams fed by complex filtergraphs. It is also applied in the order
in which the output streams are initialized, which is effectively
random.
Move processing the dispositions to the end of open_output_file(), when
we already have all the necessary information.
Apply the automatic default marking only if no explicit -disposition
options were supplied by the user, and apply it to the first stream of
each type (excluding attached pics) when there is more than one stream
of that type and no default markings were copied from the input streams.
Explicitly document the new behavior.
Changes the results of some tests, where the output file gets a default
disposition, while it previously did not.
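A hedged sketch of the described rule (not the actual open_output_file() code; the
check that the user supplied no explicit -disposition option is assumed to have
happened already, and the helper name is illustrative):

    #include <libavformat/avformat.h>

    /* Per media type: if several streams exist (attached pictures excluded) and
     * none inherited AV_DISPOSITION_DEFAULT from its input, mark the first one. */
    static void apply_auto_default(AVFormatContext *oc)
    {
        for (int type = 0; type < AVMEDIA_TYPE_NB; type++) {
            int nb = 0, have_default = 0, first = -1;
            for (unsigned i = 0; i < oc->nb_streams; i++) {
                AVStream *st = oc->streams[i];
                if (st->codecpar->codec_type != type ||
                    (st->disposition & AV_DISPOSITION_ATTACHED_PIC))
                    continue;
                if (first < 0)
                    first = i;
                nb++;
                if (st->disposition & AV_DISPOSITION_DEFAULT)
                    have_default = 1;
            }
            if (nb > 1 && !have_default)
                oc->streams[first]->disposition |= AV_DISPOSITION_DEFAULT;
        }
    }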
When viewing logs, it's sometimes useful to be able to see whether
execution was ended via the q command.
Signed-off-by: softworkz <softworkz@hotmail.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
Prevents desktop stutters caused by the change (specifically on KDE).
We're not a game; we don't actually need it disabled.
Reviewed-by: Marton Balint <cus@passwd.hu>
Signed-off-by: Zane van Iperen <zane@zanevaniperen.com>
avcodec_receive_packet() already unreferences the packet on its own.
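As a hedged illustration (not the fftools code itself; names and the omission of
timestamp rescaling are for brevity), a receive loop can therefore reuse one
packet without clearing it first:

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* avcodec_receive_packet() resets the destination packet itself, so no
     * av_packet_unref() is needed before calling it. */
    static int drain_encoder(AVCodecContext *enc, AVPacket *pkt, AVFormatContext *oc)
    {
        int ret;
        while ((ret = avcodec_receive_packet(enc, pkt)) >= 0) {
            ret = av_interleaved_write_frame(oc, pkt); /* consumes the reference */
            if (ret < 0)
                return ret;
        }
        return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
    }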
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The output stream's packet may not have been allocated
at that point. This happens when quitting in the following command line:
$ ./ffmpeg -lavfi abuffer=sample_fmt=u8:sample_rate=48000:channel_layout=stereo -f null -
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Unused since 6b35a83214 (the removal of
ffserver).
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The transpose, rotate, hflip, and vflip filters don't support them.
Fixes ticket #9432.
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: James Almer <jamrial@gmail.com>
Unnecessary since 1f63665ca5, because
the value the option is set to coincides with the default value.
Found-by: Soft Works <softworkz@hotmail.com>
Reviewed-by: Soft Works <softworkz@hotmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
When a device is derived from a source device, there are at least 2
devices, and usually the derived device is the expected device, so let's
pick the last device if the user doesn't specify the filter device with the
filter_hw_device option.
After applying this patch, the command below can work:
$> ffmpeg -init_hw_device vaapi=va:/dev/dri/renderD128 -init_hw_device
qsv=hw@va -f lavfi -i yuvtestsrc -vf
format=nv12,hwupload=extra_hw_frames=64 -c:v h264_qsv -y out.h264
Reviewed-by: Soft Works <softworkz@hotmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
This reverts commit 628a73f8f3.
At the time of said commit there was talk of removing the audio bitrate
"ab" option to bring FFmpeg in line with what Libav did in 2012 in
commit 041cd5a0c5. By having different
option flags for the "ab" and the ordinary bitrate "b" options it is
possible to have different default bitrates for audio and video. In
order to maintain this behaviour and not break user scripts the commit
to be reverted added code to ffmpeg.c that set the bitrate value to the
audio default for audio codecs, but only if AVCodec.defaults didn't
exist (as in this case the default would be the codec default and not
affected by the "ab" removal).
This had the downside of being an API violation, because
AVCodec.defaults is not a public field. Given that the "ab" option
and its audio-specific default value have never been removed,
said API violation can be simply fixed by reverting said commit.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This way the CLI accepts the same values for "filter_threads" as for the
libavcodec-specific option "threads".
Fixes FATE with THREADS=auto which was broken in bdc1bdf3f5.
Signed-off-by: James Almer <jamrial@gmail.com>
These were intended to pass options to auto-inserted avresample
resampling filters. Yet FFmpeg uses swresample for this purpose
(with its own AVDictionary swr_opts similar to resample_opts).
Therefore said options were not forwarded any more since commit
911417f0b34e611bf084319c5b5a4e4e630da940; moreover since commit
420cedd497 avresample options are no longer
even recognized and are simply ignored. Yet there are still
remnants of all of this. This commit gets rid of them.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This reverts commit b3a0548a98.
This breaks the usage of swscale options; scale_sws_opts should be
passed to auto-inserted scale filters.
Signed-off-by: Linjie Fu <linjie.justin.fu@gmail.com>
When both -filter_threads and -threads are specified, the latter takes
effect. Since -threads is an encoder option and -filter_threads is a
filter option, it makes sense for -filter_threads to take
precedence.
Besides being nicer code this also has the advantage of not making
assumptions about the internal implementation: While it is documented
that the AVFilter.inputs and AVFilter.outputs arrays are terminated
by a zeroed sentinel, one is not allowed to infer that one can just
check avfilter_pad_get_name(padarray, i) to see whether one has reached
the sentinel:
It could be that the pointer to the string is contained
in a different structure than AVFilterPad that needs to be accessed
first: return pad->struct->string.
It could be that for small strings an internal buffer in
AVFilterPad is used (to avoid a relocation) whereas for longer strings
an external string is used:
return pad->string_ptr ? pad->string_ptr : pad->internal_string
Or it could be that the name has a default value:
return pad->name ? pad->name : "default"
(This actually made sense for us because the name of most of our
AVFilterPads is just "default"; doing so would save lots of relocations.)
The only thing one is allowed to infer from the existence of the
sentinel is that one is allowed to use avfilter_pad_count() to get
the number of pads. Therefore it is used.
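A hedged sketch of that pattern (not code from this commit):

    #include <stdio.h>
    #include <libavfilter/avfilter.h>

    /* Enumerate a filter's output pads via avfilter_pad_count() instead of
     * probing names until a presumed sentinel is reached. */
    static void print_output_pads(const AVFilter *filter)
    {
        int nb_outputs = avfilter_pad_count(filter->outputs);
        for (int i = 0; i < nb_outputs; i++)
            printf("output pad %d: %s (type %d)\n", i,
                   avfilter_pad_get_name(filter->outputs, i),
                   (int)avfilter_pad_get_type(filter->outputs, i));
    }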
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Currently the user may use '-init_hw_device type=name' to initialize a hw
device; however, the key parameter is ignored when using '-init_hw_device
type=name,key=value'. After applying this patch, the user may set key
parameters if needed.
Reviewed-by: Soft Works <softworkz@hotmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
This allows the user to set hw_device_ctx instead of hw_frames_ctx for QSV
decoders, hence we may remove the ad-hoc libmfx setup code from FFmpeg.
"-hwaccel_output_format format" is applied to QSV decoders after
removing the ad-hoc libmfx code. In order to keep compatibility with old
commandlines, the default format is set to AV_PIX_FMT_QSV, but this
behavior will be removed in the future. Please set "-hwaccel_output_format qsv"
explicitly if AV_PIX_FMT_QSV is expected.
The normal device stuff works for QSV decoders now; the user may use
"-init_hw_device args" to initialise a device and "-hwaccel_device
devicename" to select a device for QSV decoders.
"-qsv_device device", which was added to work around device selection
in the ad-hoc libmfx code, still works.
For example:
$> ffmpeg -init_hw_device qsv=qsv:hw_any,child_device=/dev/dri/card0
-hwaccel qsv -c:v h264_qsv -i input.h264 -f null -
/dev/dri/renderD128 is actually opened for the h264_qsv decoder in the above
command without this patch. After applying this patch, /dev/dri/card0
is used.
$> ffmpeg -init_hw_device vaapi=va:/dev/dri/card0 -init_hw_device
qsv=hw@va -hwaccel_device hw -hwaccel qsv -c:v h264_qsv -i input.h264
-f null -
device hw of type qsv is not usable in the above command without this
patch. After applying this patch, this command works as expected.
Reviewed-by: Soft Works <softworkz@hotmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
677a030b26 introduced more printable
side data types in ffprobe; however, the Audio Service Type side data
'type' field that was introduced aliases an existing field of the same
name within the side data array, which can lead to JSON output like:
"side_data_list": [
{
"side_data_type": "Audio Service Type",
"type": 0
},
{
"side_data_type": "Stereo 3D",
"type": "side by side",
"inverted": 1
}
]
This, while technically valid JSON, is considered bad practice, since it
forces all downstream users to manually parse it and check all types;
it makes simple deserialization impossible. Worse, in some loosely
typed languages, it can lead to silent bugs if existing code assumed
it was a different type.
As such, rename this second "type" field to "service_type".
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
11d3b03fcb added consideration of default stream disposition for audio
and video when choosing the 'best' stream among all the inputs. This can
lead to video streams with lower resolution or audio streams with fewer
channels being selected.
Stream disposition, however, only sets a priority for a stream
among all other streams in the *same input*. It cannot set a priority
for a stream across all inputs.
This patch takes a middle way: it selects the best stream from each file
with default disposition considered, then discards the disposition weight
and selects the best stream as per the original criteria of highest
resolution for video and most channels for audio.
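A hedged sketch of that two-stage idea for video (illustrative helper, not the
actual ffmpeg.c code; error handling kept minimal):

    #include <libavformat/avformat.h>

    /* av_find_best_stream() picks the per-file winner (where the default
     * disposition is considered); across files only resolution decides. */
    static int pick_best_video(AVFormatContext **inputs, int nb_inputs,
                               int *file_idx, int *stream_idx)
    {
        int64_t best_area = -1;

        for (int f = 0; f < nb_inputs; f++) {
            int idx = av_find_best_stream(inputs[f], AVMEDIA_TYPE_VIDEO,
                                          -1, -1, NULL, 0);
            if (idx < 0)
                continue;
            AVCodecParameters *par = inputs[f]->streams[idx]->codecpar;
            int64_t area = (int64_t)par->width * par->height;
            if (area > best_area) {
                best_area   = area;
                *file_idx   = f;
                *stream_idx = idx;
            }
        }
        return best_area >= 0 ? 0 : AVERROR_STREAM_NOT_FOUND;
    }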
Having the override before autodetection meant that the overridden
value got overwritten by the autodetected result each time,
effectively disabling the ability to utilize the `-top` option
for override purposes.
Somehow I missed this in fbb44bc51a,
even though the lines were within the context. Probably the code
originally being after this logic had something to do with it,
but previously it only touched the avformat context's codecpar,
which did not affect the encoder codec context whatsoever.
Fixes #9320
Fixes #9339
Read rate enforcement is delayed until the first decoded frame is obtained, to
speed up the init of output streams.
Thanks to Linjie Fu <linjie.justin.fu@gmail.com> for the initial patch.
If the input start time is not 0, -t is inaccurate when doing stream copy
and will record extra duration according to the input start time.
The behaviour can be broken down into the following cases,
assuming the input video starts at 60s with a duration of 300s:
1. stream copy:
ffmpeg -ss 40 -t 60 -i in.mp4 -c copy -y out.mp4
open_input_file() will seek to 100 and set ts_offset to -100,
process_input() will offset pkt->pts with ts_offset to make it 0,
so do_streamcopy() with -t exits when ist->pts >= recording_time.
2. stream copy with -copyts:
ffmpeg -ss 40 -t 60 -copyts -i in.mp4 -c copy -y out.mp4
open_input_file() will seek to 100 and set ts_offset to 0,
process_input() will keep raw pkt->pts as ts_offset is 0,
so do_streamcopy() with -t exits when
ist->pts >= (recording_time + f->start_time + f->ctx->start_time).
3. stream copy with -copyts -start_at_zero:
ffmpeg -ss 40 -t 60 -copyts -start_at_zero -i in.mp4 -c copy -y out.mp4
open_input_file() will seek to 120 and set ts_offset to -60 due to the -start_at_zero option,
process_input() will offset pkt->pts with input file start time,
so do_streamcopy() with -t exits when ist->pts >= (recording_time + f->start_time).
0       60    40      60                    360
|_______|_____|_______|_______________________|
        start -ss     -t
This fixes ticket #9141.
Signed-off-by: Shiwang.Xie <shiwang.xie666@outlook.com>
xmllint (silently) replaces the ' with " when fixing and validating the output
of ffprobe in fate-ffprobe_xsd.
Reviewed-by: Tobias Rapp <t.rapp@noa-archive.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Besides being unnecessary, it is also safer: If the error for an
unrecognized option were triggered (which seems to be impossible right
now), it might be that the stream whose codecpar is accessed is NULL.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Otherwise the rate emulation logic in `transcode_step` never gets
hit, and the unavailability flag never gets reset, leading to an
endless loop with some rate emulation use cases.
This change was missed during the rework of ffmpeg.c, in which
encoder initialization was moved further down the timeline in
commit 67be1ce0c6. Previously,
as the encoder initialization had happened earlier, this state was
not possible (flow getting as far as hitting the rate emulation logic
without the encoder having been initialized).
Fixes #9160
Use AVSTREAM_EVENT_FLAG_NEW_PACKETS instead, which should provide the
same information in this case.
Finishes removing all uses of this field as started by 87f0c8280c.
Signed-off-by: James Almer <jamrial@gmail.com>
This also allows the exclusive use of pointers to const AVCodec in
fftools/ffmpeg_opt.c.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Deprecated in c29038f304.
The resample filter based upon this library has been removed as well.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Signed-off-by: James Almer <jamrial@gmail.com>
The "packets_and_frames" element has been added to ffprobe.xsd in
0c9f0da0f7 but apparently removing the
check in ffprobe.c has been forgotten.
Signed-off-by: Tobias Rapp <t.rapp@noa-archive.com>
The MJPEG encoder supports some pixel format/color range combinations
only when strictness is set to unofficial or less. Before commit
059fc2d9da said encoder's pix_fmts array
only included the pixel formats supported with default strictness.
When strictness was <= unofficial, fftools/ffmpeg_filter.c used
an extended list of pixel formats instead of the encoder's, including
the pixel formats only supported when strictness <= unofficial.
Said commit turned the logic around: The encoder's pix_fmts array now
included all pixel formats and fftools/ffmpeg_filter.c instead used
a small list of the pixel formats supported when strictness is >
unofficial, and the encoder's pixel formats otherwise. In particular,
the codec's pix_fmts are not used when strictness is normal.
This works for the mjpeg encoder; yet it does not work for other
(hardware-based) mjpeg encoders, because the check for whether one is
using the MJPEG encoder is wrong: It just checks the codec id.
So if one used strict unofficial with a hardware-accelerated MJPEG
encoder before commit 059fc2d9da, the unofficial (non-hardware)
pixel formats of the MJPEG encoder would be used; since said commit
the codec's pixel formats are overridden at ordinary strictness
by the ordinary MJPEG pixel formats. This leads to format conversion
errors later on which were reported in #9186.
The solution to this is to check AVCodec.name instead of its id.
Fixes ticket #9186.
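A hedged illustration of the distinction (helper name chosen for this example):

    #include <string.h>
    #include <libavcodec/avcodec.h>

    /* AV_CODEC_ID_MJPEG is shared by hardware wrappers such as mjpeg_qsv or
     * mjpeg_vaapi, so only the name identifies the native software encoder. */
    static int is_native_mjpeg_encoder(const AVCodec *enc)
    {
        return !strcmp(enc->name, "mjpeg"); /* enc->id == AV_CODEC_ID_MJPEG also matches the wrappers */
    }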
Tested-by: Eoff, Ullysses A <ullysses.a.eoff@intel.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Currently said list contains only the pixel formats that are always
supported irrespective of the range and the value of
strict_std_compliance. This makes the MJPEG encoder an outlier as all
other codecs put all potentially supported pixel formats into said list
and error out if the chosen pixel format is unsupported. This commit
therefore brings it in line with the other encoders.
The behaviour of fftools/ffmpeg_filter.c has been preserved. A more
informed decision would be possible if colour range were available
at this point, but it isn't.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Only one character is actually rewritten.
Fixes truncation warnings, such as
warning: ‘strncpy’ output truncated before terminating nul copying 3 bytes from a string of the same length [-Wstringop-truncation]
in gcc 10.2.0
This cap is currently used to mark multithreading-capable codecs that
wrap external libraries with their own multithreading code. The name is
highly confusing for our API users, since libavcodec ALWAYS handles
thread_count=0 (see commit message in previous commit). Therefore rename
the cap and update its documentation to make its meaning clear.
The old name is kept deprecated until next+1 major bump.
If the window was resized it was possible that xpos pointed outside the
visualization texture. By rearranging the overflow check we make sure this (and
a crash) does not happen.
We also don't have to use xleft for the start position, as that is 0 anyway, and
if we ever want to take xleft into account then the texture should be
positioned accordingly when rendering.
Signed-off-by: Marton Balint <cus@passwd.hu>
The obstacle to doing so was in filter_codec_opts: It searches
the AVCodec for options via the AV_OPT_SEARCH_FAKE_OBJ method, which
requires using a void * that points to a pointer to a const AVClass.
When using const AVCodec *, one cannot simply use a pointer that points
to the AVCodec's pointer to its AVClass, as said pointer is const, too.
This is fixed by using a temporary pointer to the AVClass.
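A hedged sketch of that workaround (illustrative helper; the looked-up option
name is arbitrary):

    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>

    /* With AV_OPT_SEARCH_FAKE_OBJ, av_opt_find() wants a pointer to a pointer to
     * the AVClass. That cannot be formed directly from a const AVCodec *, so a
     * temporary AVClass pointer is used instead. */
    static const AVOption *find_codec_option(const AVCodec *codec, const char *name)
    {
        const AVClass *priv_class = codec->priv_class;
        if (!priv_class)
            return NULL;
        return av_opt_find(&priv_class, name, NULL, 0, AV_OPT_SEARCH_FAKE_OBJ);
    }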
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
It only affects the old and deprecated avcodec_decode_(video2|audio4)
API which is no longer used here.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
avcodec_find_best_pix_fmt_of_2 has been moved to libavutil in
617e866e25.
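For reference, a hedged usage sketch of the libavutil counterpart,
av_find_best_pix_fmt_of_2() (pixel formats chosen arbitrarily):

    #include <libavutil/pixdesc.h>

    /* Pick whichever of two candidate destination formats loses less information
     * when converting from the source format. */
    static enum AVPixelFormat pick_target(void)
    {
        int loss = 0;
        return av_find_best_pix_fmt_of_2(AV_PIX_FMT_YUV420P, AV_PIX_FMT_RGB24,
                                         AV_PIX_FMT_YUVA420P, 1, &loss);
    }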
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The FF_API macros are private and must not be used by external callers.
As the fields in question are to be removed without replacement, just
drop them.
The fields are:
AVPacket.convergence_duration
AVCodecContext.time_base
AVCodecContext.timecode_frame_start
AV_PIX_FMT_FLAG_PSEUDOPAL pixel descriptor flag
The metadata company_name, product_name, product_version from the input
file will be deleted to avoid overwriting information.
Please test with the commands below:
./ffmpeg -i ../fate-suite/mxf/Sony-00001.mxf -c:v copy -c:a copy out.mxf
and
./ffmpeg -i ../fate-suite/mxf/Sony-00001.mxf -c:v copy -c:a copy \
-metadata company_name="xxx" \
-metadata product_name="xxx" \
-metadata product_version="xxx" \
out.mxf
Reviewed-by: Tomas Härdin <tjoppen@acc.umu.se>
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
The st->codec values are updated based on the lowres factor by
avformat_find_stream_info() when it runs an instance of the decoder internally,
and the same thing happens in ffmpeg.c when we open ist->dec_ctx with
avcodec_open2(), so these assignments are redundant.
Signed-off-by: James Almer <jamrial@gmail.com>
As per the signal() documentation (man 2 signal), the semantics of using signal()
may vary across platforms. It is suggested to use sigaction() instead.
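A minimal hedged sketch of the sigaction() pattern (handler and variable names
are illustrative, not the exact ones used in fftools):

    #include <signal.h>
    #include <string.h>

    static volatile sig_atomic_t received_signal;

    static void term_handler(int sig)
    {
        received_signal = sig;
    }

    /* Install the handler with sigaction() instead of signal(). */
    static void install_handlers(void)
    {
        struct sigaction action;
        memset(&action, 0, sizeof(action));
        action.sa_handler = term_handler;
        sigemptyset(&action.sa_mask);
        action.sa_flags = 0;
        sigaction(SIGINT,  &action, NULL);
        sigaction(SIGTERM, &action, NULL);
    }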
Reviewed-by: Zane van Iperen <zane@zanevaniperen.com>
Signed-off-by: Andriy Gelman <andriy.gelman@gmail.com>
The first stats are printed after the initial stats_period has elapsed. With a large period,
it may appear that ffmpeg has frozen at startup.
The initial stats are now printed after the first transcode_step.
At present, progress stats are updated at a hardcoded interval of
half a second. For long processes, this can lead to bloated
logs and progress reports.
Users can now set a custom period using the option -stats_period.
The default is kept at 0.5 seconds.
They add considerable complexity to the frame-threading implementation,
which includes an unavoidably leaking error path, while the advantages
of this option to the users are highly dubious.
It should be always possible and desirable for the callers to make their
get_buffer2() implementation thread-safe, so deprecate this option.
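A hedged sketch of a thread-safe get_buffer2() on the caller's side (state struct
and bookkeeping are illustrative; the mutex is assumed to be initialized
elsewhere):

    #include <pthread.h>
    #include <libavcodec/avcodec.h>

    typedef struct MyState {
        pthread_mutex_t lock;
        int buffer_count;
    } MyState;

    /* Guard the caller's own bookkeeping with a mutex and delegate the actual
     * allocation to avcodec_default_get_buffer2(), which is safe to call from
     * any thread. */
    static int my_get_buffer2(AVCodecContext *avctx, AVFrame *frame, int flags)
    {
        MyState *s = avctx->opaque;

        pthread_mutex_lock(&s->lock);
        s->buffer_count++;
        pthread_mutex_unlock(&s->lock);

        return avcodec_default_get_buffer2(avctx, frame, flags);
    }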
We now have the possibility of getting AVFrames here, and we should
not touch the muxer's codecpar after writing the header.
Results of FATE tests change as the MXF and Matroska muxers actually
write down the field/frame coding type of a stream in their
respective headers. Before this change, these values in codecpar
would only be set after the muxer was initialized. Now, the
information is also available for encoder and muxer initialization.
Additionally, reap the first rewards by being able to set the
color related encoding values based on the passed AVFrame.
The only tests that seem to have changed their results with this
change seem to be the MXF tests. There, the muxer writes the
limited/full range flag to the output container if the encoder's
color range is not set to "unspecified".
- For video, this means a single initialization point in do_video_out.
- For audio we unfortunately need to do it in two places just
before the buffer sink is utilized (if av_buffersink_get_samples
would still work according to its specification after a call to
avfilter_graph_request_oldest was made, we could at least remove
the one in transcode_step).
Other adjustments to make things work:
- As the AVFrame PTS adjustment to encoder time base needs the encoder
to be initialized, it is now moved to do_{video,audio}_out,
right after the encoder has been initialized. Due to this,
the additional parameter in do_video_out is removed as it is no
longer necessary.
This way the old max queue size limit based behavior for streams
where each individual packet is large is kept, while for smaller
streams more packets can be buffered (current default is at 50
megabytes per stream).
For some explanation: by default ffmpeg copies packets from before
the appointed seek point/start time and puts them into the local
muxing queue. Before, this queue was much less likely to be utilized,
since as soon as the filter chain was initialized, the encoder
(and thus output stream) was also initialized.
Now, since we will be pushing the encoder initialization to when the
first AVFrame is decoded and filtered - which only happens after
the exact seek point is hit as packets are ignored until then -
this queue will be seeing much more usage.
In layman's terms, this attempts to fix cases such as where:
- seek point ends up being 5 seconds before requested time.
- audio is set to copy, and thus immediately begins filling the
muxing queue.
- video is being encoded, and thus all received packets are skipped
until the requested time is hit.
The user has no business modifying the underlying AVCodec.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The AVFilterInOuts normally get freed in init_output_filter() when
the corresponding streams get created; yet if an error happens before
one reaches said point, they leak. Therefore this commit makes
ffmpeg_cleanup free them, too.
Fixes ticket #8267.
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Threaded input can increase smoothness of e.g. x11grab significantly. Before
this patch, in order to activate threaded input the user had to specify a
"dummy" additional input; with this change that is no longer required.
Signed-off-by: Marton Balint <cus@passwd.hu>
This is useful, because by ffprobe's very nature, you use it to probe
a file and find out what it is. Requiring every format private option
to be known to the demuxer forces one to run ffprobe twice, if one
wants to use ffprobe in a generic way.
For example, say one wants to probe all user-uploaded files, while
also ignoring edit lists for any MP4s that are uploaded. Currently,
you'd have to run ffprobe twice: once to identify the format, and
once again to actually probe the metadata you want. After this
patch, you could set -ignore_editlist 1 on every call and only
probe once.
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Currently, ffmpeg inserts a scale filter by default in the filter graph
to force the whole decoded stream to scale to the same size as the
first frame. This doesn't quite make sense in resolution-changing cases if the
user wants the raw video without any scaling.
Add autoscale/noautoscale as an output option to indicate whether to
auto-insert the scale filter in the filter graph:
-noautoscale or -autoscale 0:
disable the default automatic insertion of the scale filter.
ffmpeg -y -i input.mp4 out1.yuv -noautoscale out2.yuv -autoscale 0 out3.yuv
Update docs.
Suggested-by: Mark Thompson <sw@jkqxz.net>
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: U. Artie Eoff <ullysses.a.eoff@intel.com>
Signed-off-by: Linjie Fu <linjie.fu@intel.com>