The -shortest option (which finishes the output file at the time the
shortest stream ends) is currently implemented by faking the -t option
when an output stream ends. This approach is fragile, since it depends
on the frames/packets being processed in a specific order. E.g. there
are currently some situations in which the output file length will
depend unpredictably on unrelated factors like encoder delay. More
importantly, the present work aiming at splitting various ffmpeg
components into different threads will make this approach completely
unworkable, since the frames/packets will arrive in effectively random
order.
This commit introduces a "sync queue", which is essentially a collection
of FIFOs, one per stream. Frames/packets are submitted to these FIFOs
and are then released for further processing (encoding or muxing) when
it is ensured that the frame in question will not cause its stream to
get ahead of the other streams (the logic is similar to libavformat's
interleaving queue).
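A minimal sketch of the idea (hypothetical types and names, not the actual sync queue API added here), with all timestamps assumed to be in a common time base:

    #include <stdint.h>
    #include <libavutil/avutil.h>

    /* One FIFO of pending packets per stream; the packet at the head of a
     * stream's FIFO may only be released once no other active stream still
     * has an earlier packet queued. */
    typedef struct SyncQueueStream {
        int64_t head_dts;   /* dts of the earliest queued packet, AV_NOPTS_VALUE if empty */
        int     finished;   /* stream has ended; ignored when deciding what to release */
    } SyncQueueStream;

    static int can_release_head(const SyncQueueStream *st, int nb_streams, int idx)
    {
        if (st[idx].head_dts == AV_NOPTS_VALUE)
            return 0;
        for (int i = 0; i < nb_streams; i++) {
            if (i == idx || st[i].finished)
                continue;
            if (st[i].head_dts == AV_NOPTS_VALUE || st[i].head_dts < st[idx].head_dts)
                return 0;   /* releasing now would let stream idx get ahead */
        }
        return 1;
    }

With -shortest, the idea is that marking a stream as finished once its output ends lets the queue discard, rather than release, anything that would outlast it.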
These sync queues are then used for encoding and/or muxing when the
-shortest option is specified.
A new option – -shortest_buf_duration – controls the maximum number of
queued packets, to avoid runaway memory usage.
This commit changes the results of the following tests:
- copy-shortest[12]: the last audio frame is now gone. This is
correct, since it actually outlasts the last video frame.
- shortest-sub: the video packets following the last subtitle packet are
now gone. This is also correct.
The following commits will add a new buffering stage after bitstream
filters, which should not be taken into account for choosing the next
output.
OutputStream.last_mux_dts is also used by the muxing code to make up
missing DTS values - that field is now moved to the muxer-private
MuxStream object.
The current placement of this free is historical - it used to be
followed by avcodec_close(), since removed.
The proper place for freeing the stats is currently right before the
encoder context itself is freed.
It is currently called from two places:
- output_packet() in ffmpeg.c, which submits the newly available output
packet to the muxer
- of_check_init() in ffmpeg_mux.c after the header has been
written, to flush the muxing queue
Some packets will thus be processed by this function twice, so it
requires an extra parameter to indicate the place it is called from and
avoid modifying some state twice.
This is fragile and hard to follow, so split this function into two.
Also rename of_write_packet() to of_submit_packet() to better reflect
its new purpose.
The muxing queue currently lives in OutputStream, which is a very large
struct storing the state for both encoding and muxing. The muxing queue
is only used by the code in ffmpeg_mux, so it makes sense to restrict it
to that file.
This is the first step towards reducing the scope of OutputStream.
Figure out earlier whether the output stream/file should be bitexact and
store this information in a flag in OutputFile/OutputStream.
Stop accessing the muxer in set_encoder_id(), which will become
forbidden in future commits.
The current code postpones closing the files until after printing the
final report, which accesses the output file size. Deal with this by
storing the final file size before closing the file.
Move the file size checking code to ffmpeg_mux. Use the recently
introduced of_filesize(), making this code consistent with the size
shown by print_report().
Move header_written into it, which is not (and should not be) used by
any code outside of ffmpeg_mux.
In the future this context will contain more muxer-private state that
should not be visible to other code.
This is a per-file input option that adjusts an input's timestamps
with reference to another input, so that emitted packet timestamps
account for the difference between the start times of the two inputs.
A typical use case is to sync two or more live inputs, such as from capture
devices. Both the target and reference input source timestamps should be
based on the same clock source.
If either input lacks starting timestamps, then no sync adjustment is made.
These packets need not be writable (and are not modified by us),
so it is best to access them via const uint8_t*.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Regression since 13350e81fd.
Fix looking for the .ffmpeg subfolder in FFMPEG_DATADIR and, inversely, not
looking for it in HOME.
Fix the search order (documentation).
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
Frame counters can overflow relatively easily (INT_MAX frames corresponds to
slightly more than 1 year of 60 fps content), so make sure we are always
using 64 bit values for them.
A live stream can easily run for more than a year and the framedup logic breaks
on an overflow.
Signed-off-by: Marton Balint <cus@passwd.hu>
GL and Metal cache the state at the time of texture creation. GLES2 and
Direct3D 11 use the state at the time of the render copy call.
So the only way we can get the correct behavior consistently is by
making sure the state is set for both the upload *and* the draw call.
This probably isn't our bug to fix (upstream should make itself behave
consistently and also document its functions), but as it stands,
`ffplay` is misrendering BT.709 as BT.601 on my stock Linux system, and
that leaves a bad taste in my mouth.
Signed-off-by: Niklas Haas <git@haasn.dev>
This enables printing to a resource specified with -o OUTPUT.
If the output is not specified, it prints to stdout as usual.
Addresses issue: http://trac.ffmpeg.org/ticket/8024
Signed-off-by: Marton Balint <cus@passwd.hu>
The option was added in commit 39aafa5ee9 but was never documented.
There also do not seem to be any current use cases for it; the tests for
which it was introduced still work, therefore we drop it altogether.
Indirectly fixes trac issue: http://trac.ffmpeg.org/ticket/1698
Signed-off-by: Marton Balint <cus@passwd.hu>
Fix JSON output in case a frame or packet section contains a nested section.
Fix trac issue http://trac.ffmpeg.org/ticket/8680.
Signed-off-by: Marton Balint <cus@passwd.hu>
This is a more appropriate place for this code, since the values we read
from AV_PKT_DATA_QUALITY_STATS side data are primarily written into
video stats. This ensures that the values written into stats actually
apply to the right packet.
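For reference, a sketch of reading that side data (hedged: older releases take an int* size in av_packet_get_side_data(), and only the leading 32-bit quality value is shown here):

    #include <libavcodec/packet.h>
    #include <libavutil/intreadwrite.h>

    /* Pull the encoder-reported quality out of a packet, if present. */
    static int packet_quality(const AVPacket *pkt)
    {
        size_t size;
        uint8_t *sd = av_packet_get_side_data(pkt, AV_PKT_DATA_QUALITY_STATS, &size);
        return (sd && size >= 4) ? (int)AV_RL32(sd) : -1;
    }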
Rename the function to update_video_stats() to better reflect its new
purpose.
It retrieves libavformat's internal dts value (contrary to the
function's name), which is not only incorrect in general, but also
unnecessary because we can access the packet directly.
Its use for muxing is not documented; in practice it is incremented for
each packet successfully passed to the muxer's write_packet(). Since
there is a lot of indirection between ffmpeg receiving a packet from the
encoder and it actually being written (e.g. bitstream filters, the
interleaving queue), using nb_frames here is incorrect.
Add a new counter for packets received from encoder instead.
Since every DLL can use an individual CRT on Windows, having
an exported function that opens a FILE* won't work if that
FILE* is going to be used from a different DLL (or from user
application code).
Internally within the libraries, the issue can be worked around
by duplicating the function in all libraries that use it (this already
happened implicitly because the function resided in file_open.c) and
renaming it to ff_fopen_utf8 (so that it doesn't end up exported from
the DLLs).
This makes the avpriv_fopen_utf8 / ff_fopen_utf8 function work in
the exact same way as the existing avpriv_open / ff_open, with the
same setup as introduced in e743e7ae6e.
That mechanism doesn't work for external users, thus deprecate the
existing function.
Signed-off-by: Martin Storsjö <martin@martin.st>
Provide a header-based inline reimplementation of it.
Using av_fopen_utf8 doesn't work outside of the libraries when built
with MSVC as shared libraries (in the default configuration, where
each DLL gets a separate statically linked CRT).
Signed-off-by: Martin Storsjö <martin@martin.st>
This field is currently used by the checks for
- skipping packets before the first keyframe
- skipping packets before the start time
to test whether any packets have been output already. But since
frame_number is incremented after the bitstream filters are applied
(which may involve delay), this use is incorrect. The keyframe check
works around this by adding an extra flag; the start-time check does
not.
Simplify both checks by replacing the seen_kf flag with a flag tracking
whether any packets have been output by do_streamcopy().
Look for the generic "USR" labels instead of "?" to skip channels with no
known names, and actually print the decomposition of standard channel layouts.
Signed-off-by: James Almer <jamrial@gmail.com>
This is especially useful when debugging subtitle output, but it also shows
whether values are set or not for demuxing and encoding.
Co-authored-by: Jan Ekström <jan.ekstrom@24i.com>
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
It allocates a dummy sws/swr context and tries setting options on it,
apparently to check if they are valid. This is redundant, since the
options will be checked if/when they are later applied on a context that
is actually used for conversion.
It tries to process any unhandled options as AVOptions. Handle this
directly in cmdutils.c, without resorting to a confusing fake option
definition (which is currently visible to the users in -help output).
This avoids unnecessary rebuilds of most source files if only the
list of enabled components has changed, but not the other properties
of the build, set in config.h.
Signed-off-by: Martin Storsjö <martin@martin.st>
This avoids including version.h in all source files, avoiding
unnecessary rebuilds when the version number is bumped. Only
version_major.h is included by the main header, which defines
availability of e.g. FF_API_* macros, and which is bumped much
less often.
This isn't done for libavutil/version.h, because that header needs
to be included essentially everywhere due to LIBAVUTIL_VERSION_INT
being used wherever an AVClass is constructed.
Signed-off-by: Martin Storsjö <martin@martin.st>
The earlier code ignored it for all stream types except
video and subtitles, probably because audio was presumed
to only consist of keyframes. Yet this assumption is not true
for e.g. TrueHD.
Reviewed-by: Jan Ekström <jeebjp@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Bitstream filters inserted between the input and output were never drained,
resulting in packets being lost if the bsf had any buffered.
Signed-off-by: James Almer <jamrial@gmail.com>
A decoder is only opened if there is a decoder for the codec,
so every AVCodecContext here has AVCodecContext.codec set.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This is a prerequisite to continue using the decoder at all
to decode the next interval (if any).
This fixes a regression introduced in commit
2a88ebd096 and reported in ticket #8657.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Fixes ticket 9086.
Since early 2021, some of YouTube's VP9 encodes have non-monotonous DTS.
This makes ffmpeg fatally fail when trying to copy or encode the VP9 video.
ffmpeg already includes functionality to correct this, however it was
disabled without explanation for VP9 stream copies in
2e6636aa87
This patch restores the DTS correction logic and allows ffmpeg to correctly
encode (invalid) videos produced by youtube.com. I have verified that frames
are NOT being cut (so it does not re-introduce ticket 4313).
Reviewed-by: Ronald S. Bultje <rsbultje@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
At present, side data printing forces display at all levels, i.e.
streams, packets and frames. This can bloat the output and also force
decoding of all frames in the selected streams.
Now, stream_side_data[=type], packet_side_data[=type] and
frame_side_data[=type] can be used with -show_entries to specify the
carrier element.
The fftools now print info about which media type(s), if any, are provided by
sink and source avdevices.
Signed-off-by: Diederick Niehorster <dcnieho@gmail.com>
Reviewed-by: Roger Pack <rogerdpack2@gmail.com>
The transpose filter has modes equivalent to "rotation by 90°/270°"
followed by horizontal flips.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
In case of an orthogonal transformation av_display_rotation_get()
returns the (anticlockwise) degree that the unit vector in x-direction
gets rotated by; get_rotation in cmdutils.c makes a clockwise degree
out of this. So if one inserts a transpose filter corresponding to
this degree, then the x-vector gets mapped correctly and there are
two possibilities for the image of the y-vector, namely the two unit
vectors orthogonal to the image of the x-vector.
E.g. if the x-vector gets rotated by 90° clockwise, then the two
possibilities for the y-vector are the unit vector in x direction
or its opposite. The latter case is a simple 90° rotation for both
vectors* whereas the former is a simple 90° clockwise rotation followed
by a horizontal flip. These two cases can be distinguished by looking
at the x-coordinate of the image of the y-vector, i.e. by looking
at displaymatrix[3]. Similarly for the case of a 270° clockwise
rotation.
These two cases were previously wrong (they were made to match
wrongly parsed exif rotation tag values).
*: For display matrices, the y-axis points downward.
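A small illustration of the resulting check (hypothetical helper; the real code inserts the corresponding transpose filter instead of printing):

    #include <stdint.h>
    #include <stdio.h>

    /* theta is the clockwise rotation derived via av_display_rotation_get();
     * displaymatrix[3] is the x-coordinate of the image of the y-vector and
     * distinguishes a pure rotation from rotation + hflip. */
    static void describe_orientation(double theta, const int32_t displaymatrix[9])
    {
        if (theta == 90)
            puts(displaymatrix[3] > 0 ? "90 deg clockwise rotation + hflip"
                                      : "simple 90 deg clockwise rotation");
        else if (theta == 270)
            puts(displaymatrix[3] < 0 ? "270 deg clockwise rotation + hflip"
                                      : "simple 270 deg clockwise rotation");
    }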
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
If 'opts' could not be allocated, exit the program to avoid a crash when releasing it.
Reported-by: TOTE Robot <oslab@tsinghua.edu.cn>
Signed-off-by: Yu Yang <yuyang14@kuaishou.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
If the input stream framerate is known, it will be configured on the
relevant filtergraph input and get propagated to the output stream in
the above line. That makes these assignments redundant.
Do this by switching from the dynamic buffer API to the AVBPrint API;
the former has no defined way to check for errors.
This also avoids allocating an AVIOContext.
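The AVBPrint pattern in question looks roughly like this (a sketch, not the exact new code):

    #include <errno.h>
    #include <libavutil/bprint.h>
    #include <libavutil/error.h>

    /* Build a string and check for allocation failure once, at the end. */
    static int build_string(char **out)
    {
        AVBPrint buf;
        av_bprint_init(&buf, 0, AV_BPRINT_SIZE_UNLIMITED);
        av_bprintf(&buf, "some %s", "text");
        if (!av_bprint_is_complete(&buf)) {   /* an allocation failed */
            av_bprint_finalize(&buf, NULL);
            return AVERROR(ENOMEM);
        }
        return av_bprint_finalize(&buf, out); /* hand ownership to the caller */
    }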
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Do this by switching from the dynamic buffer API to the AVBPrint API;
the former has no defined way to check for errors.
This also avoids allocating an AVIOContext.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The only caller of do_video_out() doesn't need the frame afterwards,
ergo one can replace an av_frame_ref() by av_frame_move_ref().
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Currently, adding a (separately allocated) element to a list of pointers
works by first reallocating the array of pointers and (on success)
incrementing its size and only then allocating the new element.
If the latter allocation fails, the size is inconsistent, i.e.
array[nb_array_elems - 1] is NULL. Our cleanup code crashes in such
scenarios.
Fix this by adding an auxiliary function that atomically allocates
and adds a new element to a list of pointers.
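A sketch of such a helper (hypothetical signature; the actual fftools helper may differ):

    #include <libavutil/mem.h>

    /* Allocate the new element first, then grow the pointer array, so that on
     * every error path the array never contains a NULL entry. */
    static void *list_append(void ***list, int *nb, size_t elem_size)
    {
        void  *elem = av_mallocz(elem_size);
        void **tmp;

        if (!elem)
            return NULL;
        tmp = av_realloc_array(*list, *nb + 1, sizeof(*tmp));
        if (!tmp) {
            av_free(elem);
            return NULL;
        }
        *list        = tmp;
        tmp[(*nb)++] = elem;
        return elem;
    }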
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
choose_pix_fmts() used the dynamic buffer API to write strings;
as is common among uses of this API, only opening the dynamic buffer
was checked, but not the end result, leading to crashes in case
of allocation failure.
Furthermore, some static strings were duplicated; the allocations
performed here were not properly checked: Allocation failure would
be treated as "could not determine pixel format".
The first issue is fixed by switching to the AVBPrint API, which makes it
easy to perform checks at the end. Furthermore, the AVBPrint's internal
buffer avoids almost all allocations.
The AVBPrint also allows solving the second issue in an elegant way,
because it makes it possible to return the static strings directly.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It is not really natural, requires internal allocations of its own,
and its error handling is horrible (i.e. the implicit (re)allocations
here are unchecked).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Don't mark all streams as finished; instead make sync_opts keep track of the
stream's duration and set recording_time to it, the same as in the transcoding paths.
Fixes tickets #9512 and #9513.
Signed-off-by: James Almer <jamrial@gmail.com>
This was almost completely redundant. The only functionality that's no longer
available after this removal is the videotoolbox_pixfmt arg, which has been
obsolete for several years.
The types used by the AVFifo API are inconsistent:
av_fifo_(space|size)() returns an int; av_fifo_alloc() takes an
unsigned, and other parts use size_t. This commit therefore ensures
that the size of the muxing_queue FIFO never exceeds INT_MAX.
While at it, also make sure not to call av_fifo_size()
more often than necessary.
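A sketch of the kind of check meant here (hypothetical helper built on the old AVFifoBuffer API that the muxing queue used at the time):

    #include <errno.h>
    #include <limits.h>
    #include <libavcodec/packet.h>
    #include <libavutil/error.h>
    #include <libavutil/fifo.h>

    /* The queue stores AVPacket* pointers; call av_fifo_size() once and keep
     * the byte size representable in an int. */
    static int mux_queue_put(AVFifoBuffer *queue, AVPacket *pkt)
    {
        const int size = av_fifo_size(queue);

        if (size > INT_MAX - (int)sizeof(pkt))
            return AVERROR(ENOMEM);
        if (av_fifo_space(queue) < (int)sizeof(pkt)) {
            int ret = av_fifo_grow(queue, sizeof(pkt));
            if (ret < 0)
                return ret;
        }
        return av_fifo_generic_write(queue, &pkt, sizeof(pkt), NULL);
    }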
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
send_frame_to_filters() sends a frame to all the filters that
need said frame; for every filter except the last one this involves
creating a reference to the frame, because
av_buffersrc_add_frame_flags() by default takes ownership of
the supplied references. Yet said function has a flag which
changes its behaviour to create a reference itself.
This commit uses this flag and stops creating the references itself;
this makes it possible to remove the spare AVFrame holding the temporary
references; it also avoids unreferencing said frame.
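Roughly (a simplified sketch, not the actual send_frame_to_filters()):

    #include <libavfilter/buffersrc.h>

    /* AV_BUFFERSRC_FLAG_KEEP_REF makes av_buffersrc_add_frame_flags() create
     * its own reference, so the caller keeps ownership of frame throughout. */
    static int send_to_filters(AVFilterContext **inputs, int nb_inputs, AVFrame *frame)
    {
        for (int i = 0; i < nb_inputs; i++) {
            int ret = av_buffersrc_add_frame_flags(inputs[i], frame,
                                                   AV_BUFFERSRC_FLAG_KEEP_REF);
            if (ret < 0)
                return ret;
        }
        return 0;
    }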
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
As well as the custom get_buffer2() implementation, which would become a
redundant wrapper for avcodec_default_get_buffer2() after this.
Signed-off-by: James Almer <jamrial@gmail.com>
Treat values returned from av_dict_get() as const, since they are
internal to AVDictionary.
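For example (a trivial sketch of the intended usage):

    #include <stdio.h>
    #include <libavutil/dict.h>

    /* Entries returned by av_dict_get() are owned by the dictionary;
     * treat them as read-only. */
    static void print_entry(const AVDictionary *dict, const char *key)
    {
        const AVDictionaryEntry *e = av_dict_get(dict, key, NULL, 0);
        if (e)
            printf("%s=%s\n", e->key, e->value);
    }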
Signed-off-by: Chad Fraleigh <chadf@triularity.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Currently, the code doing this is spread over several places and may
behave in unexpected ways. E.g. automatic 'default' marking is only done
for streams fed by complex filtergraphs. It is also applied in the order
in which the output streams are initialized, which is effectively
random.
Move processing the dispositions at the end of open_output_file(), when
we already have all the necessary information.
Apply the automatic default marking only if no explicit -disposition
options were supplied by the user, and apply it to the first stream of
each type (excluding attached pics) when there is more than one stream
of that type and no default markings were copied from the input streams.
Explicitly document the new behavior.
Changes the results of some tests, where the output file gets a default
disposition, while it previously did not.
When viewing logs, it's sometimes useful to be able to see whether
execution was ended via the q command.
Signed-off-by: softworkz <softworkz@hotmail.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
Prevents desktop stutters caused by the change (specifically on KDE).
We're not a game; we don't actually need it disabled.
Reviewed-by: Marton Balint <cus@passwd.hu>
Signed-off-by: Zane van Iperen <zane@zanevaniperen.com>
avcodec_receive_packet() already unreferences the packet on its own.
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The output stream's packet may not have been allocated
at that point. This happens when quitting in the following command line:
$ ./ffmpeg -lavfi abuffer=sample_fmt=u8:sample_rate=48000:channel_layout=stereo -f null -
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Unused since 6b35a83214 (the removal of
ffserver).
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The transpose, rotate, hflip, and vflip filters don't support them.
Fixes ticket #9432.
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: James Almer <jamrial@gmail.com>
Unnecessary since 1f63665ca5, because
the value the option is set to coincides with the default value.
Found-by: Soft Works <softworkz@hotmail.com>
Reviewed-by: Soft Works <softworkz@hotmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
When a device is derived from a source device, there are at least 2
devices, and usually the derived device is the expected one, so let's
pick the last device if the user doesn't specify the filter device with
the filter_hw_device option.
After applying this patch, the command below works:
$> ffmpeg -init_hw_device vaapi=va:/dev/dri/renderD128 -init_hw_device
qsv=hw@va -f lavfi -i yuvtestsrc -vf
format=nv12,hwupload=extra_hw_frames=64 -c:v h264_qsv -y out.h264
Reviewed-by: Soft Works <softworkz@hotmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
This reverts commit 628a73f8f3.
At the time of said commit there was talk of removing the audio bitrate
"ab" option to bring FFmpeg in line with what Libav has done in 2012 in
commit 041cd5a0c5. By having different
option flags for the "ab" and the ordinary bitrate "b" option it is
possible to have different default bitrates for audio and video. In
order to maintain this behaviour and not break user scripts the commit
to be reverted added code to ffmpeg.c that set the bitrate value to the
audio default for audio codecs, but only if AVCodec.defaults didn't
exist (as in this case the default would be codec-default and not
affected by the "ab" removal).
This had the downside of being an API violation, because
AVCodec.defaults is not a public field. Given that the "ab" option
and its audio-specific default value have never been removed,
said API violation can be simply fixed by reverting said commit.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This way the CLI accepts the same values for "filter_threads" as for the
libavcodec-specific option "threads".
Fixes FATE with THREADS=auto which was broken in bdc1bdf3f5.
Signed-off-by: James Almer <jamrial@gmail.com>
These were intended to pass options to auto-inserted avresample
resampling filters. Yet FFmpeg uses swresample for this purpose
(with its own AVDictionary swr_opts similar to resample_opts).
Therefore said options were not forwarded any more since commit
911417f0b34e611bf084319c5b5a4e4e630da940; moreover since commit
420cedd497 avresample options are
not even recognized and ignored any more. Yet there are still
remnants of all of this. This commit gets rid of them.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This reverts commit b3a0548a98, which breaks the usage of swscale
options: scale_sws_opts should be passed to auto-inserted scale
filters.
Signed-off-by: Linjie Fu <linjie.justin.fu@gmail.com>
When both -filter_threads and -threads are specified, the latter takes
effect. Since -threads is an encoder option and -filter_threads is a
filter option, it makes sense for -filter_threads to take
precedence.
Besides being nicer code this also has the advantage of not making
assumptions about the internal implementation: While it is documented
that the AVFilter.inputs and AVFilter.outputs arrays are terminated
by a zeroed sentinel, one is not allowed to infer that one can just
check avfilter_pad_get_name(padarray, i) to see whether one has reached
the sentinel:
It could be that the pointer to the string is contained
in a different structure than AVFilterPad that needs to be accessed
first: return pad->struct->string.
It could be that for small strings an internal buffer in
AVFilterPad is used whereas for longer strings an external string is
used; this is useful to avoid relocations:
return pad->string_ptr ? pad->string_ptr : pad->internal_string
Or it could be that the name has a default value:
return pad->name ? pad->name : "default"
(This actually made sense for us because the name of most of our
AVFilterPads is just "default"; doing so would save lots of relocations.)
The only thing one is allowed to infer from the existence of the
sentinel is that one is allowed to use avfilter_pad_count() to get
the number of pads. Therefore it is used.
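A sketch of the public-API iteration this amounts to (hypothetical helper):

    #include <stdio.h>
    #include <libavfilter/avfilter.h>

    /* Get the pad count from avfilter_pad_count() instead of probing for a
     * sentinel via avfilter_pad_get_name(). */
    static void list_input_pads(const AVFilter *filter)
    {
        int nb = avfilter_pad_count(filter->inputs);
        for (int i = 0; i < nb; i++)
            printf("input pad %d: %s\n", i,
                   avfilter_pad_get_name(filter->inputs, i));
    }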
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Currently the user may use '-init_hw_device type=name' to initialize a hw
device; however, the key parameter is ignored when using '-init_hw_device
type=name,key=value'. After applying this patch, the user may set the key
parameter if needed.
Reviewed-by: Soft Works <softworkz@hotmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
This allows the user to set hw_device_ctx instead of hw_frames_ctx for QSV
decoders, hence we may remove the ad-hoc libmfx setup code from FFmpeg.
"-hwaccel_output_format format" is applied to QSV decoders after
removing the ad-hoc libmfx code. In order to keep compatibility with old
commandlines, the default format is set to AV_PIX_FMT_QSV, but this
behavior will be removed in the future. Please set "-hwaccel_output_format qsv"
explicitly if AV_PIX_FMT_QSV is expected.
The normal device stuff works for QSV decoders now; the user may use
"-init_hw_device args" to initialise a device and "-hwaccel_device
devicename" to select a device for QSV decoders.
"-qsv_device device" which was added for workarounding device selection
in the ad-hoc libmfx code still works
For example:
$> ffmpeg -init_hw_device qsv=qsv:hw_any,child_device=/dev/dri/card0
-hwaccel qsv -c:v h264_qsv -i input.h264 -f null -
/dev/dri/renderD128 is actually opened for the h264_qsv decoder in the above
command without this patch. After applying this patch, /dev/dri/card0
is used.
$> ffmpeg -init_hw_device vaapi=va:/dev/dri/card0 -init_hw_device
qsv=hw@va -hwaccel_device hw -hwaccel qsv -c:v h264_qsv -i input.h264
-f null -
device hw of type qsv is not usable in the above command without this
patch. After applying this patch, this command works as expected.
Reviewed-by: Soft Works <softworkz@hotmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
677a030b26 introduced more printable
side data types in ffprobe; however, the Audio Service Type side data
'type' field that was introduced aliases an existing field of the same
name within the side data array, which can lead to JSON output like:
"side_data_list": [
{
"side_data_type": "Audio Service Type",
"type": 0
},
{
"side_data_type": "Stereo 3D",
"type": "side by side",
"inverted": 1
}
]
This, while technically valid JSON, is considered bad practice, since it
forces all downstream users to manually parse it and check all types;
it makes simple deserialization impossible. Worse, in some loosely
typed languages, it can lead to silent bugs if existing code assumed
it was a different type.
As such, rename this second "type" field to "service_type".
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
11d3b03fcb added consideration of default stream disposition for audio
and video when choosing the 'best' stream among all the inputs. This can
lead to video streams with lower resolution or audio streams with fewer
channels being selected.
Stream disposition, however, only sets a priority for a stream
among all other streams in the *same input*. It cannot set a priority
for a stream across all inputs.
This patch takes a middle way: it selects the best stream from each file
with default disposition considered. Then it discards the disposition weight
and selects the best stream as per the original criteria of highest
resolution for video and most channels for audio.
Having the override before autodetection meant that the overridden
value got overwritten by the autodetected result each time,
effectively disabling the ability to utilize the `-top` option
for override purposes.
Somehow I missed this in fbb44bc51a,
even though the lines were within the context. Probably the code
originally being after this logic had something to do with it,
but previously it only touched the avformat context's codecpar,
which did not affect the encoder codec context whatsoever.
Fixes #9320. Fixes #9339.
Read rate enforcement is delayed until the first decoded frame is obtained,
to speed up the initialization of output streams.
Thanks to Linjie Fu <linjie.justin.fu@gmail.com> for the initial patch.
If the input start time is not 0, -t is inaccurate when doing a stream copy:
extra duration gets recorded according to the input start time.
The check should be based on the following cases, assuming the input video's
start time is 60s and its duration is 300s:
1. stream copy:
ffmpeg -ss 40 -t 60 -i in.mp4 -c copy -y out.mp4
open_input_file() will seek to 100 and set ts_offset to -100,
process_input() will offset pkt->pts by ts_offset so that it starts at 0,
so do_streamcopy() with -t exits when ist->pts >= recording_time.
2. stream copy with -copyts:
ffmpeg -ss 40 -t 60 -copyts -i in.mp4 -c copy -y out.mp4
open_input_file() will seek to 100 and set ts_offset to 0,
process_input() will keep the raw pkt->pts as ts_offset is 0,
so do_streamcopy() with -t exits when
ist->pts >= (recording_time + f->start_time + f->ctx->start_time).
3. stream copy with -copyts -start_at_zero:
ffmpeg -ss 40 -t 60 -copyts -start_at_zero -i in.mp4 -c copy -y out.mp4
open_input_file() will seek to 120 and set ts_offset to -60 due to the
-start_at_zero option,
process_input() will offset pkt->pts by the input file start time,
so do_streamcopy() with -t exits when ist->pts >= (recording_time + f->start_time).
0      60    40      60                     360
|_______|_____|_______|_______________________|
      start -ss      -t
This fixes ticket #9141.
Signed-off-by: Shiwang.Xie <shiwang.xie666@outlook.com>
xmllint (silently) replaces the ' with " when fixing and validating the output
of ffprobe in fate-ffprobe_xsd.
Reviewed-by: Tobias Rapp <t.rapp@noa-archive.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Besides being unnecessary it is also safer: If the error for an
unrecognized option were triggered (which seems to be impossible right
now), it might be that the stream whose codecpar is accessed is NULL.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Otherwise the rate emulation logic in `transcode_step` never gets
hit, and the unavailability flag never gets reset, leading to an
eternal loop with some rate emulation use cases.
This change was missed during the rework of ffmpeg.c, in which
encoder initialization was moved further down the timeline in
commit 67be1ce0c6. Previously,
as the encoder initialization had happened earlier, this state was
not possible (flow getting as far as hitting the rate emulation logic,
yet not having the encoder initialized yet).
Fixes #9160
Use AVSTREAM_EVENT_FLAG_NEW_PACKETS instead, which should provide the
same information in this case.
Finishes removing all uses of this field as started by 87f0c8280c.
Signed-off-by: James Almer <jamrial@gmail.com>
This also allows using pointers to const AVCodec exclusively in
fftools/ffmpeg_opt.c.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Deprecated in c29038f304.
The resample filter based upon this library has been removed as well.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Signed-off-by: James Almer <jamrial@gmail.com>
The "packets_and_frames" element has been added to ffprobe.xsd in
0c9f0da0f7 but apparently removing the
check in ffprobe.c has been forgotten.
Signed-off-by: Tobias Rapp <t.rapp@noa-archive.com>
The MJPEG encoder supports some pixel format/color range combinations
only when strictness is set to unofficial or less. Before commit
059fc2d9da said encoder's pix_fmts array
only included the pixel formats supported with default strictness.
When strictness was <= unofficial, fftools/ffmpeg_filter.c used
an extended list of pixel formats instead of the encoder's, including
the pixel formats only supported when strictness <= unofficial.
Said commit turned the logic around: The encoder's pix_fmts array now
included all pixel formats and fftools/ffmpeg_filter.c instead used
a small list of the pixel formats supported when strictness is >
unofficial, rather than the encoder's pixel formats. In particular,
the codec's pix_fmts are not used when strictness is normal.
This works for the mjpeg encoder; yet it did not work for other
(hardware-based) mjpeg encoders, because the check for whether one is
using the MJPEG encoder is wrong: It just checks the codec id.
So if one used strict unofficial with a hardware-accelerated MJPEG
encoder before commit 059fc2d9da, the unofficial (non-hardware)
pixel formats of the MJPEG encoder would be used; since said commit
the codec's pixel formats are overridden at ordinary strictness
by the ordinary MJPEG pixel formats. This leads to format conversion
errors later on, which were reported in #9186.
The solution to this is to check AVCodec.name instead of its id.
Fixes ticket #9186.
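The check described above amounts to something like this (a sketch):

    #include <string.h>
    #include <libavcodec/avcodec.h>

    /* Match the native software MJPEG encoder by name; checking codec->id
     * would also match hardware encoders such as mjpeg_qsv or mjpeg_vaapi. */
    static int is_native_mjpeg_encoder(const AVCodec *codec)
    {
        return codec && !strcmp(codec->name, "mjpeg");
    }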
Tested-by: Eoff, Ullysses A <ullysses.a.eoff@intel.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Currently said list contains only the pixel formats that are always
supported irrespective of the range and the value of
strict_std_compliance. This makes the MJPEG encoder an outlier as all
other codecs put all potentially supported pixel formats into said list
and error out if the chosen pixel format is unsupported. This commit
brings it therefore in line with the other encoders.
The behaviour of fftools/ffmpeg_filter.c has been preserved. A more
informed decision would be possible if colour range were available
at this point, but it isn't.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Only one character is actually rewritten.
Fixes truncation warnings, such as
warning: ‘strncpy’ output truncated before terminating nul copying 3 bytes from a string of the same length [-Wstringop-truncation]
in gcc 10.2.0
This cap is currently used to mark multithreading-capable codecs that
wrap external libraries with their own multithreading code. The name is
highly confusing for our API users, since libavcodec ALWAYS handles
thread_count=0 (see commit message in previous commit). Therefore rename
the cap and update its documentation to make its meaning clear.
The old name is kept deprecated until next+1 major bump.
If the window was resized, it was possible that xpos pointed outside the
visualization texture. By rearranging the overflow check we make sure this (and
a crash) does not happen.
We also don't have to use xleft for the start position, as that is 0 anyway;
if we ever want to take xleft into account, then the texture should be
positioned accordingly when rendering.
Signed-off-by: Marton Balint <cus@passwd.hu>
The obstacle to doing so was in filter_codec_opts: It searches
the AVCodec for options via the AV_OPT_SEARCH_FAKE_OBJ method, which
requires using a void * that points to a pointer to a const AVClass.
When using const AVCodec *, one can not simply use a pointer that points
to the AVCodec's pointer to its AVClass, as said pointer is const, too.
This is fixed by using a temporary pointer to the AVClass.
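A sketch of that workaround (hypothetical helper, simplified from what filter_codec_opts() does):

    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>

    /* With a const AVCodec *, &codec->priv_class is a pointer to a const
     * pointer and cannot be passed as the fake object directly; copy the
     * AVClass pointer into a temporary first. */
    static int codec_has_priv_option(const AVCodec *codec, const char *name, int opt_flags)
    {
        const AVClass *priv_class = codec->priv_class;
        return priv_class &&
               av_opt_find(&priv_class, name, NULL, opt_flags,
                           AV_OPT_SEARCH_FAKE_OBJ) != NULL;
    }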
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>