ffmpeg_opt.c currently contains code for
- parsing the options provided on the command line
- opening and initializing input files based on these options
- opening and initializing output files based on these options
The code dealing with each of these is for the most part disjoint, so it
makes sense to move them to separate files. Beyond reducing the quite
considerable size of ffmpeg_opt.c, this will also allow exposing muxer
internals (currently private to ffmpeg_mux.c) to the initialization
code, thus removing the awkward separation currently in place.
This simplifies the code: the error buffer is not needed anywhere
else, so the av_err2str helper macro can be used.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
av_err2str, which is a wrapper for av_strerror, already calls
strerror_r if available and, if not, has a fallback for the other
error codes that would be handled by it, so manually calling
strerror again if av_strerror fails is not necessary.
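A minimal sketch of the simplification (the "before" shape is reconstructed
for illustration, not copied from the tree):

    /* before: manual buffer plus a redundant strerror() fallback */
    char errbuf[128];
    if (av_strerror(err, errbuf, sizeof(errbuf)) < 0)
        snprintf(errbuf, sizeof(errbuf), "%s", strerror(AVUNERROR(err)));
    av_log(NULL, AV_LOG_ERROR, "Error: %s\n", errbuf);

    /* after: av_err2str() already covers both cases internally */
    av_log(NULL, AV_LOG_ERROR, "Error: %s\n", av_err2str(err));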
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Currently it would essentially change the find_stream_info setting for
the file it was specified for and all following files, which is unusual
and somewhat unexpected behaviour for a per-file option, and is not even
documented to behave like this.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
It has been deprecated in favor of the aresample filter for almost 10
years.
Another thing this option can do is drop audio timestamps and have them
generated by the encoding code or the muxer, but
- for encoding, this can already be done with the setpts filter
- for muxing, this should almost never be done, as timestamp generation by
  the muxer is deprecated; people who really want to do this can use
  the setts bitstream filter
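For reference, the usual replacement for -async via the aresample filter
(an illustrative command, not part of this change):

    $ ffmpeg -i in.mkv -af aresample=async=1 out.mkv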
av_display_rotation_get will return NAN when the display matrix is invalid,
which would end up printing NAN as an integer in the rotation field. This
is poor for multiple reasons:
* Users of ffprobe have no way of discerning "valid but ugly rotation from
display matrix" from "invalid display matrix".
* It can have unintended consequences on some platforms, such as Linux x86_64,
where casting NAN to an integer yields INT64_MIN. When printed as JSON, which
uses floating point for all numbers, this can end up as invalid JSON or with
a number that cannot be reserialized as an integer at all.
Since NAN is av_display_rotation_get's error case, just print 0 (no rotation)
when that happens.
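The change amounts to something like (simplified sketch, not the exact
ffprobe code):

    double rotation = av_display_rotation_get(matrix);
    if (isnan(rotation))   /* av_display_rotation_get()'s error case */
        rotation = 0;      /* report "no rotation" instead of a garbage integer */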
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
There are two issues here. Firstly, the floating-point comparison
is always true. Secondly, the code implicitly depends on the default
value of min_hard_comp, which can be dangerous.
Partially fixes ticket 9859.
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
For example, if the jpeg contains exif information
and the rotation direction is included in the exif,
the displaymatrix will be set on the side_data of the frame when decoding.
However, when ffplay is used to play the image,
only the side data in the stream is examined;
it is not checked whether the frame also contains rotation information,
so the image is played in the wrong orientation.
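A hedged sketch of the intended lookup order (variable names illustrative):

    /* prefer rotation info attached to the decoded frame, then the stream */
    const int32_t *dm = NULL;
    AVFrameSideData *sd = av_frame_get_side_data(frame, AV_FRAME_DATA_DISPLAYMATRIX);
    if (sd)
        dm = (const int32_t *)sd->data;
    if (!dm)
        dm = (const int32_t *)av_stream_get_side_data(st, AV_PKT_DATA_DISPLAYMATRIX, NULL);
    double theta = dm ? av_display_rotation_get(dm) : 0;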
Reviewed-by: Zhao Zhili <zhilizhao@tencent.com>
Signed-off-by: Wang Yaqiang <wangyaqiang03@kuaishou.com>
It may be NULL, as is the case for D3D11VA_VLD.
Running "ffmpeg -h decoder=h264" on a Windows build
Before:
Decoder h264 [H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10]:
Supported hardware devices: dxva2 (null) d3d11va cuda
After:
Decoder h264 [H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10]:
Supported hardware devices: dxva2 d3d11va cuda
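The guard is roughly (sketch):

    const char *name = av_hwdevice_get_type_name(config->device_type);
    if (name)          /* skip configs with no device type, e.g. D3D11VA_VLD */
        printf(" %s", name);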
Signed-off-by: James Almer <jamrial@gmail.com>
This is designed to improve and unify error handling for
allocation failures for the many (often small) allocations that we have
in the fftools. These typically either return no error message at all
or one that is not really helpful to the user, and can be replaced by a
generic error message without loss of information.
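The general shape is something like the following sketch (helper name
illustrative):

    /* one generic fatal-error path instead of many ad-hoc messages */
    static void report_and_exit(int ret)
    {
        av_log(NULL, AV_LOG_FATAL, "%s\n", av_err2str(ret));
        exit_program(1);
    }

    /* typical call site for a small allocation */
    buf = av_mallocz(size);
    if (!buf)
        report_and_exit(AVERROR(ENOMEM));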
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
update_video_stats() currently uses OutputStream.data_size to print the
total size of the encoded stream so far and the average bitrate.
However, that field is updated in the muxer thread, right before the
packet is sent to the muxer. Not only is this racy, but the numbers may
not match even if muxing was in the main thread due to bitstream
filters, filesize limiting, etc.
Introduce a new counter, data_size_enc, for total size of the packets
received from the encoder and use that in update_video_stats(). Rename
data_size to data_size_mux to indicate its semantics more clearly.
No synchronization is needed for data_size_mux, because it is only read
in the main thread in print_final_stats(), which runs after the muxer
threads are terminated.
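In sketch form (field names as described above):

    /* main thread: right after a packet is received from the encoder */
    ost->data_size_enc += pkt->size;

    /* muxer thread: right before the packet is sent to the muxer */
    ost->data_size_mux += pkt->size;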
It is either equal to OutputStream.enc_ctx->codec, or NULL when enc_ctx
is NULL. Replace the use of enc with enc_ctx->codec, or the equivalent
enc_ctx->codec_* fields where more convenient.
ost->enc is always non-NULL here, since
- this code is never called for streamcopy
- opening the output file will fail if an encoder cannot be found, so
filters are never initialized
This code cannot be triggered, since after 90944ee3ab opening the
output file will abort if an encoder cannot be found and streamcopy was
not explicitly requested.
It races with the demuxing thread. Instead, send the information along
with the demuxed packets.
Ideally, the code should stop using the stream-internal parsing
completely, but that requires considerably more effort.
Fixes races, e.g. in:
- fate-h264-brokensps-2580
- fate-h264-extradata-reload
- fate-iv8-demux
- fate-m4v-cfr
- fate-m4v
Don't silently replace it with the default layout for the number of channels
from the requested layout.
Should fix ticket #9869
Signed-off-by: James Almer <jamrial@gmail.com>
c11fb46731 led to a regression whereby the return code for a missing
input or a failed input probe is overridden by the writer close return
code and hence not conveyed in the exit code.
Use it instead of AVStream.codecpar in the main thread. While
AVStream.codecpar is documented to be updated only when the stream is
added or in avformat_find_stream_info(), it is actually also updated
during demuxing. Accessing it from a different thread therefore
constitutes a race.
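A minimal sketch of keeping such a main-thread copy (the field name is
hypothetical):

    /* taken once, while the input stream is being set up */
    ist->par = avcodec_parameters_alloc();
    if (!ist->par)
        return AVERROR(ENOMEM);
    ret = avcodec_parameters_copy(ist->par, ist->st->codecpar);
    if (ret < 0)
        return ret;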
Ideally, some mechanism should eventually be provided for signalling
parameter updates to the user. Then the demuxing thread could pick up
the changes and propagate them to the decoder.
Discontinuity detection/correction is left in the main thread, as it is
entangled with InputStream.next_dts and related variables, which may be
set by decoding code.
Fixes races e.g. in fate-ffmpeg-streamloop after
aae9de0cb2.
This will allow moving normal offset handling to the demuxer thread;
discontinuities currently have to be processed in the main thread, as
that code uses some decoder-produced values.
InputFile.ts_offset can change during transcoding, due to discontinuity
correction. This should not affect the streamcopy starting timestamp.
Cf. bf2590aed3
-stream_loop is currently handled by destroying the demuxer thread,
seeking, then recreating it anew. This is very messy and conflicts with
the future goal of moving each major ffmpeg component into its own
thread.
Handle -stream_loop directly in the demuxer thread. Looping requires the
demuxer to know the duration of the file, which takes into account the
duration of the last decoded audio frame (if any). Use a thread message
queue to communicate this information from the main thread to the
demuxer thread.
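A hedged sketch of the hand-off (struct and variable names are
illustrative, not the actual code):

    typedef struct LoopMsg { int64_t duration; AVRational tb; } LoopMsg;

    /* main thread, after decoding an audio frame */
    LoopMsg msg = { .duration = audio_duration, .tb = audio_tb };
    av_thread_message_queue_send(queue, &msg, AV_THREAD_MESSAGE_NONBLOCK);

    /* demuxer thread, when seeking back to loop */
    LoopMsg last;
    while (av_thread_message_queue_recv(queue, &last, AV_THREAD_MESSAGE_NONBLOCK) >= 0)
        file_duration = last.duration;   /* keep the most recent value */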
This avoids a potential race with the demuxer adding new streams. It is
also more efficient, since we no longer do inter-thread transfers of
packets that will be just discarded.
This undocumented feature runtime-enables dumping input packets. I can
think of no reasonable real-world use case that cannot also be
accomplished in a different way. Keeping this functionality would
interfere with the following commit moving it to the input thread (then
setting the variable would require locking or atomics, which would be
unnecessarily complicated for a feature that probably nobody uses).
There are currently three possible modes for an output stream:
1) The stream is produced by encoding output from some filtergraph. This
is true when ost->enc_ctx != NULL, or equivalently when
ost->encoding_needed != 0.
2) The stream is produced by copying some input stream's packets. This
is true when ost->enc_ctx == NULL && ost->source_index >= 0.
3) The stream is produced by attaching some file directly. This is true
when ost->enc_ctx == NULL && ost->source_index < 0.
OutputStream.stream_copy is currently used to identify case 2), and
sometimes, confusingly (or even incorrectly), also case 1). Remove
it, replacing its usage with checks on the enc_ctx/source_index values.
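In code terms, the three cases reduce to checks along these lines (sketch):

    /* 1) encoded output from a filtergraph */
    int is_encoded    =  ost->enc_ctx != NULL;
    /* 2) streamcopy of an input stream's packets */
    int is_streamcopy = !ost->enc_ctx && ost->source_index >= 0;
    /* 3) directly attached file */
    int is_attachment = !ost->enc_ctx && ost->source_index <  0;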
Usually a HW decoder is expected when the user specifies a HW
acceleration method via the -hwaccel option; however, the current
implementation doesn't take the HW acceleration method into account,
so it is possible for a SW decoder to be selected.
For example:
$ ffmpeg -hwaccel vaapi -i av1.mp4 -f null -
$ ffmpeg -hwaccel nvdec -i av1.mp4 -f null -
$ ffmpeg -hwaccel vdpau -i av1.mp4 -f null -
[...]
Stream #0:0 -> #0:0 (av1 (libdav1d) -> wrapped_avframe (native))
libdav1d is selected in this case even if vaapi, nvdec or vdpau is
specified.
After applying this patch, the native av1 decoder (with vaapi, nvdec or
vdpau support) is selected for decoding (libdav1d is still used for
probing the format).
$ ffmpeg -hwaccel vaapi -i av1.mp4 -f null -
$ ffmpeg -hwaccel nvdec -i av1.mp4 -f null -
$ ffmpeg -hwaccel vdpau -i av1.mp4 -f null -
[...]
Stream #0:0 -> #0:0 (av1 (native) -> wrapped_avframe (native))
Tested-by: Mario Roy <marioeroy@gmail.com>
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
After applying this patch, the desired HW acceleration method is known
before selecting the decoder, so we may take the HW acceleration method
into account when selecting the decoder for an input stream in the next
commit. There should be no functional changes in this patch.
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
The streamcopy initialization code briefly needs an AVCodecContext to
apply AVOptions to. Allocate a temporary codec context, do not use the
encoding one.
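Roughly (a sketch; the options-dictionary name is illustrative):

    AVCodecContext *tmp = avcodec_alloc_context3(NULL);
    if (!tmp)
        return AVERROR(ENOMEM);
    ret = avcodec_parameters_to_context(tmp, ist->st->codecpar);
    if (ret >= 0)
        ret = av_opt_set_dict(tmp, &codec_opts);   /* apply the user's AVOptions */
    if (ret >= 0)
        ret = avcodec_parameters_from_context(ost->st->codecpar, tmp);
    avcodec_free_context(&tmp);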
Using tail calls with functions returning void is forbidden
(C99/C11 6.8.6.4: "A return statement with an expression shall not appear
in a function whose return type is void.") GCC emits a warning
because of this when using -pedantic: "ISO C forbids ‘return’ with
expression, in function returning void"
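Illustration of the pattern in question:

    static void do_work(void);

    /* invalid per C99/C11 6.8.6.4, even though do_work() returns void */
    static void wrapper_bad(void)
    {
        return do_work();
    }

    /* valid replacement */
    static void wrapper_ok(void)
    {
        do_work();
    }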
Reviewed-by: Hendrik Leppkes <h.leppkes@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It retrieves the muxer's internal timestamp with under-defined
semantics. Continuing to use this value would also require
synchronization once the muxer is moved to a separate thread.
Replace the value with last_mux_dts.
This field means different things when the video is encoded (number of
frames emitted to the encoding sync queue/encoder by the video sync
code) or copied (number of packets sent to the muxer sync queue).
Print the value of packets_written instead, which means the same thing
in both cases. It is also more accurate, since packets may be dropped by
the sync queue or bitstream filters.
Same issues apply to it as to -shortest.
Changes the results of the following tests:
- matroska-flac-extradata-update
The test reencodes two input FLAC streams into three output FLAC
streams. The last output stream is limited to 8 frames. The current
code results in the first two output streams having 12 frames, after
this commit all three streams have 8 frames and are the same length.
This new result is better, since it is predictable.
- mkv-1242
The test streamcopies one video and one audio stream, video is limited
to 11 frames. The new result shortens the audio stream so that it is
not longer than the video.
The -shortest option (which finishes the output file at the time the
shortest stream ends) is currently implemented by faking the -t option
when an output stream ends. This approach is fragile, since it depends
on the frames/packets being processed in a specific order. E.g. there
are currently some situations in which the output file length will
depend unpredictably on unrelated factors like encoder delay. More
importantly, the present work aiming at splitting various ffmpeg
components into different threads will make this approach completely
unworkable, since the frames/packets will arrive in effectively random
order.
This commit introduces a "sync queue", which is essentially a collection
of FIFOs, one per stream. Frames/packets are submitted to these FIFOs
and are then released for further processing (encoding or muxing) when
it is ensured that the frame in question will not cause its stream to
get ahead of the other streams (the logic is similar to libavformat's
interleaving queue).
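A heavily simplified sketch of the release condition (names are
illustrative and timestamps are assumed rescaled to a common timebase):

    typedef struct SyncQueueStream {
        int64_t head_dts;   /* dts of the oldest queued packet/frame */
        int     nb_queued;  /* items currently waiting in this FIFO */
        int     finished;   /* stream ended; it never blocks the others */
    } SyncQueueStream;

    /* stream 'idx' may release its head item only if every other active
     * stream already has something queued and none of them is behind it */
    static int can_release(const SyncQueueStream *s, int nb_streams, int idx)
    {
        for (int i = 0; i < nb_streams; i++) {
            if (i == idx || s[i].finished)
                continue;
            if (!s[i].nb_queued || s[i].head_dts < s[idx].head_dts)
                return 0;
        }
        return 1;
    }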
These sync queues are then used for encoding and/or muxing when the
-shortest option is specified.
A new option – -shortest_buf_duration – controls the maximum number of
queued packets, to avoid runaway memory usage.
This commit changes the results of the following tests:
- copy-shortest[12]: the last audio frame is now gone. This is
correct, since it actually outlasts the last video frame.
- shortest-sub: the video packets following the last subtitle packet are
now gone. This is also correct.
The following commits will add a new buffering stage after bitstream
filters, which should not be taken into account for choosing the next
output.
OutputStream.last_mux_dts is also used by the muxing code to make up
missing DTS values - that field is now moved to the muxer-private
MuxStream object.
The current placement of this free is historical - it used to be
followed by avcodec_close(), since removed.
The proper place for freeing the stats is currently right before the
encoder context itself is freed.
It is currently called from two places:
- output_packet() in ffmpeg.c, which submits the newly available output
packet to the muxer
- from of_check_init() in ffmpeg_mux.c after the header has been
written, to flush the muxing queue
Some packets will thus be processed by this function twice, so it
requires an extra parameter to indicate the place it is called from and
avoid modifying some state twice.
This is fragile and hard to follow, so split this function into two.
Also rename of_write_packet() to of_submit_packet() to better reflect
its new purpose.
The muxing queue currently lives in OutputStream, which is a very large
struct storing the state for both encoding and muxing. The muxing queue
is only used by the code in ffmpeg_mux, so it makes sense to restrict it
to that file.
This is the first step towards reducing the scope of OutputStream.
Figure out earlier whether the output stream/file should be bitexact and
store this information in a flag in OutputFile/OutputStream.
Stop accessing the muxer in set_encoder_id(), which will become
forbidden in future commits.
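The flags boil down to checks along these lines (sketch; struct fields
are illustrative):

    of->bitexact  = !!(of->ctx->flags & AVFMT_FLAG_BITEXACT);
    ost->bitexact = of->bitexact ||
                    (ost->enc_ctx && (ost->enc_ctx->flags & AV_CODEC_FLAG_BITEXACT));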
The current code postpones closing the files until after printing the
final report, which accesses the output file size. Deal with this by
storing the final file size before closing the file.
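In sketch form (field name illustrative):

    /* remember the size while the AVIOContext is still open ... */
    of->final_filesize = avio_size(oc->pb);
    if (of->final_filesize <= 0)
        of->final_filesize = avio_tell(oc->pb);
    /* ... and only then close the file */
    avio_closep(&oc->pb);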
Move the file size checking code to ffmpeg_mux. Use the recently
introduced of_filesize(), making this code consistent with the size
shown by print_report().
Move header_written into it, which is not (and should not be) used by
any code outside of ffmpeg_mux.
In the future this context will contain more muxer-private state that
should not be visible to other code.
This is a per-file input option that adjusts an input's timestamps
with reference to another input, so that emitted packet timestamps
account for the difference between the start times of the two inputs.
Typical use case is to sync two or more live inputs such as from capture
devices. Both the target and reference input source timestamps should be
based on the same clock source.
If either input lacks starting timestamps, then no sync adjustment is made.
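One plausible shape of the adjustment (sign convention and field names are
assumptions, not the actual code):

    /* both start times in AV_TIME_BASE units, derived from the same clock */
    int64_t delta = target_start_time - reference_start_time;
    pkt->pts += av_rescale_q(delta, AV_TIME_BASE_Q, ist->st->time_base);
    pkt->dts += av_rescale_q(delta, AV_TIME_BASE_Q, ist->st->time_base);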
These packets need not be writable (and are not modified by us),
so it is best to access them via const uint8_t*.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Regression since 13350e81fd.
Fix looking for the .ffmpeg subfolder in FFMPEG_DATADIR and, conversely,
not looking for it in HOME.
Fix the search order (documentation).
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>