The field is a standard field, yet we were loading it as if it were
a quadword. This worked for forward transforms by chance, but broke
for inverse transforms.
checkasm couldn't catch that because we only test forward transforms,
which are identical to inverse transforms but with a different revtab.
If the input start time is not 0, -t is inaccurate when doing stream copy:
extra duration gets recorded according to the input start time.
Consider the following cases (the resulting stop thresholds are worked out
below the diagram), with an input video whose start time is 60s and whose
duration is 300s:
1. stream copy:
ffmpeg -ss 40 -t 60 -i in.mp4 -c copy -y out.mp4
open_input_file() will seek to 100 and set ts_offset to -100,
process_input() will offset pkt->pts with ts_offset to make it 0,
so do_streamcopy() with -t exits when ist->pts >= recording_time.
2. stream copy with -copyts:
ffmpeg -ss 40 -t 60 -copyts -i in.mp4 -c copy -y out.mp4
open_input_file() will seek to 100 and set ts_offset to 0,
process_input() will keep raw pkt->pts as ts_offset is 0,
so do_streamcopy() with -t exits when
ist->pts >= (recording_time + f->start_time + f->ctx->start_time).
3. stream copy with -copyts -start_at_zero:
ffmpeg -ss 40 -t 60 -copyts -start_at_zero -i in.mp4 -c copy -y out.mp4
open_input_file() will seek to 120 and set ts_offset to -60 because of the
-start_at_zero option,
process_input() will offset pkt->pts by the input file's start time,
so do_streamcopy() with -t exits when ist->pts >= (recording_time + f->start_time).
0       60        100        160                     360
|_______|_________|__________|_______________________|
     start       -ss        -t
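Plugging the example numbers into the conditions above (taking f->start_time
as the -ss value, 40, and f->ctx->start_time as the input's own start time,
60), stream copy should stop at:
case 1: recording_time                                       = 60
case 2: recording_time + f->start_time + f->ctx->start_time  = 60 + 40 + 60 = 160
case 3: recording_time + f->start_time                       = 60 + 40 = 100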
This fixes ticket #9141.
Signed-off-by: Shiwang.Xie <shiwang.xie666@outlook.com>
av_shrink_packet() takes an int size, so the size must fit in an int.
Fixes: out of array access
Fixes: 35607/clusterfuzz-testcase-minimized-ffmpeg_dem_MXF_fuzzer-4875541323841536
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
xmllint (silently) replaces the ' with " when fixing and validating the output
of ffprobe in fate-ffprobe_xsd.
Reviewed-by: Tobias Rapp <t.rapp@noa-archive.com>
Signed-off-by: James Almer <jamrial@gmail.com>
When streaming mode is enabled, the DASH manifest is written on the
first packet for the segment so that the segment can be advertised
immediately to clients. However, the manifest was also still being written
at the end of the segment, resulting in a duplicate write.
Avoids empty "Channel" or "Overall" header lines added to log output
when measurement is restricted to one scope using
"measure_perchannel=none" or "measure_overall=none".
Signed-off-by: Tobias Rapp <t.rapp@noa-archive.com>
Adds schema validation for ffprobe XML output so that updating the
ffprobe.xsd file upon changes to ffprobe is not forgotten. This was
suggested by Marton Balint in:
http://ffmpeg.org/pipermail/ffmpeg-devel/2021-March/278428.html
The schema FATE test is only run if the xmllint command is available.
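The validation can be reproduced manually with something like the following
(file names are illustrative; the exact FATE invocation may differ):
ffprobe -of xml -show_format -show_streams input.mkv > out.xml
xmllint --noout --schema doc/ffprobe.xsd out.xml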
Signed-off-by: Tobias Rapp <t.rapp@noa-archive.com>
Some encoders send GET_PARAMETER requests as a keep-alive mechanism.
If the client doesn't reply with an OK message, the encoder will close
the session. This was encountered with the impath i5110 encoder, when
the RTSP Keep-Alive checkbox is enabled under streaming settings.
Alternatively, one may set the X-No-Keepalive: 1 header, but this is more
of a workaround. It is better practice to respond to an encoder's
keep-alive request than to disable a mechanism which may be manufacturer
specific.
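For reference, such a keep-alive exchange looks roughly like the following
(URL, CSeq and Session values are illustrative):
GET_PARAMETER rtsp://192.168.0.10/stream RTSP/1.0
CSeq: 9
Session: 12345678

RTSP/1.0 200 OK
CSeq: 9
Session: 12345678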
Signed-off-by: Hayden Myers <hmyers@skylinenet.net>
Signed-off-by: Martin Storsjö <martin@martin.st>
We already require X264_BUILD >= 118, which defines X264_CSP_BGR
unconditionally, thus making this check effectively always true.
This makes the libx264rgb check work when pkg-config is utilized
and x264.h is not part of the standard include path (as is often the
case with cross-compilation, or when you simply have a custom prefix,
e.g. in your home directory).
The X264_BUILD >= 118 required by configure since 2011 should have
X264_CSP_BGR defined unconditionally (it was added a few X264_BUILD
updates earlier), but as 134cba728b
added this additional check, I have kept it for now.
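For illustration (the prefix path is made up), pkg-config is what reports
the extra include directory in such setups:
pkg-config --cflags x264
(prints e.g. -I/home/user/prefix/include)
so the libx264rgb compile check only finds x264.h when those flags are used.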
After fixing AV_PKT_DATA_SKIP_SAMPLES for reading vorbis packets from ogg,
fewer samples are actually decoded. Three FATE tests are failing:
fate-vorbis-20:
The samples in 6.ogg are not frame aligned. The 6.pcm file was generated by
ffmpeg before the fix, so the decoded PCM no longer matches it after the fix.
Ideally the reference file 6.pcm should be updated, but it is probably not
worth including another copy of the same file, only smaller.
SIZE_TOLERANCE is added for this test case.
fate-webm-dash-chapters:
The original vorbis_chapter_extension_demo.ogg is transmuxed to dash-webm.
The ref file webm-dash-chapters needs to be updated.
fate-vorbis-encode:
This exposes another bug in the vorbis encoder: initial_padding is not
correctly set. That bug is fixed in the previous patch.
Signed-off-by: Guangyu Sun <gsun@roblox.com>
Vorbis has priming samples at the beginning. If initial_padding is not
set in the encoder, the total sample count will be one frame fewer than it
should be, resulting in a truncated encoding.
initial_padding should be set to the frame_size used in
vorbis_encode_frame().
Signed-off-by: Guangyu Sun <gsun@roblox.com>
Without end_trimming, the last packet will contain unexpected padding
samples.
This commit partially fixes #6367 when the audio is long enough. The issue
can be reproduced by comparing the sizes of the decoded files:
dd if=/dev/zero of=./silence.raw count=20 bs=500
oggenc --raw silence.raw --output=silence.ogg
oggdec --raw --output silence.oggdec.raw silence.ogg
ffmpeg -codec:a libvorbis -i silence.ogg -f s16le -codec:a pcm_s16le silence.libvorbis.ffmpeg.raw
ffmpeg -i silence.ogg -f s16le -codec:a pcm_s16le silence.native.ffmpeg.raw
ls -l *.raw
The original test case in #6367 is still not fixed due to a remaining issue:
ogg_stream->private is not preserved across ogg_save()/ogg_restore(), and
the final_duration field in that private data is needed to calculate
end_trimming.
Common operations such as avformat_open_input() and
avformat_find_stream_info(), performed before reading packets, will trigger
ogg_save() and ogg_restore().
Luckily, final_duration is not updated until the last ogg page, so the
save/restore mentioned above does not change it most of the time. But if the
audio is short, those reads may already be performed on the last ogg page,
making it hard to keep the correct value of final_duration. A more involved
patch is probably needed to address this issue.
Signed-off-by: Guangyu Sun <gsun@roblox.com>
The frame size of an Opus stream was previously presumed here to be 960
samples (20 ms); however, sizes of 120, 240, 480, 1920, and 2880 samples are
also allowed. The frame size can also change on a per-packet basis, and
according to the specification multiple frames may even be present in a
single packet; for the sake of simplicity, however, let us assume that this
doesn't occur.
Because the mFramesPerPacket field, which represents the number of samples
per packet in ffmpeg terminology, is the key factor in calculating
packet durations and everything that follows from them (index, bitrate, ...),
it is crucial to get it right.
Therefore, if the packet size is not available ahead of time (as is the case
for Opus), calculate an average from the stream duration once we know how
many packets there are, and update the field in the header.
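As a purely illustrative example, a stream totalling 480000 samples that
ends up being written as 500 packets would get
mFramesPerPacket = 480000 / 500 = 960.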
This commit adds handling for cases where an error may occur, clearing
the allocated memory resources.
Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
This commit rearranges the existing code to create a separate function
for the completion callback in execute_model_tf.
Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
This commit rearranges the existing code to create a separate function
for filling the request with execution data.
Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
This commit uses TFRequestItem and the existing sync execution
mechanism to implement request-based execution. It will help in adding
async functionality to the TensorFlow backend later.
Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
This commit introduces a typedef TFInferRequest to store
execution parameters for a single call to the TensorFlow C API.
This typedef is used in the TFRequestItem.
Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
This commit uses the common TaskItem and InferenceItem typedefs
for execution in the TensorFlow backend.
Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>