GPU copy enables or disables GPU-accelerated copying between video
and system memory. This may lead to a notable performance improvement.
Memory must be contiguous and aligned to 128x64.
CMD:
ffmpeg -init_hw_device qsv=hw -filter_hw_device hw -c:v h264_qsv
-gpu_copy on -i input.h264 -f null -
or:
ffmpeg -c:v h264_qsv -gpu_copy on -i input.h264 -f null -
Signed-off-by: Linjie Fu <linjie.fu@intel.com>
Signed-off-by: ChaoX A Liu <chaox.a.liu@intel.com>
Signed-off-by: Zhong Li <zhong.li@intel.com>
There is no change in the encoded bitstream, but this
ensures that the written field length is consistent
with the reference implementation.
Unused bytes are zeroed out for backwards compatibility.
Signed-off-by: Raphaël Zumer <rzumer@tebako.net>
The data_start is taken after reading 12 bytes; if it is subtracted
only at the very end, the intermediate value might overflow.
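A minimal sketch of the pattern (hypothetical values and names, not the
actual demuxer code): with signed arithmetic, summing first and
subtracting data_start only at the end can overflow the intermediate,
even though the final result would fit.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t data_start = 12;        /* position after the 12-byte read */
        int32_t a = INT32_MAX - 4;      /* a size taken from the input */
        int32_t b = 8;

        /* int32_t bad = a + b - data_start;  a + b overflows first (UB) */
        int32_t good = a - data_start + b;  /* subtract early: stays in range */

        printf("%d\n", good);
        return 0;
    }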
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This seeks back to the position at which the previous call to
dxv_decompress_opcodes() left us in case of success.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This applies to requests made with persistent connections and prevents
an incorrect reset of the offset when demuxing HLS+FMP4.
Signed-off-by: Steven Liu <lq@onvideo.cn>
Signed-off-by: vectronic <hello.vectronic@gmail.com>
Add ff_http_do_new_request2(), which allows options to be applied to
the HTTPContext after initialisation with the new URI.
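A minimal sketch of how a caller inside libavformat might use the new
entry point (illustrative only; the helper name and the "offset" option
are examples, not part of this patch):

    /* Sketch of a hypothetical helper inside libavformat; assumes
     * "http.h" and "libavutil/dict.h" are included. */
    static int reopen_with_offset(URLContext *uc, const char *new_uri)
    {
        AVDictionary *opts = NULL;
        int ret;

        /* "offset" is just one example of an option that can now be
         * applied to the HTTPContext after it has been initialised
         * with the new URI. */
        av_dict_set(&opts, "offset", "0", 0);
        ret = ff_http_do_new_request2(uc, new_uri, &opts);
        av_dict_free(&opts);
        return ret;
    }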
Signed-off-by: Steven Liu <lq@onvideo.cn>
Signed-off-by: vectronic <hello.vectronic@gmail.com>
Linux and OS X systems support basename() and dirname() via <libgen.h>.
I plan to make the wrapper interface conform to the standard interface
first. If that proves feasible, I will then modify it to call the
system interface where one is available. You can find a full
description of the system interface with the following command:
"man 3 basename"
Reviewed-by: Marton Balint <cus@passwd.hu>
Reviewed-by: Tomas Härdin <tjoppen@acc.umu.se>
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
The current default is 200 kbps, which produces inconsistent results
(too high for low-res, too low for hi-res). Use CRF instead, which
adapts to the content. This affects VP9. Also have VP8 use a default
bitrate of 256 kbps.
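For example, constant-quality encoding can be requested explicitly
(file names are placeholders):
CMD:
ffmpeg -i input.mp4 -c:v libvpx-vp9 -crf 32 -b:v 0 output.webm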
Signed-off-by: James Zern <jzern@google.com>
When flushing, MAX_FRAME_HEADER_SIZE bytes (always zero) are supposed to
be written to the fifo buffer in order to be able to check the rest of
the buffer for frame headers. It was intended to write these by writing
a small buffer of size MAX_FRAME_HEADER_SIZE to the buffer. But the way
it was actually done ensured that this did not happen:
First, it would be checked whether the size of the input buffer was
zero, in which case buf_size would be set to MAX_FRAME_HEADER_SIZE and
read_end would be set to indicate that MAX_FRAME_HEADER_SIZE bytes need
to be written. Then it would be made sure that there is enough space in
the fifo for the data to be written, and afterwards the data is
written. The check used at that point is whether buf_size is zero or
not. But if it was zero initially, it is MAX_FRAME_HEADER_SIZE by now,
so it is not the buffer designated for writing MAX_FRAME_HEADER_SIZE
zero bytes that gets written; instead the padded input buffer (from the
stack of av_parser_parse2()) is used. This only works because
AV_INPUT_BUFFER_PADDING_SIZE >= MAX_FRAME_HEADER_SIZE. Later on,
buf_size is set to zero again.
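A simplified sketch of the flawed flush path described above
(paraphrased from this description, not the actual flac_parser.c code;
write_to_fifo() is a stand-in, not an FFmpeg function):

    if (!buf_size) {                      /* flushing */
        buf_size = MAX_FRAME_HEADER_SIZE; /* buf_size modified here */
        read_end = read_start + MAX_FRAME_HEADER_SIZE;
    }

    /* ... make sure the fifo has enough space ... */

    /* Intended: write the small zeroed buffer when flushing. But
     * buf_size is no longer zero, so the first branch writes from the
     * caller's padded input buffer instead; this only works because
     * AV_INPUT_BUFFER_PADDING_SIZE >= MAX_FRAME_HEADER_SIZE. */
    if (buf_size)
        write_to_fifo(buf, read_end - read_start);
    else
        write_to_fifo(zero_buf, read_end - read_start);

    buf_size = 0;                         /* reset again later on */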
Given that since 7edbd536 the actual amount of data read is no longer
automatically equal to buf_size, it is completely unnecessary to modify
buf_size at all. Moreover, modifying it is dangerous: some allocations
can fail, and because buf_size is never reset to zero in that codepath,
the parser might return a value > 0 when flushing.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>