Up until now, the EBML Header's length field has been written using eight
bytes, although the EBML Header is always so small that a single byte
suffices. This patch saves seven bytes for every Matroska/WebM file.
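For reference, a minimal sketch (not the actual matroskaenc helper) of how
many bytes an EBML length field needs: an n-byte field carries 7*n usable
bits, and the all-ones bit pattern is reserved for "unknown size".

    #include <stdint.h>

    /* Sketch: smallest number of bytes able to hold 'size' as an EBML
     * length field; values up to 2^(7n) - 2 fit in n bytes, since the
     * all-ones pattern is reserved. */
    static int ebml_length_bytes(uint64_t size)
    {
        int bytes = 1;
        while (bytes < 8 && size >= (1ULL << (7 * bytes)) - 1)
            bytes++;
        return bytes;
    }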
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
The upper bounds currently used for determining the size of a CuePoint's
length field can be improved somewhat; as a result, a CuePoint
containing three CueTrackPositions now needs only a one-byte length
field.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
The earlier code included the size of the BlockGroup's length field and
the EBML ID in the calculation of the payload size, yet ignored the size
of the duration's length field. This meant that BlockGroups
corresponding to packets of size 2^(7n) - 17 - n - i, i = 0, ..., n - 1,
n = 1, ..., 8 (i.e. 110, 16364, 16365, 2097130..2097132, ...) were written
with unnecessarily long length fields.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
At this point, ts already includes the ts_offset, so the relative time
written with the cluster is given by ts - mkv->cluster_pts; it is this
number that needs to fit into an int16_t.
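A hedged sketch of the resulting check (names follow the message; the helper
itself is illustrative):

    #include <stdint.h>

    /* With ts already including ts_offset, the value that must fit into
     * the Block's signed 16-bit relative timestamp is simply
     * ts - cluster_pts. */
    static int fits_in_cluster(int64_t ts, int64_t cluster_pts)
    {
        int64_t relative = ts - cluster_pts;
        return relative >= INT16_MIN && relative <= INT16_MAX;
    }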
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Currently, only float is supported as the model input data type; other
data types exist as well, and this patch adds support for uint8.
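A hedged sketch of the idea for the TensorFlow backend (TF_FLOAT/TF_UINT8 are
real TF C API constants; the enum and helper are illustrative, not the actual
interface):

    #include <stddef.h>
    #include <stdint.h>
    #include <tensorflow/c/c_api.h>

    typedef enum { INPUT_FLOAT, INPUT_UINT8 } InputDataType;  /* hypothetical */

    /* Pick the tensor element type and size from the requested model input
     * type instead of hardcoding float. */
    static TF_DataType pick_tf_type(InputDataType dt, size_t *elem_size)
    {
        switch (dt) {
        case INPUT_UINT8: *elem_size = sizeof(uint8_t); return TF_UINT8;
        default:          *elem_size = sizeof(float);   return TF_FLOAT;
        }
    }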
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
Some models, such as SSD and YOLO, have more than one output.
The cleanup code in this patch is a little complex because
set_input_output_tf can be called many times, interleaved with
ff_dnn_execute_model_tf, so resources have to be released for the
case where the two interfaces are called alternately.
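A hedged sketch of the cleanup concern (TF_DeleteTensor is a real TF C API
call; the surrounding structure is illustrative): tensors left over from a
previous execution must be released before new input/output tensors are set
up, otherwise alternating calls leak.

    #include <tensorflow/c/c_api.h>

    /* Free output tensors from an earlier run before set_input_output_tf
     * allocates new ones. */
    static void free_output_tensors(TF_Tensor **tensors, int nb_tensors)
    {
        for (int i = 0; i < nb_tensors; i++) {
            if (tensors[i])
                TF_DeleteTensor(tensors[i]);
            tensors[i] = NULL;
        }
    }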
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
Currently, within the set_input_output interface, the dims/memory of the
TensorFlow DNN model output are determined by executing the model with zero
input. However, the output dims may vary with the input data for networks
such as the object-detection models Faster R-CNN, SSD and YOLO.
This patch moves the logic from set_input_output to execute_model, which
is suitable for all cases. Since the interface changed, dnn_backend_native
changes as well.
In vf_sr.c, whether the model is SRCNN or ESPCN is determined by executing
the model with zero input, so execute_model has to be called in config_props.
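A hedged sketch of what moving the logic into execute_model looks like (the
TF C API calls are real; the wrapper and its parameters are illustrative):
the output shape is read from the tensor produced for the actual input rather
than from a dry run with zero input.

    #include <tensorflow/c/c_api.h>

    /* Run the session on the real input, then take the output dimensions
     * from the tensor that was actually produced. */
    static int run_and_get_dims(TF_Session *session, TF_Output input_op,
                                TF_Tensor *input_tensor, TF_Output output_op,
                                int64_t dims[4], TF_Status *status)
    {
        TF_Tensor *out = NULL;
        TF_SessionRun(session, NULL, &input_op, &input_tensor, 1,
                      &output_op, &out, 1, NULL, 0, NULL, status);
        if (TF_GetCode(status) != TF_OK || !out)
            return -1;
        for (int i = 0; i < TF_NumDims(out) && i < 4; i++)
            dims[i] = TF_Dim(out, i);
        TF_DeleteTensor(out);
        return 0;
    }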
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
Remove the requirement that the names of the DNN model input/output
must be "x"/"y".
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
Remove 'else' since there is always a 'return' in the 'if' scope;
this keeps the code cleaner for later maintenance.
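An illustrative (hypothetical) example of the pattern being simplified:

    /* before: the 'else' is redundant because the 'if' branch always returns */
    if (ret < 0)
        return ret;
    else
        process(ctx);

    /* after */
    if (ret < 0)
        return ret;
    process(ctx);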
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
Otherwise, the following check will return an error if layer_add_res
is randomly initialized.
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
Add support for building FFmpeg with HW-accelerated decode and encode on the
PPC64 little-endian architecture.
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
Cuvid imposes a limit on the maximum number of macroblocks in clips it
supports. This check was missing after the cuvidGetDecoderCaps API call,
allowing unsupported clips to proceed.
Add the missing check, the same as the one in the nvdec hwaccel implementation.
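A hedged sketch of the added check (nMaxMBCount is a real CUVIDDECODECAPS
field; the helper itself is illustrative):

    /* Given the coded dimensions and the decoder's reported nMaxMBCount,
     * check whether the stream's macroblock count is supported. */
    static int check_mb_count(int coded_width, int coded_height,
                              unsigned max_mb_count)
    {
        unsigned mbs = ((coded_width  + 15) / 16) *
                       ((coded_height + 15) / 16);
        return !max_mb_count || mbs <= max_mb_count;
    }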
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
Instead of processing columns one by one, processing several columns
together gives about 30% better performance.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Ruiling Song <ruiling.song@intel.com>
Currently profile mapping is hard-coded and not flexible enough to map
exactly (e.g. libmfx treats H.264 constrained baseline as baseline profile).
The VAAPI profile mapping function provides a better solution than the
current QSV mapping.
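A hedged sketch of what an explicit mapping table could look like (the
FF_PROFILE_* and MFX_PROFILE_AVC_* constants are real; the table itself is
illustrative, not the actual patch):

    #include "libavcodec/avcodec.h"
    #include <mfxvideo.h>

    /* Map AVCodecContext profiles to libmfx profiles one to one, so that
     * constrained baseline is no longer folded into baseline. */
    static const struct {
        int av_profile;
        int mfx_profile;
    } h264_profile_map[] = {
        { FF_PROFILE_H264_CONSTRAINED_BASELINE, MFX_PROFILE_AVC_CONSTRAINED_BASELINE },
        { FF_PROFILE_H264_BASELINE,             MFX_PROFILE_AVC_BASELINE             },
        { FF_PROFILE_H264_MAIN,                 MFX_PROFILE_AVC_MAIN                 },
        { FF_PROFILE_H264_HIGH,                 MFX_PROFILE_AVC_HIGH                 },
    };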
Signed-off-by: Zhong Li <zhong.li@intel.com>
It is helpful to know why decoding some clips failed.
Ticket #7330 is a good example: with this patch it is easy to see that
the bitstream's codec level is outside the supported range.
Signed-off-by: Zhong Li <zhong.li@intel.com>
Reference: Table 8: Interpretation of valid BITPIX value from FITS standard 4.0
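A hedged sketch of the validation this implies (valid BITPIX values per that
table; the helper name and surrounding code are illustrative):

    #include "libavutil/error.h"

    /* Reject any BITPIX value outside the set allowed by the FITS 4.0
     * standard (8, 16, 32, 64, -32, -64); this also rules out the zero
     * value behind the division by zero. */
    static int fits_check_bitpix(int bitpix)
    {
        switch (bitpix) {
        case   8: case  16: case 32: case 64:
        case -32: case -64:
            return 0;
        default:
            return AVERROR_INVALIDDATA;
        }
    }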
Fixes: runtime error: division by zero
Fixes: 14581/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_FITS_fuzzer-5652382425284608
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Ten bytes (the size of an ID3v2 header) were being read before any checks
were made on the bitstream. As a result, we were overreading into
the next frame if the current one was 8 or 9 bytes long.
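A hedged sketch of the fix (the helper is illustrative): only look for an
ID3v2 tag when the buffer actually has room for the 10-byte header.

    #include <stdint.h>
    #include <string.h>

    static int has_id3v2_header(const uint8_t *buf, int buf_size)
    {
        return buf_size >= 10 && !memcmp(buf, "ID3", 3);
    }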
Fixes tickets #7271 and #7869.
Signed-off-by: James Almer <jamrial@gmail.com>
The latest-generation video decoder on Turing chips supports
decoding HEVC 4:4:4. This change adds AV_PIX_FMT_VDPAU as a valid format
for 8-bit HEVC 4:4:4.
Pass the SPS and PPS range extensions to the VDPAU layer via
VdpPictureInfoHEVC444. The VdpPictureInfoHEVC444 struct is added to the
VdpPictureInfo union to populate the range-extension params, and
FF_PROFILE_HEVC_REXT is mapped to VDP_DECODER_PROFILE_HEVC_MAIN_444.
The new VdpYCbCr formats VDP_YCBCR_FORMAT_Y_U_V_444 and
VDP_YCBCR_FORMAT_Y_UV_444 were added to VDPAU with libvdpau-1.2
for use in get/put bits for YUV 4:4:4 surfaces. The earlier mapping of
AV_PIX_FMT_YUV444P to VDP_YCBCR_FORMAT_YV12 is not valid.
Hence this change maps AV_PIX_FMT_YUV444P to VDP_YCBCR_FORMAT_Y_U_V_444
to access YUV 4:4:4 surfaces via VDPAU's read-back APIs.
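An illustrative fragment of the mapping change (constant names as given
above; the surrounding switch is hypothetical):

    case AV_PIX_FMT_YUV444P:
        /* previously VDP_YCBCR_FORMAT_YV12, which is not valid here;
         * VDP_YCBCR_FORMAT_Y_U_V_444 requires libvdpau >= 1.2 */
        format = VDP_YCBCR_FORMAT_Y_U_V_444;
        break;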
Apparently in the new SDK one cannot query if VANC output is supported, so we
will fall back to non-VANC output if enabling the video output with VANC fails.
Fixes ticket #7867.
Signed-off-by: Marton Balint <cus@passwd.hu>
Fixes: Timeout (11sec -> 5sec)
Fixes: 14473/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_JV_fuzzer-5761630857592832
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Peter Ross <pross@xvid.org>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: Assertion failure
Fixes: 14484/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_PGMYUV_fuzzer-5150016408125440
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This commit was merged a couple of years ago as a no-op, because we
had already switched from GetProcAddress to dlsym some time before
that. However, not applying the actual cast causes warnings about
FARPROC, and when attempting to build FFmpeg with MSVC with AviSynth-GCC
32-bit compatibility, those FARPROC warnings turn into FARPROC errors.
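An illustrative (hypothetical) example of the kind of cast involved: on
Windows, dlsym() resolves to GetProcAddress(), which returns a FARPROC, and
MSVC will not silently convert that to a specific function-pointer type.

    #include <windows.h>

    typedef int (__stdcall *avs_func)(void *env);   /* hypothetical signature */

    static avs_func load_avs_func(HMODULE library, const char *name)
    {
        /* Without the explicit cast, assigning the FARPROC returned by
         * GetProcAddress()/dlsym() is what produces the warnings (and, in
         * the configuration above, errors). */
        return (avs_func)GetProcAddress(library, name);
    }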