There is one task item per function call from the DNN interface, and
one request item per call to OpenVINO. For classification, one task
might need multiple inferences, one for every bounding box, so add
InferenceItem.
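For illustration, the three items could relate roughly as follows (a
sketch; the field names are assumptions, not necessarily the backend's
actual layout):

    #include <stdint.h>
    #include "libavutil/frame.h"

    typedef struct TaskItem {
        AVFrame *in_frame;           /* one task per DNN interface call   */
        AVFrame *out_frame;
        uint32_t inference_todo;     /* e.g. one inference per bbox       */
        uint32_t inference_done;
    } TaskItem;

    typedef struct InferenceItem {
        TaskItem *task;              /* several inferences share one task */
        uint32_t bbox_index;         /* which bounding box to classify    */
    } InferenceItem;

    typedef struct RequestItem {
        InferenceItem **inferences;  /* one request per OpenVINO call;    */
        uint32_t inference_count;    /* it may batch several inferences   */
    } RequestItem;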
Please use tools/python/tf_sess_config.py to get the sess_config after
that. Note that the byte order of the session config is the normal
order. Bump the MICRO version for the config change.
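For illustration, the generated hex string could then be passed to the
TensorFlow backend like this (file and tensor names are placeholders;
the config value is elided):
./ffmpeg -i input.mp4 -vf dnn_processing=dnn_backend=tensorflow:\
model=model_name.pb:input=input_name:output=output_name:\
options=sess_config=0x... -y output.mp4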
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
It can't; these are just remnants of commit
3c7cad69f2, which let the worker threads
do the reallocation.
Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
If an error happens when preparing the output data buffer, an already
allocated array would leak. Fix this by postponing its allocation.
Fixes Coverity issue #1473531.
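A minimal sketch of the pattern, with hypothetical names rather than
the backend's actual code: run every fallible preparation step before
allocating, so an early return cannot leak the buffer.

    #include "libavutil/error.h"
    #include "libavutil/mem.h"

    static int run_layer(size_t out_size, int (*prepare_output)(void))
    {
        float *output;
        if (prepare_output() < 0)      /* fallible step runs first... */
            return AVERROR(EINVAL);
        output = av_malloc(out_size);  /* ...allocation is postponed  */
        if (!output)
            return AVERROR(ENOMEM);
        /* ... fill and hand off the buffer ... */
        av_free(output);
        return 0;
    }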
Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
This also fixes a memleak in single-threaded mode when an error happens
while preparing the output data buffer, and removes an unchecked
allocation.
Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Asserts should not be used in place of ordinary input checks, yet the
native DNN backend did just that: get_input_native() asserted that
the first dimension was one, despite this value coming directly from
the input file without having been sanitized.
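A minimal before/after sketch of the safer pattern (the operand struct,
its dims field, and the log context are assumptions for illustration):

    /* Before: an assert trusting unvalidated data from the model file. */
    av_assert0(oprd->dims[0] == 1);

    /* After: an ordinary input check that fails gracefully. */
    if (oprd->dims[0] != 1) {
        av_log(ctx, AV_LOG_ERROR, "the first dimension is expected to be 1\n");
        return AVERROR(EINVAL);
    }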
Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
This lets the backend know whether the model is used for frame
processing, detection, classification, etc. Each function type behaves
differently in the backend when handling the model's input/output data.
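For illustration, such a function type could be an enum along these
lines (a sketch; the exact names and set of values are assumptions):

    typedef enum {
        DFT_NONE,              /* function type not set              */
        DFT_PROCESS_FRAME,     /* frame-to-frame processing, e.g. sr */
        DFT_ANALYTICS_DETECT,  /* object detection                   */
    } DNNFunctionType;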
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Once we mark the task as done in infer_completion_callback(), the task
may be released by ff_dnn_get_async_result_ov() in another thread just
after that, so we need to record the request queue first instead of
reading task->ov_model->request_queue later.
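A sketch of the ordering this implies inside the callback (types and
field names are illustrative):

    /* Record the queue while the task is still guaranteed alive. */
    SafeQueue *requestq = task->ov_model->request_queue;

    task->done = 1;  /* another thread may free the task from here on */

    /* Safe: uses the saved pointer, not task->ov_model->request_queue. */
    ff_safe_queue_push_back(requestq, request);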
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Rename proc_from_frame_to_dnn to ff_proc_from_frame_to_dnn, and
proc_from_dnn_to_frame to ff_proc_from_dnn_to_frame.
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
The OpenVINO model file format changes when OpenVINO moves to a new
release; a model file does not work if its version and the runtime
version are mismatched.
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
OpenVINO APIs require a specified input size to run the model, while
some OpenVINO models do accept different input sizes. To enable this
feature, add an input_resizable option here for easier use.
Set the bool variable input_resizable to specify whether the input can be resized.
input_resizable = 1 means input resizing is supported, i.e. different input sizes are accepted.
input_resizable = 0 (default) means input resizing is not supported.
Please make sure the inference model does accept different input sizes
before using this option, otherwise the inference engine may report error(s).
E.g.: ./ffmpeg -i video_name.mp4 -vf dnn_processing=dnn_backend=openvino:\
model=model_name.xml:input=input_name:output=output_name:\
options=device=CPU\&input_resizable=1 -y output_video_name.mp4
Signed-off-by: Ting Fu <ting.fu@intel.com>