3477feb643
dnn_backend_native_layer_mathbinary: add floormod support
...
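No test script accompanies this entry; below is only an illustrative sketch in the style of the other entries (TensorFlow 1.x API; the op tf.math.floormod, the modulus 0.6 and the input file name are assumptions, not taken from the commit):
import tensorflow as tf
import numpy as np
import imageio
in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
# floormod against a scalar constant keeps every pixel value in [0, 0.6)
x1 = tf.math.floormod(x, 0.6)
y = tf.identity(x1, name='dnn_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
print("image_process.pb generated, please use "
      "path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))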
Signed-off-by: Mingyu Yin <mingyu.yin@intel.com>
2020-08-24 09:09:11 +08:00
4ed6bca4ae
dnn_backend_native_layer_mathunary: add round support
...
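No test script accompanies this entry; below is only an illustrative sketch in the style of the floor/ceil entries (TensorFlow 1.x API; the scaling by 255 and the input file name are assumptions, not taken from the commit):
import tensorflow as tf
import numpy as np
import imageio
in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
# round in the 0..255 domain, then scale back to 0..1
x1 = tf.math.round(x*255)/255
y = tf.identity(x1, name='dnn_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
print("image_process.pb generated, please use "
      "path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))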
Signed-off-by: Mingyu Yin <mingyu.yin@intel.com>
Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
2020-08-12 10:30:46 +08:00
fab00b0ae0
dnn_backend_native_layer_mathunary: add floor support
...
It can be tested with the model generated with the python script below:
import tensorflow as tf
import os
import numpy as np
import imageio
from tensorflow.python.framework import graph_util
name = 'floor'
pb_file_path = os.getcwd()
if not os.path.exists(pb_file_path+'/{}_savemodel/'.format(name)):
    os.mkdir(pb_file_path+'/{}_savemodel/'.format(name))
with tf.Session(graph=tf.Graph()) as sess:
    in_img = imageio.imread('detection.jpg')
    in_img = in_img.astype(np.float32)
    in_data = in_img[np.newaxis, :]
    input_x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
    y_ = tf.math.floor(input_x*255)/255
    y = tf.identity(y_, name='dnn_out')
    sess.run(tf.global_variables_initializer())
    constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
    with tf.gfile.FastGFile(pb_file_path+'/{}_savemodel/model.pb'.format(name), mode='wb') as f:
        f.write(constant_graph.SerializeToString())
    print("model.pb generated, please use the following command in the ffmpeg path\n\n"
          "python tools/python/convert.py {}_savemodel/model.pb --outdir={}_savemodel/\n\n"
          "to generate model.model\n".format(name, name))
    output = sess.run(y, feed_dict={input_x: in_data})
    imageio.imsave("out.jpg", np.squeeze(output))
    print("To verify, use one of the following commands in the ffmpeg path\n\n"
          "./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.pb:input=dnn_in:output=dnn_out:dnn_backend=tensorflow -f framemd5 {}_savemodel/tensorflow_out.md5\n"
          "or\n"
          "./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.pb:input=dnn_in:output=dnn_out:dnn_backend=tensorflow {}_savemodel/out_tensorflow.jpg\n\n"
          "to generate the output of the tensorflow model\n".format(name, name, name, name))
    print("To verify, use one of the following commands in the ffmpeg path\n\n"
          "./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.model:input=dnn_in:output=dnn_out:dnn_backend=native -f framemd5 {}_savemodel/native_out.md5\n"
          "or\n"
          "./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.model:input=dnn_in:output=dnn_out:dnn_backend=native {}_savemodel/out_native.jpg\n\n"
          "to generate the output of the native model\n".format(name, name, name, name))
Signed-off-by: Mingyu Yin <mingyu.yin@intel.com>
2020-08-07 10:34:22 +08:00
9fbdd5454b
dnn_backend_native_layer_mathunary: add ceil support
...
It can be tested with the model generated with the python script below:
import tensorflow as tf
import os
import numpy as np
import imageio
from tensorflow.python.framework import graph_util
name = 'ceil'
pb_file_path = os.getcwd()
if not os.path.exists(pb_file_path+'/{}_savemodel/'.format(name)):
    os.mkdir(pb_file_path+'/{}_savemodel/'.format(name))
with tf.Session(graph=tf.Graph()) as sess:
    in_img = imageio.imread('detection.jpg')
    in_img = in_img.astype(np.float32)
    in_data = in_img[np.newaxis, :]
    input_x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
    y = tf.math.ceil(input_x, name='dnn_out')
    sess.run(tf.global_variables_initializer())
    constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
    with tf.gfile.FastGFile(pb_file_path+'/{}_savemodel/model.pb'.format(name), mode='wb') as f:
        f.write(constant_graph.SerializeToString())
    print("model.pb generated, please use the following command in the ffmpeg path\n\n"
          "python tools/python/convert.py ceil_savemodel/model.pb --outdir=ceil_savemodel/\n\n"
          "to generate model.model\n")
    output = sess.run(y, feed_dict={input_x: in_data})
    imageio.imsave("out.jpg", np.squeeze(output))
    print("To verify, use the following command in the ffmpeg path\n\n"
          "./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model=ceil_savemodel/model.pb:input=dnn_in:output=dnn_out:dnn_backend=tensorflow -f framemd5 ceil_savemodel/tensorflow_out.md5\n\n"
          "to generate the output of the tensorflow model\n")
    print("To verify, use the following command in the ffmpeg path\n\n"
          "./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model=ceil_savemodel/model.model:input=dnn_in:output=dnn_out:dnn_backend=native -f framemd5 ceil_savemodel/native_out.md5\n\n"
          "to generate the output of the native model\n")
Signed-off-by: Mingyu Yin <mingyu.yin@intel.com>
Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
2020-08-04 19:56:54 +08:00
c0cdeea0ee
dnn_backend_native_layer_mathunary: add atanh support
...
It can be tested with the model generated with the python script below:
import tensorflow as tf
import numpy as np
import imageio
in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
# please uncomment the part you want to test
x_sinh_1 = tf.sinh(x)
x_out = tf.divide(x_sinh_1, 1.176)  # sinh(1.0)
x_cosh_1 = tf.cosh(x)
x_out = tf.divide(x_cosh_1, 1.55)   # cosh(1.0)
x_tanh_1 = tf.tanh(x)
x_out = tf.divide(x_tanh_1, 0.77)   # tanh(1.0)
x_asinh_1 = tf.asinh(x)
x_out = tf.divide(x_asinh_1, 0.89)  # asinh(1.0/1.1)
x_acosh_1 = tf.add(x, 1.1)
x_acosh_2 = tf.acosh(x_acosh_1)     # accepts (1, inf)
x_out = tf.divide(x_acosh_2, 1.4)   # acosh(2.1)
x_atanh_1 = tf.divide(x, 1.1)
x_atanh_2 = tf.atanh(x_atanh_1)     # accepts (-1, 1)
x_out = tf.divide(x_atanh_2, 1.55)  # atanh(1.0/1.1)
y = tf.identity(x_out, name='dnn_out')  # please only preserve the x_out you want to test
sess = tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
print("image_process.pb generated, please use "
      "path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))
Signed-off-by: Ting Fu <ting.fu@intel.com>
2020-07-06 12:45:14 +08:00
cd2e3a864d
dnn_backend_native_layer_mathunary: add acosh support
...
Signed-off-by: Ting Fu <ting.fu@intel.com>
2020-07-06 12:45:14 +08:00
9d14b38d9d
dnn_backend_native_layer_mathunary: add asinh support
...
Signed-off-by: Ting Fu <ting.fu@intel.com>
2020-07-06 12:45:14 +08:00
ea71e731f4
dnn_backend_native_layer_mathunary: add tanh support
...
Signed-off-by: Ting Fu <ting.fu@intel.com>
2020-07-06 12:45:14 +08:00
62fc7e3035
dnn_backend_native_layer_mathunary: add cosh support
...
Signed-off-by: Ting Fu <ting.fu@intel.com>
2020-07-06 12:45:14 +08:00
91b4037101
dnn_backend_native_layer_mathunary: add sinh support
...
Signed-off-by: Ting Fu <ting.fu@intel.com>
2020-07-06 12:45:14 +08:00
13f5613e68
dnn_backend_native_layer_mathunary: add atan support
...
It can be tested with the model generated with the python script below:
import tensorflow as tf
import numpy as np
import imageio
in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.atan(x)
x2 = tf.divide(x1, 3.1416/4) # pi/4
y = tf.identity(x2, name='dnn_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
print("image_process.pb generated, please use "
      "path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))
Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo Yejun <yejun.guo@intel.com>
2020-06-25 08:41:50 +08:00
461485feac
dnn_backend_native_layer_mathunary: add acos support
...
It can be tested with the model generated with the python script below:
import tensorflow as tf
import numpy as np
import imageio
in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.acos(x)
x2 = tf.divide(x1, 3.1416/2) # pi/2
y = tf.identity(x2, name='dnn_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
print("image_process.pb generated, please use "
      "path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))
Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo Yejun <yejun.guo@intel.com>
2020-06-25 08:41:50 +08:00
486c0c419d
dnn_backend_native_layer_mathunary: add asin support
...
It can be tested with the model generated with the python script below:
import tensorflow as tf
import numpy as np
import imageio
in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.asin(x)
x2 = tf.divide(x1, 3.1416/2) # pi/2
y = tf.identity(x2, name='dnn_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
print("image_process.pb generated, please use "
      "path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))
Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo Yejun <yejun.guo@intel.com>
2020-06-25 08:41:50 +08:00
22d0860c13
dnn_backend_native_layer_mathunary: add tan support
...
It can be tested with the model generated with the python script below:
import tensorflow as tf
import numpy as np
import imageio
in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.multiply(x, 0.78)
x2 = tf.tan(x1)
y = tf.identity(x2, name='dnn_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
print("image_process.pb generated, please use "
      "path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))
Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo Yejun <yejun.guo@intel.com>
2020-06-11 11:10:51 +08:00
88fb494f42
dnn_backend_native_layer_mathunary: add cos support
...
It can be tested with the model generated with the python script below:
import tensorflow as tf
import numpy as np
import imageio
in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.multiply(x, 1.5)
x2 = tf.cos(x1)
y = tf.identity(x2, name='dnn_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
print("image_process.pb generated, please use "
      "path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))
Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo Yejun <yejun.guo@intel.com>
2020-06-11 11:10:51 +08:00
0b6d3f0d83
dnn_backend_native_layer_mathunary: add sin support
...
It can be tested with the model file generated with the python script below:
import tensorflow as tf
import numpy as np
import imageio
in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.multiply(x, 3.14)
x2 = tf.sin(x1)
y = tf.identity(x2, name='dnn_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
print("image_process.pb generated, please use "
      "path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))
Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo Yejun <yejun.guo@intel.com>
2020-06-11 11:10:51 +08:00
f73cc61bf5
dnn_backend_native_layer_mathunary: add abs support
...
more math unary operations will be added here
It can be tested with the model file generated with the python script below:
import tensorflow as tf
import numpy as np
import imageio
in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.subtract(x, 0.5)
x2 = tf.abs(x1)
y = tf.identity(x2, name='dnn_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
print("image_process.pb generated, please use "
      "path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))
Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-05-28 11:04:21 +08:00
71e28c5422
dnn/native: add native support for minimum
...
It can be tested with the model file generated with the python script below:
import tensorflow as tf
import numpy as np
import imageio
in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.minimum(0.7, x)
x2 = tf.maximum(x1, 0.4)
y = tf.identity(x2, name='dnn_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
print("image_process.pb generated, please use "
      "path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-05-08 15:22:27 +08:00
8ce9d88f93
dnn/native: add native support for divide
...
It can be tested with the model file generated with the python script below:
import tensorflow as tf
import numpy as np
import imageio
in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
z1 = 2 / x
z2 = 1 / z1
z3 = z2 / 0.25 + 0.3
z4 = z3 - x * 1.5 - 0.3
y = tf.identity(z4, name='dnn_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
print("image_process.pb generated, please use "
      "path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-04-22 13:15:00 +08:00
ef79408e97
dnn/native: add native support for 'mul'
...
It can be tested with the model file generated with the python script below:
import tensorflow as tf
import numpy as np
import imageio
in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
z1 = 0.5 + 0.3 * x
z2 = z1 * 4
z3 = z2 - x - 2.0
y = tf.identity(z3, name='dnn_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
print("image_process.pb generated, please use "
      "path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-04-22 13:14:47 +08:00
6aa7e07e7c
dnn/native: add native support for 'add'
...
It can be tested with the model file generated with the python script below:
import tensorflow as tf
import numpy as np
import imageio
in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
z1 = 0.039 + x
z2 = x + 0.042
z3 = z1 + z2
z4 = z3 - 0.381
z5 = z4 - x
y = tf.math.maximum(z5, 0.0, name='dnn_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
print("image_process.pb generated, please use "
      "path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-04-22 13:14:30 +08:00
ffa1561608
dnn_backend_native_layer_mathbinary: add sub support
...
more math binary operations will be added here
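No test script accompanies this entry; below is only an illustrative sketch in the style of the later 'add'/'mul' entries (the constant 1.0 and the input file name are assumptions, not taken from the commit):
import tensorflow as tf
import numpy as np
import imageio
in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
# constant-minus-tensor subtraction: the image negative, still within [0, 1]
z1 = 1.0 - x
y = tf.identity(z1, name='dnn_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
print("image_process.pb generated, please use "
      "path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))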
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-04-07 11:04:34 +08:00
dff39ea9f0
dnn: add tf.nn.conv2d support for native model
...
Unlike other tf.*.conv2d layers, tf.nn.conv2d does not create many
nodes (within a scope) in the graph; it just acts like other layers.
tf.nn.conv2d creates only one node in the graph, and no internal
nodes such as 'kernel' are created.
The format of the native model file is also changed: a flag named
has_bias is added, so the version number is bumped.
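For illustration only, a sketch of the kind of graph this enables (TensorFlow 1.x API; the averaging kernel and file names are assumptions, not taken from the commit); because the filter is a plain constant, tf.nn.conv2d contributes a single Conv2D node and no 'kernel' scope:
import tensorflow as tf
import numpy as np
import imageio
in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
# 3x3 box filter mixing all three channels; weights sum to 1 per output channel
kernel = tf.constant(np.full((3, 3, 3, 3), 1.0/27.0, dtype=np.float32))
conv = tf.nn.conv2d(x, kernel, strides=[1, 1, 1, 1], padding='SAME')
y = tf.identity(conv, name='dnn_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))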
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
2019-10-30 10:31:55 -03:00
b2683c66b2
libavfilter/dnn: add layer maximum for native mode.
...
The reason to add this layer is that it is used by srcnn in vf_sr.
This layer is currently ignored in native mode. After this patch,
we can add support for multiple outputs in native mode.
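For illustration only, a minimal sketch of a graph that exercises this layer (the clamp threshold 0.1 and the input file name are assumptions, not taken from the commit):
import tensorflow as tf
import numpy as np
import imageio
in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
# clamp pixel values from below; this produces a Maximum node in the graph,
# the same kind of clamp srcnn applies in vf_sr
y = tf.maximum(x, 0.1, name='dnn_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))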
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
2019-09-20 10:57:18 -03:00
022f50d3fe
libavfilter/dnn: add header into native model file
...
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
2019-09-04 11:13:21 -03:00