mirror of https://github.com/FFmpeg/FFmpeg.git synced 2024-12-12 19:18:44 +02:00
Commit Graph

34 Commits

Author SHA1 Message Date
Mingyu Yin
ad2546e3b3 dnn/native: add native support for dense
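It can be tested with a model generated by a script like the sketch below (not part of the original commit; the use of tf.layers.dense for the dense layer, the file names and the layer width are editorial assumptions):

import tensorflow as tf
import numpy as np
import imageio

# illustrative sketch; input file name and constants are placeholders
in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
# dense acts on the last (channel) dimension; 3 units keep an RGB image
x1 = tf.layers.dense(x, 3)
y = tf.identity(x1, name='dnn_out')

sess = tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))
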
Signed-off-by: Mingyu Yin <mingyu.yin@intel.com>
2020-09-29 14:19:55 +08:00
Mingyu Yin
3477feb643 dnn_backend_native_layer_mathbinary: add floormod support
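It can be tested with a model generated by a script like the sketch below (not part of the original commit; the op usage and constants are editorial assumptions):

import tensorflow as tf
import numpy as np
import imageio

# illustrative sketch; input file name and constants are placeholders
in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
# keep the remainder of each 8-bit value modulo 64, producing visible banding
x1 = tf.math.floormod(x*255., 64.)/255.
y = tf.identity(x1, name='dnn_out')

sess = tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))
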
Signed-off-by: Mingyu Yin <mingyu.yin@intel.com>
2020-08-24 09:09:11 +08:00
Mingyu Yin
4ed6bca4ae dnn_backend_native_layer_mathunary: add round support
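It can be tested with a model generated by a script like the sketch below (not part of the original commit; the scaling constants and file names are editorial assumptions):

import tensorflow as tf
import numpy as np
import imageio

# illustrative sketch; input file name and constants are placeholders
in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
# quantize back to 8-bit levels; a near no-op on an 8-bit source, but it exercises round
x1 = tf.math.round(x*255.)/255.
y = tf.identity(x1, name='dnn_out')

sess = tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))
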
Signed-off-by: Mingyu Yin <mingyu.yin@intel.com>
Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
2020-08-12 10:30:46 +08:00
Ting Fu
91efc41a69 dnn/native: add native support for avg_pool
Pooling strides in the channel dimension are not supported yet.
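
It can be tested with a model generated by a script like the sketch below (not part of the original commit; the pooling size, strides and file names are editorial assumptions):

import tensorflow as tf
import numpy as np
import imageio

# illustrative sketch; input file name and constants are placeholders
in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
# 2x2 average pooling with stride 2 in height/width, stride 1 in batch/channel
x1 = tf.nn.avg_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
y = tf.identity(x1, name='dnn_out')

sess = tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))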

Signed-off-by: Ting Fu <ting.fu@intel.com>
Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
2020-08-10 16:37:39 +08:00
Mingyu Yin
fab00b0ae0 dnn_backend_native_layer_mathunary: add floor support
It can be tested with the model generated by the Python script below:

import tensorflow as tf
import os
import numpy as np
import imageio
from tensorflow.python.framework import graph_util
name = 'floor'

pb_file_path = os.getcwd()
if not os.path.exists(pb_file_path+'/{}_savemodel/'.format(name)):
    os.mkdir(pb_file_path+'/{}_savemodel/'.format(name))

with tf.Session(graph=tf.Graph()) as sess:
    in_img = imageio.imread('detection.jpg')
    in_img = in_img.astype(np.float32)
    in_data = in_img[np.newaxis, :]
    input_x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
    y_ = tf.math.floor(input_x*255)/255
    y = tf.identity(y_, name='dnn_out')
    sess.run(tf.global_variables_initializer())
    constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])

    with tf.gfile.FastGFile(pb_file_path+'/{}_savemodel/model.pb'.format(name), mode='wb') as f:
        f.write(constant_graph.SerializeToString())

    print("model.pb generated, please in ffmpeg path use\n \n \
    python tools/python/convert.py {}_savemodel/model.pb --outdir={}_savemodel/ \n \nto generate model.model\n".format(name,name))

    output = sess.run(y, feed_dict={ input_x: in_data})
    imageio.imsave("out.jpg", np.squeeze(output))

    print("To verify, please ffmpeg path use\n \n \
    ./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.pb:input=dnn_in:output=dnn_out:dnn_backend=tensorflow -f framemd5 {}_savemodel/tensorflow_out.md5\n  \
    or\n \
    ./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.pb:input=dnn_in:output=dnn_out:dnn_backend=tensorflow {}_savemodel/out_tensorflow.jpg\n \nto generate output result of tensorflow model\n".format(name, name, name, name))

    print("To verify, please ffmpeg path use\n \n \
    ./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.model:input=dnn_in:output=dnn_out:dnn_backend=native -f framemd5 {}_savemodel/native_out.md5\n  \
    or \n \
    ./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.model:input=dnn_in:output=dnn_out:dnn_backend=native {}_savemodel/out_native.jpg\n \nto generate output result of native model\n".format(name, name, name, name))

Signed-off-by: Mingyu Yin <mingyu.yin@intel.com>
2020-08-07 10:34:22 +08:00
Mingyu Yin
9fbdd5454b dnn_backend_native_layer_mathunary: add ceil support
It can be tested with the model generated by the Python script below:

import tensorflow as tf
import os
import numpy as np
import imageio
from tensorflow.python.framework import graph_util
name = 'ceil'

pb_file_path = os.getcwd()
if not os.path.exists(pb_file_path+'/{}_savemodel/'.format(name)):
    os.mkdir(pb_file_path+'/{}_savemodel/'.format(name))

with tf.Session(graph=tf.Graph()) as sess:
    in_img = imageio.imread('detection.jpg')
    in_img = in_img.astype(np.float32)
    in_data = in_img[np.newaxis, :]
    input_x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
    y = tf.math.ceil( input_x, name='dnn_out')
    sess.run(tf.global_variables_initializer())
    constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])

    with tf.gfile.FastGFile(pb_file_path+'/{}_savemodel/model.pb'.format(name), mode='wb') as f:
        f.write(constant_graph.SerializeToString())

    print("model.pb generated, please in ffmpeg path use\n \n \
    python tools/python/convert.py ceil_savemodel/model.pb --outdir=ceil_savemodel/ \n \n \
    to generate model.model\n")

    output = sess.run(y, feed_dict={ input_x: in_data})
    imageio.imsave("out.jpg", np.squeeze(output))

    print("To verify, please ffmpeg path use\n \n \
    ./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model=ceil_savemodel/model.pb:input=dnn_in:output=dnn_out:dnn_backend=tensorflow -f framemd5 ceil_savemodel/tensorflow_out.md5\n \n \
    to generate output result of tensorflow model\n")

    print("To verify, please ffmpeg path use\n \n \
    ./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model=ceil_savemodel/model.model:input=dnn_in:output=dnn_out:dnn_backend=native -f framemd5 ceil_savemodel/native_out.md5\n \n \
    to generate output result of native model\n")

Signed-off-by: Mingyu Yin <mingyu.yin@intel.com>
Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
2020-08-04 19:56:54 +08:00
Ting Fu
c0cdeea0ee dnn_backend_native_layer_mathunary: add atanh support
It can be tested with the model generated by the Python script below:

import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')

# please uncomment only the part you want to test and comment out the rest

x_sinh_1 = tf.sinh(x)
x_out = tf.divide(x_sinh_1, 1.176) # sinh(1.0)

x_cosh_1 = tf.cosh(x)
x_out = tf.divide(x_cosh_1, 1.55) # cosh(1.0)

x_tanh_1 = tf.tanh(x)
x_out = tf.divide(x_tanh_1, 0.77) # tanh(1.0)

x_asinh_1 = tf.asinh(x)
x_out = tf.divide(x_asinh_1, 0.89) # asinh(1.0)

x_acosh_1 = tf.add(x, 1.1)
x_acosh_2 = tf.acosh(x_acosh_1) # accept (1, inf)
x_out = tf.divide(x_acosh_2, 1.4) # acosh(2.1)

x_atanh_1 = tf.divide(x, 1.1)
x_atanh_2 = tf.atanh(x_atanh_1) # accept (-1, 1)
x_out = tf.divide(x_atanh_2, 1.55) # atanh(1.0/1.1)

y = tf.identity(x_out, name='dnn_out') #please only preserve the x_out you want to test

sess=tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Ting Fu <ting.fu@intel.com>
2020-07-06 12:45:14 +08:00
Ting Fu
cd2e3a864d dnn_backend_native_layer_mathunary: add acosh support
Signed-off-by: Ting Fu <ting.fu@intel.com>
2020-07-06 12:45:14 +08:00
Ting Fu
9d14b38d9d dnn_backend_native_layer_mathunary: add asinh support
Signed-off-by: Ting Fu <ting.fu@intel.com>
2020-07-06 12:45:14 +08:00
Ting Fu
ea71e731f4 dnn_backend_native_layer_mathunary: add tanh support
Signed-off-by: Ting Fu <ting.fu@intel.com>
2020-07-06 12:45:14 +08:00
Ting Fu
62fc7e3035 dnn_backend_native_layer_mathunary: add cosh support
Signed-off-by: Ting Fu <ting.fu@intel.com>
2020-07-06 12:45:14 +08:00
Ting Fu
91b4037101 dnn_backend_native_layer_mathunary: add sinh support
Signed-off-by: Ting Fu <ting.fu@intel.com>
2020-07-06 12:45:14 +08:00
Ting Fu
13f5613e68 dnn_backend_native_layer_mathunary: add atan support
It can be tested with the model generated by the Python script below:

import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.atan(x)
x2 = tf.divide(x1, 3.1416/4) # pi/4
y = tf.identity(x2, name='dnn_out')

sess=tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo Yejun <yejun.guo@intel.com>
2020-06-25 08:41:50 +08:00
Ting Fu
461485feac dnn_backend_native_layer_mathunary: add acos support
It can be tested with the model generated by the Python script below:

import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.acos(x)
x2 = tf.divide(x1, 3.1416/2) # pi/2
y = tf.identity(x2, name='dnn_out')

sess=tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo Yejun <yejun.guo@intel.com>
2020-06-25 08:41:50 +08:00
Ting Fu
486c0c419d dnn_backend_native_layer_mathunary: add asin support
It can be tested with the model generated by the Python script below:

import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.asin(x)
x2 = tf.divide(x1, 3.1416/2) # pi/2
y = tf.identity(x2, name='dnn_out')

sess=tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo Yejun <yejun.guo@intel.com>
2020-06-25 08:41:50 +08:00
Ting Fu
22d0860c13 dnn_backend_native_layer_mathunary: add tan support
It can be tested with the model generated by the Python script below:

import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.multiply(x, 0.78)
x2 = tf.tan(x1)
y = tf.identity(x2, name='dnn_out')

sess=tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo Yejun <yejun.guo@intel.com>
2020-06-11 11:10:51 +08:00
Ting Fu
88fb494f42 dnn_backend_native_layer_mathunary: add cos support
It can be tested with the model generated by the Python script below:

import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.multiply(x, 1.5)
x2 = tf.cos(x1)
y = tf.identity(x2, name='dnn_out')

sess=tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo Yejun <yejun.guo@intel.com>
2020-06-11 11:10:51 +08:00
Ting Fu
0b6d3f0d83 dnn_backend_native_layer_mathunary: add sin support
It can be tested with the model file generated by the Python script below:

import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.multiply(x, 3.14)
x2 = tf.sin(x1)
y = tf.identity(x2, name='dnn_out')

sess=tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo Yejun <yejun.guo@intel.com>
2020-06-11 11:10:51 +08:00
Ting Fu
f73cc61bf5 dnn_backend_native_layer_mathunary: add abs support
more math unary operations will be added here

It can be tested with the model file generated by the Python script below:

import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.subtract(x, 0.5)
x2 = tf.abs(x1)
y = tf.identity(x2, name='dnn_out')

sess=tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-05-28 11:04:21 +08:00
Guo, Yejun
71e28c5422 dnn/native: add native support for minimum
It can be tested with the model file generated by the Python script below:

import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.minimum(0.7, x)
x2 = tf.maximum(x1, 0.4)
y = tf.identity(x2, name='dnn_out')

sess=tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-05-08 15:22:27 +08:00
Guo, Yejun
8ce9d88f93 dnn/native: add native support for divide
It can be tested with the model file generated by the Python script below:

import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
z1 = 2 / x
z2 = 1 / z1
z3 = z2 / 0.25 + 0.3
z4 = z3 - x * 1.5 - 0.3
y = tf.identity(z4, name='dnn_out')

sess=tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-04-22 13:15:00 +08:00
Guo, Yejun
ef79408e97 dnn/native: add native support for 'mul'
It can be tested with the model file generated by the Python script below:

import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
z1 = 0.5 + 0.3 * x
z2 = z1 * 4
z3 = z2 - x - 2.0
y = tf.identity(z3, name='dnn_out')

sess=tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-04-22 13:14:47 +08:00
Guo, Yejun
6aa7e07e7c dnn/native: add native support for 'add'
It can be tested with the model file generated by the Python script below:

import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
z1 = 0.039 + x
z2 = x + 0.042
z3 = z1 + z2
z4 = z3 - 0.381
z5 = z4 - x
y = tf.math.maximum(z5, 0.0, name='dnn_out')

sess=tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-04-22 13:14:30 +08:00
Guo, Yejun
ffa1561608 dnn_backend_native_layer_mathbinary: add sub support
more math binary operations will be added here
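
It can be tested with a model generated by a script like the sketch below (not part of the original commit; it mirrors the 'add' commit above, and the constants are editorial assumptions; the final maximum only keeps values in [0, 1] for saving):

import tensorflow as tf
import numpy as np
import imageio

# illustrative sketch; input file name and constants are placeholders
in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
z1 = x - 0.039          # tensor - scalar
z2 = 0.042 - x          # scalar - tensor
z3 = z1 - z2            # tensor - tensor
z4 = z3 - x
y = tf.math.maximum(z4, 0.0, name='dnn_out')

sess = tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))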

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-04-07 11:04:34 +08:00
Guo, Yejun
e52070e89c convert_from_tensorflow.py: add support when kernel size is 1*1 with one input/output channel (gray image)
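A sketch of the case this adds support for — a 1x1 kernel on a single-channel (gray) image (not part of the original commit; the grayscale conversion, the identity kernel initializer and the file names are editorial assumptions):

import tensorflow as tf
import numpy as np
import imageio

# illustrative sketch; assumes an RGB input image, reduced here to one gray channel
in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_gray = np.mean(in_img, axis=-1, keepdims=True)
in_data = in_gray[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 1], name='dnn_in')
# 1x1 convolution, one input and one output channel; ones kernel + zero bias acts as identity
x1 = tf.layers.conv2d(x, filters=1, kernel_size=1, padding='valid',
                      kernel_initializer=tf.ones_initializer())
y = tf.identity(x1, name='dnn_out')

sess = tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))
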
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
2019-12-13 11:41:10 -03:00
Guo, Yejun
dff39ea9f0 dnn: add tf.nn.conv2d support for native model
Unlike other tf.*.conv2d layers, tf.nn.conv2d does not create many
nodes (within a scope) in the graph; it just acts like other layers.
tf.nn.conv2d creates only one node in the graph, and no internal
nodes such as 'kernel' are created.

The format of the native model file also changes: a flag named
has_bias is added, so the version number is bumped.
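
A sketch of a model that uses tf.nn.conv2d directly (not part of the original commit; the kernel shape, the random weights and the file names are editorial assumptions):

import tensorflow as tf
import numpy as np
import imageio

# illustrative sketch; input file name and constants are placeholders
in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
# a single tf.nn.conv2d node: 3x3 kernel, 3 input and 3 output channels, no bias
kernel = tf.Variable(tf.random_normal([3, 3, 3, 3], stddev=0.1))
x1 = tf.nn.conv2d(x, kernel, strides=[1, 1, 1, 1], padding='SAME')
y = tf.identity(x1, name='dnn_out')

sess = tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))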

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
2019-10-30 10:31:55 -03:00
Guo, Yejun
b2683c66b2 libavfilter/dnn: add layer maximum for native mode.
The reason to add this layer is that it is used by srcnn in vf_sr.
This layer is currently ignored in native mode. After this patch,
we can add support for multiple outputs in native mode.

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
2019-09-20 10:57:18 -03:00
Guo, Yejun
022f50d3fe libavfilter/dnn: add header into native model file
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
2019-09-04 11:13:21 -03:00
Guo, Yejun
83e0b71f66 dnn: export operand info in python script and load in c code
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
2019-08-30 11:41:30 -03:00
Guo, Yejun
2d5e39c13e dnn: change .model file format to put layer number at the end of file
Currently the layer number is at the beginning of the .model file,
so the Python script has to scan twice: the first scan only retrieves
the layer number. With the layer number at the end of the .model file,
a single scan is enough.
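
A sketch of the single-pass write pattern this enables (illustrative only; the actual .model field layout is defined by the converter and is not reproduced here):

import struct

def write_model(filename, layer_blobs):
    # layer_blobs: an iterable of already-serialized layer records (bytes)
    count = 0
    with open(filename, 'wb') as f:
        for blob in layer_blobs:   # single pass: write each layer as it is produced
            f.write(blob)
            count += 1
        f.write(struct.pack('<I', count))  # append the layer number at the end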

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
2019-08-30 11:41:30 -03:00
Guo, Yejun
ddd92ba2c6 convert_from_tensorflow.py: support conv2d with dilation
conv2d with dilation > 1 generates tens of nodes in the graph, and it is
not easy to parse each node one by one, so special tricks are used to
parse the conv2d layer.
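
A sketch of a model with a dilated convolution that would exercise this path (not part of the original commit; the dilation rate, filter count and file names are editorial assumptions):

import tensorflow as tf
import numpy as np
import imageio

# illustrative sketch; input file name and constants are placeholders
in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
# 3x3 convolution with dilation 2 (effective receptive field 5x5)
x1 = tf.layers.conv2d(x, filters=3, kernel_size=3, dilation_rate=2,
                      padding='same', activation=tf.nn.relu)
y = tf.identity(x1, name='dnn_out')

sess = tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))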

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
2019-08-15 14:58:19 -03:00
Guo, Yejun
2c01434d60 convert_from_tensorflow.py: add option to dump graph for visualization in tensorboard
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
2019-08-15 14:58:19 -03:00
Guo, Yejun
ccbab41039 dnn: convert tf.pad to native model in python script, and load/execute it in the c code.
Since tf.pad is enabled, conv2d (valid padding) changes back to its original behavior.
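
A sketch of a graph that pads before a VALID convolution, which is the pattern this conversion targets (not part of the original commit; the padding mode, filter count and file names are editorial assumptions):

import tensorflow as tf
import numpy as np
import imageio

# illustrative sketch; input file name and constants are placeholders
in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
# pad 1 pixel on each side of height/width so the following VALID conv keeps the size
paddings = tf.constant([[0, 0], [1, 1], [1, 1], [0, 0]])
x1 = tf.pad(x, paddings, "SYMMETRIC")
x2 = tf.layers.conv2d(x1, filters=3, kernel_size=3, padding='valid', activation=tf.nn.relu)
y = tf.identity(x2, name='dnn_out')

sess = tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))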

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
2019-07-29 12:34:19 -03:00
Guo, Yejun
50e194e6e1 tools/python: add script to convert TensorFlow model (.pb) to native model (.model)
For example, given TensorFlow model file espcn.pb,
to generate native model file espcn.model, just run:
python convert.py espcn.pb

In the current implementation, the native model file is generated for a
specific DNN network with hard-coded Python scripts maintained outside of ffmpeg.
For example, the srcnn network used by vf_sr is generated with
https://github.com/HighVoltageRocknRoll/sr/blob/master/generate_header_and_model.py#L85

In this patch, the script is designed as a general solution which
converts a general TensorFlow model .pb file into a .model file. The script
currently has some tricks to stay compatible with the current implementation
and will be refined step by step.

The script is also added into the ffmpeg source tree. It is expected that
there will be many more patches, and the community needs ownership of it.

Another technical direction is to do the conversion in C/C++ code within the
ffmpeg source tree. Since the .pb file is organized with protocol buffers,
it is not easy to do such work with a small amount of C/C++ code; see more discussion
at http://ffmpeg.org/pipermail/ffmpeg-devel/2019-May/244496.html. So the
Python script was chosen.

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2019-07-01 10:23:47 -03:00