mirror of https://github.com/FFmpeg/FFmpeg.git synced 2025-02-09 14:14:39 +02:00

8435 Commits

Author SHA1 Message Date
Paul B Mahol
cca982ee01 avfilter/vf_colorbalance: remove wrong addition 2020-06-29 14:52:37 +02:00
Limin Wang
12c42c709e avfilter/vf_showinfo: add a \n at the end of ERROR and WARNING logs
Note: for the info level, one extra \n will be printed after the log.

Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
2020-06-28 09:00:28 +08:00
exwm
32d6fe23b6 avfilter/zoompan: add in_time variable
Currently, the zoompan filter exposes a 'time' variable (missing from docs) for use in
the 'zoom', 'x', and 'y' expressions. This variable is perhaps better named
'out_time' as it represents the timestamp in seconds of each output frame
produced by zoompan. This patch adds aliases 'out_time' and 'ot' for 'time'.

This patch also adds an 'in_time' (alias 'it') variable that provides access
to the timestamp in seconds of each input frame to the zoompan filter.
This helps to design zoompan filters that depend on the input video timestamps.
For example, it makes it easy to zoom in instantly for only some portion of a video.
Both the 'out_time' and 'in_time' variables have been added to the documentation
for zoompan.

Example usage of 'in_time' in the zoompan filter to zoom in 2x for the
first second of the input video and 1x for the rest:
    zoompan=z='if(between(in_time,0,1),2,1):d=1'
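
As a complete command-line sketch (the input/output file names are hypothetical,
not part of this patch), the expression might be used as:
    ffmpeg -i input.mp4 -vf "zoompan=z='if(between(in_time,0,1),2,1)':d=1" output.mp4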

V2: Fix zoompan filter documentation stating that the time variable
would be NAN if the input timestamp is unknown.

V3: Add 'it' alias for 'in_time'. Add 'out_time' and 'ot' aliases for 'time'.
Minor corrections to zoompan docs.

Signed-off-by: exwm <thighsman@protonmail.com>
2020-06-25 10:27:07 +02:00
Ting Fu
13f5613e68 dnn_backend_native_layer_mathunary: add atan support
It can be tested with the model generated by the Python script below:

import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.atan(x)
x2 = tf.divide(x1, 3.1416/4) # pi/4
y = tf.identity(x2, name='dnn_out')

sess=tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))
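
For reference, a sketch of the follow-up steps (the convert.py invocation and the
dnn_processing option values are assumptions, not part of this commit):

    # convert the frozen graph into the native DNN model format (assumed usage)
    python path_to_ffmpeg/tools/python/convert.py image_process.pb
    # apply the generated model with the native backend (assumed filter options)
    ffmpeg -i input.jpeg -vf dnn_processing=dnn_backend=native:model=image_process.model:input=dnn_in:output=dnn_out out_native.jpg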

Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo Yejun <yejun.guo@intel.com>
2020-06-25 08:41:50 +08:00
Ting Fu
461485feac dnn_backend_native_layer_mathunary: add acos support
It can be tested with the model generated by the Python script below:

import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.acos(x)
x2 = tf.divide(x1, 3.1416/2) # pi/2
y = tf.identity(x2, name='dnn_out')

sess=tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo Yejun <yejun.guo@intel.com>
2020-06-25 08:41:50 +08:00
Ting Fu
486c0c419d dnn_backend_native_layer_mathunary: add asin support
It can be tested with the model generated by the Python script below:

import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.asin(x)
x2 = tf.divide(x1, 3.1416/2) # pi/2
y = tf.identity(x2, name='dnn_out')

sess=tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo Yejun <yejun.guo@intel.com>
2020-06-25 08:41:50 +08:00
Paul B Mahol
ce297b44d3 avfilter/vf_v360: do not ignore return value of allocate_plane() 2020-06-23 21:55:40 +02:00
Paul B Mahol
00a5df71ad avfilter/vf_v360: add orthographic projection support 2020-06-23 16:00:02 +02:00
Paul B Mahol
44ce333f03 avfilters/vf_v360: add equisolid projection support 2020-06-22 14:41:36 +02:00
Andreas Rheinhardt
3f2be5372e avfilter/vf_showpalette: Don't pretend disp_palette can fail
It can't fail, yet it returns an int and other code checks whether it
failed; and if it did fail, an AVFrame would leak. One could of course add
an av_frame_free for this (which compilers could optimize away), but it is
easier to simply stop pretending that disp_palette can fail.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
2020-06-22 13:52:01 +02:00
Paul B Mahol
fdac3c80ac avfilter/af_ladspa: check return value of getenv() 2020-06-21 21:35:40 +02:00
Paul B Mahol
683a1599d4 avfilter/af_ladspa: add latency compensation 2020-06-21 21:35:40 +02:00
Paul B Mahol
842bc312ad avfilter/af_ladspa: check another directory for plugins 2020-06-21 14:48:27 +02:00
Limin Wang
548ef7a12b avfilter: add D2TS, TS2D, TS2T as a common macro in internal.h
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
2020-06-19 23:12:49 +08:00
Limin Wang
dacae40a4b avfilter/vf_overlay: add yuv420p10 and yuv422p10 10bit format support
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
2020-06-19 07:14:46 +08:00
Limin Wang
4d787c16e8 avfilter/vf_overlay: support for 8bit and 10bit overlay with macro-based function
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
2020-06-19 07:14:46 +08:00
Guo Yejun
0b3bd001ac dnn_backend_native: check operand index
This fixes the issue in https://trac.ffmpeg.org/ticket/8716.
2020-06-17 13:42:52 +08:00
Guo Yejun
fc932195ab dnn_backend_native.c: refine code for fail case 2020-06-17 13:42:52 +08:00
Limin Wang
567d571b20 avfilter/vf_showinfo: display H.26[45] user data unregistered sei message
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
2020-06-15 07:19:55 +08:00
Paul B Mahol
c0e7164ba6 avfilter/vf_vaguedenoiser: fix small typo in option explanation 2020-06-13 00:41:16 +02:00
Paul B Mahol
e65d76fb94 avfilter/af_rubberband: adjust nb_samples after every command 2020-06-13 00:21:07 +02:00
Ting Fu
22d0860c13 dnn_backend_native_layer_mathunary: add tan support
It can be tested with the model generated by the Python script below:

import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.multiply(x, 0.78)
x2 = tf.tan(x1)
y = tf.identity(x2, name='dnn_out')

sess=tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo Yejun <yejun.guo@intel.com>
2020-06-11 11:10:51 +08:00
Ting Fu
88fb494f42 dnn_backend_native_layer_mathunary: add cos support
It can be tested with the model generated by the Python script below:

import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.multiply(x, 1.5)
x2 = tf.cos(x1)
y = tf.identity(x2, name='dnn_out')

sess=tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo Yejun <yejun.guo@intel.com>
2020-06-11 11:10:51 +08:00
Ting Fu
0b6d3f0d83 dnn_backend_native_layer_mathunary: add sin support
It can be tested with the model file generated by the Python script below:

import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.multiply(x, 3.14)
x2 = tf.sin(x1)
y = tf.identity(x2, name='dnn_out')

sess=tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo Yejun <yejun.guo@intel.com>
2020-06-11 11:10:51 +08:00
Anton Khirnov
c7d8d8d8d9 vf_spp: switch to child_class_iterate() 2020-06-10 12:36:44 +02:00
Anton Khirnov
6bfac4ee6f vf_scale: switch to child_class_iterate() 2020-06-10 12:36:44 +02:00
Anton Khirnov
344149cf01 framesync: switch to child_class_iterate() 2020-06-10 12:36:44 +02:00
Anton Khirnov
aba98de6b8 avfilter: switch to child_class_iterate() 2020-06-10 12:36:44 +02:00
Anton Khirnov
342230a537 af_resample: switch to child_class_iterate() 2020-06-10 12:36:44 +02:00
Anton Khirnov
3dd324427a af_aresample: switch to child_class_iterate() 2020-06-10 12:36:44 +02:00
Anton Khirnov
0d6b4351c6 Remove unnecessary use of avcodec_close().
Replace it with avcodec_free_context() or drop it completely as
appropriate.
2020-06-10 11:31:16 +02:00
Michael Niedermayer
c5079bf3bc Bump minor versions after branching 4.3
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2020-06-08 22:49:04 +02:00
Michael Niedermayer
0a8a96c251 Bump minor versions to separate 4.3 from master
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2020-06-08 22:49:04 +02:00
Paul B Mahol
bd6336b970 avfilter/vf_vaguedenoiser: add new type of threshold 2020-06-07 15:20:25 +02:00
Paul B Mahol
6c57b0d63a avfilter/vf_vaguedenoiser: remove excessive code from soft thresholding 2020-06-07 15:20:11 +02:00
Paul B Mahol
7826fbfeaa avfilter/avf_showspectrum: properly handle EOF case 2020-06-06 19:49:14 +02:00
Paul B Mahol
1c32d7dfcf avfilter/asrc_anoisesrc: switch to activate
Allows setting the EOF timestamp.
2020-06-06 15:53:07 +02:00
Wu Zhiwen
b6d7c4c1d4 dnn/native: fix typo for definition of DOT_INTERMEDIATE
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2020-06-03 09:57:22 +08:00
Andreas Rheinhardt
317b722c51 avfilter/vf_lut3d: Fix mixed declaration and code
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
2020-06-01 15:21:40 +02:00
Mark Reid
a1221b96d8 avfilter/vf_lut3d: prelut support for 3d cinespace luts
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2020-05-31 00:55:12 +02:00
Paul B Mahol
1329db8cfb avfilter/af_aiir: simplify polynomial evaluation 2020-05-30 18:04:14 +02:00
Paul B Mahol
aac16abd92 avfilter/af_aiir: use correct size when allocating in zp2tf 2020-05-30 18:04:14 +02:00
Paul B Mahol
726dbc57f8 avfilter: add dblur video filter 2020-05-30 18:04:14 +02:00
Jun Zhao
018cd437f8 lavfi/aiir: Refine the pad/vpad related operation
Move the pad/vpad-related operations to a more natural
coding style.

Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
2020-05-30 19:02:43 +08:00
Jun Zhao
ff8329a730 lavfi/afir: fix vpad.name leak
Fix the vpad.name leak in the error path; perform the vpad-related operations
only when showing the IR frequency response is enabled.

Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
2020-05-30 19:02:34 +08:00
Paul B Mahol
6485b54477 Revert "avfilter/af_aiir: move response drawing as last step"
This reverts commit ca7095a9072fab4cdb41af12da9d94752e082e34.
2020-05-30 10:05:19 +02:00
Paul B Mahol
3fc7b01c52 avfilter/af_aiir: improve response calculation with zp coefficients 2020-05-30 10:05:19 +02:00
Paul B Mahol
e2e8121eaa avfilter/af_aiir: add S-plane support 2020-05-30 10:05:19 +02:00
Paul B Mahol
327b52412d avfilter/af_aiir: make it clear that transfer function is digital one 2020-05-30 10:05:19 +02:00
Paul B Mahol
1206a10d9c avfilter/af_biquads: implement 1st order allpass 2020-05-30 09:57:04 +02:00