This exposes libplacebo's frame mixing functionality to vf_libplacebo,
by allowing users to specify a desired target fps to output at. Incoming
frames will be smoothly resampled (in a manner determined by the
`frame_mixer` option, to be added in the next commit).
To generate a consistently timed output stream, we directly use the
desired framerate as the timebase, and simply output frames in
sequential order (tracked by the number of frames output so far).
To preserve compatibility with the current behavior, we keep track of a
FIFO of exact frame timestamps that we want to output to the user. In
practice, this is essentially equivalent to the current filter_frame()
code, but this design allows us to scale to more complicated use cases
in the future - for example, insertion of intermediate frames
(deinterlacing, frame doubling, conversion to fixed fps, ...)
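As a toy illustration of the timestamp scheme described above (this is not
the actual vf_libplacebo code; all names and values are made up):

#include <inttypes.h>
#include <stdio.h>

/* Output pts is just a running frame index in a 1/fps timebase, while a
 * queue of source timestamps records which input pts each output frame
 * is meant to represent. */
int main(void)
{
    const int fps = 50;                           /* requested output rate  */
    const int64_t src_pts[3] = { 0, 3600, 7200 }; /* hypothetical input pts */
    int64_t out_frames = 0;                       /* frames emitted so far  */

    for (int i = 0; i < 3; i++) {
        int64_t out_pts = out_frames++;           /* sequential output index */
        printf("out pts %"PRId64" (t=%.3f s) represents src pts %"PRId64"\n",
               out_pts, (double)out_pts / fps, src_pts[i]);
    }
    return 0;
}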
This does not bring any immediate benefits, but refactors and
prepares the codebase for upcoming changes, which will include the
ability to do deinterlacing and resampling (frame mixing).
This commit contains no functional change. The goal is merely to
separate the highly intertwined `filter_frame` and `process_frames`
functions into their separate concerns, specifically to separate frame
uploading (which is now done directly in `filter_frame`) from emitting a
frame (which is now done by a dedicated function `output_frame`).
The overall idea here is to be able to ultimately call `output_frame`
multiple times, to e.g. emit several output frames for a single input
frame.
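A rough sketch of the resulting control flow (hypothetical helper names;
not the actual vf_libplacebo code):

/* filter_frame() now only uploads; output_frame() emits, and can later be
 * called several times per input frame. */
static int filter_frame(AVFilterLink *inlink, AVFrame *in)
{
    AVFilterContext *ctx = inlink->dst;
    int ret = upload_frame(ctx, in);       /* hypothetical GPU upload step */
    if (ret < 0)
        return ret;
    return output_frame(ctx, in->pts);     /* emit exactly one frame here  */
}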
For example
ffmpeg -f lavfi -i sine -af "aformat=cl=stereo|5.1|7.1,lowpass,aformat=cl=7.1|5.1|stereo" -f null -
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Before this commit, if an allocation failed in ff_add_channel_layout(),
the function would return a negative error code, which would cause the
wrong format to be picked up later. If the allocation did not fail, the
return code would be 0 and format negotiation would simply fail, as the
code would break out of the loop but with the wrong return code.
The error was introduced in commit 6aaac24d72.
Fixes #6638
This is not public API, so it has no need for alloc() and free()
functions. The struct can reside on the stack.
Signed-off-by: James Almer <jamrial@gmail.com>
This filter can correct certain issues seen from upstream sources
where the cc_count is not properly set or the CEA-608 tuples are
not at the start of the payload as expected.
Make use of the ccfifo to extract and immediately repack the CEA-708
side data, thereby removing any extra padding and ensuring the 608
tuples are at the front of the payload.
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
Because the interlacing filter halves the effective framerate, we
need to ensure that no CEA-708 data is lost as frames are merged.
Make use of the new ccfifo mechanism to ensure that caption data
is properly preserved as frames pass through the filter.
Thanks to Thomas Mundt for review and noticing a couple of
missed codepaths for injection on output. Thanks to Lance Wang
for pointing out a memory leak.
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
Various deinterlacing modes have the effect of doubling the
framerate, and we need to ensure that the caption data isn't
duplicated (or else you get double captions on-screen).
Use the new ccfifo mechanism for yadif (and yadif_cuda and bwdif
since they use the same yadif core) so that CEA-708 data is
properly preserved through this filter.
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
The existing implementation made an attempt to remove duplicate
captions if increasing the framerate, but made no attempt to
handle reducing the framerate, nor did it rewrite the caption
payloads to have the appropriate cc_count (e.g. the cc_count needs
to change from 20 to 10 when going from 1080i59 to 720p59 and
vice-versa).
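For illustration: 1080i59 carries 29.97 frames per second, so 20 cc tuples
per frame is roughly 599 tuples per second, while 720p59 carries 59.94
frames per second, so 10 tuples per frame yields the same overall rate.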
Make use of the new ccfifo mechanism to ensure that caption data
is properly preserved.
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
When transcoding video that contains 708 closed captions, the
caption data is tied to the frames as side data. Simply dropping
or adding frames to change the framerate will result in loss of
data, so the caption data needs to be preserved and reformatted.
For example, without this patch converting 720p59 to 1080i59
would result in loss of 50% of the caption bytes, resulting in
garbled 608 captions and 708 probably wouldn't render at all.
Further, the frames that are there will have an illegal
cc_count for the target framerate, so some decoders may ignore
the packets entirely.
Extract the 608 and 708 tuples and insert them onto queues. Then
after dropping/adding frames, re-write the tuples back into the
resulting frames at the appropriate rate given the target
framerate. This includes both having the correct cc_count as
well as clocking out the 608 pairs at the appropriate rate.
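As a rough sketch of this flow (the helper names are assumed here for
illustration and may not match the final code):

/* On each input frame: pull the A/53 side data apart and queue the
 * 608/708 tuples. */
ff_ccfifo_extract(&s->cc_fifo, in);

/* On each frame actually emitted after dropping/adding frames: write back
 * a payload whose cc_count matches the output framerate, clocking the
 * queued 608 pairs out at the appropriate rate. */
ff_ccfifo_inject(&s->cc_fifo, out);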
Thanks to Lance Wang <lance.lmwang@gmail.com>, Anton
Khirnov <anton@khirnov.net>, and Michael Niedermayer <michael@niedermayer.cc>
for providing review/feedback.
Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
Motivated by a desire to use vf_libplacebo as a GPU-accelerated
cropping/padding/zooming filter. This commit adds support for setting
the `input/target.crop` fields as dynamic expressions.
Re-use the same generic variables available to other scale and crop type
filters, and also add some more that we can afford as a result of being
able to set these properties dynamically.
It's worth pointing out that `out_t/ot` is currently redundant with
`in_t/t` since it will always contain the same PTS values, but I plan on
changing this in the near future.
I decided to also expose `crop_w/crop_h` and `pos_w/pos_h` as variables
in the expression parser itself, since this enables the fairly common
use case of determining dimensions first and then placing the image
appropriately, such as is done in the default behavior (which centers
the cropped/placed region by default).
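For example, a hypothetical invocation (option and variable names as
suggested above) that crops the central half of the input while keeping it
centered:
$ ffmpeg -init_hw_device vulkan -i in.mp4 -vf \
  "libplacebo=crop_w=iw/2:crop_h=ih/2:crop_x=(iw-crop_w)/2:crop_y=(ih-crop_h)/2" out.mp4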
In some circumstances, libplacebo will clear the background as a result
of cropping/padding. Currently, this uses the hard-coded default fill
color of black. This option makes this behavior configurable.
The native backend will be removed, so change the default backend in
filters, and also remove the Python scripts which generate native model
files.
Signed-off-by: Ting Fu <ting.fu@intel.com>
The native backend will be removed in the following commits, so change the
dnn interface and modify the error message in it first.
Signed-off-by: Ting Fu <ting.fu@intel.com>
Not doing so is an obvious oversight - the ICC profile is tied to the
original colorspace, so if we change it, we should definitely strip this
information.
We should probably also have an extra option to control whether the ICC
profile should be stripped, ignored, or applied, but for now this fixes
an existing bug.
Otherwise main and overlay videos share the same input region. Note that a
NULL pointer implies the whole overlay video will be processed.
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
Clarify the failure in case the x/y values build a too-big matrix.
Example:
$ ffmpeg -hide_banner -f lavfi -i color=c=white:size=640x360,unsharp=lx=5:ly=23 -f null -t 1 -
[Parsed_unsharp_1 @ 0x5650e1c30240] luma matrix size (lx/2+ly/2)*2=26 greater than maximum value 25
color=c=white:size=640x360,unsharp=lx=5:ly=23: Invalid argument
Fix trac issue:
http://trac.ffmpeg.org/ticket/6033
Otherwise the output format is not changed when output is in system
memory. For example, the output format is still p010le in the following
case:
$ ffmpeg -qsv_device /dev/dri/renderD128 -f lavfi -i testsrc -vf
"format=p010le,vpp_qsv=extra_hw_frames=8:format=nv12" -f null -
...
Output #0, null, to 'pipe:':
Metadata:
encoder : Lavf60.4.100
Stream #0:0: Video: wrapped_avframe, p010le(tv, progressive), 320x240
[SAR 1:1 DAR 4:3], q=2-31, 200 kb/s, 25 fps, 25 tbn
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
Add stack_internal.h to the list of skipped headers to fix the
make checkheaders failure introduced by commit 742dfa281.
Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
These fields are supposed to store information about the packet the
frame was decoded from, specifically the byte offset it was stored at
and its size.
However,
- the fields are highly ad-hoc - there is no strong reason why
specifically those (and not any other) packet properties should have a
dedicated field in AVFrame; unlike e.g. the timestamps, there is no
fundamental link between coded packet offset/size and decoded frames
- they only make sense for frames produced by decoding demuxed packets,
and even then it is not always the case that the encoded data was
stored in the file as a contiguous sequence of bytes (in order for pos
to be well-defined)
- pkt_pos was added without much explanation, apparently to allow
passthrough of this information through lavfi in order to handle byte
seeking in ffplay. That is now implemented using arbitrary user data
passthrough in AVFrame.opaque_ref.
- several filters use pkt_pos as a variable available to user-supplied
expressions, but there seems to be no established motivation for using it.
- pkt_size was added for use in ffprobe, but that too is now handled
without using this field. Additionally, the values of this field
produced by libavcodec are flawed, as described in the previous
ffprobe conversion commit.
In summary - these fields are ill-defined and insufficiently motivated,
so deprecate them.
Rescale the timestamp for AVERROR_EOF. This can fix tickets 10261 and
10262.
Tested-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
SSIM360Context.ssim360_hist is an array of four pointers to double;
so sizeof(*ssim360_hist[0]) (=sizeof(double)) is the correct size
to use to calculate the amount of memory to allocate, not
sizeof(*ssim360_hist) (which is sizeof(double*)).
Use FF_ALLOCZ_TYPED_ARRAY to avoid this issue altogether.
Fixes Coverity issue #1520671.
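A minimal illustration of the pitfall (not the actual filter code):

#include <stdlib.h>

/* For "double *hist[4]", sizeof(*hist) is the size of a pointer, while
 * sizeof(*hist[0]) is sizeof(double); only the latter reserves enough
 * room for an array of doubles. */
int main(void)
{
    double *hist[4];
    size_t nb_bins = 256;                          /* hypothetical count */
    hist[0] = calloc(nb_bins, sizeof(*hist[0]));   /* correct            */
    /* hist[0] = calloc(nb_bins, sizeof(*hist));      under-allocates    */
    free(hist[0]);
    return 0;
}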
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Reviewed-by: Jan Ekström <jeebjp@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This has not been functional for about a year, including in our current
minimum dependency of libplacebo (v4.192.0). It also causes build errors
against libplacebo v6, so it needs to be removed from the code. We can
keep the option around for now, but it should also be removed soon.
Signed-off-by: Niklas Haas <git@haasn.dev>
Signed-off-by: James Almer <jamrial@gmail.com>
This reverts commit 205117d87f.
This didn't fix make checkheaders after all, and also broke compilation in some
scenarios.
Signed-off-by: James Almer <jamrial@gmail.com>
Restores the behavior of naming the instance filter@id, which was accidentally changed
to simply id in commit f17051eaae.
Fixes ticket #10226.
Signed-off-by: James Almer <jamrial@gmail.com>
Callers currently have two ways of adding filters to a graph - they can
either
- create, initialize, and link them manually
- use one of the avfilter_graph_parse*() functions, which take a
(typically end-user-written) string, split it into individual filter
definitions+options, then create filters, apply options, initialize
filters, and finally link them - all based on information from this
string.
A major problem with the second approach is that it performs many
actions as a single atomic unit, leaving the caller no space to
intervene in between. Such intervention would be useful e.g. to
- modify filter options;
- supply hardware device contexts;
both of which typically must be done before the filter is initialized.
Callers who need such intervention are then forced to invent their own
filtergraph parsing, which is clearly suboptimal.
This commit aims to address this problem by adding a new modular
filtergraph parsing API. It adds a new avfilter_graph_segment_parse()
function to parse a string filtergraph description into an intermediate
tree-like representation (AVFilterGraphSegment and its children).
This intermediate form may then be applied step by step using further
new avfilter_graph_segment*() functions, with user intervention possible
between each step.
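A rough sketch of how a caller could use this (the names of the stepwise
functions beyond avfilter_graph_segment_parse() are assumed here; error
handling is abbreviated):

#include <libavfilter/avfilter.h>

/* Parse the description, create the filters, intervene (here: attaching a
 * hardware device context, purely as an example of caller intervention),
 * then apply options, initialize and link. */
int build_graph(AVFilterGraph *graph, const char *desc, AVBufferRef *hw_dev)
{
    AVFilterGraphSegment *seg = NULL;
    AVFilterInOut *inputs = NULL, *outputs = NULL;
    int ret;

    if ((ret = avfilter_graph_segment_parse(graph, desc, 0, &seg)) < 0)
        return ret;
    ret = avfilter_graph_segment_create_filters(seg, 0);

    /* intervention point: the filters exist but are not yet initialized */
    for (unsigned i = 0; ret >= 0 && i < graph->nb_filters; i++)
        graph->filters[i]->hw_device_ctx = av_buffer_ref(hw_dev);

    if (ret >= 0)
        ret = avfilter_graph_segment_apply_opts(seg, 0);
    if (ret >= 0)
        ret = avfilter_graph_segment_init(seg, 0);
    if (ret >= 0)
        ret = avfilter_graph_segment_link(seg, 0, &inputs, &outputs);

    avfilter_inout_free(&inputs);
    avfilter_inout_free(&outputs);
    avfilter_graph_segment_free(&seg);
    return ret;
}

In practice a caller would attach the device only to the filters that need
it; the point is merely that the graph can be adjusted while it is still
un-initialized.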
Also, replace an AVFilterContext argument with a logging context+private
class, as those are the only things needed in this function.
Will be useful in future commits.