Also documents all options supported by the hwdevice.
This lets users enable all extensions they need without writing their own
instance initialization code.
After this claim was made in e34e361 kamedo2 did an in-depth ABX
test comparing these encoders:
https://hydrogenaud.io/index.php?topic=111085.0
Result: FFmpeg AAC wasn't as good as libfdk_aac on average.
I know some things have changed since then such as, "use the fast
coder as the default" (fcb681ac) for example, so maybe the situation
is different now.
However, I am unaware of any recent comparison, so without any
substantiation we shouldn't make such a blatant claim.
Signed-off-by: Lou Logan <lou@lrcd.com>
Signed-off-by: Gyan Doshi <ffmpeg@gyani.pro>
It's based on the following specs:
RDD 45:2017 - SMPTE Registered Disclosure Doc - Interoperable Master Format - Application ProRes
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
It's based on the following specs:
RDD 36:2015 - SMPTE Registered Disclosure Doc - Apple ProRes Bitstream Syntax and Decoding Process
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
Sequence numbers of segments should be unique. If an encoder is using segments
shorter than 1 second and it is restarted, future segments will reuse
already-used sequence numbers if the initial sequence number is based on the
number of seconds since the epoch rather than microseconds.
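As a rough illustration (a sketch, not the muxer's actual code), a
microsecond-based initial sequence number can be taken from the wallclock:

#include "libavutil/time.h"

// av_gettime() returns wallclock microseconds since the epoch, so an encoder
// restarted even a fraction of a second later gets a strictly larger initial
// sequence number than any segment it produced before the restart
int64_t initial_sequence = av_gettime();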
Signed-off-by: Marton Balint <cus@passwd.hu>
Not every user knows the reasonable value range of in_pad and out_pad, so they
may try to set 1.0; but 1.0 is so large that it triggers a fatal error.
Suggested-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Steven Liu <lq@chinaffmpeg.org>
Bump the minor version for the DOVI side data, because dovi_meta.h was added
as part of the lavu API. Also update APIchanges.
Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
Up until now, the Matroska muxer would mark a track as default if it had
the disposition AV_DISPOSITION_DEFAULT or if there was no track with
AV_DISPOSITION_DEFAULT set; in the latter case, even more than one track
of a kind (audio, video, subtitles) was marked as default, which is not
sensible.
This commit changes the logic used to mark tracks as default. There are
now three modes for this:
a) In the "infer" mode the first track of every type (audio, video,
subtitles) with default disposition set will be marked as default; if
there is no such track (for a given type), then the first track of this
type (if existing) will be marked as default. This behaviour is inspired
by mkvmerge. It ensures that the default flags will be set in a sensible
way even if the input comes from containers that lack the concept of
default flags. This mode is the default mode.
b) The "infer_no_subs" mode is similar to the "infer" mode; the
difference is that if no subtitle track with default disposition exists,
no subtitle track will be marked as default at all.
c) The "passthrough" mode: Here the track will be marked as default if
and only the corresponding input stream had disposition default.
This fixes ticket #8173 (the passthrough mode is ideal for this) as
well as ticket #8416 (the "infer_no_subs" mode leads to the desired
output).
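For API users, a minimal sketch of selecting one of these modes, assuming the
muxer exposes it as the private option "default_mode" (value illustrative):

#include "libavutil/opt.h"

// pick the Matroska default-flag mode before avformat_write_header()
int ret = av_opt_set(ofmt_ctx->priv_data, "default_mode", "infer_no_subs", 0);
if (ret < 0)
    return ret;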
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Previously, there was no way to flush an encoder such that after
draining, the encoder could be used again. We generally suggested
that clients tear down and replace the encoder instance in these
situations. However, for at least some hardware encoders, the cost of
this teardown/replace cycle is very high, which can get in the way of
some use cases, for example segmented encoding with nvenc.
To help address that use case, we added support to nvenc for calling
avcodec_flush_buffers(), and things worked in practice, although it was
not clearly documented whether this should work
or not. There was only one previous example of an encoder implementing
the flush callback (audiotoolboxenc) and it's unclear if that was
intentional or not. However, it was clear that calling
avcodec_flush_buffers() on any other encoder would leave the encoder in
an undefined state, and that's not great.
As part of cleaning this up, this change introduces a formal capability
flag for encoders that support flushing and ensures a flush call is a
no-op for any other encoder. This allows client code to check if it is
meaningful to call flush on an encoder before actually doing it.
I have not attempted to separate the steps taken inside
avcodec_flush_buffers() because it's not doing anything that's wrong
for an encoder. But I did add a sanity check to reject attempts to
flush a frame threaded encoder because I couldn't wrap my head around
whether that code path was actually safe or not. As this combination
doesn't exist today, we'll deal with it if it ever comes up.
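A minimal sketch of the intended client-side pattern, assuming the new
capability flag is AV_CODEC_CAP_ENCODER_FLUSH:

if (avctx->codec->capabilities & AV_CODEC_CAP_ENCODER_FLUSH) {
    avcodec_flush_buffers(avctx);   // encoder is usable again after draining
} else {
    avcodec_free_context(&avctx);   // fall back to teardown/replace
    // ... reopen a fresh encoder instance here
}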
The current design, where
- proper init is called for the first per-thread context
- first thread's private data is copied into private data for all the
other threads
- a "fixup" function is called for all the other threads to e.g.
allocate dynamically allocated data
is very fragile and hard to follow, so it is abandoned. Instead, the
same init function is used to init each per-thread context. Where
necessary, AVCodecInternal.is_copy can be used to differentiate between
the first thread and the other ones (e.g. for decoding the extradata
just once).
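A hedged sketch of what a per-thread init may now look like (everything except
AVCodecInternal.is_copy is hypothetical):

static av_cold int mycodec_init(AVCodecContext *avctx)
{
    MyCodecContext *s = avctx->priv_data;   // hypothetical codec context

    // per-thread dynamic allocations happen in every context
    s->scratch = av_malloc(SCRATCH_SIZE);   // size illustrative
    if (!s->scratch)
        return AVERROR(ENOMEM);

    // one-time work runs only in the first per-thread context
    if (!avctx->internal->is_copy)
        return mycodec_parse_extradata(avctx);   // hypothetical helper

    return 0;
}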
Currently, the model outputs the rain, so a subtraction is needed in the
filter's C code to get the final derain result. I've sent a PR to update the
model file, which has been accepted; see
https://github.com/XueweiMeng/derain_filter/pull/3
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Steven Liu <lq@chinaffmpeg.org>
Related to this are the following changes:
* Mention the GNUmakefile that AviSynth+ provides for installing
just the headers.
* Expand on users installing AviSynth on their system a little
more.
When the user opted to write the Cues at the beginning, the Cues were
simply written without checking in advance whether enough space had been
reserved for them. If it wasn't enough, the data following the space
reserved for the Cues was simply overwritten, corrupting the file.
This commit changes this by checking whether enough space has been
reserved for the Cues before outputting anything. If it isn't enough,
no Cues will be output at all and the file will be finalized normally;
writing the trailer will nevertheless return an error to notify the user
that the wish of having Cues at the front of the file hasn't been
fulfilled.
This change opens up new use cases for this option: it is now safe to use
it to e.g. record live streams, or when muxing the output of an expensive
encoding, because if the reserved space turns out to be insufficient, one
ends up with a file that merely lacks Cues but is otherwise fine.
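A hedged sketch of reserving that space from the API side, assuming the
muxer's option is named "reserve_index_space" (the amount is illustrative and
should be estimated generously):

// ask the Matroska muxer to reserve room for the Cues at the file start
av_opt_set_int(ofmt_ctx->priv_data, "reserve_index_space", 1 << 20, 0);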
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
This commit updates the documentation of av_read_frame() to match its
actual behaviour in several ways:
1. On success, av_read_frame() always returns refcounted packets.
2. It can handle uninitialized packets.
3. On error, it always returns blank packets.
This will allow callers to not initialize or unref unnecessarily.
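A minimal sketch of a read loop relying on this behaviour (process_packet()
is a hypothetical consumer):

AVPacket *pkt = av_packet_alloc();
if (!pkt)
    return AVERROR(ENOMEM);
// no per-iteration (re)initialization of pkt is needed
while (av_read_frame(fmt_ctx, pkt) >= 0) {
    process_packet(pkt);    // pkt is guaranteed to be refcounted here
    av_packet_unref(pkt);
}
// on error av_read_frame() left pkt blank, so no final unref is required
av_packet_free(&pkt);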
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Up until now, it was completely unspecified what the content of the
destination packet dst was on error. Depending upon where the error
happened calling av_packet_unref() on dst might be dangerous.
This commit changes this by making sure that dst is blank on error, so
unreferencing it again is safe (and still pointless). This behaviour is
documented.
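A minimal sketch of the now-guaranteed behaviour:

AVPacket dst;
if (av_packet_ref(&dst, src) < 0) {
    // dst is guaranteed to be blank here, so this is safe (and pointless):
    av_packet_unref(&dst);
}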
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
They use non-public functions, which is unacceptable for a public API
example. Rename the example back to avio_list_dir.
This effectively reverts c84d208c27 and
767d780ec0.
The Y channel is handled by dnn, and also resized by dnn. The UV channels
are resized with swscale.
The command to use espcn.pb (see vf_sr) looks like:
./ffmpeg -i 480p.jpg -vf format=yuv420p,dnn_processing=dnn_backend=tensorflow:model=espcn.pb:input=x:output=y -y tmp.espcn.jpg
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Reviewed-by: Pedro Arthur <bygrandao@gmail.com>
Only the Y channel is handled by dnn; the UV channels are copied
without changes.
The command to use srcnn.pb (see vf_sr) looks like:
./ffmpeg -i 480p.jpg -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=tensorflow:model=srcnn.pb:input=x:output=y -y srcnn.jpg
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Reviewed-by: Pedro Arthur <bygrandao@gmail.com>
Supports connecting to a RabbitMQ broker via AMQP version 0-9-1.
Signed-off-by: Andriy Gelman <andriy.gelman@gmail.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
Signed-off-by: Zane van Iperen <zane@zanevaniperen.com>
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
rw_timeout is the generic URLContext option, not the protocol-specific
timeout option; also, ?rw_timeout never worked because ?timeout was parsed
instead.
Signed-off-by: Marton Balint <cus@passwd.hu>
This adds a decoder for Broderbund's sprite-based QuickTime CDToons
codec, based on the decoder I wrote for ScummVM.
Signed-off-by: Alyssa Milburn <amilburn@zall.org>
This required minimal changes to the code, so it made sense to implement.
FFT and MDCT tested, the output of both was properly rounded.
Fun fact: the non-power-of-two fixed-point FFT and MDCT are the fastest ever
non-power-of-two fixed-point FFT and MDCT written.
This can replace the power of two integer MDCTs in aac and ac3 if the
MIPS optimizations are ported across.
Unfortunately the ac3 encoder uses a 16-bit fixed-point forward transform,
unlike the decoder, which uses a 32-bit inverse transform, so some
modifications might be required there.
The 3-point FFT is somewhat less accurate than it otherwise could be,
having minor rounding errors with bigger transforms. However, this
could be improved later, and the way it's currently written is the way one
would write assembly for it.
Similar rounding errors can also be found throughout the power of two FFTs
as well, though those are more difficult to correct.
Despite this, the integer transforms are more than accurate enough.
Compared to ad-hoc if (printed) ... code, this allows the user to disable
the message by adjusting the log level.
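A hedged usage sketch, assuming the helper added here is av_log_once()
(context and message are illustrative):

// warn once at full visibility, then demote repeats to debug level, so the
// user can still see every occurrence by raising the log level
static int warn_state;
av_log_once(avctx, AV_LOG_WARNING, AV_LOG_DEBUG, &warn_state,
            "feature X is not fully supported\n");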
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
In order for rate control to correctly allocate bitrate to each temporal
layer, the correct temporal layer id has to be set for each frame. This
commit provides the ability to set the correct temporal layer id for each
frame.
Signed-off-by: James Zern <jzern@google.com>
As we find ourselves wanting a way to transfer frames between
HW devices (or more realistically, between APIs on the same device),
it's desirable to have a way to describe the relationship. While
we could imagine introducing a `hwtransfer` filter, there is
almost no difference from `hwupload`. The main new feature we need
is a way to specify the target device. Having a single device
for the filter chain is obviously insufficient if we're dealing
with two devices.
So let's add a way to specify the upload target device, and if none
is specified, continue with the existing behaviour.
We must also correctly preserve the sw_format on such a transfer.
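A hedged usage sketch, assuming the new option is exposed as derive_device
(the device types and the rest of the chain are illustrative):
./ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi -i INPUT -vf hwupload=derive_device=vulkan,hwdownload,format=nv12 OUTPUT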
This commit adds the necessary code to initialize and use a Vulkan device
within the hwcontext libavutil framework.
Currently direct mapping to VAAPI and DRM frames is functional, and
transfers to CUDA and native frames are supported.
Let's hope the future Vulkan video decode extension fits well within this
framework.
Previously, the default palette would always be used.
Now, we can accept a custom palette, just like dvdsubdec does.
Signed-off-by: Michael Kuron <michael.kuron@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This changes the separator character from comma to colon, but since this option
was only added recently I think it should be done for consistency with other
similar options.
Signed-off-by: Marton Balint <cus@passwd.hu>
This commit reuses, for VP9, the configuration options that enable
temporal scalability for VP8. It also adds a way to enable three
preset temporal structures (refer to the documentation for more
detail) that can be used in offline encoding.
Signed-off-by: James Zern <jzern@google.com>
There was no consensus about separating AVExprState from AVExpr so here is a
minimal patch using the existing AVExpr to fix ticket #7528.
Signed-off-by: Marton Balint <cus@passwd.hu>
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Suggested-by: Hendrik Leppkes <h.leppkes@gmail.com>
Suggested-by: Nicolas George <george@nsup.org>
Signed-off-by: Steven Liu <lq@chinaffmpeg.org>
Adds support for the ADPCM variant used by some Argonaut Games titles,
such as 'Croc! Legend of the Gobbos' and 'Croc 2'.
Signed-off-by: Zane van Iperen <zane@zanevaniperen.com>
It is a common mistake that people only increase fifo_size when they
experience drops; unfortunately this does not help for higher-bitrate
(> 100 Mbps) streams, where the reader thread simply might not receive the
packets in time (especially under high CPU load) if the default 64 KB kernel
buffer size is used.
The new default is chosen so that common Linux systems can set this buffer
size without tuning kernel parameters.
Signed-off-by: Marton Balint <cus@passwd.hu>
It's strange to use "level" in the runtime change path while the option
itself is named "quality"; add "quality" to the runtime change path as
well, which is more intuitive, and keep "level" for compatibility.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
libavformat/img2.h: New field export_path_metadata in
VideoDemuxData to only allow the use of the extra metadata
upon explicit user request, for security reasons.
libavformat/img2dec.c: Modify the image2 demuxer to make available
two special metadata entries called lavf.image2dec.source_path
and lavf.image2dec.source_basename, which represent, respectively,
the complete path to the source image for the current frame and
the basename, i.e. the file name, of the current frame.
These can then be used by filters like drawtext and others. The
metadata fields will only be available when explicitly enabled
with image2 option -export_path_metadata 1.
doc/demuxers.texi: Documented the new metadata fields available
for image2 and how to use them.
doc/filters.texi: Added an example on how to use the new metadata
fields with drawtext filter, in order to plot the input file path
to each output frame.
Usage example:
ffmpeg -f image2 -export_path_metadata 1 -pattern_type glob
-framerate 18 -i '/path/to/input/files/*.jpg'
-filter_complex drawtext="fontsize=40:fontcolor=white:
fontfile=FreeSans.ttf:borderw=2:bordercolor=black:
text='%{metadata\:lavf.image2dec.source_basename\:NA}':x=5:y=50"
output.avi
Fixes #2874.
Signed-off-by: Alexandre Heitor Schmidt <alexandre.schmidt@gmail.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
The following is a Python script to halve the value of the gray
image. It demonstrates how to set up and execute a DNN model with
Python + TensorFlow. It also generates the .pb file which will be used by
ffmpeg.
import tensorflow as tf
import numpy as np
from skimage import color
from skimage import io

# read the input image and convert it to a single-channel gray image
in_img = io.imread('input.jpg')
in_img = color.rgb2gray(in_img)
io.imsave('ori_gray.jpg', np.squeeze(in_img))
# shape the data as NHWC: 1 x height x width x 1
in_data = np.expand_dims(in_img, axis=0)
in_data = np.expand_dims(in_data, axis=3)
# a 1x1 convolution with a constant 0.5 weight halves every pixel value
filter_data = np.array([0.5]).reshape(1,1,1,1).astype(np.float32)
filter = tf.Variable(filter_data)
x = tf.placeholder(tf.float32, shape=[1, None, None, 1], name='dnn_in')
y = tf.nn.conv2d(x, filter, strides=[1, 1, 1, 1], padding='VALID', name='dnn_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
# freeze the graph and write the .pb file consumed by the tensorflow backend
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'halve_gray_float.pb', as_text=False)
print("halve_gray_float.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate halve_gray_float.model\n")
# run the graph once in python as a reference result
output = sess.run(y, feed_dict={x: in_data})
output = output * 255.0
output = output.astype(np.uint8)
io.imsave("out.jpg", np.squeeze(output))
To do the same thing with ffmpeg:
- generate halve_gray_float.pb with the above script
- generate halve_gray_float.model with tools/python/convert.py
- try with following commands
./ffmpeg -i input.jpg -vf format=grayf32,dnn_processing=model=halve_gray_float.model:input=dnn_in:output=dnn_out:dnn_backend=native out.native.png
./ffmpeg -i input.jpg -vf format=grayf32,dnn_processing=model=halve_gray_float.pb:input=dnn_in:output=dnn_out:dnn_backend=tensorflow out.tf.png
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
Do not request the AVFrame's format in vf_dnn_processing with 'fmt';
instead, add another filter for the format.
command examples:
./ffmpeg -i input.jpg -vf format=bgr24,dnn_processing=model=halve_first_channel.model:input=dnn_in:output=dnn_out:dnn_backend=native -y out.native.png
./ffmpeg -i input.jpg -vf format=rgb24,dnn_processing=model=halve_first_channel.model:input=dnn_in:output=dnn_out:dnn_backend=native -y out.native.png
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
In order to access the original opaque parameter of a buffer in the buffer
pool. (The buffer pool implementation overrides the normal opaque parameter but
also saves it so it is accessible).
v2: add assertion check before dereferencing the BufferPoolEntry.
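A hedged sketch of the accessor, assuming it is exposed as
av_buffer_pool_buffer_get_opaque():

// for a pool created with av_buffer_pool_init2(), recover the per-buffer
// opaque that the pool's alloc callback attached to the buffer
AVBufferRef *buf = av_buffer_pool_get(pool);
if (buf) {
    void *opaque = av_buffer_pool_buffer_get_opaque(buf);
    // ... use opaque, then return the buffer to the pool
    av_buffer_unref(&buf);
}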
Signed-off-by: Marton Balint <cus@passwd.hu>
ts_target_bitrate is in kbps, not bps. This commit clarifies the unit
and modifies the example to match the description.
Signed-off-by: James Zern <jzern@google.com>
It performs HDR (High Dynamic Range) to SDR (Standard Dynamic Range)
conversion with tone-mapping. For now, it only supports HDR10 as input.
An example command to use this filter with vaapi codecs:
ffmpeg -hwaccel vaapi -vaapi_device /dev/dri/renderD128 -hwaccel_output_format vaapi \
-i INPUT -vf 'tonemap_vaapi=format=p010' -c:v hevc_vaapi -profile 2 OUTPUT
Signed-off-by: Xinpeng Sun <xinpeng.sun@intel.com>
Signed-off-by: Zachary Zhou <zachary.zhou@intel.com>
Signed-off-by: Ruiling Song <ruiling.song@intel.com>
Add the linger parameter to libsrt; it sets the number of seconds
that the socket waits for unsent data when closing.
Reviewed-by: Andriy Gelman <andriy.gelman@gmail.com>
Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
The adjustment of evaluated values has been moved to ff_adjust_scale_dimensions.
The code for force_original_aspect_ratio and force_divisible_by was moved out
of vf_scale, so it is now available for scale_cuda, scale_npp and
scale_vaapi as well.
This sets the range of the first automatically assigned PMT PID or elementary
stream PID parameters to [0x20, 0x1ffa]. You can still manually assign a PID
for a stream using AVStream->id in the wider [0x10, 0x1ffe] range as specified
by ISO 13818-1. But since DVB and ATSC both reserve some PIDs, let's not allow
them to be automatically assigned.
Also make sure that assigned PID numbers are valid and fix the error message
for the previous PID collision checks.
Signed-off-by: Marton Balint <cus@passwd.hu>
Disabled by default to output all the layers, to match the libaomdec wrapper.
Add option to select the operating point for the spatial layers.
Update the documentation with the new options.
Signed-off-by: James Almer <jamrial@gmail.com>
This filter accepts all DNN networks which do image processing.
Currently, frames with rgb24 and bgr24 formats are supported. Other
formats such as gray and YUV will be supported next. The DNN network
can accept data in float32 or uint8 format, and it can change the
frame size.
The following is a Python script to halve the value of the first
channel of each pixel. It demonstrates how to set up and execute a
DNN model with Python + TensorFlow. It also generates the .pb file
which will be used by ffmpeg.
import tensorflow as tf
import numpy as np
import imageio

# read the input image and scale it to [0, 1] floats
in_img = imageio.imread('in.bmp')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
# a 1x1 convolution whose 3x3 weight matrix halves the first channel only
filter_data = np.array([0.5, 0, 0, 0, 1., 0, 0, 0, 1.]).reshape(1,1,3,3).astype(np.float32)
filter = tf.Variable(filter_data)
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
y = tf.nn.conv2d(x, filter, strides=[1, 1, 1, 1], padding='VALID', name='dnn_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
# run the graph once in python as a reference result
output = sess.run(y, feed_dict={x: in_data})
# freeze the graph and write the .pb file consumed by the tensorflow backend
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'halve_first_channel.pb', as_text=False)
output = output * 255.0
output = output.astype(np.uint8)
imageio.imsave("out.bmp", np.squeeze(output))
To do the same thing with ffmpeg:
- generate halve_first_channel.pb with the above script
- generate halve_first_channel.model with tools/python/convert.py
- try with following commands
./ffmpeg -i input.jpg -vf dnn_processing=model=halve_first_channel.model:input=dnn_in:output=dnn_out:fmt=rgb24:dnn_backend=native -y out.native.png
./ffmpeg -i input.jpg -vf dnn_processing=model=halve_first_channel.pb:input=dnn_in:output=dnn_out:fmt=rgb24:dnn_backend=tensorflow -y out.tf.png
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
This introduces two new AVOption options for the FTP protocol:
one named ftp-user to supply the username to be used for auth, and
one named ftp-password to supply the password to be used for auth.
These are useful when an API user does not wish to deal with
URL manipulation and percent encoding.
Setting them while also having credentials in the URL will use the
credentials from the URL. The rationale for this is that credentials
embedded in the URL are probably more specific to what the user is
trying to do than anything set by some API user.
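A minimal usage sketch (credentials and URL are illustrative):

AVIOContext *ioctx = NULL;
AVDictionary *opts = NULL;
av_dict_set(&opts, "ftp-user", "alice", 0);
av_dict_set(&opts, "ftp-password", "secret", 0);
int ret = avio_open2(&ioctx, "ftp://example.com/path/file.bin",
                     AVIO_FLAG_READ, NULL, &opts);
av_dict_free(&opts);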
Signed-off-by: Nicolas Frattaroli <ffmpeg@fratti.ch>
Signed-off-by: Marton Balint <cus@passwd.hu>
Allows the user to set the maximum number of buffered packets when
probing a codec. It was a hard-coded parameter before this commit.
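A hedged sketch, assuming the option is exposed as "max_probe_packets" on the
format context:

AVFormatContext *ic = NULL;
AVDictionary *opts = NULL;
av_dict_set(&opts, "max_probe_packets", "128", 0);   // value illustrative
int ret = avformat_open_input(&ic, url, NULL, &opts);
av_dict_free(&opts);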
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Implemented as a variant of the hash muxer, reusing most functions,
and making use of the previously introduced array of hashes.
Signed-off-by: Moritz Barsnick <barsnick@gmx.net>
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
When ffmpeg was streaming, multiple clients were only supported by using a
multicast destination address. An alternative was to stream to a server which
re-distributes the content. This commit adds ZeroMQ as a protocol, which allows
multiple clients to connect to a single ffmpeg instance.
Signed-off-by: Marton Balint <cus@passwd.hu>
This is an alias for JEDEC P22.
The name associated with the value is also changed
from jedec-p22 to ebu3213 to match ITU-T H.273.
Signed-off-by: Raphaël Zumer <rzumer@tebako.net>
Signed-off-by: James Almer <jamrial@gmail.com>
Added Linux support for the AMF encoder through Vulkan.
To use the h.264 (AMD VCE) encoder on Linux, amdgpu-pro version 19.20+ and
the amf-amdgpu-pro package (which amdgpu-pro contains, but does not install
automatically) are required.
The driver can be installed using the amdgpu-pro-install script in the
official AMD driver archive.
Initialization of the AMF encoder occurs in this order:
1) try to initialize through DX11 (Windows only)
2) try to initialize through DX9 (Windows only)
3) try to initialize through Vulkan
Only Vulkan initialization is available on Linux.
Add support for the dehaze filter to the existing derain filter source
code. As the processing procedure in FFmpeg is the same for the current
derain and dehaze, we reuse the derain filter source code. The
model training and generation scripts are in the repo
https://github.com/XueweiMeng/derain_filter.git
Reviewed-by: Steven Liu <lq@onvideo.cn>
Signed-off-by: Xuewei Meng <xwmeng96@gmail.com>
The packet-count-based approach caused excessive SDT/PAT/PMT repetition for
VBR, so let's use a timestamp-based approach instead, similar to how we emit
PCRs. The SDT/PAT/PMT period should be consistent for both VBR and CBR from
now on.
Also change the type of sdt_period and pat_period to AV_OPT_TYPE_DURATION so no
floating point math is necessary.
Fixes ticket #3714.
Signed-off-by: Marton Balint <cus@passwd.hu>
Add support for using a TensorFlow model in the derain filter. Training
scripts as well as scripts for tf/native model generation are provided in
the repository at https://github.com/XueweiMeng/derain_filter.git.
Reviewed-by: Steven Liu <lq@onvideo.cn>
Signed-off-by: Xuewei Meng <xwmeng96@gmail.com>
These functions can be used to print a variable number of strings consecutively
to the IO context. Unlike av_bprintf, no temporary buffer is necessary.
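A hedged usage sketch, assuming the new helpers are avio_print_string_array()
and the avio_print() convenience macro (the stream s and strings are
illustrative):

// several strings written back to back, no temporary av_bprintf buffer needed
const char *parts[] = { "#EXTINF:", duration_str, ",\n", NULL };
avio_print_string_array(s, parts);
// or, with the variadic convenience macro:
avio_print(s, "#EXTINF:", duration_str, ",\n");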
Signed-off-by: Marton Balint <cus@passwd.hu>
This patch adds a new option to the scale filter which ensures that the
output resolution is divisible by the given integer when used together
with `force_original_aspect_ratio`. This works similar to using `-n` in
the `w` and `h` options.
This option respects the value set for `force_original_aspect_ratio`,
increasing or decreasing the resolution accordingly.
The use case for this is to set a fixed target resolution using `w` and
`h`, to use the `force_original_aspect_ratio` option to make sure that
the video always fits in the defined bounding box regardless of aspect
ratio, but to also make sure that the calculated output resolution is
divisible by n so it can be encoded with certain encoders/options if
that is required.
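A usage sketch (bounding box and divisor are illustrative):
./ffmpeg -i INPUT -vf scale=w=1920:h=1080:force_original_aspect_ratio=decrease:force_divisible_by=2 OUTPUT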
Signed-off-by: Lars Kiesow <lkiesow@uos.de>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The answer which most people will seek is put first.
Reviewed-by: Thilo Borgmann <thilo.borgmann@mail.de>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Only add the sequence end code for MPEG-1/MPEG-2 video; otherwise, when this
sample uses the libx264 or libx265 encoder, decoding the output file will
produce an "unknown NALU type" error.
Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
Add the http_seekable option for HTTP partial requests. When the
EXT-X-BYTERANGE tag indicates that a Media Segment is a sub-range
of the resource identified by its URI, we can use HTTP partial
requests to get the Media Segment.
Reviewed-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
Simply moves and templates the actual transforms to support an
additional data type.
Unlike the float version, which is equal to or better than libfftw3f,
the double-precision output is bit-identical with libfftw3.
The earlier version had three deficiencies:
1. It allowed setting the stream to RGB although this is not allowed when
the profile is 0 or 2.
2. If it set the stream to RGB, it did not automatically set the
range to full range; the result was that one got a warning every time a
frame with a color_config element was processed if the frame originally
had TV range and the user didn't explicitly choose PC range. Now one
gets only one warning in such a situation.
3. Intra-only frames in profile 0 are automatically BT.601, but if the
user wished for another color space, they were not informed about this
wish being unfulfillable.
The commit also improves the documentation about this.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
The dump_extra bitstream filter currently simply adds the extradata to
the packets indicated by the user without checking whether said
extradata already exists in the packets. Besides wasting space,
duplicated extradata in the same packet/access unit is also forbidden
for some codecs, e.g. MPEG-2.
This check has been added to be able to use the mpeg2_qsv encoder (which
only adds the sequence headers to the first packet) in broadcast
scenarios where repeating sequence headers are required.
The check used here is not perfect: e.g. dump_extra would still add the
extradata to an H.264 access unit consisting of an access unit delimiter,
SPS, PPS and slices.
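A hedged command for the broadcast scenario described above (codec and
container are illustrative, and assuming the filter's freq option accepts
"keyframe"):
./ffmpeg -i INPUT -c:v mpeg2_qsv -bsf:v dump_extra=freq=keyframe OUT.ts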
Fixes #8007.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>