When using the new decode APIs (avcodec_send_packet/avcodec_receive_frame),
there is no need to set the deprecated field refcounted_frames.
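For reference, a minimal sketch of the new receive loop (error handling
trimmed; dec_ctx, pkt and frame are assumed to be set up as in the example
programs). Frames obtained this way are always reference counted:

    /* feed one packet, then drain all frames it produces */
    int ret = avcodec_send_packet(dec_ctx, pkt);
    while (ret >= 0) {
        ret = avcodec_receive_frame(dec_ctx, frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            break;               /* need more input, or stream finished */
        else if (ret < 0)
            exit(1);             /* decoding error */
        /* ... use frame ... */
        av_frame_unref(frame);   /* release our reference */
    }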
Reviewed-by: wm4 <nfxjfg@googlemail.com>
Signed-off-by: Jun Zhao <mypopydev@gmail.com>
The logic is applicable only when use_template is enabled and use_timeline
is disabled. It monitors the flow of segment indexes: if a stream's
segment index value is not at the expected real-time position, the index
value is corrected.
Typically this logic is needed in live streaming use cases. Network
bandwidth fluctuations are common during long-running streams, and each
fluctuation can cause the segment indexes to fall behind the expected
real-time position. Without this logic, players will not be able to consume
the content even after the encoder's network condition returns to normal.
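An illustrative sketch of the correction (hypothetical variable names, not
the actual dashenc code):

    /* index of the segment that should be produced at this wall-clock time
       (stream_start_us, seg_duration_us, start_number, segment_index are
       hypothetical placeholders) */
    int64_t elapsed_us   = av_gettime() - stream_start_us;
    int64_t expected_idx = start_number + elapsed_us / seg_duration_us;
    if (segment_index < expected_idx)
        segment_index = expected_idx;   /* jump forward to the live edge */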
When use_template is enabled and use_timeline is disabled, it is typically
required to generate segments at the configured segment duration on
average. This commit is particularly needed to handle segmentation when
video frame rates are fractional, such as 29.97 or 59.94 fps.
There are use cases where an average segment duration needs to be configured
and the muxer is expected to maintain it, so the name 'min_seg_duration'
would be misleading. The parameter is therefore renamed to 'seg_duration',
which can mean either the minimum or the average segment duration depending
on the use case. The additional updates needed for this functionality are
made in the subsequent patches of this series.
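For illustration, with a 2 second target at 29.97 fps: 59 frames last
59 / 29.97 ≈ 1.968 s and 60 frames last 60 / 29.97 ≈ 2.002 s, so no segment
can be exactly 2 s long. Treating the value strictly as a minimum forces
every segment to 60 frames, and the segmentation slowly falls behind the
real-time position, whereas treating it as an average lets the muxer
alternate segment lengths around the target.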
In some cases, mainly when working with multiprogram MPEG-TS containers as
input, it is handy to select the sub-streams of a specific program by
their metadata.
This patch makes it possible to narrow the stream selection among
streams of the specified program by stream metadata.
Examples:
p:601:m:language:hun will select all sub-streams of the program with id 601
where the sub-streams have a metadata key named 'language' with value 'hun'.
p:602:m:guide will select all sub-streams of the program with id 602 where
the sub-streams have a metadata key named 'guide'.
Signed-off-by: Bela Bodecs <bodecsb@vivanet.hu>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
When using hls_list_size with hls_flags delete_segments, currently
roughly hls_list_size * 2 segments remain on disk. With this new option,
the amount of disk space used can be controlled by the user.
Fixes ticket #7131.
Signed-off-by: Steven Liu <lq@chinaffmpeg.org>
Signed-off-by: Aman Gupta <aman@tmm1.net>
Currently, when specifying the program id, you can only select
all streams of the specified program (e.g. p:103 will select all streams
of program 103) or narrow the selection to a specific stream sub-index
(e.g. p:145:1 will select the 2nd stream of program 145). You cannot
request, for example, all audio streams of program 145 or the 3rd video
stream of program 311.
In some cases, mainly when working with multiprogram MPEG-TS containers as
input, this feature is handy.
This patch makes it possible to narrow the stream selection among
streams of the specified program by stream type and optionally its
index. Handled types: a, v, s, d.
Examples: p:601:a will select all audio streams of program 601,
p:603:a:1 will select the 2nd audio stream of program 603,
p:604:v:0 will select the first video stream of program 604.
Signed-off-by: Bela Bodecs <bodecsb@vivanet.hu>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This seems to confuse GitHub users into thinking that we may accept pull
requests. We do not accept pull requests.
Sending patches to the ffmpeg-devel mailing list is our preferred method
for users to contribute code.
Signed-off-by: Lou Logan <lou@lrcd.com>
Info about the default value of the bind_address option and its abbreviated
form (b). The example is modified to use a named filter instance and to show
its use.
Signed-off-by: Bela Bodecs <bodecsb@vivanet.hu>
Signed-off-by: Lou Logan <lou@lrcd.com>
PSEUDOPAL pixel formats are not paletted, but carried a palette with the
intention of allowing code to treat unpaletted formats as paletted. The
palette simply mapped the byte values to the resulting RGB values,
making it some sort of LUT for RGB conversion.
It was used for 1 byte formats only: RGB4_BYTE, BGR4_BYTE, RGB8, BGR8,
GRAY8. The first 4 are awfully obscure, used only by some ancient bitmap
formats. The last one, GRAY8, is more common, but its treatment is
grossly incorrect. It considers full range GRAY8 only, so GRAY8 coming
from typical Y video planes was not mapped to the correct RGB values.
This cannot be fixed, because AVFrame.color_range can be freely changed
at runtime, and there is nothing to ensure the pseudo palette is
updated.
Also, nothing actually used the PSEUDOPAL palette data, except xwdenc
(trivially changed in the previous commit). All other code had to treat
it as a special case, just to ignore or to propagate palette data.
In conclusion, this was just a very strange old mechanism that has no
real justification to exist anymore (although it may have been nice and
useful in the past). Now it's an artifact that makes the API harder to
use: API users who allocate their own pixel data have to be aware that
they need to allocate the palette, or FFmpeg will crash on them in
_some_ situations. On top of this, there was no API to allocate the
pseudo palette outside of av_frame_get_buffer().
This patch not only deprecates AV_PIX_FMT_FLAG_PSEUDOPAL, but also makes
the pseudo palette optional. Nothing accesses it anymore, though if it's
set, it's propagated. It's still allocated and initialized for
compatibility with API users that rely on this feature. But new API
users do not need to allocate it. This was an explicit goal of this
patch.
Most changes replace AV_PIX_FMT_FLAG_PSEUDOPAL with FF_PSEUDOPAL. I
first tried #ifdefing all code, but it was a mess. The FF_PSEUDOPAL
macro reduces the mess, and still allows defining FF_API_PSEUDOPAL to 0.
Passes FATE with FF_API_PSEUDOPAL enabled and disabled. In addition,
FATE passes with FF_API_PSEUDOPAL set to 1, but with the allocation
functions manually changed to not allocate a palette.
It works as a drop-in replacement for the deprecated av_dup_packet(),
to ensure a packet is reference counted.
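A hedged usage sketch (pkt may or may not already own its payload):

    int ret = av_packet_make_refcounted(pkt);
    if (ret < 0)
        return ret;          /* allocation failed, pkt is left unchanged */
    /* pkt->buf is now set: the payload is reference counted and may
       safely outlive whatever buffer it originally pointed into */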
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: James Almer <jamrial@gmail.com>
avdevice_register_all() is still required to register devices into
lavf (this is necessary because lavd is somewhat of a hack).
Signed-off-by: Josh de Kock <josh@itanimul.li>
This reverts commit 0fd475704e.
Revert "lavd: fix iterating of input and output devices"
This reverts commit ce1d77a5e7.
Signed-off-by: Josh de Kock <josh@itanimul.li>
* commit 'c438899a706422b8362a13714580e988be4d638b':
Add AV1 video decoding support through libaom
This contains some extra changes taken from the libvpx decoder
wrapper, most of them contained in the set_pix_fmt() function.
Merged-by: James Almer <jamrial@gmail.com>
The protocol requires libsrt (https://github.com/Haivision/srt) to be
installed.
Signed-off-by: Sven Dueking <sven.dueking@nablet.com>
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
This is for applications which want to explicitly check for invalid
UTF-8 manually, and take actions that are better than dropping invalid
subtitles silently. (It's pretty much silent because sporadic avcodec
error messages are so common that you can't reasonably display them in a
prominent and meaningful way in an application GUI.)
This new side-data will contain info on how a packet is encrypted.
This allows the app to handle packet decryption.
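A hedged sketch of how an application might read it (names taken from the
accompanying libavutil/encryption_info.h; error handling trimmed):

    int side_size;
    uint8_t *sd = av_packet_get_side_data(pkt, AV_PKT_DATA_ENCRYPTION_INFO,
                                          &side_size);
    if (sd) {
        AVEncryptionInfo *info = av_encryption_info_get_side_data(sd, side_size);
        if (info) {
            /* info->scheme, info->key_id, info->iv and info->subsamples
               describe how this packet's payload is encrypted */
            av_encryption_info_free(info);
        }
    }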
Signed-off-by: Jacob Trimble <modmaker@google.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This can remove units with types in or not in a given set from a stream.
For example, it can be used to remove all non-VCL NAL units from an H.264 or
H.265 stream.
This adds a way for an API user to transfer QP data and metadata without
having to keep the reference to AVFrame, and without having to
explicitly care about QP APIs. It might also provide a way to finally
remove the deprecated QP related fields. In the end, the QP table should
be handled in a very similar way to e.g. AV_FRAME_DATA_MOTION_VECTORS.
There are two side data types, because I didn't want to have to repack
the QP data so that the table and the metadata end up in a single
AVBufferRef. Otherwise it would have either required a copy on decoding
(extra slowdown for something as obscure as the QP data), or would have
required making intrusive changes to the codecs which support export of
this data.
The new side data types are added under deprecation guards, because I
don't intend to change the status of the QP export as being deprecated
(as it was before this patch too).
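A hedged sketch of the consumer side (the side data type name below is an
assumption based on this description; check the headers added by this patch):

    /* fetch the exported QP table without the deprecated accessors and
       without keeping the whole AVFrame around */
    AVFrameSideData *sd = av_frame_get_side_data(frame, AV_FRAME_DATA_QP_TABLE_DATA);
    if (sd) {
        AVBufferRef *qp_ref = av_buffer_ref(sd->buf);
        /* qp_ref can be kept past the frame's lifetime */
        av_buffer_unref(&qp_ref);
    }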
The default behavior of the mediacodec decoder before this commit
was to delay flushes until all pending hardware frames were
returned to the decoder. This was useful for certain types of
applications, but was unexpected behavior for others.
The new default behavior with this commit is now to execute
flushes immediately to invalidate all pending frames. The old
behavior can be enabled by setting delay_flush=1.
With the new behavior, video players implementing seek can simply
call flush on the decoder without having to worry about whether
they have one or more mediacodec frames still buffered in their
rendering pipeline. Previously, all these frames had to be
explicitly freed (or rendered) before the seek/flush would execute.
The new behavior matches the behavior of all other lavc decoders,
reducing the amount of special casing required when using the
mediacodec decoder.
Signed-off-by: Aman Gupta <aman@tmm1.net>
Signed-off-by: Matthieu Bouron <matthieu.bouron@gmail.com>
Update the dump_extra bit stream filter docs to follow the current
code implementation.
Signed-off-by: Jun Zhao <jun.zhao@intel.com>
Reviewed-by: Steven Liu <lq@onvideo.cn>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This patch makes it possible to dynamically close the current segment
and step to the next one by introducing command handling capabilities
into the filter. This new feature is very useful when working with
real-time sources or live streams as input. Combined with the zmqsend
tool, you can interactively end the current segment and step to the next one.
Signed-off-by: Bela Bodecs <bodecsb@vivanet.hu>
Document the delete_filler option for the h264_metadata bsf.
Signed-off-by: Jun Zhao <jun.zhao@intel.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This commit adds an indev for Android devices on API level 24+ which
uses the Android NDK Camera2 API to capture video from built-in cameras.
Signed-off-by: Felix Matouschek <felix@matouschek.org>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
In a recent commit the default was changed from 0 (component) to 5
(unspecified); however, some standards require using 0. With this option,
the user will be able to do so.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Marton Balint <cus@passwd.hu>
* commit '6d86cef06ba36c0ed591e14a2382e9630059fc5d':
lavfi: Add support for increasing hardware frame pool sizes
Merged-by: Mark Thompson <sw@jkqxz.net>
* commit '5b145290df2998a9836a93eb925289c6c8b63af0':
lavc: Add support for increasing hardware frame pool sizes
Merged-by: Mark Thompson <sw@jkqxz.net>
AVCodecContext.extra_hw_frames is added to the size of hardware frame
pools created by libavcodec for APIs which require fixed-size pools.
This allows the user to keep references to a greater number of frames
after decode, which may be necessary for some use-cases.
It is also added to the initial_pool_size value returned by
avcodec_get_hw_frames_parameters() if a fixed-size pool is required.
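A hedged usage sketch (dec_ctx is an AVCodecContext that is about to be
opened for hardware decoding):

    /* ask libavcodec to over-allocate the fixed-size hardware frame pool
       so the application can keep extra decoded frames referenced */
    dec_ctx->extra_hw_frames = 8;
    int ret = avcodec_open2(dec_ctx, decoder, NULL);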
avcodec bump missed in 7e8eba2d87
avformat bump missed in ff46124b0d and
0694d87024
avdevice bump missed in 0fd475704e
Signed-off-by: James Almer <jamrial@gmail.com>
The "timeout" option name inherently clashes with the meaning of the
HTTP libavformat protocol option with the same name. Rename it after a
deprecation period to make it compatible with the HTTP one.
Usage is:
./vaapi_transcode input_stream codec output_stream
For example:
- ./vaapi_transcode input.mp4 h264_vaapi output_h264.mp4
- ./vaapi_transcode input.mp4 vp8_vaapi output_vp8.ivf
Does not handle resolution changes on the input stream.
Signed-off-by: Jun Zhao <jun.zhao@intel.com>
Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com>
Signed-off-by: Mark Thompson <sw@jkqxz.net>
This will replace the filename field, which is limited to 1024 characters.
Compatibility for output contexts is provided by copying the filename field
to url if url is unset, and by providing an internal function for muxers to
set both url and filename at once.
Signed-off-by: Marton Balint <cus@passwd.hu>
The seek function can just return an error if seeking is unavailable,
but often this is too late. Add a flag that signals that the stream is
unseekable, and use it in HLS.
* commit '34c113335b53d83ed343de49741f0823aa1f8cc6':
Add support for H.264 and HEVC hardware encoding for AMD GPUs based on AMF SDK
Most of this was already present from 9ea6607d29,
this just applies some minor fixups and adds the general documentation.
Merged-by: Mark Thompson <sw@jkqxz.net>
* commit '1efbbfedcaf4a3cecab980273ad809ba3ade2f74':
examples/qsvdec: do not set the deprecated field refcounted_frames
Merged-by: Mark Thompson <sw@jkqxz.net>
It was sort of optional before - if you didn't call it, networking was
initialized on demand, and an ugly warning was logged. Also, the doxygen
comments threatened that it would be made strictly required one day.
Make it explicitly optional. I would prefer to deprecate it fully, but
there might still be legitimate reasons to use this. But the average
user won't need it.
This is needed only for two reasons: to initialize TLS libraries like
OpenSSL and GnuTLS, and winsock.
OpenSSL and GnuTLS were already silently initialized on demand if the
global network init function was not called. They also have various
thread-safety acrobatics, which make concurrent initialization within
libavformat safe. In addition, the libraries are moving towards making
their global init functions safe, which removes all need for central
global init. In particular, GnuTLS 3.5.16 and OpenSSL 1.1.0g have been
found to have safe init functions. In all cases, they use internal
reference counters to prevent the global uninit functions from interfering
with concurrent uses of the library by other API users who called global
init.
winsock should be thread-safe as well, and it likewise maintains an
internal reference counter.
Since we still support ancient TLS libraries, which do not have this
fixed, and since it's unknown whether winsock and GnuTLS
reinitialization is costly in any way, don't deprecate the libavformat
functions yet.
The twoloop coder sounds decent at low bitrates; however, at higher bitrates
it sounds worse than the fast coder (which used to be the old twoloop coder
before October 2015) and needs quite a lot more CPU.
Change the default to fast. It has been well tested and has had few changes
over the years, so it's been confirmed to be quite stable.
Also change its description (which has not been valid for more than a year)
and the documentation.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
avcodec_free_context() handles the NULL pointer case, so the caller doesn't
need to check for NULL before calling this function.
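For example (ctx may be NULL if allocation never happened or if it has
already been freed):

    AVCodecContext *ctx = NULL;
    /* ... setup may or may not have allocated ctx ... */
    avcodec_free_context(&ctx);   /* safe, no explicit NULL check needed */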
Signed-off-by: Jun Zhao <jun.zhao@intel.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Requires AMF headers for at least version 1.4.4.1.
Signed-off-by: Mikhail Mironov <mikhail.mironov@amd.com>
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Deprecate the entire library. Merged years ago to provide compatibility
with Libav, it remained unmaintained by the FFmpeg project and duplicated
functionality provided by libswresample.
In order to improve consistency and reduce attack surface, as well as to ease
burden on maintainers, it has been deprecated. Users of this library are asked
to migrate to libswresample, which, as well as providing more functionality,
is faster and has higher accuracy.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
It is used by the deprecated API avcodec_decode_video2 and ignored by the
new decode APIs (avcodec_send_packet/avcodec_receive_frame).
Signed-off-by: Zhong Li <zhong.li@intel.com>
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Some http/1.0 implementations, like python's SimpleHTTPServer, can only support one client connection at a time. Making a second request while the first is still connected leads to a deadlock.
This change enables multiple connections for http/1.1 servers only, which need to support keepalive by default and should have no problem with concurrent requests.
Signed-off-by: Aman Gupta <aman@tmm1.net>
Can be used by the API user to figure out what HTTP features the server
supports, based on the response received.
Signed-off-by: Aman Gupta <aman@tmm1.net>
Use static mutexes instead of requiring a lock manager. The behavior
should be roughly the same before and after this change for API users
which did not set the lock manager at all (except that a minor memory
leak disappears).
This improves network throughput of the hls demuxer by avoiding
the latency introduced by downloading segments one at a time.
The problem is particularly noticeable over high-latency network
connections: for instance, if RTT is 250ms, there will be a 250ms idle
period between when one segment response is read and the next one
starts.
The obvious solution to this is to use HTTP pipelining, where a
second request can be sent (on the persistent http/1.1 connection)
before the first response is fully read. Unfortunately the way the
http protocol is implemented in avformat makes implementing pipelining
very complex.
Instead, this commit simulates pipelining using two separate persistent
http connections. This has the advantage of working independently of
the http_persistent option, and can be used with http/1.0 servers as
well. The pair of connections is swapped every time a new segment starts
downloading, and a request for the next segment is sent on the secondary
connection right away. This means the second response will be ready and
waiting by the time the current response is fully read.
Signed-off-by: Aman Gupta <aman@tmm1.net>
Signed-off-by: Anssi Hannula <anssi.hannula@iki.fi>
This teaches the HLS demuxer to use the HTTP protocol's
multiple_requests=1 option, to take advantage of "Connection:
Keep-Alive" when downloading playlists and segments from the HLS server.
With the new option, you can avoid TCP connection and TLS negotiation
overhead, which is particularly beneficial when streaming via a
high-latency internet connection.
Similar to the http_persistent option recently implemented in hlsenc.c
Signed-off-by: Aman Gupta <aman@tmm1.net>
Signed-off-by: Anssi Hannula <anssi.hannula@iki.fi>
- normalize score to [0..100] instead of [0..85]
- change the default score to 8.2 to roughly keep existing behaviour
- take into account bit depth
- do not truncate to integer
Signed-off-by: Marton Balint <cus@passwd.hu>
If the user-supplied color in drawbox and drawgrid filters is non-opaque,
the box & grid painting overwrites the input's pixels (including alpha).
Users typically expect the alpha of the specified color to only act as a key
for compositing on top of the main input.
The added option allows users to select between replacement and composition.
Tested and documented.
Explicitly identify decoder/encoder wrappers with a common name. This
saves API users from guessing by the name suffix. For example, they
don't have to guess that "h264_qsv" is the h264 QSV implementation, and
instead they can just check the AVCodec .codec and .wrapper_name fields.
Explicitly mark AVCodec entries that are hardware decoders or most
likely hardware decoders with new AV_CODEC_CAPs. The purpose is allowing
API users listing hardware decoders in a more generic way. The proposed
AVCodecHWConfig does not provide this information fully, because it's
concerned with decoder configuration, not information about the fact
whether the hardware is used or not.
AV_CODEC_CAP_HYBRID exists specifically for QSV, which can have software
implementations in case the hardware is not capable.
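A hedged sketch of how an API user could list (likely) hardware decoders
using the new fields (av_codec_next() was the codec iteration API at the
time):

    const AVCodec *c = NULL;
    while ((c = av_codec_next(c))) {
        if (!av_codec_is_decoder(c))
            continue;
        if (c->capabilities & (AV_CODEC_CAP_HARDWARE | AV_CODEC_CAP_HYBRID))
            printf("%s (wrapper: %s)\n", c->name,
                   c->wrapper_name ? c->wrapper_name : "none");
    }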
Based on a patch by Philip Langdale <philipl@overt.org>.
Merges Libav commit 47687a2f8a.
Additionally:
* Mention that allrgb and allyuv do not support the "size" option.
* Separate the examples into a subsection.
Fixes ticket #6906.
Signed-off-by: Lou Logan <lou@lrcd.com>
The Developer Documentation had instructions to
subscribe to the ffmpeg-cvslog email list. But that is
no longer accurate. For the purposes in this section --
review of patches, discussion of development issues --
ffmpeg-devel is the appropriate email list. Some developers
may want to monitor ffmpeg-cvslog, but it is not mandatory.
This is v3 of this doc, based on discussion in thread
<https://ffmpeg.org/pipermail/ffmpeg-devel/2017-November/220528.html>
and in response to docs Maintainer comments in
<https://ffmpeg.org/pipermail/ffmpeg-devel/2017-December/221596.html>.
1. In doc/developer.texi, add a new section about
ffmpeg-devel, based on existing text from ffmpeg-cvslog
section regarding discussion of patches and of
development issues. Reflect wording from discussion at
<https://ffmpeg.org/pipermail/ffmpeg-devel/2017-November/221199.html>
but with copy-editing to make wording more concise.
2. In doc/developer.texi, rewrite the ffmpeg-cvslog section
to match the current usage of ffmpeg-cvslog. Some
developers choose to follow this list, but it is not
mandatory.
There are a lot of improvements possible to the
Developer Documentation page, beyond this refactoring.
However, making those improvements is a much bigger
and more difficult task. This change is "low hanging
fruit".
Signed-off-by: Jim DeLaHunt <from.ffmpeg-dev@jdlh.com>
Signed-off-by: Timothy Gu <timothygu99@gmail.com>
Previously, the Developer Documentation <ffmpeg.org/developer.html>
contained a single chapter, "1. Developer Guide," with all content under
that single chapter. Thus the document structure was one level deeper
and more complicated than it needed to be. It differed from similar
documents such as /faq.html, which have multiple chapters.
Eliminate the single chapter, and promote each section underneath to
chapter, and each subsection to section. Thus content and relative
structure remains the same, but the overall structure is simpler.
Anchors within the page remain the same.
Signed-off-by: Jim DeLaHunt <from.ffmpeg-dev@jdlh.com>
Signed-off-by: Timothy Gu <timothygu99@gmail.com>
This is to take full advantage of the Common Media Application Format (CMAF).
Now a server can generate one set of content and serve both HLS and DASH players.
Reviewed-by: Steven Liu <lq@onvideo.cn>
Corpus VBR mode is a variant of standard VBR where the complexity
distribution midpoint is passed in rather than calculated for a specific
clip or chunk.
The valid range is [0, 10000]. 0 (default) uses standard VBR.
Signed-off-by: James Zern <jzern@google.com>
Supports only raw NV12 input.
Example use:
./vaapi_encode 1920 1080 test.yuv test.h264
Signed-off-by: Jun Zhao <jun.zhao@intel.com>
Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com>
Signed-off-by: Mark Thompson <sw@jkqxz.net>
The present value name for maximum thickness is 'max', which results in a
parse error for any thickness expression containing 'max(val1,val2)'.
The value is renamed to 'fill'. Tested locally and documented.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This was added in early 2013 and abandoned several months later; as far as
I can tell, there are no external users. Future OpenCL use will be via
hwcontext, which requires neither special OpenCL-only API nor global state
in libavutil.
All internal users are also deleted - this is just the unsharp filter
(replaced by unsharp_opencl, which is more flexible) and the deshake filter
(no replacement).
Add two FAQs about running FFmpeg in the background.
The first explains the use of the -nostdin option in
a straightforward way. Text revised based on review.
The second FAQ starts from a confusing error message,
and leads to the solution, use of the -nostdin option.
The purpose of the second FAQ is to attract web searches
from people having the problem, and offer them a solution.
Add an anchor to the Main Options section of the ffmpeg
documentation, so that the FAQs can link directly there.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
For some strange reason, only the "-t" option was implemented
for input files, while both "-t" and "-to" were available
for output files. This made extracting a range from an
input file inconvenient.
This patch enables -to option for input so one can do
ffmpeg -ss 1:23:20 -to 1:27:22.3 -i myinput.mkv ...
Signed-off-by: Vitaly _Vi Shukela <vi0oss@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This can reduce latency and increase throughput, particularly on high
latency networks.
Signed-off-by: Aman Gupta <aman@tmm1.net>
Reviewed-by: Jeyapal, Karthick <kjeyapal@akamai.com>
* commit '0e702124ee149593168cbbb7b30376249a64ae66':
doc: Provide better examples for hls and segment muxing
Merged-by: James Almer <jamrial@gmail.com>
* commit 'b46a77f19ddc4b2b5fa3187835ceb602a5244e24':
lavc: external hardware frame pool initialization
Includes the fix from e724bdfffb
Merged-by: James Almer <jamrial@gmail.com>
The user-supplied value for timecode_rate in drawtext is rounded
to the nearest integer, so a supplied value of 0.49 or lower is rounded to 0.
This throws a misleading error message which says "Timecode frame rate must be
specified". Changed the message to account for values under one.
Also noted the supported frame rates for drop-frame TC.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
suppress the "warning: assignment discards ‘const’ qualifier from
pointer target type" build warning.
Signed-off-by: Jun Zhao <jun.zhao@intel.com>
Reviewed-by: Steven Liu <lingjiujianke@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
* commit 'b5f19f7478492307e4b4763aeac3180faf50e17f':
aac: Split function to parse ADTS header data into public and private part
Merged-by: James Almer <jamrial@gmail.com>
* commit 'ecc5c4db2dd3a0f328d95df89daa59f78b4b2810':
doc/examples/output: Cast pointer to the right (const) type
Merged-by: James Almer <jamrial@gmail.com>
This patch enables paletteuse to identify the transparency in incoming
video and tag transparent pixels on outgoing video with the correct
index from the palette.
This requires tracking the transparency index in the palette,
establishing an alpha threshold below which a pixel is considered
transparent and above which the pixel is considered opaque, and
additional changes to track the alpha value throughout the conversion
process.
This change is a partial fix for https://trac.ffmpeg.org/ticket/4443
However, animated GIFs are still output incorrectly due to a bug
in gif optimization which does not correctly handle transparency.
Signed-off-by: Clément Bœsch <u@pkh.me>