This also adds support to avconv (which is trivial due to the new
hwaccel API being generic enough).
The new decoder setup code in dxva2.c is significantly based on work by
Steve Lhomme <robux4@gmail.com>, but with heavy changes/rewrites.
Merges Libav commit f9e7a2f95a.
Also adds untested VP9 support.
The check for DXVA2 COBJs is removed. Just update your MinGW to
something newer than a 5-year-old release.
Signed-off-by: Diego Biurrun <diego@biurrun.de>
To be used with the new d3d11 hwaccel decode API.
With the new hwaccel API, we don't want surfaces to depend on the
decoder (other than the required dimension and format). The old D3D11VA
pixfmt uses ID3D11VideoDecoderOutputView pointers, which include the
decoder configuration, and thus is incompatible with the new hwaccel
API. This patch introduces AV_PIX_FMT_D3D11, which uses ID3D11Texture2D
and an index. It's simpler and compatible with the new hwaccel API.
The introduced hwcontext supports only the new pixfmt.
Frame upload code untested.
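For reference, a minimal sketch of how a consumer would pull the texture and
array index out of such a frame; the data[] layout used here follows the
pixfmt documentation and should be treated as an assumption:

    #include <d3d11.h>
    #include <libavutil/frame.h>

    /* AV_PIX_FMT_D3D11: data[0] is the ID3D11Texture2D, data[1] the index
     * into the texture array (as intptr_t). */
    static void get_d3d11_surface(const AVFrame *frame,
                                  ID3D11Texture2D **tex, intptr_t *index)
    {
        *tex   = (ID3D11Texture2D *)frame->data[0];
        *index = (intptr_t)frame->data[1];
    }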
Significantly based on work by Steve Lhomme <robux4@gmail.com>, but with
heavy changes/rewrites.
Merges Libav commit fff90422d1.
Signed-off-by: Diego Biurrun <diego@biurrun.de>
If flushing is not disabled, then mux.c will signal the end of the packets with
an AVIO_DATA_MARKER_FLUSH_POINT, and aviobuf will be able to decide whether to
flush or not based on the preferred minimum packet size set by the protocol in use.
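As a rough sketch (not part of this patch), a muxer that wants to take
advantage of this would mark a flush point like this; the helper name is
made up:

    #include <libavformat/avformat.h>

    /* Hypothetical helper, called after one complete, self-contained chunk
     * of output has been written by the muxer. */
    static void signal_flush_point(AVFormatContext *s)
    {
        /* aviobuf decides whether to actually flush here, based on the
         * protocol's preferred minimum packet size. */
        avio_write_marker(s->pb, AV_NOPTS_VALUE, AVIO_DATA_MARKER_FLUSH_POINT);
    }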
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Marton Balint <cus@passwd.hu>
Takes a raw input stream containing frames with correct timestamps but
possibly out of order and inserts additional show-existing-frame
packets to correct the ordering.
(cherry picked from commit 34e051d168)
(cherry picked from commit b43b95f478)
Also converted from bitstream to get_bits.
NASM is more actively maintained and permits generating dependency information
as a side effect of assembling, thus cutting build times in half.
(Cherry-picked from libav commit 57b753b445)
Signed-off-by: James Almer <jamrial@gmail.com>
According to libavfilter/scale.c, if the width and height are both
less than or equal to 0, then the input size is used for both
dimensions. The values do not need to be -1: -1:-1 is the same as 0:0,
which is the same as -10:-42, etc.
if (w < 0 && h < 0)
eval_w = eval_h = 0;
The documentation for the zscale filter has also been updated since the
behavior is identical.
Signed-off-by: Kevin Mark <kmark937@gmail.com>
Signed-off-by: Ronald S. Bultje <rsbultje@gmail.com>
Use the flags argument of av_hwframe_ctx_create_derived() to pass the
mapping flags which will be used on allocation. Also, set the format
and hardware context on the allocated frame automatically - the user
should not be required to do this themselves.
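A minimal sketch of the intended usage (the DRM-to-VAAPI pairing below is
just an illustrative assumption):

    #include <libavutil/hwcontext.h>

    /* Derive a VAAPI frames context from an existing DRM frames context,
     * requesting read/write mappings at allocation time. */
    static int derive_vaapi_frames(AVBufferRef **out, AVBufferRef *vaapi_device,
                                   AVBufferRef *drm_frames)
    {
        return av_hwframe_ctx_create_derived(out, AV_PIX_FMT_VAAPI,
                                             vaapi_device, drm_frames,
                                             AV_HWFRAME_MAP_READ |
                                             AV_HWFRAME_MAP_WRITE);
    }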
(cherry picked from commit c5714b51aa)
This only supports one device globally, but more can be used by
passing them with input streams in hw_frames_ctx or by deriving new
devices inside a filter graph with hwmap.
(cherry picked from commit e669db7610)
Adds functions to convert to/from strings and a function to iterate
over all supported device types. Also adds a new invalid type
AV_HWDEVICE_TYPE_NONE, which acts as a sentinel value.
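A small sketch of how these are expected to be used (the exact function
names below are assumptions based on this description):

    #include <stdio.h>
    #include <libavutil/hwcontext.h>

    /* Print every device type supported by this libavutil build. */
    static void list_hw_device_types(void)
    {
        enum AVHWDeviceType type = AV_HWDEVICE_TYPE_NONE;
        while ((type = av_hwdevice_iterate_types(type)) != AV_HWDEVICE_TYPE_NONE)
            printf("%s\n", av_hwdevice_get_type_name(type));
    }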
(cherry picked from commit b7487f4f3c)
This also adds support to avconv (which is trivial due to the new
hwaccel API being generic enough).
The new decoder setup code in dxva2.c is significantly based on work by
Steve Lhomme <robux4@gmail.com>, but with heavy changes/rewrites.
Signed-off-by: Diego Biurrun <diego@biurrun.de>
To be used with the new d3d11 hwaccel decode API.
With the new hwaccel API, we don't want surfaces to depend on the
decoder (other than the required dimension and format). The old D3D11VA
pixfmt uses ID3D11VideoDecoderOutputView pointers, which include the
decoder configuration, and thus is incompatible with the new hwaccel
API. This patch introduces AV_PIX_FMT_D3D11, which uses ID3D11Texture2D
and an index. It's simpler and compatible with the new hwaccel API.
The introduced hwcontext supports only the new pixfmt.
Frame upload code untested.
Significantly based on work by Steve Lhomme <robux4@gmail.com>, but with
heavy changes/rewrites.
Signed-off-by: Diego Biurrun <diego@biurrun.de>
We have floor, ceil, and trunc. Let's add round.
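A quick sketch of the effect, assuming this lands in the generic expression
evaluator used by the filters:

    #include <stdio.h>
    #include <libavutil/eval.h>

    int main(void)
    {
        double v = 0.0;
        /* round() becomes usable anywhere expressions are accepted,
         * e.g. in filter option strings. */
        av_expr_parse_and_eval(&v, "round(2.5)", NULL, NULL, NULL, NULL,
                               NULL, NULL, NULL, 0, NULL);
        printf("%f\n", v);  /* 3.000000 */
        return 0;
    }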
Signed-off-by: Kevin Mark <kmark937@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Variables pertaining to the main video are now available when
using the scale2ref filter. This allows, as an example, scaling a
video with another as a reference point while maintaining the
original aspect ratio of the primary/non-reference video.
Consider the following graph: scale2ref=iw/6:-1 [main][ref]
This will scale [main] to 1/6 the width of [ref] while maintaining
the aspect ratio. This works well when the AR of [ref] is equal to
the AR of [main] only. What the above filter really does is
maintain the AR of [ref] when scaling [main]. So in all non-same-AR
situations [main] will appear stretched or compressed to conform to
the same AR of the reference video. Without doing this calculation
externally there is no way to scale in reference to another input
while maintaining AR in libavfilter.
To make this possible, we introduce eight new constants to be used
in the w and h expressions only in the scale2ref filter:
* main_w/main_h: width/height of the main input video
* main_a: aspect ratio of the main input video
* main_sar: sample aspect ratio of the main input video
* main_dar: display aspect ratio of the main input video
* main_hsub/main_vsub: horizontal/vertical chroma subsampling values of the main input video
* mdar: a shorthand alias of main_dar
Of course, not all of these constants are needed for maintaining the
AR, but adding additional constants in line with what is available for
in/out allows for other scaling possibilities I have not imagined.
So, to scale a video to 1/6 the width of another video while
maintaining its own aspect ratio, you can now do this:
scale2ref=iw/6:ow/mdar [main][ref]
This is ideal for picture-in-picture configurations where you could
have a square or 4:3 video overlaid on a corner of a larger 16:9
feed all while keeping the scaled video in the corner at its correct
aspect ratio and always the same size relative to the larger video.
I've tried to re-use as much code as possible. I could not find a way
to avoid duplication of the var_names array. The two copies (the
normal one and the scale2ref one) must now be kept in sync for
everything to work, which does not seem ideal. Every variable added to
or removed from the normal scale filter must also be added to or
removed from the scale2ref version. Suggestions on how to avoid
var_names duplication are welcome.
var_values has been increased to always be large enough for the
additional scale2ref variables. I do not foresee this being a problem,
as the names variable will always be the correct size. From my
understanding of av_expr_parse_and_eval, it will stop processing
variables when it runs out of names, even though there may be
additional (potentially uninitialized) entries in the values array.
The ideal solution here would be a variable-length array, but that is
unsupported in C90.
This patch does not remove any functionality and is strictly a
feature patch. There are no API changes. Behavior does not change for
any previously valid inputs.
The applicable documentation has also been updated.
Signed-off-by: Kevin Mark <kmark937@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The library has stopped being developed and Debian has removed it
from its repositories, citing security issues.
The native Dirac decoder supports everything the library has, and basic
encoding support is still provided via the native vc2 (Dirac Pro, the
intra-only version of Dirac) encoder. Hence, there's no reason to keep
supporting linking against the library and potentially lead users into
security issues.
See http://lists.ffmpeg.org/pipermail/ffmpeg-user/2017-April/035975.html
Parsed_filter_X can remain, and the user can override it with a custom one.
Example:
ffplay -f lavfi "nullsrc=s=640x360,
sendcmd='1 drawtext@top reinit text=Hello; 2 drawtext@bottom reinit text=World',
drawtext@top=x=16:y=16:fontsize=20:fontcolor=Red:text='',
drawtext@bottom=x=16:y=340:fontsize=16:fontcolor=Blue:text=''"
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Muhammad Faiz <mfcc64@gmail.com>
This adds tons of code for no other benefit than making VideoToolbox
support conform with the new hwaccel API (using hw_device_ctx and
hw_frames_ctx).
Since VideoToolbox decoding does not actually require the user to
allocate frames, the new code does mostly nothing.
One benefit is that ffmpeg_videotoolbox.c can be dropped once generic
hwaccel support for ffmpeg.c is merged from Libav.
Does not consider VDA or VideoToolbox encoding.
Fun fact: the frame transfer functions are copied from vaapi, as the
mapping makes copying generic boilerplate. Mapping itself is not
exported by the VT code, because I don't know how to test.
Add a per-stream option for setting the encoder timebase.
The following values are allowed:
0 - for video, use 1/frame_rate; for audio, use 1/sample_rate (this is
the default)
-1 - match the input timebase (when possible)
>0 - set the timebase to the provided number
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
* commit '019ab88a95cb31b698506d90e8ce56695a7f1cc5':
lavc: add an option for exporting cropping information to the caller
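Roughly how a caller is expected to use this (the field names below are
assumptions drawn from the merged API, not spelled out in this commit):

    #include <libavcodec/avcodec.h>

    /* Before avcodec_open2(): ask the decoder to export cropping rather
     * than apply it. */
    static void request_crop_export(AVCodecContext *avctx)
    {
        avctx->apply_cropping = 0;
    }

    /* After decoding: the crop_* fields on the frame describe the region
     * the caller should display. */
    static void log_crop(void *log_ctx, const AVFrame *frame)
    {
        av_log(log_ctx, AV_LOG_INFO, "crop l/r/t/b: %zu/%zu/%zu/%zu\n",
               frame->crop_left, frame->crop_right,
               frame->crop_top, frame->crop_bottom);
    }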
Merged-by: James Almer <jamrial@gmail.com>
It's been addressed.
Reviewed-by: Hendrik Leppkes <h.leppkes@gmail.com>
Reviewed-by: Aaron Levinson <alevinsn@aracnet.com>
Signed-off-by: James Almer <jamrial@gmail.com>
* commit 'e435beb1ea5380a90774dbf51fdc8c941e486551':
crypto: consistently use size_t as type for length parameters
Merged-by: Clément Bœsch <cboesch@gopro.com>
This is a newer API that is intended for decoders like the cuvid
wrapper. Until now, the wrapper required setting an awkward
"incomplete" hw_frames_ctx to select the device. Now the device
can be set directly, and the user can get AV_PIX_FMT_CUDA output
for a specific device simply by setting hw_device_ctx.
This still does a dummy ff_get_format() call at init time, and should
be fully backward compatible.
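A minimal sketch of the new usage (error paths trimmed; a cuvid decoder
such as h264_cuvid is just an example):

    #include <libavcodec/avcodec.h>
    #include <libavutil/hwcontext.h>

    /* Open a cuvid decoder and get AV_PIX_FMT_CUDA frames on the default
     * CUDA device, simply by setting hw_device_ctx before opening. */
    static int open_cuvid_decoder(AVCodecContext *avctx, const AVCodec *codec)
    {
        AVBufferRef *device = NULL;
        int ret = av_hwdevice_ctx_create(&device, AV_HWDEVICE_TYPE_CUDA,
                                         NULL, NULL, 0);
        if (ret < 0)
            return ret;
        avctx->hw_device_ctx = av_buffer_ref(device);
        av_buffer_unref(&device);
        return avcodec_open2(avctx, codec, NULL);
    }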
* commit '11a9320de54759340531177c9f2b1e31e6112cc2':
build: Move build-system-related helper files to a separate subdirectory
"ffbuild" directory name is used instead of "avbuild".
Merged-by: Clément Bœsch <u@pkh.me>
This complex (-1 2 6 2 -1) filter reduces interlace 'twitter' slightly less, but retains detail and the subjective sharpness impression better than the linear (1 2 1) filter.
Signed-off-by: Thomas Mundt <tmundt75@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
This makes the currently semi-public avpriv_aac_parse_header() function
private to libavcodec and adds a proper public API function to return
the parts of the ADTS header required in libavformat.
Use the flags argument of av_hwframe_ctx_create_derived() to pass the
mapping flags which will be used on allocation. Also, set the format
and hardware context on the allocated frame automatically - the user
should not be required to do this themselves.
This only supports one device globally, but more can be used by
passing them with input streams in hw_frames_ctx or by deriving new
devices inside a filter graph with hwmap.
* commit 'fa1749dd34c55fb997c97dfc4da9383c9976ab91':
vp9: split superframes in the filtering stage before actual decoding
This commit is a noop.
2017-04-24 20:45:04 @ubitux BBB: btw, do you think you can get the bsf thing this week or we should skip it to give you more time and go on with the merges?
2017-04-24 20:45:20 @BBB I’m not sure I’ll finish it that soon
2017-04-24 20:45:26 @BBB I’d skip it and leave it for later
2017-04-24 20:45:35 @BBB I’ll do it, I promise, but I Can’t guarantee it’ll be done by $date
Merged-by: Clément Bœsch <u@pkh.me>
- Fixed a typo for the -sources argument
Signed-off-by: Mickael Maison <mickael.maison@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Added parameter descriptions for all functions
and converted in-function comments into regular
(non-Doxygen) comments.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
or if x/y go beyond the padded area.
This is mostly useful when paired with the aspect option.
Defaults aren't changed.
The idea for this was taken from mpv's soon-to-be-removed expand vf.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
* commit '286ab878bd39b56008035638227b3ecb8ec5bbb7':
fate.sh: Allow setting other make flags for running tests
Merged-by: James Almer <jamrial@gmail.com>
* commit 'f78d360bba6dcfb585847a49a84e89c25950fbdb':
examples/decode_video: use a parser for splitting the input
Merged-by: James Almer <jamrial@gmail.com>
* commit '5f102a9559099429826e84758b8b5182244c52db':
examples/encode_video: switch to the new encoding API
Merged-by: Clément Bœsch <cboesch@gopro.com>
* commit '3d66717f7cb5555257244be8f5bce172ed3af7ac':
examples/decode_audio: use the new audio decoding API
Merged-by: Clément Bœsch <cboesch@gopro.com>
* commit '0946c754d99c05413e813ee515039adcf0f9232a':
examples/decode_audio: use a parser for splitting the input
Merged-by: Clément Bœsch <cboesch@gopro.com>
* commit '0b5a26e8bcd219efe5da3a6d39b588fabf91f2b9': (35 commits)
qdm2: Convert to the new bitstream reader
qcelp: Convert to the new bitstream reader
pcx: Convert to the new bitstream reader
opus: Convert to the new bitstream reader
nellymoser: Convert to the new bitstream reader
jvdec: Convert to the new bitstream reader
hqx: Convert to the new bitstream reader
hq_hqa: Convert to the new bitstream reader
gsm: Convert to the new bitstream reader
g72x: Convert to the new bitstream reader
g2meet: Convert to the new bitstream reader
fraps: Convert to the new bitstream reader
flashsv: Convert to the new bitstream reader
faxcompr: Convert to the new bitstream reader
exr: Convert to the new bitstream reader
escape130: Convert to the new bitstream reader
escape124: Convert to the new bitstream reader
dvdsubdec: Convert to the new bitstream reader
dss_sp: Convert to the new bitstream reader
cook: Convert to the new bitstream reader
...
This merge is a noop, see
http://ffmpeg.org/pipermail/ffmpeg-devel/2017-April/209609.html
Merged-by: Clément Bœsch <u@pkh.me>
Takes a raw input stream containing frames with correct timestamps but
possibly out of order and inserts additional show-existing-frame
packets to correct the ordering.
This is an example; people will copy and use it. The maximum supported
value is quite unreasonable as a default choice.
Reviewed-by: Steven Liu <lingjiujianke@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
* commit '5b4d7ac7ae5d821cfa6ab89f8eab4d31851ef32c':
examples/encode_video: use the AVFrame API for allocating the frame
Merged-by: Clément Bœsch <u@pkh.me>
* commit '7b1f03477f1a43d2261fbd83e50a4ad90c7f806d':
examples/avcodec: split the remaining two examples into separate files
Merged-by: Clément Bœsch <u@pkh.me>
* commit 'f5df897c4b61985e3afc89ba1290649712ff438e':
examples/avcodec: split audio decoding into a separate example
Merged-by: Clément Bœsch <u@pkh.me>
* commit 'f76698e759a08e8d3b629c06edb0439f474e7fee':
examples/encode_audio: use the AVFrame API for allocating the data
Merged-by: Clément Bœsch <u@pkh.me>
* commit '40aaa8dadfd1c69ff4460d04750e1403b5535a6d':
examples/avcodec: split audio encoding into a separate example
Merged-by: Clément Bœsch <u@pkh.me>
* commit 'c454dfcff90f0ed39c7b0d4e85664986a8b4476c':
Use ISO C printf conversion specifiers where appropriate
This commit is a noop; an equivalent patch is currently under review on
the mailing list: http://ffmpeg.org/pipermail/ffmpeg-devel/2017-March/209239.html
Merged-by: Clément Bœsch <u@pkh.me>
The old "API" that signaled rotation as a metadata value has been
replaced by DISPLAYMATRIX side data quite a while ago.
There is no reason to make muxers/demuxers/API users support both. In
addition, the metadata API is dangerous, as user tags could "leak" into
it, creating unintended features or bugs.
The ffmpeg CLI has to be updated to use the new API. In particular, we
must not allow the "rotate" tag to leak into the muxer. Some muxers will
catch this properly (like mov), but others (like mkv) can add it as a
generic tag. Note that applications which use libavformat and assume the
old rotate API will interpret such "rotate" user tags as rotate
metadata (which they are not), and incorrectly rotate the video.
The ffmpeg/ffplay tools drop the use of the old API for muxing and
demuxing, as all muxers/demuxers support the new API. This means that
the tools will not mistakenly interpret per-track "rotate" user tags as
rotate metadata. This will _not_ be treated as a regression.
Unfortunately, hacks have been added that allow the user to override
rotation by setting metadata explicitly, e.g. via
-metadata:s:v:0 rotate=0
See references to trac ticket #4560; fate-filter-meta-4560-rotate0 tests this.
It's easier to adjust the hack to keep supporting this than to argue for
its removal, so the ffmpeg CLI now explicitly catches this case and
essentially replaces the "rotate" value with display matrix side data.
(It would be easier for both user and implementation to create an
explicit option for rotation.)
When the code under FF_API_OLD_ROTATE_API is disabled, one FATE
reference file has to be updated (because "rotate" is not exported
anymore).
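For API users, the replacement is to read the display matrix side data; a
minimal sketch:

    #include <libavformat/avformat.h>
    #include <libavutil/display.h>

    /* Rotation in degrees (counter-clockwise) for a stream, or 0 if no
     * display matrix is present. */
    static double stream_rotation(AVStream *st)
    {
        uint8_t *sd = av_stream_get_side_data(st, AV_PKT_DATA_DISPLAYMATRIX, NULL);
        return sd ? av_display_rotation_get((const int32_t *)sd) : 0.0;
    }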
Tested-by: Michael Niedermayer <michael@niedermayer.cc>
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
* commit '6d5636ad9ab6bd9bedf902051d88b7044385f88b':
hevc: x86: Add add_residual() SIMD optimizations
See a6af4bf64d
This merge is cosmetics only (renames, space shuffling, etc.).
The functional changes in the ASM are *not* merged:
- unrolling with %rep is kept
- ADD_RES_MMX_4_8 is left untouched: this needs investigation
Merged-by: Clément Bœsch <u@pkh.me>
* commit '043b0b9fb1481053b712d06d2c5b772f1845b72b':
Replace leftover uses of -aframes|-dframes|-vframes with -frames:a|d|v
The merge also includes all our own occurrences.
Merged-by: Clément Bœsch <u@pkh.me>
* commit '89b35a139e838deeb32ec20d8d034c81014401d0':
lavc: add a bitstream filter for extracting extradata from packets
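A rough sketch of driving the new filter from application code (error
handling dropped; that the extradata comes back as NEW_EXTRADATA packet
side data is an assumption here):

    #include <stdio.h>
    #include <libavcodec/avcodec.h>  /* av_bsf_* lives here at this point */

    /* Run packets of a stream through "extract_extradata" and report when
     * extradata shows up as side data on the output packets. */
    static void probe_extradata(const AVCodecParameters *par, AVPacket *pkt)
    {
        const AVBitStreamFilter *f = av_bsf_get_by_name("extract_extradata");
        AVBSFContext *bsf = NULL;

        av_bsf_alloc(f, &bsf);
        avcodec_parameters_copy(bsf->par_in, par);
        av_bsf_init(bsf);

        av_bsf_send_packet(bsf, pkt);
        while (av_bsf_receive_packet(bsf, pkt) == 0) {
            if (av_packet_get_side_data(pkt, AV_PKT_DATA_NEW_EXTRADATA, NULL))
                printf("extradata extracted\n");
        }
        av_bsf_free(&bsf);
    }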
Merged-by: James Almer <jamrial@gmail.com>
* commit '5cc0057f4910c8c72421b812c8f337ef6c43696c':
lavu: remove the custom atomic API
This commit is a noop. The removal is postponed until all usages in
FFmpeg are dropped as well. A patchset is on discussion on the
mailing-list:
http://ffmpeg.org/pipermail/ffmpeg-devel/2017-March/209003.html
Merged-by: Clément Bœsch <u@pkh.me>
This supports retrieving the device from a provided hw_frames_ctx, and
automatically creating a hw_frames_ctx if hw_device_ctx is set.
The old API is not deprecated yet. The user can still use
av_vdpau_bind_context() (with or without setting hw_frames_ctx), or use
the API before that by allocating and setting hwaccel_context manually.
Cherry-picked from Libav commit 1a7ddba5.
(Adds missing APIchanges entry to the Libav version.)
Reviewed-by: Mark Thompson <sw@jkqxz.net>
This "reuses" the flags introduced for the av_vdpau_bind_context() API
function, and makes them available to all hwaccels. This does not affect
the current vdpau API, as av_vdpau_bind_context() should obviously
override the AVCodecContext.hwaccel_flags flags for the sake of
compatibility.
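Sketch of what this enables for a generic API user (the specific flags
shown are the existing vdpau ones):

    #include <libavcodec/avcodec.h>

    /* Set hwaccel behaviour flags on any hwaccel, before avcodec_open2(). */
    static void relax_hwaccel_checks(AVCodecContext *avctx)
    {
        avctx->hwaccel_flags |= AV_HWACCEL_FLAG_IGNORE_LEVEL |
                                AV_HWACCEL_FLAG_ALLOW_HIGH_DEPTH;
    }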
Cherry-picked from Libav commit 16a163b5.
Reviewed-by: Mark Thompson <sw@jkqxz.net>
* commit '8ea35af7620e4f73f9e8c072e1c0fac9a04ec161':
avio: add a new flag for marking streams seekable by timestamp
Merged-by: James Almer <jamrial@gmail.com>
* commit '2ec9fa5ec60dcd10e1cb10d8b4e4437e634ea428':
idct: Change type of array stride parameters to ptrdiff_t
Merged-by: James Almer <jamrial@gmail.com>
* commit '67d28f4a0fbb52d0734ca3682b85035e96d294fb':
examples/output: switch to the new encoding API
This commit is a noop; our examples are different. Still, we need to
update them to the new API, so doc/libav-merge.txt is updated.
Merged-by: Clément Bœsch <u@pkh.me>
This patch deprecates anything that has to do with merging/splitting
side data. Automatic side data merging (and splitting), as well as all
API symbols involved in it, are removed completely.
Two FF_API_ defines are dedicated to deprecating API symbols related to
this: FF_API_MERGE_SD_API removes av_packet_split/merge_side_data in
libavcodec, and FF_API_LAVF_KEEPSIDE_FLAG deprecates
AVFMT_FLAG_KEEP_SIDE_DATA in libavformat.
Since it was claimed that changing the default from merging side data to
not doing it is an ABI change, there are two additional FF_API_ defines,
which stop using the side data merging/splitting by default (and remove
any code in avformat/avcodec doing this): FF_API_MERGE_SD in libavcodec,
and FF_API_LAVF_MERGE_SD in libavformat.
It is very much intended that FF_API_MERGE_SD and FF_API_LAVF_MERGE_SD
are quickly defined to 0 in the next ABI bump, while the API symbols are
retained for a longer time for the sake of compatibility.
AVFMT_FLAG_KEEP_SIDE_DATA will (very much intentionally) do nothing for
most of the time it will still be defined. Keep in mind that no code
exists that actually tries to unset this flag for any reason, nor does
such code need to exist. Code setting this flag explicitly will work as
before. Thus it's ok for AVFMT_FLAG_KEEP_SIDE_DATA to do nothing once
side data merging has been removed from libavformat.
In order to avoid that anyone in the future does this incorrectly, here
is a small guide how to update the internal code on bumps:
- next ABI bump (probably soon):
- define FF_API_LAVF_MERGE_SD to 0, and remove all code covered by it
- define FF_API_MERGE_SD to 0, and remove all code covered by it
- next API bump (typically two years in the future or so):
- define FF_API_LAVF_KEEPSIDE_FLAG to 0, and remove all code covered
by it
- define FF_API_MERGE_SD_API to 0, and remove all code covered by it
This forces anyone who actually wants packet side data to temporarily
use deprecated API to get it all. If you ask me, this is batshit fucked
up crazy, but it's how we roll. Making AVFMT_FLAG_KEEP_SIDE_DATA to be
set by default was rejected as an ABI change, so I'm going all the way
to get rid of this once and for all.
Reviewed-by: James Almer <jamrial@gmail.com>
Reviewed-by: Rostislav Pehlivanov <atomnuker@gmail.com>
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
This "reuses" the flags introduced for the av_vdpau_bind_context() API
function, and makes them available to all hwaccels. This does not affect
the current vdpau API, as av_vdpau_bind_context() should obviously
override the AVCodecContext.hwaccel_flags flags for the sake of
compatibility.
Adds functions to convert to/from strings and a function to iterate
over all supported device types. Also adds a new invalid type
AV_HWDEVICE_TYPE_NONE, which acts as a sentinel value.
This filter does not implement all features of MPEG-7. Missing features:
- compression of signature files
- working only on (cropped) parts of the video
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
* commit '75d642a944d5579e4ef20ff3701422a64692afcf':
vaapi_vp8: Explicitly include libva vp8 decode header
vaapi_decode: Ignore the profile when not useful
lavc/vaapi: Add VP8 decode hwaccel
vp8: Add hwaccel hooks
This merge is a noop as these commits are already under review on the
mailing list. doc/libav-merge.txt is updated to track its progress.
Merged-by: Clément Bœsch <u@pkh.me>
* commit 'd7bc52bf456deba0f32d9fe5c288ec441f1ebef5':
imgutils: add a function for copying image data from GPU mapped memory
Merged-by: Clément Bœsch <u@pkh.me>
Not used by anything at all since we don't auto-insert lavr filters.
Reviewed-by: wm4 <nfxjfg@googlemail.com>
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
The 'sqrt' and 'cbrt' scalers were added in commit
80262d8c86, but their symbolic option values were
only made available to the showwaves filter, not showwavespic, despite
the scalers working properly via their numerical option values.
Signed-off-by: Moritz Barsnick <barsnick@gmx.net>
Decodes YUV 4:2:2 10-bit and RGB 12-bit files.
Older files with more subbands, skips, Bayer, or alpha are not supported.
Further fixes and refactorings by Anton Khirnov <anton@khirnov.net>,
Diego Biurrun <diego@biurrun.de>, Vittorio Giovara <vittorio.giovara@gmail.com>
Signed-off-by: Diego Biurrun <diego@biurrun.de>
Be more careful when an input stream encounters EOF when its filtergraph
has not been configured yet. The current code would immediately mark the
corresponding output streams as finished, while there may still be
buffered frames waiting for frames to appear on other filtergraph
inputs.
This should fix the random FATE failures for complex filtergraph tests
after a3a0230a98.
This merges Libav commit 94ebf55. It was previously skipped.
This is the last filter init related Libav commit that was skipped, so
this also removes the commits from doc/libav-merge.txt.
Signed-off-by: wm4 <nfxjfg@googlemail.com>
If AVVideotoolboxContext.cv_pix_fmt_type is set to 0, don't set the
kCVPixelBufferPixelFormatTypeKey value on the VT decoder.
This makes VT output its native format, which can be much faster on
some hardware iterations (if the native format does not match the
requested format, it will be converted, which is slow).
The default is still forcing nv12.
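A minimal sketch of opting in to the native format with the existing
public setup helpers:

    #include <libavcodec/videotoolbox.h>

    /* Let VT pick its native output format instead of forcing nv12. */
    static int use_native_vt_format(AVCodecContext *avctx)
    {
        AVVideotoolboxContext *vtctx = av_videotoolbox_alloc_context();
        if (!vtctx)
            return AVERROR(ENOMEM);
        vtctx->cv_pix_fmt_type = 0;   /* 0: do not request a specific format */
        return av_videotoolbox_default_init2(avctx, vtctx);
    }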
Allow all struct fields to be accessed directly, as long as they're
public.
Before this change, many fields were "public", but could be accessed via
AVOption only. This meant they were effectively not public, but were
present for documentation purposes, which was incredibly confusing at
best.
This is an extended version of the AVFrame.opaque field, which can be
used to attach arbitrary user information to an AVFrame.
The usefulness of the opaque field is rather limited, because it can
store only up to 32 bits of information (or 64 bits on 64-bit systems).
It's not possible to set this field to a memory allocation, because
there is no way to deallocate it correctly.
The opaque_ref field circumvents this by letting the user set an
AVBuffer, which makes the user data refcounted.
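A small sketch of attaching per-frame user data through the new field:

    #include <libavutil/frame.h>

    struct my_frame_info {
        int64_t capture_id;   /* arbitrary per-frame user data */
    };

    /* The buffer travels (and is freed) with the frame's other references. */
    static int attach_info(AVFrame *frame, int64_t capture_id)
    {
        struct my_frame_info *info;

        frame->opaque_ref = av_buffer_allocz(sizeof(*info));
        if (!frame->opaque_ref)
            return AVERROR(ENOMEM);
        info = (struct my_frame_info *)frame->opaque_ref->data;
        info->capture_id = capture_id;
        return 0;
    }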
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Merges Libav commit 04f3bd3496.
This is an extended version of the AVFrame.opaque field, which can be
used to attach arbitrary user information to an AVFrame.
The usefulness of the opaque field is rather limited, because it can
store only up to 32 bits of information (or 64 bits on 64-bit systems).
It's not possible to set this field to a memory allocation, because
there is no way to deallocate it correctly.
The opaque_ref field circumvents this by letting the user set an
AVBuffer, which makes the user data refcounted.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
When users use hls_wrap, there are many problems:
1. Some platforms refresh old but still useful segments.
2. CDN (Content Delivery Network) delivery of HLS is not friendly.
hls_wrap was used to wrap segments to save space; users can now use
hls_list_size and the hls_flags delete_segments option instead.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Reviewed-by: Carl Eugen Hoyos <ceffmpeg@gmail.com>
Signed-off-by: Steven Liu <lq@chinaffmpeg.org>