Mirror of https://git.ffmpeg.org/ffmpeg.git (GitHub mirror: https://github.com/FFmpeg/FFmpeg.git)
Latest commit 5d407cb2d7:

When no timestamps are available from the container, the video decoding code currently uses fake dts values, generated in process_input_packet() from a combination of information from the decoder and the parser (obtained via the demuxer), to generate timestamps during decoder flushing. This is fragile, hard to follow, and unnecessarily convoluted, since more reliable information can be obtained directly from post-decoding values.

The new code keeps track of the last decoded frame's pts and estimates its duration based on a number of heuristics. When both pts and pkt_dts are missing, the generated timestamp is then simply the pts of the last frame plus its duration (a sketch of this fallback follows the test list below). The heuristics are somewhat complicated by the fact that lavf insists on making up packet timestamps based on its highly incomplete information. That should be removed in the future, allowing this code to be simplified further.

The results of the following tests change:
- h264-3386 now requires -fps_mode passthrough to avoid dropping frames at the end; this is a pathology of the interaction of the new and old code, and of the fact that the sample switches from field to frame coding in the last packet, and will be fixed in following commits.
- hevc-conformance-DELTAQP_A_BRCM_4 stops inventing an arbitrary timestamp gap at the end.
- hevc-small422chroma: the single frame output by this test now has a timestamp of 0, rather than an arbitrary 7.
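The fallback described above can be illustrated with a short, hypothetical C sketch. The TimestampState struct and pick_frame_pts() helper are invented for this illustration and are not the actual fftools implementation; the sketch also assumes a recent libavutil where AVFrame carries a duration field.

```c
#include <libavutil/avutil.h>  /* AV_NOPTS_VALUE */
#include <libavutil/frame.h>   /* AVFrame */

/* Hypothetical helper state, not part of FFmpeg. */
typedef struct TimestampState {
    int64_t last_pts;       /* pts of the previously decoded frame */
    int64_t last_duration;  /* estimated duration of that frame    */
} TimestampState;

/* Choose a pts for a decoded frame: prefer the frame's own pts, then
 * its pkt_dts, and only when both are missing extrapolate from the
 * previous frame as last_pts + last_duration. */
static int64_t pick_frame_pts(TimestampState *ts, const AVFrame *frame)
{
    int64_t pts;

    if (frame->pts != AV_NOPTS_VALUE)
        pts = frame->pts;
    else if (frame->pkt_dts != AV_NOPTS_VALUE)
        pts = frame->pkt_dts;
    else
        pts = ts->last_pts == AV_NOPTS_VALUE ? 0
                                             : ts->last_pts + ts->last_duration;

    /* Remember this frame. Real duration estimation would combine the
     * frame's reported duration, the stream frame rate, field vs. frame
     * coding, etc.; here we simply reuse frame->duration when it is set. */
    ts->last_pts = pts;
    if (frame->duration > 0)
        ts->last_duration = frame->duration;

    return pts;
}
```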
Top-level layout of the repository:

- Directories: compat, doc, ffbuild, fftools, libavcodec, libavdevice, libavfilter, libavformat, libavutil, libpostproc, libswresample, libswscale, presets, tests, tools
- Files: .gitattributes, .gitignore, .mailmap, .travis.yml, Changelog, configure, CONTRIBUTING.md, COPYING.GPLv2, COPYING.GPLv3, COPYING.LGPLv2.1, COPYING.LGPLv3, CREDITS, INSTALL.md, LICENSE.md, MAINTAINERS, Makefile, README.md, RELEASE
FFmpeg README
FFmpeg is a collection of libraries and tools to process multimedia content such as audio, video, subtitles and related metadata.
Libraries
- libavcodec provides implementations of a wide range of codecs.
- libavformat implements streaming protocols, container formats and basic I/O access.
- libavutil includes hashers, decompressors and miscellaneous utility functions.
- libavfilter provides means to alter decoded audio and video through a directed graph of connected filters.
- libavdevice provides an abstraction to access capture and playback devices.
- libswresample implements audio mixing and resampling routines.
- libswscale implements color conversion and scaling routines.
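As a quick illustration of the libavformat and libavutil APIs described above, here is a minimal sketch of a program that opens an input file and dumps its container and stream information. The terse error handling is a choice made for this example; complete, maintained examples live in the doc/examples directory.

```c
#include <stdio.h>
#include <libavformat/avformat.h>

int main(int argc, char **argv)
{
    AVFormatContext *fmt_ctx = NULL;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <input file>\n", argv[0]);
        return 1;
    }

    /* Open the input and read the stream headers. */
    if (avformat_open_input(&fmt_ctx, argv[1], NULL, NULL) < 0) {
        fprintf(stderr, "could not open %s\n", argv[1]);
        return 1;
    }
    if (avformat_find_stream_info(fmt_ctx, NULL) < 0) {
        fprintf(stderr, "could not read stream info\n");
        avformat_close_input(&fmt_ctx);
        return 1;
    }

    /* Dump format and per-stream details to stderr. */
    av_dump_format(fmt_ctx, 0, argv[1], 0);

    avformat_close_input(&fmt_ctx);
    return 0;
}
```

It can be built against the installed libraries, for example with pkg-config: cc probe.c $(pkg-config --cflags --libs libavformat libavutil).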
Tools
- ffmpeg is a command line toolbox to manipulate, convert and stream multimedia content.
- ffplay is a minimalistic multimedia player.
- ffprobe is a simple analysis tool to inspect multimedia content.
- Additional small tools such as aviocat, ismindex and qt-faststart.
Documentation
The offline documentation is available in the doc/ directory.
The online documentation is available on the main website and in the wiki.
Examples
Coding examples are available in the doc/examples directory.
License
The FFmpeg codebase is mainly LGPL-licensed, with optional components licensed under GPL. Please refer to the LICENSE file for detailed information.
Contributing
Patches should be submitted to the ffmpeg-devel mailing list using git format-patch or git send-email. GitHub pull requests should be avoided because they are not part of our review process and will be ignored.