This is possible now that the next-API is gone.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Signed-off-by: James Almer <jamrial@gmail.com>
RTCP synchronization packets have been broken in ffmpeg versions > 2.8.3
(since commit e04b039b15). Since commit 2e814d0329
("rtpenc: Simplify code by introducing a macro for rescaling NTP timestamps"),
NTP_TO_RTP_FORMAT has used av_rescale_rnd() to write the NTP timestamp
into the packet. The rescaling overflows, av_rescale_rnd() returns
INT64_MIN, and the NTP stamp in the RTCP packet gets an invalid value.
Github: Closes #182
Reverting commit '2e814d0329aded98c811d0502839618f08642685' solves the problem.
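As a rough illustration of the overflow (the macro body below is an
assumption based on the description above, not code copied from the tree):

    #include <inttypes.h>
    #include <stdio.h>
    #include "libavutil/mathematics.h"

    /* Assumed shape of the macro from commit 2e814d0329: rescale a
     * microsecond NTP time into the 32.32 fixed-point NTP wire format. */
    #define NTP_TO_RTP_FORMAT(x) av_rescale_rnd((x), INT64_C(1) << 32, \
                                                1000000, AV_ROUND_NEAR_INF)

    int main(void)
    {
        /* Microseconds since the NTP epoch (1900) is on the order of
         * 3.7e15; multiplying by 2^32 before dividing exceeds INT64_MAX,
         * so av_rescale_rnd() signals overflow by returning INT64_MIN -
         * the invalid value that ended up in the RTCP packets. */
        int64_t ntp_time_us = INT64_C(3700000000000000);
        printf("%"PRId64"\n", NTP_TO_RTP_FORMAT(ntp_time_us));
        return 0;
    }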
Currently, AVStream contains an embedded AVCodecContext instance, which
is used by demuxers to export stream parameters to the caller and by
muxers to receive stream parameters from the caller. It is also used
internally as the codec context that is passed to parsers.
In addition, it is widely used by callers as the decoding (when
demuxing) or encoding (when muxing) context, though this has been
officially discouraged since Libav 11.
There are multiple important problems with this approach:
- the fields in AVCodecContext are in general one of
* stream parameters
* codec options
* codec state
However, it's not clear which ones are which. It is consequently
unclear which fields a demuxer is allowed to set or a muxer is allowed
to read. This leads to erratic behaviour depending on whether decoding
or encoding is being performed (and whether it uses the AVStream
embedded codec context).
- various synchronization issues arising from the fact that the same
context is used by several different APIs (muxers/demuxers,
parsers, bitstream filters and encoders/decoders) simultaneously, with
there being no clear rules for who can modify what and the different
processes being typically delayed with respect to each other.
- avformat_find_stream_info() making it necessary to support opening
and closing a single codec context multiple times, thus
complicating the semantics of freeing various allocated objects in the
codec context.
Those problems are resolved by replacing the AVStream embedded codec
context with a newly added AVCodecParameters instance, which stores only
the stream parameters exported by the demuxers or read by the muxers.
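For callers, the intended pattern after this change looks roughly like
the following sketch, using the public avcodec_parameters_to_context()
helper added alongside AVCodecParameters (error handling abbreviated):

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    /* Set up a caller-owned decoding context from the stream parameters
     * that the demuxer exported into AVStream.codecpar. */
    static AVCodecContext *open_decoder_for_stream(AVStream *st)
    {
        const AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
        AVCodecContext *dec_ctx = dec ? avcodec_alloc_context3(dec) : NULL;

        if (!dec_ctx)
            return NULL;
        /* Copy the demuxer-exported parameters into the decoding context. */
        if (avcodec_parameters_to_context(dec_ctx, st->codecpar) < 0 ||
            avcodec_open2(dec_ctx, dec, NULL) < 0) {
            avcodec_free_context(&dec_ctx);
            return NULL;
        }
        return dec_ctx;
    }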
* commit '4f6cd883f06f7893a2b60a41e7a4f8ae633dac2f':
rtpenc: Don't set max_frames_per_packet based on the packet frame size or frame rate
Merged-by: Michael Niedermayer <michaelni@gmx.at>
* commit 'f8c01257f93ceda3e03bc4e540a51022d1e2bff2':
rtpenc: Always do the default initialization regardless of codecs
Merged-by: Michael Niedermayer <michaelni@gmx.at>
* commit '1fc64e2e07787bbca82a72c146588e850e6d098a':
rtpenc: Write conditional statements on separate lines
Merged-by: Michael Niedermayer <michaelni@gmx.at>
* commit '0662440b991361fdb5e732712d997a73e4692e34':
rtpenc_aac: Set a default value for max_frames_per_packet at init
Merged-by: Michael Niedermayer <michaelni@gmx.at>
Instead, check the timestamps while muxing, to avoid buffering too
long a timestamp range into a single packet.
This makes the AMR and AAC packetization slightly less efficient,
since we set a possibly unnecessarily high max_frames_per_packet.
(These packetizers end up doing a memmove of the TOC bytes when
sending a packet before max_frames_per_packet is reached, and we
end up setting max_frames_per_packet to a value that should be high
enough for most uses.)
All packetizers that use max_frames_per_packet now set it either
to a default value, or to a value calculated based on other
parameters, so none of them rely on the previous default setting.
For iLBC, copy one frame at a time, to allow checking the timestamp
range for each of them - basically doing potentially multiple
loops to simplify the code instead of trying to calculate the
number of frames to buffer while honoring s1->max_delay.
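A hedged sketch of the flush decision described above; the helper and
its parameters are hypothetical, not the actual rtpenc code, with
max_delay in AV_TIME_BASE units as in AVFormatContext:

    #include <stdint.h>
    #include "libavutil/avutil.h"
    #include "libavutil/mathematics.h"

    /* Should the packetizer flush its buffered frames before copying in
     * the next one? Flush once the buffered timestamp range, compared
     * across time bases, reaches the allowed delay. */
    static int must_flush(int num_buffered, int64_t first_ts, int64_t cur_ts,
                          AVRational time_base, int64_t max_delay)
    {
        return num_buffered > 0 &&
               av_compare_ts(cur_ts - first_ts, time_base,
                             max_delay, AV_TIME_BASE_Q) >= 0;
    }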
This is in preparation for reducing the coupling between libavformat
and libavcodec, by not having the muxers use the encoder field
frame_size (which may not be available during e.g. stream copy).
Signed-off-by: Martin Storsjö <martin@martin.st>
This avoids having to jump to the defaultcase in the switch. Manually
override the stream time base back to 90 kHz for the few audio codecs
that don't use the sample rate as time base (mp2, mp3).
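Roughly, the override looks like the following sketch (written against
today's codecpar field for illustration; avpriv_set_pts_info() is the
real internal helper for setting a stream time base):

    #include "avformat.h"
    #include "internal.h"   /* avpriv_set_pts_info() */

    static void override_time_base(AVStream *st)
    {
        switch (st->codecpar->codec_id) {
        case AV_CODEC_ID_MP2:
        case AV_CODEC_ID_MP3:
            /* RFC 2250: MPEG audio elementary streams use a 90 kHz RTP
             * clock, unlike most audio payloads, which use the sample
             * rate as the clock. */
            avpriv_set_pts_info(st, 32, 1, 90000);
            break;
        default:
            break;
        }
    }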
Signed-off-by: Martin Storsjö <martin@martin.st>
They share a great deal of common structure; only a few minor
bits in the headers differ.
This also fixes an off-by-one in the sending of the last fragment
of large HEVC NALs (where it previously sent len+2 bytes, even though
it should have been len+RTP_HEVC_HEADERS_SIZE, aka len+3).
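In terms of sizes, the fix amounts to the following sketch
(RTP_HEVC_HEADERS_SIZE is the constant named above; the H.264 macro
name here is made up for the comparison):

    #define RTP_HEVC_HEADERS_SIZE 3 /* 2-byte PayloadHdr + 1-byte FU header */
    #define H264_FU_HEADERS_SIZE  2 /* FU indicator + FU header */

    /* Total bytes to send for the last fragment of a large NAL. The old
     * HEVC path hardcoded the H.264 value, sending len + 2 instead of
     * len + RTP_HEVC_HEADERS_SIZE, i.e. len + 3. */
    static int last_fragment_size(int len, int is_hevc)
    {
        return len + (is_hevc ? RTP_HEVC_HEADERS_SIZE
                              : H264_FU_HEADERS_SIZE);
    }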
Signed-off-by: Martin Storsjö <martin@martin.st>
The packetizer only supports splitting at GOB headers - if
such aren't available frequently enough, it splits at any
random byte offset (not at a macroblock boundary either, which
would be allowed by the spec) and sends a payload header pretending
that it starts with a GOB header.
As long as a receiver doesn't try to handle such cases cleverly
but just drops broken frames, this shouldn't matter too much
in practice.
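Concretely, the RFC 4629 payload header's P bit claims that the payload
begins with a picture/GOB start code whose two leading zero bytes are
elided; a sketch of such a header (illustrative, not the actual
rtpenc_h263 code):

    #include <stdint.h>

    /* RFC 4629 payload header with P=1: the two zero bytes of the
     * GOB/picture start code are implied, so a receiver assumes the
     * payload starts on a GOB boundary even if the split was at an
     * arbitrary byte offset. */
    static const uint8_t h263_payload_header[2] = {
        0x04, /* P bit (bit 2 of the first byte) set */
        0x00,
    };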
Signed-off-by: Martin Storsjö <martin@martin.st>
Instead explicitly jump to the default case in the cases where
it is wanted, and avoid fallthrough between different codecs,
which could easily introduce bugs if people editing the code
aren't careful.
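A simplified sketch of the resulting pattern, reusing the defaultcase
label name mentioned earlier (the helper functions are hypothetical and
the real switch in rtpenc is much larger):

    #include "libavcodec/avcodec.h"

    static void xiph_setup(void)   { /* codec-specific setup */ }
    static void amr_setup(void)    { /* codec-specific setup */ }
    static void default_init(void) { /* shared default initialization */ }

    static void init_packetizer(enum AVCodecID codec_id)
    {
        switch (codec_id) {
        case AV_CODEC_ID_VORBIS:
            xiph_setup();
            goto defaultcase; /* explicitly request the default handling */
        case AV_CODEC_ID_AMR_NB:
            amr_setup();
            break;            /* no silent fallthrough into other codecs */
        default:
        defaultcase:
            default_init();
            break;
        }
    }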
Signed-off-by: Martin Storsjö <martin@martin.st>
* commit '01f251c44d83eedc819625d2caac9ff9697a085d':
rtpenc: Set the timestamp properly when sending mpegts data, too
Merged-by: Michael Niedermayer <michaelni@gmx.at>
In particular, when packetizing mpegts into rtp, the input packet
timestamp may come from more than one stream, which could cause
multiple packets to be written with the same timestamp.
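The gist of the fix, as a hedged sketch; the field names are modelled
on rtpenc's mux context, but this is not the actual diff:

    #include <stdint.h>

    struct mux_state {
        uint32_t timestamp;      /* timestamp written into the RTP header */
        uint32_t cur_timestamp;  /* timestamp of the packet being sent */
    };

    static void begin_mpegts_rtp_packet(struct mux_state *s)
    {
        /* Refresh the outgoing timestamp for every packet, so packets
         * fed by different input streams don't reuse one stale value. */
        s->timestamp = s->cur_timestamp;
        /* ... then write the RTP header and the buffered TS packets ... */
    }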
Signed-off-by: Martin Storsjö <martin@martin.st>