The code relies on their validity; otherwise it may dereference a NULL
object->rle pointer, causing segmentation faults.
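A minimal sketch of the kind of guard this implies, assuming the usual
avctx/object names from the decoder; the validation of the referenced
values themselves is not shown here:

    if (!object->rle) {
        av_log(avctx, AV_LOG_ERROR, "Object has no RLE data.\n");
        return AVERROR_INVALIDDATA;
    }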
Signed-off-by: Andreas Cadhalpun <Andreas.Cadhalpun@googlemail.com>
Signed-off-by: Diego Biurrun <diego@biurrun.de>
Only do this when building for a recent VAAPI version - initial
driver implementations were confused about the interpretation of the
framerate field, but hopefully this will be consistent everywhere
once 0.40.0 is released.
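A minimal sketch of the build-time gate, assuming the value is passed
through VAEncMiscParameterFrameRate; the packing of the numerator and
denominator here is an assumption, not the definitive layout:

    #if VA_CHECK_VERSION(0, 40, 0)
        VAEncMiscParameterFrameRate fr = {
            .framerate = avctx->framerate.num | (avctx->framerate.den << 16),
        };
    #endif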
Default to using VBR when a target bitrate is set, unless the max rate
is also set and matches the target. Changes to the Intel driver mean
that min_qp is also respected in this case, so set a codec default to
unset the value rather than using the current default inherited from
the MPEG-4 part 2 encoder.
This includes a backward-compatibility hack to choose CBR anyway on
old drivers which have no VBR support, so that existing programs will
continue to work even though their options now map to VBR.
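A sketch of the selection logic under these assumptions: rc_modes is the
bitmask reported by the driver's VAConfigAttribRateControl attribute, and
the VA_RC_* values come from libva.

    unsigned int rc_mode;

    if (avctx->bit_rate > 0) {
        if (avctx->rc_max_rate == avctx->bit_rate)
            rc_mode = VA_RC_CBR;
        else if (!(rc_modes & VA_RC_VBR))
            rc_mode = VA_RC_CBR;   /* compatibility fallback on old drivers */
        else
            rc_mode = VA_RC_VBR;
    } else {
        rc_mode = VA_RC_CQP;
    }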
The current condition can trigger in cases where it shouldn't, with
unexpected results.
Make sure that:
- container cropping is really based on the original dimensions from the
caller
- those dimensions are discarded on size change
The code is still quite hacky and eventually should be deprecated and
removed, with the decision about which cropping is used delegated to the
caller.
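A rough sketch of the intended bookkeeping; the field and variable names
here are assumptions:

    /* At init: remember the dimensions the caller set; they are used
     * only for container cropping. */
    h->width_from_caller  = avctx->width;
    h->height_from_caller = avctx->height;

    /* On a coded size change the stored values no longer apply. */
    if (new_width != avctx->coded_width || new_height != avctx->coded_height) {
        h->width_from_caller  = 0;
        h->height_from_caller = 0;
    }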
Introducing enforced sync points in arbitrary places is bad for
performance. Since the vast majority of receiving code (QSV VPP or
encoders, retrieving frames through hwcontext) will do the syncing, this
change should not be visible to most callers. But bumping micro just in
case.
This is also consistent with what VAAPI hwaccel does.
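A minimal sketch of the consumer-side sync this relies on; the function
name, timeout and header path are illustrative, MFXVideoCORE_SyncOperation()
is the libmfx call:

    #include <mfx/mfxvideo.h>   /* header path may differ per SDK version */

    static int wait_on_frame(mfxSession session, mfxSyncPoint sync)
    {
        mfxStatus err;

        do {
            err = MFXVideoCORE_SyncOperation(session, sync, 1000);
        } while (err == MFX_WRN_IN_EXECUTION);

        return err == MFX_ERR_NONE ? 0 : -1;
    }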
We can pick the correct slice index directly from the
ID3D11VideoDecoderOutputView cast from data[3].
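A sketch of the lookup, assuming COBJMACROS and the usual frame layout;
GetDesc() and ArraySlice are the documented D3D11 video API:

    ID3D11VideoDecoderOutputView *view =
        (ID3D11VideoDecoderOutputView *)frame->data[3];
    D3D11_VIDEO_DECODER_OUTPUT_VIEW_DESC desc;

    ID3D11VideoDecoderOutputView_GetDesc(view, &desc);
    unsigned int surface_index = desc.Texture2D.ArraySlice;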
Signed-off-by: Anton Khirnov <anton@khirnov.net>
No need to loop through the known surfaces; we'll use the requested surface
anyway.
The loop is only done for DXVA2.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Before this change, it was possible to overflow pic_order_cnt_lsb and
generate a stream with invalid POC numbering. This makes sure that
the field is large enough that a single IDR B* P sequence uses fewer
than half the available POC lsb values.
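A sketch of the sizing rule, assuming POC advances by 2 per picture and
b_per_p is the number of B-frames between references:

    /* Pictures in one IDR B* P group: the IDR, b_per_p B-frames, one P. */
    int poc_span         = 2 * (b_per_p + 2);
    int log2_max_poc_lsb = 4;               /* minimum allowed by the spec */

    while ((1 << log2_max_poc_lsb) / 2 <= poc_span)
        log2_max_poc_lsb++;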
This change makes the configured GOP size be respected exactly -
previously the value could be exceeded slightly due to flaws in the
frame type selection logic.
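A sketch of a selection rule that keeps the GOP length exact; the counter
and picture-type names are assumptions:

    if (ctx->gop_counter == 0 || ctx->gop_counter >= ctx->gop_size) {
        pic->type        = PICTURE_TYPE_IDR;
        ctx->gop_counter = 1;
    } else {
        if (ctx->gop_counter % (ctx->b_per_p + 1))
            pic->type = PICTURE_TYPE_B;
        else
            pic->type = PICTURE_TYPE_P;
        ctx->gop_counter++;
    }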
In H.264 section 8.2.1, we have that "The bitstream shall not contain
data that result in Min(TopFieldOrderCnt, BottomFieldOrderCnt) not
equal to 0 for a coded IDR frame". This fixes the encoder to always
conform to this - previously the POC values formed an unbroken
sequence, not resetting to zero on IDR frames.
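A sketch of the reset, with the field names as assumptions:

    /* Count POC from the most recent IDR so that it is zero on every
     * IDR frame, as section 8.2.1 requires. */
    if (pic->type == PICTURE_TYPE_IDR)
        ctx->idr_display_order = pic->display_order;

    pic->pic_order_cnt = (int)(2 * (pic->display_order - ctx->idr_display_order));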
Signed-off-by: Mark Thompson <sw@jkqxz.net>