The source frame may be cropped, so that its dimensions are smaller than
the pool dimensions. The transfer_data API requires the allocated size
of the destination frame to be the same as the pool size.
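Concretely, that requirement means a caller-allocated destination has to use the pool dimensions, with the cropped display size restored after the copy. A minimal sketch of what this looks like for a caller (download_frame() is a hypothetical helper, not code from this patch):

    #include <libavutil/frame.h>
    #include <libavutil/hwcontext.h>

    static int download_frame(const AVFrame *hw_frame, AVFrame *sw_frame)
    {
        AVHWFramesContext *hwfc =
            (AVHWFramesContext *)hw_frame->hw_frames_ctx->data;
        int ret;

        /* Allocate the destination with the pool dimensions; the display
         * dimensions of hw_frame may be smaller due to cropping. */
        sw_frame->format = hwfc->sw_format;
        sw_frame->width  = hwfc->width;
        sw_frame->height = hwfc->height;

        ret = av_frame_get_buffer(sw_frame, 0);
        if (ret < 0)
            return ret;

        ret = av_hwframe_transfer_data(sw_frame, hw_frame, 0);
        if (ret < 0)
            return ret;

        /* Restore the actual (possibly cropped) display size afterwards. */
        sw_frame->width  = hw_frame->width;
        sw_frame->height = hw_frame->height;

        return 0;
    }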
Be more careful when an input stream encounters EOF while its filtergraph
has not been configured yet. The current code would immediately mark the
corresponding output streams as finished, even though there may still be
buffered frames waiting for frames to appear on other filtergraph
inputs.
This should fix the random FATE failures for complex filtergraph tests
after a3a0230a98.
Previously we would allocate a new output buffer for every frame. This
instead maintains an AVBufferPool of them to use as needed.
Also makes the maximum size of an output buffer adapt to the frame
size - the fixed upper bound was a bit too easy to hit when encoding
large pictures at high quality.
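The pattern is roughly the following - a simplified sketch, where OutputBuffers and get_output_buffer() are made-up names, not the actual encoder code:

    #include <libavutil/buffer.h>

    typedef struct OutputBuffers {
        AVBufferPool *pool;
        size_t        size; /* size of the buffers in the current pool */
    } OutputBuffers;

    static AVBufferRef *get_output_buffer(OutputBuffers *ob, size_t min_size)
    {
        if (!ob->pool || ob->size < min_size) {
            /* Recreate the pool with a larger buffer size; buffers still
             * in use keep the old pool alive until they are released. */
            av_buffer_pool_uninit(&ob->pool);
            ob->pool = av_buffer_pool_init(min_size, NULL);
            if (!ob->pool)
                return NULL;
            ob->size = min_size;
        }
        return av_buffer_pool_get(ob->pool);
    }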
This makes sure the actual stream parameters are used, which is
important mainly for hardware decoding+filtering cases. Those would
previously require various weird workarounds to handle the fact that a
fake software graph had to be constructed, but never used.
This should also improve behaviour in rare cases where
avformat_find_stream_info() does not provide accurate information.
Currently, a filtergraph will pull in the output constraints from its
corresponding encoder context, which breaks proper layering. Instead,
explicitly send the constraints on the output parameters to the
filtergraph.
This is similar to what is done for filtergraph inputs in
30ab4c51a180610d9f1720c75518d763515c0d9f.
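In public-API terms, the equivalent is to put the constraints on the buffersink itself instead of letting the graph query the encoder; a hypothetical sketch (the format list here is made up):

    #include <libavfilter/avfilter.h>
    #include <libavutil/opt.h>
    #include <libavutil/pixfmt.h>

    static int set_output_constraints(AVFilterContext *buffersink_ctx)
    {
        /* Hypothetical list of formats the downstream encoder accepts. */
        static const enum AVPixelFormat pix_fmts[] = {
            AV_PIX_FMT_YUV420P, AV_PIX_FMT_NV12, AV_PIX_FMT_NONE
        };

        return av_opt_set_int_list(buffersink_ctx, "pix_fmts", pix_fmts,
                                   AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);
    }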
Setting the filter input parameters is moved to init_input_stream(),
so that it happens before the decoder is opened, since opening the
decoder can overwrite the information from avformat_find_stream_info()
with less accurate data.
This commit temporarily disables QSV transcoding with hw frames. The
functionality will be re-added in the following commits.
Currently, calling configure_filtergraph() will pull in the input
parameters from the corresponding decoder context. This is undesirable
for the following reasons:
- the decoded frame is a more proper source for this information than
  the decoder context
- a filter accessing decoder data breaks proper layering
Add functions for explicitly sending the input stream parameters to a
filtergraph input - currently from a frame and a decoder. The decoder
one will be dropped in future commits after some more restructuring.
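The public libavfilter counterpart of the frame-based variant looks roughly like this (send_params_from_frame() is a hypothetical helper; the AVBufferSrcParameters API is real):

    #include <libavfilter/buffersrc.h>
    #include <libavutil/frame.h>
    #include <libavutil/mem.h>

    static int send_params_from_frame(AVFilterContext *buffersrc_ctx,
                                      const AVFrame *frame)
    {
        AVBufferSrcParameters *par = av_buffersrc_parameters_alloc();
        int ret;

        if (!par)
            return AVERROR(ENOMEM);

        /* Take the parameters from the decoded frame itself, not from
         * the decoder context. */
        par->format              = frame->format;
        par->width               = frame->width;
        par->height              = frame->height;
        par->sample_aspect_ratio = frame->sample_aspect_ratio;
        /* For hardware frames, the frames context travels with the frame. */
        par->hw_frames_ctx       = frame->hw_frames_ctx;

        ret = av_buffersrc_parameters_set(buffersrc_ctx, par);
        av_freep(&par);
        return ret;
    }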
The encode function is supposed to just return 0 on success. The
current behaviour stems from a mixup with the return value of decode
functions, which return the number of bytes they consumed.
Signed-off-by: Martin Storsjö <martin@martin.st>
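For reference, a sketch of what the encode2-style callback contract looks like (foo_encode_frame() and encode_internal() are hypothetical names):

    #include <libavcodec/avcodec.h>

    /* Hypothetical helper that does the actual encoding into pkt. */
    static int encode_internal(AVCodecContext *avctx, AVPacket *pkt,
                               const AVFrame *frame);

    static int foo_encode_frame(AVCodecContext *avctx, AVPacket *pkt,
                                const AVFrame *frame, int *got_packet)
    {
        int ret = encode_internal(avctx, pkt, frame);
        if (ret < 0)
            return ret;

        *got_packet = 1;
        return 0; /* just 0 on success - not a byte count as for decoding */
    }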
No longer make a dummy device configuration to query. Instead, just
return everything we recognise from the whole format list. Also
change the device setup code to query that list only, rather than
intersecting it with the constraint output.
This makes hwupload more usable on mesa/gallium, where the video
processor declares support only for RGB formats and was therefore
unable to deal with YUV formats before this patch. It might introduce
some different, trickier failures in the internal upload/download code,
because the set of allowed formats there has changed, though I didn't
find any obvious regressions with i965.
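For reference, a caller can inspect the resulting set of upload formats through the public API; a sketch (print_upload_formats() is a hypothetical helper):

    #include <stdio.h>
    #include <libavutil/hwcontext.h>
    #include <libavutil/mem.h>
    #include <libavutil/pixdesc.h>

    static int print_upload_formats(AVBufferRef *hw_frames_ref)
    {
        enum AVPixelFormat *formats, *p;
        int ret;

        ret = av_hwframe_transfer_get_formats(hw_frames_ref,
                                              AV_HWFRAME_TRANSFER_DIRECTION_TO,
                                              &formats, 0);
        if (ret < 0)
            return ret;

        /* The returned list is terminated by AV_PIX_FMT_NONE. */
        for (p = formats; *p != AV_PIX_FMT_NONE; p++)
            printf("%s\n", av_get_pix_fmt_name(*p));

        av_freep(&formats);
        return 0;
    }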
The functions may not clean up properly after using MMX
registers. For the normal testing calls, the checkasm_checked_call
functions will do the cleanup (and check that functions that
should clean up do it as well), but when benchmarking functions
that don't clean up, we don't currently properly clean up at all.
This causes issues if a benchmarked function is followed by testing
of a function that is supposed not to clobber the MMX/FPU state but
doesn't touch it at all: the stale state left behind by the benchmark
then gets blamed on that function.
Signed-off-by: Martin Storsjö <martin@martin.st>
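The missing cleanup is essentially an EMMS after the benchmark run; a minimal sketch of the idea, assuming GCC-style inline assembly:

    #if defined(__i386__) || defined(__x86_64__)
    /* Reset the MMX/x87 state after benchmarking a function that does
     * not clean up after itself, so later tests see a clean FPU state. */
    static inline void bench_cleanup(void)
    {
        __asm__ volatile ("emms");
    }
    #endif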
Currently it's exported as AVFrame.pkt_pts, which is also the only use
for that field. The reason it is done like this is that lavc used to
export various codec-specific "timing" information in AVFrame.pts, which
is not done anymore.
Since it is confusing to the callers to have a separate field which is
used only for decoder timestamps and nothing else, deprecate pkt_pts and
use just AVFrame.pts everywhere.
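After this change, callers read decoder timestamps directly from AVFrame.pts; a sketch of a receive loop (decode_frames() is a hypothetical helper):

    #include <inttypes.h>
    #include <stdio.h>
    #include <libavcodec/avcodec.h>

    static int decode_frames(AVCodecContext *dec_ctx, AVFrame *frame)
    {
        int ret;

        while ((ret = avcodec_receive_frame(dec_ctx, frame)) >= 0) {
            /* Previously this was the deprecated frame->pkt_pts. */
            printf("frame pts: %"PRId64"\n", frame->pts);
            av_frame_unref(frame);
        }

        return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
    }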
The current code assumes that encoding_needed is simply the inverse of
stream_copy, which is not true for manually attached files (for which
neither of those is true).
We already have all the necessary information in open_output_file().
This makes the information about the stream/filtergraph mappings
available earlier.
This is a more appropriate place for this. H264Context.recovery_frame is
shared between frame threads, so modifying it where it is right now is
invalid.