Filter graph reconfigurations can cause nontrivial issues, e.g. PTS
discontinuities, so it is better to warn the user about them.
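A sketch of the shape such a warning could take (illustrative only - the
actual message text and log context added by the commit may differ):

    /* illustrative sketch, not the verbatim change */
    av_log(NULL, AV_LOG_WARNING,
           "Filter graph was reconfigured; this may cause PTS "
           "discontinuities or other artifacts in the output.\n");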
Signed-off-by: Marton Balint <cus@passwd.hu>
Otherwise an uninitialized stack value would be copied to FPSConvContext.
As it is then never used, it tends not to be a problem in practice;
however, it is still UB and some compilers warn about it.
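The class of bug, in miniature (variable and field names here are made
up for illustration; the real ones live in fftools/ffmpeg_filter.c):

    /* buggy pattern: 'dropped' is never assigned, so copying it into
     * the context reads an indeterminate value - UB even though the
     * copy is never used afterwards */
    int dropped;
    FPSConvContext fps;
    fps.last_dropped = dropped;

    /* the fix is simply to give the stack value a definite
     * initializer, e.g. 'int dropped = 0;', before it is copied */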
In close_output(), a dummy frame with format NONE is created and passed
to enc_open(), which is not prepared for it. The NULL pointer
dereference happened at
av_pix_fmt_desc_get(enc_ctx->pix_fmt)->comp[0].depth.
When fgt.graph is NULL, skip fg_output_frame() since there is
nothing to output.
frame #0: 0x0000005555bc34a4 ffmpeg_g`enc_open(opaque=0xb400007efe2db690, frame=0xb400007efe2d9f70) at ffmpeg_enc.c:235:44
frame #1: 0x0000005555bef250 ffmpeg_g`enc_open(sch=0xb400007dde2d4090, enc=0xb400007e4e2daad0, frame=0xb400007efe2d9f70) at ffmpeg_sched.c:1462:11
frame #2: 0x0000005555bee094 ffmpeg_g`send_to_enc(sch=0xb400007dde2d4090, enc=0xb400007e4e2daad0, frame=0xb400007efe2d9f70) at ffmpeg_sched.c:1571:19
frame #3: 0x0000005555bee01c ffmpeg_g`sch_filter_send(sch=0xb400007dde2d4090, fg_idx=0, out_idx=0, frame=0xb400007efe2d9f70) at ffmpeg_sched.c:2154:12
frame #4: 0x0000005555bcf124 ffmpeg_g`close_output(ofp=0xb400007e4e2d85b0, fgt=0x0000007d1790eb08) at ffmpeg_filter.c:2225:15
frame #5: 0x0000005555bcb000 ffmpeg_g`fg_output_frame(ofp=0xb400007e4e2d85b0, fgt=0x0000007d1790eb08, frame=0x0000000000000000) at ffmpeg_filter.c:2317:16
frame #6: 0x0000005555bc7e48 ffmpeg_g`filter_thread(arg=0xb400007eae2ce7a0) at ffmpeg_filter.c:2836:15
frame #7: 0x0000005555bee568 ffmpeg_g`task_wrapper(arg=0xb400007d8e2db478) at ffmpeg_sched.c:2200:21
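Sketched, the fix amounts to the following guard (simplified; fgt and
ofp as in the trace above). av_pix_fmt_desc_get() returns NULL for
AV_PIX_FMT_NONE, which is what made the dummy flush frame crash
enc_open():

    /* only flush the outputs if a graph was actually configured;
     * otherwise the dummy frame with format NONE would reach
     * enc_open() and crash it */
    if (fgt.graph)
        ret = fg_output_frame(ofp, &fgt, NULL);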
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Change the main loop and every component (demuxers, decoders, filters,
encoders, muxers) to use the previously added transcode scheduler. Every
instance of every such component was already running in a separate
thread, but now they can actually run in parallel.
Changes the results of the ffmpeg-fix_sub_duration_heartbeat test -
tested by JEEB and found to be more correct and deterministic.
See the comment block at the top of fftools/ffmpeg_sched.h for more
details on what this scheduler is for.
This commit adds the scheduling code itself, along with minimal
integration with the rest of the program:
* allocating and freeing the scheduler
* passing it throughout the call stack in order to register the
individual components (demuxers/decoders/filtergraphs/encoders/muxers)
with the scheduler
The scheduler is not actually used as of this commit, so it should not
result in any change in behavior. That will change in future commits.
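Roughly, the integration has the following shape (a sketch only - the
names loosely follow fftools/ffmpeg_sched.h, but the exact signatures
there differ; see that header for the real API):

    /* allocate the scheduler early in main() */
    Scheduler *sch = sch_alloc();
    int demux_idx, enc_idx;
    if (!sch)
        return AVERROR(ENOMEM);

    /* each component registers itself; the returned index is how it
     * will later exchange packets/frames through the scheduler */
    demux_idx = sch_add_demux(sch, input_thread, demuxer);
    enc_idx   = sch_add_enc(sch, encoder_thread, encoder);

    /* freed on exit; as of this commit nothing else happens, so
     * behavior is unchanged */
    sch_free(&sch);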
* makes the code shorter and simpler
* avoids constantly allocating and freeing AVPackets, thanks to
  ThreadQueue integration with ObjPool (see the sketch after this list)
* is consistent with decoding/filtering/muxing
* reduces the diff in the future switch to thread-aware scheduling
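The packet-recycling point, sketched with the fftools ObjPool API
(simplified - the actual wiring goes through ThreadQueue in
ffmpeg_demux.c):

    ObjPool *pool = objpool_alloc_packets();
    AVPacket *pkt = NULL;

    if (pool && objpool_get(pool, (void**)&pkt) >= 0) {
        /* fill pkt and hand it to the ThreadQueue here; when the
         * consumer is done, the packet returns to the pool instead of
         * being freed and reallocated for the next one */
        objpool_release(pool, (void**)&pkt);
    }
    objpool_free(&pool);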
This makes ifile_get_packet() always block. Any potential issues caused
by this will be resolved by the switch to thread-aware scheduling in
future commits.
Otherwise they'd be silently ignored if received by the filtering thread
before the filtergraph can be initialized, which would make the output
dependent on the order in which frames from different inputs arrive.
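The buffering idea, sketched with the public AVFifo API (the
ifp->graph_configured flag is assumed for illustration; the actual
bookkeeping in ffmpeg_filter.c differs):

    /* if the graph cannot be initialized yet, park the frame in a
     * FIFO of AVFrame pointers rather than dropping it */
    if (!ifp->graph_configured) {
        AVFrame *buf = av_frame_alloc();
        if (!buf)
            return AVERROR(ENOMEM);
        av_frame_move_ref(buf, frame);
        return av_fifo_write(ifp->frame_queue, &buf, 1);
    }
    /* once the graph is initialized, the FIFO is drained with
     * av_fifo_read() and the frames are fed to the graph in order */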
As was previously done for decoding, this is merely "scaffolding" for moving to a
fully threaded architecture and does not yet make filtering truly
parallel - the main thread will currently wait for the filtering thread
to finish its work before continuing. That will change in future commits
after encoders are also moved to threads and a thread-aware scheduler is
added.
Avoid making decisions based on current graph input state, which makes
the output dependent on the order in which the frames from different
inputs are interleaved.
Makes the output of fate-filter-overlay-dvdsub-2397 more correct - the
subtitle appears two frames later, which is closer to its PTS as stored
in the file.
Framerate conversion is analogous in function to the fps filter, so
filtering is a more appropriate place for it.
The main practical reason for this move is that it places the encoding
sync queue right at the boundary between filters and encoders. This will
be important when switching to threaded scheduling, as the sync queue
involves multiple streams and will thus need to do nontrivial
inter-thread synchronization.
In addition to framerate conversion, the closely-related
* encoder timebase selection
* application of the start_time offset
are also moved to filtering.
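For example, applying the start_time offset at the filter output could
look roughly like this (of->start_time and ofp->tb_out are assumed
names, not the verbatim change):

    /* shift output timestamps by the output start time, rescaled from
     * AV_TIME_BASE_Q into the encoder timebase selected above */
    if (of->start_time != AV_NOPTS_VALUE)
        frame->pts -= av_rescale_q(of->start_time, AV_TIME_BASE_Q,
                                   ofp->tb_out);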
ifilter_send_eof() will fail if the input has no real or fallback
parameters, so there is no need to handle the case of some inputs being
in EOF state yet having no parameters.
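The invariant, sketched (simplified, field names assumed):

    /* EOF on an input that never had real parameters and has no
     * fallback ones fails here, so downstream code may assume every
     * input in EOF state has usable parameters */
    if (ifp->format < 0 && ifp->fallback->format < 0)
        return AVERROR_INVALIDDATA;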
ffmpeg CLI pixel format selection for filtering currently special-cases
MJPEG encoding: it restricts the supported list of pixel formats
depending on the value of the -strict option. In order to obtain that
value, it applies the option from the options dict onto the encoder
context, which is a highly invasive action even now and would become a
race once encoding is moved to its own thread.
The ugliness of this code can be much reduced by moving the special
handling of MJPEG into ofilter_bind_ost(), which is called from encoder
init and is thus synchronized with it. There is also no need to write
anything to the encoder context; we can evaluate the option into a
stack variable instead.
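Evaluating the option into a local with the AVOption API, rather than
applying it to the context, looks roughly like this (simplified;
variable names are not verbatim):

    const AVDictionaryEntry *e = av_dict_get(encoder_opts, "strict",
                                             NULL, 0);
    int strict_val = enc_ctx->strict_std_compliance;

    /* av_opt_eval_int() only parses the value against enc_ctx's
     * option table; it does not write anything into enc_ctx */
    if (e) {
        const AVOption *o = av_opt_find(enc_ctx, "strict", NULL, 0, 0);
        if (o)
            av_opt_eval_int(enc_ctx, o, e->value, &strict_val);
    }

    if (strict_val > FF_COMPLIANCE_UNOFFICIAL)
        fmts = mjpeg_formats;   /* restricted pixel format list */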
There is also no need to access AVCodec at all during pixel format
selection, as the pixel formats array is already stored in
OutputFilterPriv.