I am trying to implement a video decoding application with the libav decoder. Most libav examples are built like this (pseudocode):
while (true) {
    AVPacket *packet = receive_packet_from_network();
    avcodec_send_packet(codec_ctx, packet);
    AVFrame *frame = av_frame_alloc();
    int r = avcodec_receive_frame(codec_ctx, frame);
    if (r == 0) {
        send_to_render(frame);
    }
}
Above is pseudocode. With this traditional cycle, while I wait for avcodec_receive_frame() to complete, then wait for rendering to complete, and then wait for the next packet to arrive from the network, the decoder's input buffer runs empty: the HW decoder pipeline never fills and decode performance is low. An additional property of my application: I know for certain that one packet received from the network corresponds to exactly one decoded frame.
Because of that, I would like to make the solution faster. To that end, I want to split this cycle into two threads, like this:
// thread one
while (true) {
    AVPacket *packet = receive_packet_from_network();
    avcodec_send_packet(codec_ctx, packet);
}

// thread two
while (true) {
    AVFrame *frame = av_frame_alloc();
    int r = avcodec_receive_frame(codec_ctx, frame);
    if (r == 0) {
        send_to_render(frame);
    }
}
The purpose of splitting the cycle into two threads is to keep the decoder's input buffer well fed, ideally mostly full. Only in that case, I assume, will the HW decoder I intend to use stay constantly pipelined. Of course I need thread synchronization mechanisms; they are not shown here for simplicity. Of course, when AVERROR(EAGAIN) is returned from avcodec_send_packet() or avcodec_receive_frame(), I need to wait for the other thread to do its job, feeding the input buffer or fetching ready frames. That is another story.
However, this threaded solution does not work for me: it fails with random segmentation faults. Unfortunately, I cannot find any libav documentation that says explicitly whether such an approach is acceptable, i.e. whether avcodec_send_packet() and avcodec_receive_frame() are thread safe.
So, what is the best way to keep the HW decoder pipeline loaded? To me it seems obvious that the traditional polling cycle shown in the libav examples is not efficient.
CodePudding user response:
No, threading like this is not allowed in libavcodec.
FFmpeg and libavcodec do support threading and hardware pipelining, but this happens at a much lower level and requires you, as the user, to let FFmpeg/libavcodec do its thing and not worry about it:
- don't call send_packet() and receive_frame() from different threads;
- set AVCodecContext.thread_count for threading;
- let the hardware wrappers in FFmpeg take care of pipelining internally; they know much better than you what to do. I can ask experts for more info if you're interested; I'm not 100% knowledgeable in this area, but I can refer you to people who are;
- if send_packet() returns AVERROR(EAGAIN), call receive_frame() first;
- if receive_frame() returns AVERROR(EAGAIN), call send_packet() next (see the sketch after this list).
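For reference, here is a minimal, untested sketch of such a single-threaded loop following the EAGAIN rules above. It assumes ctx is an already-opened AVCodecContext and reuses the receive_packet_from_network()/send_to_render() helpers from your pseudocode (both are hypothetical, not part of libav):

#include <libavcodec/avcodec.h>
#include <libavutil/frame.h>

// Hypothetical helpers, assumed to exist as in the question's pseudocode:
AVPacket *receive_packet_from_network(void);   // returns NULL at end of stream
void send_to_render(AVFrame *frame);

static void decode_loop(AVCodecContext *ctx)
{
    AVFrame *frame = av_frame_alloc();
    for (;;) {
        AVPacket *packet = receive_packet_from_network();
        int ret;
        do {
            // Feed the decoder; a NULL packet starts draining at end of stream.
            ret = avcodec_send_packet(ctx, packet);
            if (ret == 0 || ret == AVERROR(EAGAIN)) {
                // Pull out every frame that is ready before sending more input.
                while (avcodec_receive_frame(ctx, frame) == 0) {
                    send_to_render(frame);
                    av_frame_unref(frame);
                }
            }
            // If send_packet() returned EAGAIN, the drain above made room,
            // so resubmit the same packet.
        } while (ret == AVERROR(EAGAIN));
        if (!packet)
            break;              // end of stream reached after flushing
        av_packet_free(&packet);
    }
    av_frame_free(&frame);
}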
With the correct thread_count, FFmpeg/libavcodec will decode multiple frames in parallel and use multiple cores.
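As an illustration only, a sketch of that configuration before opening the codec. The CUDA device type is an arbitrary example, and note that hardware decoder wrappers generally ignore thread_count; it mainly matters for software decoding:

#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>

static AVCodecContext *open_decoder(const AVCodec *codec)
{
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    if (!ctx)
        return NULL;

    // Let libavcodec thread internally instead of threading around it.
    ctx->thread_count = 0;                              // 0 = auto (number of cores)
    ctx->thread_type  = FF_THREAD_FRAME | FF_THREAD_SLICE;

    // Optionally attach a hardware device; the wrapper handles pipelining.
    AVBufferRef *hw_dev = NULL;
    if (av_hwdevice_ctx_create(&hw_dev, AV_HWDEVICE_TYPE_CUDA, NULL, NULL, 0) == 0) {
        ctx->hw_device_ctx = hw_dev;                    // ctx now owns this reference
        hw_dev = NULL;
    }

    if (avcodec_open2(ctx, codec, NULL) < 0) {
        avcodec_free_context(&ctx);
        return NULL;
    }
    return ctx;
}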