What is the GLFW screen format for ffmpeg encoding?


I have a program with a GLFW window whose contents I read using glReadPixels(0, 0, window_width, window_height, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid*) Buffer); and then use that Buffer to encode a frame via FFmpeg. It works fine; however, the output quality is a bit low.

I tried changing bit rates to no avail, so I started experimenting with pixel formats. Normally I use AV_PIX_FMT_YUV420P as the format in my encoder, since the data that comes from the GLFW window is RGBA, but this format results in the low-quality video mentioned above: the resulting video is a lot more pixelated than what's seen in the GLFW window.

What could be the optimal pixel format for reading the GLFW window? Some formats straight up don't work, resulting in a Specified pixel format yuyv422 is invalid or not supported error.

And here is where I got that error from:

ret = avcodec_open2(c, codec, &opt);

This line gives me the Specified pixel format yuyv422 is invalid or not supported error when I use anything other than YUV420.

I also have the setting #define SCALE_FLAGS SWS_BICUBIC, which I'm not sure what it does, but it may also be causing my error. (Not likely, since this is used after the line mentioned above, but I could just keep the format at YUV420 and change SWS_BICUBIC instead.)

I'm using Ubuntu with FFmpeg, GLFW, and glad (GLSL) to render the texture written into the frames. I got the encoder from FFmpeg's muxing.c example.

Edit: Using AV_PIX_FMT_RGBA also results in a similar error.

Edit: Here is my sample code where I convert Buffer to the FFmpeg format:

#define STREAM_PIX_FMT AV_PIX_FMT_YUV420P
...
void video_encoder::set_frame_yuv_from_rgb(AVFrame *frame, struct SwsContext *sws_context) {
    // glReadPixels returns tightly packed RGBA, i.e. 4 bytes per pixel per row
    const int in_linesize[1] = { 4 * width };

    // sws_getContext() allocates a fresh context on every call, so calling it
    // per frame leaks memory; sws_getCachedContext() reuses the previous one
    // when the parameters are unchanged. Note that because sws_context is
    // passed by value, the caller never sees the new pointer - store the
    // context in the class (or pass SwsContext**) instead.
    sws_context = sws_getCachedContext(sws_context,
            width, height, AV_PIX_FMT_RGBA,
            width, height, STREAM_PIX_FMT,
            SWS_BICUBIC, NULL, NULL, NULL);

    const uint8_t *in_data[1] = { rgb_data };
    sws_scale(sws_context, in_data, in_linesize, 0,
            height, frame->data, frame->linesize);
}

CodePudding user response:

Not every codec supports every pixel format. There are several methods to find out which pixel format to use.

AVCodec has a field named pix_fmts, which the documentation describes as:

array of supported pixel formats, or NULL if unknown, array is terminated by -1

Follow the links at the bottom called 'Referenced by ...' to get a better understanding of how to utilize this field.
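For example, here is a minimal sketch of walking that array (assuming codec was obtained via avcodec_find_encoder; codec_supports_pix_fmt is just an illustrative helper name):

#include <stdio.h>
#include <libavcodec/avcodec.h>
#include <libavutil/pixdesc.h>

/* List what the encoder accepts and check whether a desired format
 * is among the supported ones. */
static int codec_supports_pix_fmt(const AVCodec *codec, enum AVPixelFormat wanted)
{
    if (!codec->pix_fmts)
        return -1;                      /* NULL: supported formats unknown */
    for (const enum AVPixelFormat *p = codec->pix_fmts; *p != AV_PIX_FMT_NONE; p++) {
        printf("supported: %s\n", av_get_pix_fmt_name(*p));
        if (*p == wanted)
            return 1;
    }
    return 0;
}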

When it comes to converting, finding the best pixel format to convert to matters; have a look at avcodec_find_best_pix_fmt_of_list, whose documentation reads:

Find the best pixel format to convert to given a certain source pixel format.
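A minimal sketch of how that could look in your case (assuming c and codec are the AVCodecContext and encoder from muxing.c, and that the source is the RGBA data from glReadPixels):

int loss = 0;
/* Pick, from the encoder's supported formats, the one that loses the
 * least information when converting from RGBA. */
enum AVPixelFormat best = avcodec_find_best_pix_fmt_of_list(
        codec->pix_fmts,    /* list terminated by AV_PIX_FMT_NONE (-1) */
        AV_PIX_FMT_RGBA,    /* what glReadPixels delivers */
        1,                  /* the source has an alpha channel */
        &loss);
c->pix_fmt = best;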

And scaling is a completely different story, with many filtering options.

But somehow I doubt that your pixelated output results solely from the rescaling (pixel format conversion). Sometimes postprocessing helps; have a look at the variable pp_help for the available postprocessing filters.
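A quick way to dump that help text (assuming libpostproc and its header are installed):

#include <stdio.h>
#include <libpostproc/postprocess.h>

int main(void)
{
    puts(pp_help);   /* the filter documentation shipped with libpostproc */
    return 0;
}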


Hint

Before programming with the FFmpeg API libraries, I recommend storing your OpenGL rendering as a sequence of images in a high-quality, lossless format (e.g. BMP, TGA, PNG, and perhaps the easiest, PPM from the Netpbm package, and of course SGI). Then experiment with the ffmpeg command-line tool until you have the desired result, and only afterwards implement the options you used in your program.
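For example, once you have a numbered PPM sequence on disk, something like the following (the file pattern, frame rate, and CRF value are just starting points to tweak) produces visually near-lossless H.264:

ffmpeg -framerate 60 -i frame_%04d.ppm -c:v libx264 -crf 18 -pix_fmt yuv420p out.mp4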


Some useful links, if you want to understand the meaning of the fields of AVCodecContext: see the AVCodecContext reference in the FFmpeg doxygen documentation.
