Calculating color histogram of framebuffer inside compute shader


As the title suggests, I am rendering a scene to a framebuffer and trying to extract a color histogram of that framebuffer inside a compute shader. I am totally new to compute shaders, and the lack of tutorials, examples, and even search keywords has left me overwhelmed.

In particular, I am struggling to properly set up the input and output images of the compute shader. Here's what I have:

computeShaderProgram = loadComputeShader("test.computeshader");

int bin_size = 1;
int num_bins = 256 / bin_size;
tex_w = 1024;
tex_h = 768;

GLuint renderFBO, renderTexture;
GLuint tex_output;

//defining output image that will contain the histogram
glGenTextures(1, &tex_output);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex_output);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, num_bins, 3, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
glBindImageTexture(0, tex_output, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R16UI);


//defining the framebuffer the scene will be rendered on
glGenFramebuffers(1, &renderFBO);

glGenTextures(1, &renderTexture);
glBindTexture(GL_TEXTURE_2D, renderTexture);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, W_WIDTH, W_HEIGHT, 0, GL_RGBA, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

glBindFramebuffer(GL_FRAMEBUFFER, renderFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, renderTexture, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

In the main loop I draw a simple square onto the framebuffer and attempt to pass the framebuffer as input image to the compute shader:

glBindFramebuffer(GL_FRAMEBUFFER, renderFBO);
glDrawArrays(GL_TRIANGLES, 0, 6);

glUseProgram(computeShaderProgram);
//use as input the drawn framebuffer
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, renderFBO);
//use as output a pre-defined texture image
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, tex_output);
//run compute shader
glDispatchCompute((GLuint)tex_w, (GLuint)tex_h, 1);

GLuint *outBuffer = new GLuint[num_bins * 3];
glGetTexImage(GL_TEXTURE_2D, 0, GL_R16, GL_UNSIGNED_INT, outBuffer);

Finally, inside the compute shader I have:

#version 450
layout(local_size_x = 1, local_size_y = 1) in;
layout(rgba32f, binding = 0) uniform readonly image2D img_input;
layout(r16ui, binding = 1) uniform writeonly image2D img_output;

void main() {

    // grabbing pixel value from input image  
    vec4 pixel_color = imageLoad(img_input, ivec2(gl_GlobalInvocationID.xy));

    vec3 rgb = round(pixel_color.rgb * 255);

    ivec2 r = ivec2(rgb.r, 0);
    ivec2 g = ivec2(rgb.g, 1);
    ivec2 b = ivec2(rgb.b, 2);

    imageAtomicAdd(img_output, r, 1);
    imageAtomicAdd(img_output, g, 1);
    imageAtomicAdd(img_output, b, 1);
}

I defined the output as a 2D texture of size N x 3, where N is the number of bins and the three rows correspond to the individual color components. Inside the shader I grab a pixel value from the input image, scale it into the 0-255 range, and increment the appropriate location in the histogram.

I cannot verify that this works as intended because the compute shader produces compilation errors, namely:

  • can't apply layout(r16ui) to image type "image2D"
  • unable to find compatible overloaded function "imageAtomicAdd(struct image2D1x16_bindless, ivec2, int)"
  • EDIT: after changing to r32ui the previous error now becomes: qualified actual parameter #1 cannot be converted to less qualified parameter ("im")

How can I properly configure my compute shader? Is my process correct (at least in theory) and if not, why?

CodePudding user response:

As for your questions:

can't apply layout(r16ui) to image type "image2D"

r16ui can only be applied to unsigned integer image types, so img_output must be declared as a uimage2D.

unable to find compatible overloaded function ...

The spec explicitly says that atomic image operations can only be applied to 32-bit formats (r32i, r32ui, and, for imageAtomicExchange only, r32f). Thus you must use a 32-bit integer format such as r32ui instead.
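
For example, the histogram texture from the question could be allocated as an unsigned 32-bit integer texture. A minimal sketch, reusing the question's tex_output and num_bins:

// Allocate the histogram as a num_bins x 3 unsigned 32-bit integer texture
// so that imageAtomicAdd can operate on it.
glGenTextures(1, &tex_output);
glBindTexture(GL_TEXTURE_2D, tex_output);
// Integer textures are incomplete with linear filtering, so use NEAREST.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, num_bins, 3, 0,
             GL_RED_INTEGER, GL_UNSIGNED_INT, NULL);
// Image unit 1 matches "binding = 1" in the compute shader.
glBindImageTexture(1, tex_output, 0, GL_FALSE, 0, GL_READ_WRITE, GL_R32UI);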


You have other issues in your code too.

glBindTexture(GL_TEXTURE_2D, renderFBO);

renderFBO is a framebuffer object, not a texture, so it cannot be bound with glBindTexture. Bind the texture that backs the FBO (renderTexture) instead.

Also, you intend to bind the textures to image uniforms rather than samplers, so you must use glBindImageTexture or glBindImageTextures rather than glBindTexture. With the latter you can bind both images in one call:

GLuint images[] = { renderTexture, tex_output };
glBindImageTextures(0, 2, images);
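
(glBindImageTextures is core since OpenGL 4.4 / ARB_multi_bind; it binds level 0 of each texture with GL_READ_WRITE access and the texture's own internal format, so renderTexture ends up on image unit 0 and tex_output on unit 1, matching the binding qualifiers in the shader.)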

Your img_output uniform is marked writeonly. However, the atomic image functions expect an image without that qualifier, so remove the writeonly.
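
Putting the shader-side fixes together, the image declarations and atomic adds might look like the following sketch (applying only the changes discussed here and otherwise keeping the question's layout):

#version 450
layout(local_size_x = 1, local_size_y = 1) in;
layout(rgba32f, binding = 0) uniform readonly image2D img_input;
// unsigned integer image, 32-bit format, no writeonly qualifier
layout(r32ui, binding = 1) uniform uimage2D img_output;

void main() {
    vec4 pixel_color = imageLoad(img_input, ivec2(gl_GlobalInvocationID.xy));
    uvec3 rgb = uvec3(round(pixel_color.rgb * 255.0));

    imageAtomicAdd(img_output, ivec2(int(rgb.r), 0), 1u);
    imageAtomicAdd(img_output, ivec2(int(rgb.g), 1), 1u);
    imageAtomicAdd(img_output, ivec2(int(rgb.b), 2), 1u);
}

One extra caveat: because image stores and atomics are incoherent, you will likely also need a glMemoryBarrier(GL_TEXTURE_UPDATE_BARRIER_BIT) between glDispatchCompute and glGetTexImage so the readback sees the shader's writes.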


You can find all the above information in the OpenGL and GLSL specs, freely accessible from the OpenGL registry.
