I am using LUTs stored in .cube files for image post-processing.
A LUT .cube file may look like this:
TITLE
LUT_3D_SIZE 2
1.000000 1.000000 1.000000 -> maps a black input pixel to a white output pixel
1.000000 0.000000 0.000000
0.000000 1.000000 0.000000
0.000000 0.000000 1.000000
1.000000 1.000000 0.000000
1.000000 0.000000 1.000000
0.000000 1.000000 1.000000
0.000000 0.000000 0.000000 -> maps a white input pixel to a black output pixel
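For reference, reading such a file into a flat float vector boils down to something like this (a simplified sketch, not my exact code; loadCube is an illustrative name and real code needs proper error handling):

#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// Reads a .cube file: skips TITLE/DOMAIN_* lines and comments, picks up
// LUT_3D_SIZE, and appends the RGB float triples in file order.
bool loadCube(const std::string& path, int& size, std::vector<float>& data)
{
    size = 0;
    data.clear();

    std::ifstream file(path);
    if (!file)
        return false;

    std::string line;
    while (std::getline(file, line))
    {
        std::istringstream ls(line);
        std::string token;
        ls >> token;

        if (token.empty() || token[0] == '#')
            continue; // blank line or comment
        if (token == "TITLE" || token == "DOMAIN_MIN" || token == "DOMAIN_MAX")
            continue; // metadata not needed here
        if (token == "LUT_3D_SIZE")
        {
            ls >> size;
            continue;
        }

        // Anything else is assumed to be a data row: "r g b"
        float g = 0.0f, b = 0.0f;
        data.push_back(std::stof(token));
        ls >> g >> b;
        data.push_back(g);
        data.push_back(b);
    }

    // A well-formed 3D LUT has size^3 RGB triples.
    return size > 0 && data.size() == static_cast<size_t>(size) * size * size * 3;
}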
I load the raw data from the file into a data vector (this part is 100% correct), then upload it to a GL_TEXTURE_3D, bind it with glActiveTexture(GL_TEXTURE0_ARB + unit); and sample it as a sampler3D in the fragment shader using the GLSL texture(...) function, which gives me the interpolation for free.
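The shader side is essentially just this (uImage, uLUT and vUV are placeholder names of mine, not the real uniforms):

#version 330 core

uniform sampler2D uImage; // input image
uniform sampler3D uLUT;   // the LUT uploaded below

in vec2 vUV;
out vec4 fragColor;

void main()
{
    vec3 color = texture(uImage, vUV).rgb;
    // The input color is used directly as a 3D texture coordinate;
    // GL_LINEAR filtering provides the trilinear interpolation for free.
    fragColor = vec4(texture(uLUT, color).rgb, 1.0);
}

The 3D texture itself is created like this: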
glGenTextures(1, &texID);
glBindTexture(GL_TEXTURE_3D, texID);
constexpr int MIPMAP_LEVEL = 0; // I do not need any mipmapping for now.
//glPixelStorei(GL_UNPACK_ALIGNMENT, 1); -> should I use this???
glTexImage3D(
    GL_TEXTURE_3D,
    MIPMAP_LEVEL,
    GL_RGB,
    size.x, // Size of the LUT: 2, 33 etc., x == y == z.
    size.y,
    size.z,
    0,
    GL_RGB,
    GL_FLOAT, // I assume this matches how the data are stored in the LUT .cube file.
    data
);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAX_LEVEL, MIPMAP_LEVEL);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_3D, 0);
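At draw time the texture is then bound roughly like this (unit, program and the "uLUT" uniform name are illustrative; the shader program is already in use via glUseProgram):

const GLint unit = 1; // any free texture unit
glActiveTexture(GL_TEXTURE0 + unit);
glBindTexture(GL_TEXTURE_3D, texID);
glUniform1i(glGetUniformLocation(program, "uLUT"), unit); // tell the sampler3D which unit to read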
For simplicity, let's say the input image is restricted to RGB8 or RGB16 format.
This works OK when LUT_3D_SIZE is 2 (I suppose any POT size would work), producing the expected constant color transformation output.
However, these LUT files can have a LUT_3D_SIZE of NPOT size - typically odd numbers like 33 - and that is where I start to get problems and non-deterministic output (the output texture varies with every post-processing run; I assume the non-aligned texture is being filled with some random data).
How should I address this problem?
I guess I could use glPixelStorei(GL_PACK/UNPACK_ALIGNMENT, x); to compensate for the odd pixel row width (33), but I would like to understand the math that is going on instead of picking some random alignment value that magically works for me. Also, I am not sure this is the real problem I am facing, so...
For clarification: I am on desktop GL with 3.3 available, on an Nvidia card.
CodePudding user response:
So, to make a long story short: the problem was that something somewhere in the rest of the code was setting GL_UNPACK_ALIGNMENT to 8 (found using glGetIntegerv(GL_UNPACK_ALIGNMENT, &align)), and the row byte size of the tightly packed LUT data is not divisible by 8, so the texture was malformed.
The actual row byte size is 33 * 3 * sizeof(GLfloat) (width * 3 (RGB) * 4 bytes) = 396, which is divisible by 1, 2 and also 4 (so all of these values must be valid if an image loader sets them) but not by 8. In the end it turned out that the upload works for all of these settings, so I must have missed something (or forgotten to recompile, or whatever) when I originally saw problems with GL_UNPACK_ALIGNMENT set to 1; it is a valid option and now works properly, the same as values 2 and 4.
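For completeness, the upload can be guarded like this (a sketch; whether the previous value is restored afterwards depends on what the rest of the code expects):

GLint prevAlign = 0;
glGetIntegerv(GL_UNPACK_ALIGNMENT, &prevAlign);   // was 8 in my case

// One row of the LUT is width * 3 (RGB) * sizeof(GLfloat) = 33 * 3 * 4 = 396 bytes.
// 396 is not divisible by 8, so with alignment 8 GL expects 4 padding bytes after
// every row that the tightly packed vector does not contain -> malformed texture.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);            // 1 is safe for any row size

glTexImage3D(GL_TEXTURE_3D, 0, GL_RGB, size.x, size.y, size.z,
             0, GL_RGB, GL_FLOAT, data);

glPixelStorei(GL_UNPACK_ALIGNMENT, prevAlign);    // restore for the rest of the code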