Wildly inconsistent and incorrect lighting in opengl


I followed a tutorial to add simple diffuse lighting, but the lighting is very much broken:

[Screenshot: sample object with broken lighting]

On top of being inconsistent, the diffuse component disappears completely at some camera angles (camera position seems to have no effect on this).

The vertex shader:

#version 450 core

layout (location = 0) in vec4 vPosition;
layout (location = 1) in vec4 vNormal;
layout (location = 2) out vec4 fNormal;
layout (location = 3) out vec4 fPos;

uniform mat4 MVMatrix;
uniform mat4 PMatrix;

void main()
{
    gl_Position = PMatrix * (MVMatrix * vPosition);
    fNormal = normalize(inverse(transpose(MVMatrix))*vNormal);
    fPos = MVMatrix * vPosition;
}

Fragment shader:

#version 450 core

layout (location = 0) out vec4 fColor;
layout (location = 2) in vec4 fNormal;
layout (location = 3) in vec4 fPos;
uniform vec4 objColour;

void main()
{
    vec3 lightColour = vec3(0.5, 0.0, 0.8);
    vec3 lightPos = vec3(10, 20, 30);
    float ambientStrength = 0.4;
    vec3 ambient = ambientStrength * lightColour;

    vec3 diffLightDir = normalize(lightPos - vec3(fPos));
    float diff = max(dot(vec3(fNormal), diffLightDir), 0.0);
    vec3 diffuse = diff * lightColour;
    
    vec3 rgb = (ambient + diffuse) * objColour.rgb;
    fColor = vec4(rgb, objColour.a);
}

Normal calculation (due to the Pythonic nature of my setup, I did not follow a tutorial here, so this is probably the issue):

self.vertices = np.array([], dtype=np.float32)
self.normals = np.array([], dtype=np.float32)
data = Wavefront(r"C:\Users\cwinm\AppData\Local\Programs\Python\Python311\holder.obj", collect_faces=True)
all_vertices = data.vertices
for mesh in data.mesh_list:
    for face in mesh.faces:
        face_vertices = np.array([all_vertices[face[i]] for i in range(3)])
        
        normal = np.cross(face_vertices[0]-face_vertices[1], face_vertices[2] - face_vertices[1])
        normal /= np.linalg.norm(normal)

        self.vertices = np.append(self.vertices, face_vertices)
        for i in range(3): self.normals = np.append(self.normals, normal)
self.index = index_getter(len(self.vertices))
self.vertices.resize((len(self.vertices)//3, 3))
self.vertices = np.array(self.vertices * [0.5, 0.5, 0.5], dtype=np.float32)

(The local vertices and normals are then appended to a global vertex and normal buffer, which is pushed to OpenGL after initialisation)
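For comparison, the per-face normal loop can be sketched without repeated `np.append` calls, which reallocate the whole buffer on every iteration (`face_normals` is a hypothetical helper, not the asker's code; it assumes triangles with counter-clockwise winding, so edges taken from the first vertex give an outward-facing cross product):

```python
import numpy as np

def face_normals(vertices, faces):
    """Flat per-face vertices and unit normals for a triangle mesh.

    vertices: indexable of 3D points; faces: iterable of 3 vertex indices.
    Assumes counter-clockwise winding for outward normals.
    """
    verts, norms = [], []
    for face in faces:
        v0, v1, v2 = (np.asarray(vertices[i], dtype=np.float32) for i in face)
        n = np.cross(v1 - v0, v2 - v0)      # CCW winding -> outward normal
        n /= np.linalg.norm(n)
        verts.append(np.stack([v0, v1, v2]))
        norms.append(np.stack([n, n, n]))   # one copy of the normal per vertex
    # Build the arrays once at the end, keeping an explicit float32 dtype.
    return (np.concatenate(verts).astype(np.float32),
            np.concatenate(norms).astype(np.float32))
```

A CCW triangle in the XY plane, for instance, gets the normal (0, 0, 1) at all three vertices.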

VBO creation (also probably a problem)

vPositionLoc = glGetAttribLocation(self.program, "vPosition")
vNormalLoc = glGetAttribLocation(self.program, "vNormal")

self.Buffers[self.PositionBuffer] = glGenBuffers(1)
glBindBuffer(GL_ARRAY_BUFFER, self.Buffers[self.PositionBuffer])
glBufferStorage(GL_ARRAY_BUFFER, self.vertices.nbytes, self.vertices, 0)
glVertexAttribPointer(vPositionLoc, 3, GL_FLOAT, False, 0, None)
glEnableVertexAttribArray(vPositionLoc)

self.Buffers[self.NormalBuffer] = glGenBuffers(1)
glBindBuffer(GL_ARRAY_BUFFER, self.Buffers[self.NormalBuffer])
glBufferStorage(GL_ARRAY_BUFFER, self.normals.nbytes, self.normals, 0)
glVertexAttribPointer(vNormalLoc, 3, GL_FLOAT, False, 0, None)
glEnableVertexAttribArray(vNormalLoc)

Ambient lighting, the matrices, and the vertex processing are all functional; things only broke when I added normals and (attempted) diffuse lighting.

CodePudding user response:

vNormal is a vector with 3 components. You have to transform the normal vector with the normal matrix. The normal matrix is the inverse transpose of the upper left 3x3 components of the model view matrix (no translation):

vec3 normal = normalize(inverse(transpose(mat3(MVMatrix))) * vNormal.xyz);

This is the same as:

fNormal = normalize(inverse(transpose(MVMatrix)) * vec4(vNormal.xyz, 0.0));

If an attribute is not specified or only partially specified, the x, y, and z components default to 0.0, but the w component defaults to 1.0. So what you are actually computing is

fNormal = normalize(inverse(transpose(MVMatrix)) * vec4(vNormal.xyz, 1.0));

which is wrong, because it applies the translation of the model view matrix to the normal vector.

Also, fNormal must be re-normalized in the fragment shader, because interpolation does not preserve a length of 1.0.
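Why the inverse-transpose of the upper-left 3x3 (rather than the matrix itself) is needed can be checked numerically with NumPy. This is a toy example with a non-uniform scale, unrelated to the asker's scene:

```python
import numpy as np

# Non-uniform scale: tangent vectors transform with M, but a normal
# transformed with M is no longer perpendicular to the surface.
M = np.diag([2.0, 1.0, 1.0])          # stretch x by a factor of 2
t = np.array([1.0, 1.0, 0.0])         # a surface tangent
n = np.array([1.0, -1.0, 0.0])        # a normal, perpendicular to t

normal_matrix = np.linalg.inv(M.T)    # inverse-transpose: the "normal matrix"

print(np.dot(M @ t, M @ n))               # 3.0 -> naive transform breaks perpendicularity
print(np.dot(M @ t, normal_matrix @ n))   # 0.0 -> normal matrix preserves it
```

With a pure rotation the two agree, which is why the bug only shows up once scaling or shearing enters the model-view matrix.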

CodePudding user response:

The problem here is that normal has type np.float64:

normal = np.cross(face_vertices[0]-face_vertices[1], face_vertices[2] - face_vertices[1])
normal /= np.linalg.norm(normal)

As a result, self.normals ends up as np.float64, not np.float32 as you expect.

To fix it, you can cast the normal explicitly:

normal = np.cross(face_vertices[0]-face_vertices[1], face_vertices[2] - face_vertices[1])
normal /= np.linalg.norm(normal)
flt_normal = np.array(normal, dtype=np.float32)

self.vertices = np.append(self.vertices, face_vertices)
for i in range(3): self.normals = np.append(self.normals, flt_normal)
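The dtype promotion is easy to reproduce in isolation: np.append concatenates, and concatenation promotes a float32 buffer to the wider dtype of the appended data.

```python
import numpy as np

buf = np.array([], dtype=np.float32)
# np.cross on default (float64) inputs yields a float64 normal.
normal = np.cross(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
print(normal.dtype)                      # float64

# np.append promotes the float32 buffer to the wider dtype.
buf = np.append(buf, normal)
print(buf.dtype)                         # float64

# Casting first keeps the buffer float32, matching what the GL_FLOAT
# attribute pointer expects.
buf2 = np.append(np.array([], dtype=np.float32), normal.astype(np.float32))
print(buf2.dtype)                        # float32
```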

With regard to the shaders, you indeed need to use a mat3 for vec3 normal transformation. Additionally, for rotation matrices the transpose equals the inverse, so inverse(transpose(Rot)) == Rot and the double operation is redundant. You can rewrite it as:

    fNormal.xyz = mat3x3(MVMatrix) * normalize(vNormal.xyz);
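The rotation identity behind this simplification can be verified numerically. Note it only holds while the model-view matrix is a pure rotation plus translation; with scaling the inverse-transpose is still required:

```python
import numpy as np

theta = 0.7
# A pure rotation about the Z axis.
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

# For a rotation, transpose == inverse, so inverse(transpose(R)) == R.
print(np.allclose(np.linalg.inv(R.T), R))   # True
```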

And don't forget to check the normal direction in the fragment shader. It looks like it should be:

float diff = max(dot(-vec3(fNormal), diffLightDir), 0.0);