I have two functions. The first renders my objects in the world; the second was supposed to render my objects directly in the view frame of the camera, like a UI — i.e. if the camera moves, the objects appear stationary because they move with the camera. However, nothing appears when I use the second function. Is my logic for the view projection matrix incorrect?
This is the function that sends the camera's view-projection matrix to the vertex shader to render objects in the world; it works:
void Renderer2D::BeginScene(const OrthographicCamera& camera)
{
    s_Data.shader = LightTextureShader;
    s_Data.shader->Bind();
    s_Data.shader->SetMat4("u_ViewProjection", camera.GetViewProjectionMatrix());
    s_Data.CameraUniformBuffer->SetData(&s_Data.CameraBuffer, sizeof(Renderer2DData::CameraData));

    s_Data.QuadVertexBuffer = LightQuadVertexBuffer;
    s_Data.QuadVertexArray = LightQuadVertexArray;
    s_Data.QuadIndexCount = LightQuadIndexCount;
    s_Data.QuadVertexBufferBase = LightQuadVertexBufferBase;

    StartBatch();
}
This is the function that was supposed to render my objects directly in front of the camera, like a UI, but nothing renders:
void Renderer2D::BeginUIScene(const OrthographicCamera& camera)
{
    s_Data.shader = TextureShader;
    s_Data.shader->Bind();
    Mat4 projection = getOrtho(0.0f, camera.GetWidth(), 0.0f, camera.GetHeight(), -1.0f, 1.0f);
    s_Data.shader->SetMat4("u_ViewProjection", projection);
    s_Data.CameraUniformBuffer->SetData(&s_Data.CameraBuffer, sizeof(Renderer2DData::CameraData));

    s_Data.QuadVertexBuffer = TexQuadVertexBuffer;
    s_Data.QuadVertexArray = TexQuadVertexArray;
    s_Data.QuadIndexCount = TexQuadIndexCount;
    s_Data.QuadVertexBufferBase = TexQuadVertexBufferBase;

    StartBatch();
}
Edit: The declaration for getOrtho():
Mat4 getOrtho(float left, float right, float bottom, float top, float zNear, float zFar);
CodePudding user response:
There are two approaches that I can think of. One is to pass coordinates that are already in normalized device coordinates straight to a vertex shader that applies no model, view, or projection matrix. An example vertex shader would look like this:
#version ...
layout (location = 0) in vec3 aPos; // Vertex coords already in normalized device coordinates
void main()
{
    gl_Position = vec4(aPos, 1.0f);
}
This will render a 2D image to the screen at a fixed position which does not move as the camera moves.
The other way is for a 3D object that is "attached" to the camera (so it sits at a fixed position on the screen): apply only the model and projection matrices, not the view matrix. An example vertex shader that does this:
#version ...
layout (location = 0) in vec3 aPos;
uniform mat4 model;
uniform mat4 projection;
void main()
{
    gl_Position = projection * model * vec4(aPos, 1.0f);
}
By not using the view matrix, the model will always appear on the screen at whatever position the model matrix places it. The origin here is the camera, so a model matrix that translates by vec3(0.1f, 0.0f, -0.2f) moves the model 0.1f to the right of the camera's centre and 0.2f away from the camera, into the screen. Essentially, the model matrix here defines the model's transformation relative to the camera's position. Note that if you want to do lighting calculations on the model you will need the second method rather than the first, and all of the lighting calculations for this model must be done in view/camera space.
Edit:
To convert screen-space coordinates from the range [0.0, screen resolution] to [-1.0, 1.0], the range that OpenGL uses:
float xResolution = 800.0f;
float yResolution = 600.0f;
float x = 200.0f; // screen-space input
float y = 400.0f;
float convertedX = ((x / xResolution) * 2.0f) - 1.0f; // -0.5
float convertedY = ((y / yResolution) * 2.0f) - 1.0f; // ~0.333