
I'm working on an OpenGL application using the Qt5 GUI framework. However, I'm not an expert in OpenGL, and I'm facing a couple of issues when trying to simulate directional light. I'm using almost the same algorithm I used in a WebGL application, which works just fine.

The application is used to render multiple adjacent cells of a large gridblock (each of which is represented by 8 independent vertices), meaning that some vertices of the whole gridblock are duplicated in the VBO. The normals are calculated per face in the geometry shader, as shown below in the code.
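
For illustration, one cell might be laid out like the sketch below (this is not my exact buffer code; the corner ordering and winding are just an example):

#include <QVector3D>
#include <qopengl.h>   // for GLushort

// Hypothetical cell layout: corners 0-3 form the front (+z) face
// counter-clockwise, corners 4-7 the back (-z) face. Corners shared with a
// neighboring cell are stored again for each cell, hence the duplication.
struct Cell {
    QVector3D corners[8];   // 8 independent vertices per cell in the VBO
};

// One CCW-outward triangulation of the 6 quad faces (12 triangles):
static const GLushort cellIndices[36] = {
    0,1,2,  2,3,0,   // front  (+z)
    5,4,7,  7,6,5,   // back   (-z)
    4,0,3,  3,7,4,   // left   (-x)
    1,5,6,  6,2,1,   // right  (+x)
    3,2,6,  6,7,3,   // top    (+y)
    4,5,1,  1,0,4    // bottom (-y)
};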

QOpenGLWidget paintGL() body.

void OpenGLWidget::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);

    m_camera = camera.toMatrix();
    m_world.setToIdentity();

    m_program->bind();
    m_program->setUniformValue(m_projMatrixLoc, m_proj);
    m_program->setUniformValue(m_mvMatrixLoc, m_camera * m_world);

    // Normal matrix = inverse transpose of the model-view 3x3
    QMatrix3x3 normalMatrix = (m_camera * m_world).normalMatrix();
    m_program->setUniformValue(m_normalMatrixLoc, normalMatrix);

    // Directional light: world-space direction plus directional/ambient colors
    QVector3D lightDirection = QVector3D(1, 1, 1);
    lightDirection.normalize();
    QVector3D directionalColor = QVector3D(1, 1, 1);
    QVector3D ambientLight = QVector3D(0.2, 0.2, 0.2);
    m_program->setUniformValue(m_lightDirectionLoc, lightDirection);
    m_program->setUniformValue(m_directionalColorLoc, directionalColor);
    m_program->setUniformValue(m_ambientColorLoc, ambientLight);

    geometries->drawGeometry(m_program);
    m_program->release();
}

Vertex Shader

#version 330
layout(location = 0) in vec4 vertex; 
uniform mat4 projMatrix;
uniform mat4 mvMatrix;

void main()
{
  gl_Position = projMatrix * mvMatrix * vertex;
}

Geometry Shader

#version 330
layout ( triangles ) in;
layout ( triangle_strip, max_vertices = 3 ) out;
out vec3 transformedNormal;
uniform mat3 normalMatrix;

void main()
{
    // Two edges of the triangle; their cross product is the face normal
    vec3 A = gl_in[2].gl_Position.xyz - gl_in[0].gl_Position.xyz;
    vec3 B = gl_in[1].gl_Position.xyz - gl_in[0].gl_Position.xyz;
    vec3 N = normalMatrix * normalize(cross(A, B));

    gl_Position = gl_in[0].gl_Position;
    transformedNormal = N;
    EmitVertex();
    gl_Position = gl_in[1].gl_Position;
    transformedNormal = N;
    EmitVertex();
    gl_Position = gl_in[2].gl_Position;
    transformedNormal = N;
    EmitVertex();
    EndPrimitive();
}

Fragment Shader

#version 330
in vec3 transformedNormal;
out vec4 fColor;
uniform vec3 lightDirection;
uniform vec3 ambientColor;
uniform vec3 directionalColor;

void main()
{
    // Lambertian term: clamp the normal/light dot product at zero for back-facing light
    float directionalLightWeighting = max(dot(transformedNormal, lightDirection), 0.0);
    vec3 vLightWeighting = ambientColor + directionalColor * directionalLightWeighting;
    vec3 color = vec3(1.0, 1.0, 0.0);
    fColor = vec4(color * vLightWeighting, 1.0);
}

The 1st issue is that the lighting on the faces seems to change whenever the camera angle changes (the camera location doesn't affect it, only the angle). You can see this behavior in the snapshot below. My guess is that I'm doing something wrong when calculating the normal matrix, but I can't figure out what it is.

[screenshot]

The 2nd issue (the one causing me headaches) is that whenever the camera moves, the edges of the cells show blocky, jagged lines that flicker as the camera moves around. This effect gets really nasty when many cells are clustered together.

[screenshot]

The model used in the snapshots is just a sample slab of 10 cells, to better illustrate the faulty effects. The actual models (gridblocks) contain up to 200K cells stacked together.

EDIT: 2nd issue solution. I was using znear/zfar values of 0.01f and 50000.0f respectively; when I changed the znear to 1.0f, the effect disappeared. According to the OpenGL Wiki, this is caused by a zNear clipping plane that is too close to 0.0: as the zNear clipping plane is set increasingly closer to 0.0, the effective precision of the depth buffer decreases dramatically.
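
For reference, a minimal sketch of the corrected projection setup (a typical QOpenGLWidget resizeGL() override; the 45° field of view is a placeholder, only the clip planes are the actual fix):

void OpenGLWidget::resizeGL(int w, int h)
{
    m_proj.setToIdentity();
    // znear raised from 0.01f to 1.0f: the zfar/znear ratio drops from
    // 5,000,000 to 50,000, leaving far more usable depth-buffer precision.
    m_proj.perspective(45.0f, GLfloat(w) / (h ? GLfloat(h) : 1.0f), 1.0f, 50000.0f);
}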

EDIT2: I tried debug-drawing the normals as suggested in the comments. I quickly realized that I probably shouldn't calculate them from gl_Position (i.e., after the MVP matrix multiplication in the vertex shader); instead I should use the original vertex positions, so I modified the shaders as follows:

Vertex Shader (UPDATED)

#version 330
layout(location = 0) in vec4 vertex; 
out vec3 vert;
uniform mat4 projMatrix;
uniform mat4 mvMatrix;

void main()
{
  vert = vertex.xyz;
  gl_Position = projMatrix * mvMatrix * vertex;
}

Geometry Shader (UPDATED)

#version 330
layout ( triangles ) in;
layout ( triangle_strip, max_vertices = 3 ) out;
in vec3 vert[];
out vec3 transformedNormal;
uniform mat3 normalMatrix;

void main()
{
    // Face normal from the original (object-space) vertex positions
    vec3 A = vert[2] - vert[0];
    vec3 B = vert[1] - vert[0];
    vec3 N = normalize(normalMatrix * normalize(cross(A, B)));

    gl_Position = gl_in[0].gl_Position;
    transformedNormal = N;
    EmitVertex();
    gl_Position = gl_in[1].gl_Position;
    transformedNormal = N;
    EmitVertex();
    gl_Position = gl_in[2].gl_Position;
    transformedNormal = N;
    EmitVertex();
    EndPrimitive();
}

But even after this modification, the normals of the surface still change with the camera angle, as shown below in the screenshot. I don't know if the normal calculation is wrong, or the normal matrix calculation is wrong, or maybe both...

[screenshot]

EDIT3: 1st issue solution. Changing the normal calculation in the geometry shader from `normalize(normalMatrix * normalize(cross(A, B)))` to `normalize(cross(A, B))` seems to solve the problem. With the normalMatrix omitted from the calculation, the normals no longer change with the viewing angle.
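
I believe this works because of a space mismatch: the cross product in the updated geometry shader is taken on the untransformed vertex positions, so the resulting normal is in object space, which coincides with world space here (m_world is identity), and that is the same space lightDirection is uploaded in. Multiplying by normalMatrix moved the normal into eye space while the light stayed in world space, so the shading followed the camera. An equivalent alternative (untested sketch, assuming m_camera * m_world contains only rotation and translation) would be to keep the normalMatrix in the geometry shader and rotate the light into eye space on the CPU instead:

// Untested alternative: keep transformedNormal = normalize(normalMatrix * ...)
// in the geometry shader and upload the light direction in eye space, so both
// vectors live in the same space. mapVector() applies only the top-left 3x3
// of the matrix, which is what a direction vector needs here.
QVector3D lightDirection = QVector3D(1, 1, 1).normalized();
QVector3D eyeLight = (m_camera * m_world).mapVector(lightDirection).normalized();
m_program->setUniformValue(m_lightDirectionLoc, eyeLight);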

If I missed any important/relevant information, please notify me in a comment.

Comments:
  • Are the cell boundaries holes, or different lighting? I would try QUAD_STRIP or TRIANGLE_STRIP to see if it helps. What are the rendered scene size, your frustum's `z_near,z_far`, and the depth buffer `bit_width`? It may be a depth-buffer accuracy issue; in that case, lower the ratio `z_far/z_near`. You should add another pass and debug-draw the computed normals by emitting them from the geometry shader as lines, to actually see if they are correctly computed. – Spektre Jan 16 '16 at 08:49
  • Something like in the last image here: [Normal mapping gone horribly wrong](http://stackoverflow.com/a/28541305/2521214) – Spektre Jan 16 '16 at 09:08
  • @Spektre, changing the znear/zfar values solved the 2nd issue for me; I edited the original post, but I don't know why this happened. I will try to draw the computed normals in the GS to see if they are computed correctly. I'm sorry, but I didn't get what you meant by 'Are the cell boundaries holes, or different lighting?'. Also, I don't think I can access QUAD_STRIP in a geometry shader. – Mostafa Wasat Jan 16 '16 at 13:09
  • If there are holes, then you can see through them; if it's just wrong lighting, then it's not a hole but a different fill color instead. – Spektre Jan 16 '16 at 14:49
  • There are no holes/gaps between cells, although some cells may have slightly higher/lower edges than neighboring ones. To put it simply: a corner shared between 2 cells has the same (x,z) values; the (y), however, may be (but isn't always) different. – Mostafa Wasat Jan 16 '16 at 15:12

1 Answer

  1. Depth buffer precision

    The depth buffer is usually stored as a 16- or 24-bit buffer. It is a HW implementation of a float normalized to a specific range, so it has very few bits for mantissa/exponent in comparison to a standard float.

    If I oversimplify and assume integer values instead of floats, then a 16-bit buffer gives you 2^16 = 65536 values. If you have znear=0.1 and zfar=50000.0, you have only those 65536 values across the whole range. And because the depth values are nonlinear, you get high accuracy near the znear plane and much, much lower accuracy near the zfar plane, so the depth values jump in larger and larger steps, causing accuracy problems wherever two polygons are close together.

    I empirically got this for setting the planes in my views:

    • (zfar-znear)/desired_accuracy_step > 0.3*(2^n)

    Where n is the depth buffer bit-width and desired_accuracy_step is the resolution I want along the Z axis. Sometimes I have seen it exchanged with the znear value.
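
    To put numbers on this nonlinearity, here is a small self-contained sketch (mine, using the standard perspective window-depth mapping d = (1/znear - 1/z) / (1/znear - 1/zfar), not the empirical rule above) that estimates how much Z distance a single depth-buffer increment covers:

    #include <cmath>
    #include <cstdio>

    // Eye-space distance covered by one depth-buffer step at distance z,
    // for clip planes n..f and a depth buffer with the given bit-width.
    double depthStep(double n, double f, double z, int bits)
    {
        double levels = std::pow(2.0, bits);                // distinct depth values
        double d      = (1.0/n - 1.0/z) / (1.0/n - 1.0/f);  // window depth in [0,1]
        double zNext  = 1.0 / (1.0/n - (d + 1.0/levels) * (1.0/n - 1.0/f));
        return zNext - z;                                   // next representable depth
    }

    int main()
    {
        // The question's original clip planes vs. the fixed ones, 24-bit depth:
        std::printf("znear=0.01: step at z=100 ~ %g units\n", depthStep(0.01, 50000.0, 100.0, 24));
        std::printf("znear=1.00: step at z=100 ~ %g units\n", depthStep(1.0,   50000.0, 100.0, 24));
    }

    With a 24-bit buffer this gives roughly 0.06 units of resolvable depth at z=100 for znear=0.01 versus roughly 0.0006 for znear=1.0, i.e., raising znear bought about two orders of magnitude of depth precision, which matches the flickering disappearing.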
