
I'm rendering a sphere with instanced drawing while rotating the model-view matrix around the Y axis.

It looks ok at the beginning:

[screenshot: the sphere rendered at the initial angle]

But at another angle, things get worse:

[screenshot: the sphere at a later rotation angle, with broken lighting]

It looks to me like a problem with the normals. Currently, I'm calculating the normal matrix from my model-view matrix and passing it to the shader, which does Phong-like lighting. Here is the vertex shader:

attribute vec4 a_position;
attribute vec3 a_normal;
attribute vec4 a_color;
attribute vec2 a_coord;
attribute mat4 a_matrix;

uniform mat4 u_mv_matrix;
uniform mat4 u_projection_matrix;
uniform mat3 u_normal_matrix;

varying vec4 v_position;
varying vec3 v_normal;
varying vec4 v_color;
varying vec2 v_coord;

void main() {
  vec4 transformedPosition = u_mv_matrix * a_matrix * a_position;

  v_position = transformedPosition;
  v_normal = u_normal_matrix * a_normal;
  v_color = a_color;
  v_coord = a_coord;

  gl_Position = u_projection_matrix * transformedPosition;
}

And the fragment shader:

uniform sampler2D u_sampler;

varying vec4 v_position;
varying vec3 v_normal;
varying vec4 v_color;
varying vec2 v_coord;

void main() {
  vec3 lightPosition = vec3(0.0); // XXX

  // set diffuse and specular colors
  vec3 cDiffuse = (v_color * texture2D(u_sampler, v_coord)).rgb;
  vec3 cSpecular = vec3(0.3);

  // lighting calculations
  vec3 N = normalize(v_normal);
  vec3 L = normalize(lightPosition - v_position.xyz);
  vec3 E = normalize(-v_position.xyz);
  vec3 H = normalize(L + E);

  // Calculate coefficients.
  float phong = max(dot(N, L), 0.0);

  const float kMaterialShininess = 20.0;
  const float kNormalization = (kMaterialShininess + 8.0) / (3.14159265 * 8.0);
  float blinn = pow(max(dot(N, H), 0.0), kMaterialShininess) * kNormalization;

  // diffuse coefficient
  vec3 diffuse = phong * cDiffuse;

  // specular coefficient
  vec3 specular = blinn * cSpecular;

  gl_FragColor = vec4(diffuse + specular, 1);
}

Final note: I'm working on desktop OpenGL 2.1 as well as WebGL in the browser.
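
For context, since neither OpenGL 2.1 nor WebGL 1 has core instancing, the mat4 a_matrix attribute is fed per instance through the instanced-arrays extension; a mat4 attribute occupies four consecutive vec4 slots, each with an instance divisor of 1. A minimal desktop-side sketch (placeholder names, not my framework's actual code):

// Sketch: per-instance mat4 attribute setup on GL 2.1 with ARB_instanced_arrays
// (WebGL 1 would use ANGLE_instanced_arrays analogously).
// "program" and "instanceVbo" are placeholders.
GLint base = glGetAttribLocation(program, "a_matrix");
glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);

for (int i = 0; i < 4; ++i) {
  glEnableVertexAttribArray(base + i);
  glVertexAttribPointer(base + i, 4, GL_FLOAT, GL_FALSE,
                        sizeof(GLfloat) * 16,
                        (const GLvoid*)(sizeof(GLfloat) * 4 * i));
  glVertexAttribDivisorARB(base + i, 1); // advance once per instance, not per vertex
}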

Edit: Per request, I'm adding some information.

The mesh is built as follows, by passing an identity matrix:

void Sphere::append(IndexedVertexBatch<XYZ.N.UV> &batch, const Matrix &matrix) const {
  float sectorStep = TWO_PI / sectorCount;
  float stackStep = PI / stackCount;

  for(int i = 0; i <= stackCount; ++i) {
    float stackAngle = HALF_PI - i * stackStep;
    float xy = radius * cosf(stackAngle);
    float z = radius * sinf(stackAngle);

    for(int j = 0; j <= sectorCount; ++j) {
      float sectorAngle = j * sectorStep;

      float x = xy * cosf(sectorAngle);
      float y = xy * sinf(sectorAngle);

      float nx = x / radius;
      float ny = y / radius;
      float nz = z / radius;

      float s = (float)j / sectorCount;
      float t = (float)i / stackCount;

      batch.addVertex(matrix.transformPoint(x, y, z), matrix.transformNormal(nx, ny, nz), glm::vec2(s, t));
    }
  }

  for(int i = 0; i < stackCount; ++i) {
    int k1 = i * (sectorCount + 1); // index of the first vertex in the current stack
    int k2 = k1 + sectorCount + 1;  // index of the first vertex in the next stack

    for(int j = 0; j < sectorCount; ++j, ++k1, ++k2) {
      if (i != 0) {
        if (frontFace == CCW) {
          batch.addIndices(k1, k1 + 1, k2);
        } else {
          batch.addIndices(k1, k2, k1 + 1);
        }
      }

      if (i != (stackCount - 1)) {
        if (frontFace == CCW) {
          batch.addIndices(k1 + 1, k2 + 1, k2);
        } else {
          batch.addIndices(k1 + 1, k2, k2 + 1);
        }
      }
    }
  }
}

Regarding the transformation matrices, it works as follows:

camera.getMVMatrix()
  .setIdentity()
  .translate(0, -150, -600)
  .rotateY(clock()->getTime() * 0.5f);

State()
  .setShader(shader)
  .setShaderMatrix<MV>(camera.getMVMatrix())
  .setShaderMatrix<PROJECTION>(camera.getProjectionMatrix())
  .setShaderMatrix<NORMAL>(camera.getNormalMatrix())
  .apply();
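
For completeness, getNormalMatrix() derives the normal matrix from the camera's model-view matrix alone; in GLM terms, the usual construction would look like the sketch below (the helper name is mine, and the framework's exact math may differ). Note that the per-instance a_matrix takes no part in it:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_inverse.hpp> // glm::inverseTranspose

// Sketch: the usual normal matrix is the inverse-transpose of the
// upper-left 3x3 of the model-view matrix, so that normals stay
// perpendicular to surfaces even when the matrix contains scaling.
glm::mat3 computeNormalMatrix(const glm::mat4 &modelView) {
  return glm::inverseTranspose(glm::mat3(modelView));
}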

Finally, the light position is defined as vec3(0.0) in the fragment shader, i.e. at the camera origin in view space.

Note: As you can see, I'm using my own framework, which provides, among other things, high-level methods for building meshes and handling transformations. It's all straightforward stuff, proven to work as intended, but let me know if you need pointers to the source code.

Update: The lighting part of the shader I used ended up being wrong, so I switched to another method.

But in essence, the solution I proposed in my answer is still valid (or at least it solves the "normal problem" when instancing is used and non-uniform scaling is avoided).

Here is a gist with the source code. There is also an online WebGL demo.

  • A useful trick for debugging normals is to modify your fragment shader to set the fragment color from the normal vector. – janneb Nov 24 '21 at 12:13
  • @YakovGalka Ok, I have edited my post, with more details. – Ariel Malka Nov 24 '21 at 17:12
  • Too bad that this question has been closed. Anyway, I think I found a solution which takes into account non-uniform scaling: passing both a 4x4 model matrix and a 3x3 normal matrix to the shader for each instance. – Ariel Malka Dec 02 '21 at 15:58

1 Answer


The solution was relatively simple: there is no point in passing a normal-matrix to the shader.

Instead, the normal needs to be computed in the vertex shader:

v_normal = vec3(u_mv_matrix * a_matrix * vec4(a_normal, 0.0));
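
As pointed out in the comments, this shortcut only holds as long as the instance matrices contain no non-uniform scaling. If they do, the approach mentioned in the comments above (passing both a 4x4 model matrix and a 3x3 normal matrix per instance) can be used. The per-instance normal matrix would be precomputed on the CPU, for example with GLM (a sketch, with helper names that are not part of the original code):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_inverse.hpp> // glm::inverseTranspose

// Sketch: fold the instance matrix into the model-view matrix before taking
// the inverse-transpose, so non-uniform scaling in the instance matrix is
// handled correctly.
glm::mat3 makeInstanceNormalMatrix(const glm::mat4 &modelView,
                                   const glm::mat4 &instanceMatrix) {
  return glm::inverseTranspose(glm::mat3(modelView * instanceMatrix));
}

The resulting mat3 would then be uploaded as an extra per-instance attribute (three vec3 slots) alongside the existing mat4 a_matrix, and used in the vertex shader in place of u_normal_matrix.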

    "*there is no point in passing a normal-matrix to the shader.*" Yes, there is. If that matrix contains a non-uniform scale, then your computations will be wrong. – Nicol Bolas Nov 24 '21 at 16:28
  • 1
    `v_nomal = inverse(transpose(mat3(u_mv_matrix * a_matrixv) * a_normal)` . See [Why is the transposed inverse of the model view matrix used to transform the normal vectors?](https://computergraphics.stackexchange.com/questions/1502/why-is-the-transposed-inverse-of-the-model-view-matrix-used-to-transform-the-nor) and [Why transforming normals with the transpose of the inverse of the modelview matrix?](https://stackoverflow.com/questions/13654401/why-transforming-normals-with-the-transpose-of-the-inverse-of-the-modelview-matr) – Rabbid76 Nov 24 '21 at 16:30
  • @Rabbid76 Thanks, but "GLSL 110 does not allow sub- or super-matrix constructors" – Ariel Malka Nov 24 '21 at 16:35
  • @ArielMalka So you have to tweak it. You can also do it on the CPU and pass a normal matrix to the shader. – Rabbid76 Nov 24 '21 at 16:36
  • @ArielMalka I don't understand why this is the solution. If `u_normal_matrix` is wrong, there is a bug in your application that you need to fix. – Rabbid76 Nov 24 '21 at 16:40