
I need to modify vertex coordinates according to a transformation matrix, but I have per-vertex lighting, so I am not sure that my approach is correct for the normals:

#version 120
uniform mat4 transformationMatrix;
void main() {
    vec3 normal, lightDir;
    vec4 diffuse, ambient, globalAmbient;
    float NdotL;
    // Transformation part
    normal = gl_NormalMatrix * gl_Normal * transpose(mat3(transformationMatrix));
    gl_Position = gl_ModelViewProjectionMatrix * transformationMatrix * gl_Vertex;
    // Calculate color
    lightDir = normalize(vec3(gl_LightSource[0].position));
    NdotL = max(abs(dot(normal, lightDir)), 0.0);
    diffuse = gl_Color * gl_LightSource[0].diffuse;
    ambient = gl_Color * gl_LightSource[0].ambient;
    globalAmbient = gl_LightModel.ambient * gl_Color;
    gl_FrontColor =  NdotL * diffuse + globalAmbient + ambient;
}

I perform all transformations in the two lines under // Transformation part. Could you comment on whether this is the correct way or not?

zlon

2 Answers


If you want to create a normal matrix, then you have to use the inverse transpose of the upper left 3×3 of the 4×4 matrix.

See Why transforming normals with the transpose of the inverse of the modelview matrix?
and Why is the transposed inverse of the model view matrix used to transform the normal vectors?
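
In short (using n for a normal, t for a tangent and M for the upper left 3×3 of the transformation; the linked questions give the full derivation): the transformed normal must still be perpendicular to the transformed tangent. A tangent is transformed to M * t, and the vector that stays perpendicular to it is transpose(inverse(M)) * n, because

dot(transpose(inverse(M)) * n, M * t) = dot(n, inverse(M) * M * t) = dot(n, t) = 0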

This would mean that you have to write your code like this:

normal = gl_NormalMatrix * transpose(inverse(mat3(transformationMatrix))) * gl_Normal;


But if a vector is multiplied by a matrix from the left (as a row vector), the result corresponds to multiplying the transposed matrix by a column vector from the right.

See GLSL Programming/Vector and Matrix Operations

This means you can write the code like this and avoid the transpose operation:

normal = gl_NormalMatrix * (gl_Normal * inverse(mat3(transformationMatrix)));
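
As a quick sanity check of that identity, here is a small sketch with made-up values (v and M are not part of the original shader):

vec3 v = vec3(1.0, 2.0, 3.0);
mat3 M = mat3(0.0, 1.0, 2.0,   // first column
              3.0, 4.0, 5.0,   // second column
              6.0, 7.0, 8.0);  // third column
vec3 a = transpose(M) * v;     // (8.0, 26.0, 44.0)
vec3 b = v * M;                // also (8.0, 26.0, 44.0), without an explicit transpose

In GLSL, M[j] is the j-th column of M, so (v * M)[j] is dot(v, M[j]), which is exactly the j-th component of transpose(M) * v.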


If the 4×4 matrix transformationMatrix is an orthogonal matrix, which means its X, Y, and Z axes are orthonormal (unit vectors that are perpendicular to each other), then it is sufficient to use the upper left 3×3 directly. In this case the inverse matrix is equal to the transposed matrix.

See In which cases is the inverse matrix equal to the transpose?

This will simplify your code:

normal = gl_NormalMatrix * mat3(transformationMatrix) * gl_Normal;

Of course this can also be expressed like this:

normal = gl_NormalMatrix * (gl_Normal * transpose(mat3(transformationMatrix)));

Note that this is not the same as what you do in your code, because the * operations are processed from left to right (see GLSL - The OpenGL Shading Language 4.6, 5.1 Operators, page 97) and the result of

vec3 v;
mat3 m1, m2;

(m1 * v) * m2

is not equal to

m1 * (v * m2);
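
Applied to the line from the question, the expression

normal = gl_NormalMatrix * gl_Normal * transpose(mat3(transformationMatrix));

is evaluated as (gl_NormalMatrix * gl_Normal) * transpose(mat3(transformationMatrix)), while the intended grouping needs the explicit parentheses shown above:

normal = gl_NormalMatrix * (gl_Normal * transpose(mat3(transformationMatrix)));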
Rabbid76
  • Thank you. My transformation matrix will not be orthogonal, so I should use your second solution. I read the provided links and have one last question: if my normals are normalized, it seems some transformations may destroy this normalization, so do I need normal = normalize(gl_NormalMatrix * (gl_Normal * inverse(mat3(transformationMatrix))));? – zlon Feb 10 '18 at 09:00
  • @zlon If the matrix is not orthogonal, then the normalization of the normal vector is lost by the matrix transformation and you have to normalize it after the transformation. – Rabbid76 Feb 10 '18 at 10:32
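
Putting the answer and the comment thread together, a minimal sketch of the transformation part could look like the following. Note that this assumes a newer GLSL version than the #version 120 from the question, because the built-in inverse() only exists from GLSL 1.40 on; the lighting code from the question would follow unchanged:

#version 150 compatibility   // inverse() needs GLSL 1.40+; the compatibility profile keeps the gl_* built-ins
uniform mat4 transformationMatrix;
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * transformationMatrix * gl_Vertex;
    // multiplying from the left by the inverse is the same as multiplying by the
    // inverse transpose; normalize afterwards, because a non-orthogonal matrix
    // does not preserve the length of the normal vector
    vec3 normal = normalize(gl_NormalMatrix * (gl_Normal * inverse(mat3(transformationMatrix))));
    // ... per-vertex lighting as in the question, using this normal ...
}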

The normal transformation does not look correct.

Since v * transpose(M) is exactly the same as M * v, you didn't do any special case handling for non-uniform scaling at all.

What you are looking for is most probably to use the inverse-transpose matrix:

normal = gl_NormalMatrix * transpose(inverse(mat3(transformationMatrix))) * gl_Normal;

For more details about the math behind this, have a look at this.

BDL