
I'm trying to implement normal mapping, using a simple cube that I created. I followed this tutorial https://learnopengl.com/Advanced-Lighting/Normal-Mapping but I can't quite see how normal mapping should be done when drawing 3D objects, since the tutorial uses a 2D object.

In particular, my cube seems almost correctly lit, but there's something that isn't working the way it should. To help me debug, I'm using a geometry shader that draws the normals as green vectors and the tangents as red vectors. Here I post three screenshots of my work.

Directly lighted

directly lighted

Side lighted

side lighted

Here I tried calculating my normals and tangents in a different way (quite wrong):

another trial

In the first image I calculate my cube's normals and tangents one face at a time. This seems to work for the face I'm looking at, but if I rotate my cube I think the lighting on the adjacent face is wrong. As you can see in the second image, though, it's not totally absent.

In the third image, I tried summing the normals and tangents per vertex across the faces that share it, as I think it should be done, but the result seems quite wrong, since there is too little lighting.

In the end, my question is how I should calculate normals and tangents: should I do per-face calculations, sum the vectors per vertex across all the faces that share it, or something else?
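For reference, the per-triangle tangent formula from the tutorial I followed, written here as a GLSL-style function (a sketch; p0..p2 and uv0..uv2 are placeholder names for the triangle's corner positions and texture coordinates):

    // Solves edge1 = dUV1.x*T + dUV1.y*B and edge2 = dUV2.x*T + dUV2.y*B for T.
    vec3 triangleTangent(vec3 p0, vec3 p1, vec3 p2, vec2 uv0, vec2 uv1, vec2 uv2)
    {
        vec3 edge1 = p1 - p0;
        vec3 edge2 = p2 - p0;
        vec2 dUV1  = uv1 - uv0;
        vec2 dUV2  = uv2 - uv0;
        float f = 1.0 / (dUV1.x * dUV2.y - dUV2.x * dUV1.y);
        return normalize(f * (dUV2.y * edge1 - dUV1.y * edge2));
    }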

EDIT --

I'm passing the normal and tangent to the vertex shader and setting up my TBN matrix there. But as you can see in the first image, when drawing my cube face by face, the faces adjacent to the one I'm looking at directly (which is lit correctly) are not lit correctly, and I don't know why. I thought I wasn't calculating my per-face normal and tangent correctly, and that calculating a normal and tangent that accounts for the object as a whole could be the right way. If the normals and tangents shown in the second image (green normals, red tangents) are the right ones to set up the TBN matrix, why does the right face seem poorly lit?

EDIT 2 --

Vertex shader:


    #version 330 core
    // in/out/uniform declarations inferred from usage (layout locations assumed)
    layout (location = 0) in vec3 position;
    layout (location = 1) in vec3 normal;
    layout (location = 2) in vec2 textcoord;
    layout (location = 3) in vec3 tangent;
    out vec2 texture_coordinates;
    out vec3 fragment_position;
    out vec3 view_position;
    out vec3 light_position;
    uniform mat4 model, view, projection;
    uniform vec3 viewPos;  // camera position (world space)
    uniform vec3 lightPos; // light position (world space)

    void main() {
        texture_coordinates = textcoord;
        fragment_position = vec3(model * vec4(position, 1.0)); // world-space position

        // Build an orthonormal TBN basis in world space.
        mat3 normalMatrix = transpose(inverse(mat3(model)));
        vec3 T = normalize(normalMatrix * tangent);
        vec3 N = normalize(normalMatrix * normal);
        T = normalize(T - dot(T, N) * N); // Gram-Schmidt re-orthogonalization
        vec3 B = cross(N, T);

        // Transposing the orthonormal TBN inverts it: world space -> tangent space.
        mat3 TBN = transpose(mat3(T, B, N));
        view_position = TBN * viewPos;
        light_position = TBN * lightPos;
        fragment_position = TBN * fragment_position;

        gl_Position = projection * view * model * vec4(position, 1.0);
    }

In the VS I set up my TBN matrix and transform the light, fragment and view vectors into tangent space; this way I don't have to do any further transformation in the fragment shader.

Fragment shader:

    #version 330 core
    // in/out, sampler and light declarations inferred from usage
    // (the Light struct layout is an assumption).
    in vec2 texture_coordinates;
    in vec3 fragment_position; // tangent space
    in vec3 view_position;     // tangent space
    in vec3 light_position;    // tangent space
    out vec4 color;

    struct Light { vec3 color; float intensity; float power; };
    uniform Light AmbientLight;
    uniform Light DiffusiveLight;
    uniform Light SpecularLight;
    uniform sampler2D TextSampler;        // diffuse map
    uniform sampler2D TextSamplerNormals; // normal map

    void main() {
        vec3 Normal = texture(TextSamplerNormals, texture_coordinates).rgb; // extract normal
        Normal = normalize(Normal * 2.0 - 1.0); // remap from [0, 1] to [-1, 1]
        vec4 material_color = texture(TextSampler, texture_coordinates.st); // diffuse map

        vec3 I_amb = AmbientLight.color * AmbientLight.intensity;
        vec3 lightDir = normalize(light_position - fragment_position);

        vec3 I_dif = vec3(0.0);
        float DiffusiveFactor = max(dot(lightDir, Normal), 0.0);
        vec3 I_spe = vec3(0.0);
        float SpecularFactor = 0.0;

        if (DiffusiveFactor > 0.0) {
            I_dif = DiffusiveLight.color * DiffusiveLight.intensity * DiffusiveFactor;

            vec3 vertex_to_eye = normalize(view_position - fragment_position);
            vec3 light_reflect = normalize(reflect(-lightDir, Normal));

            SpecularFactor = pow(max(dot(vertex_to_eye, light_reflect), 0.0), SpecularLight.power);
            if (SpecularFactor > 0.0) {
                I_spe = DiffusiveLight.color * SpecularLight.intensity * SpecularFactor;
            }
        }

        color = vec4(material_color.rgb * (I_amb + I_dif + I_spe), material_color.a);
    }

normals rendered as colors

Fra
  • Ok so this is less a theoretical question and more a problem specific to your implementation. Can you please provide your shaders and a screenshot of the normal map rendered as colors? i.e. whatever normal you end up calculating in your shader, make that the color of the images you are sharing. – Makogan Nov 04 '20 at 16:47
  • There's something I'm still missing. But yes, give me time to provide what you asked. – Fra Nov 04 '20 at 16:52
  • I suspect your formulas come straight from the learnopengl tutorial. I have never done it that way; I tend to calculate things in a different, simpler way. Rather than converting your light parameters into tangent space, pass the world normal, tangent and bitangent of the model directly. Then in the fragment shader, your texture normal n converted to world space is `n.x * B + n.y * T + n.z * N`, i.e. the weighted sum of the model's orthonormal basis where the weights are the components of the texture normal. Try that and print the resulting normals as colors please. – Makogan Nov 04 '20 at 17:21
  • I added the screenshot. Looking pretty weird.. – Fra Nov 04 '20 at 17:32

1 Answer


Handling discontinuity vs continuity

You are thinking about this the wrong way.

Depending on the use case, your normal map may be continuous or discontinuous. For example, on your cube, imagine each face had a different surface type; then the normals would be different depending on which face you are currently on.

Which normal you use is determined by the texture itself, not by any blending in the fragment shader.

The actual algorithm is (a shader sketch follows this list):

  • Load the rgb values of the normal
  • Convert them to the [-1, 1] range
  • Rotate by the model matrix
  • Use the new value in the shading calculations
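A minimal sketch of those steps in GLSL, reusing the names from the question's shaders (the `model` uniform and a `lightDir` vector are assumed to be available):

    vec3 n = texture(TextSamplerNormals, texture_coordinates).rgb; // load rgb values
    n = normalize(n * 2.0 - 1.0);                                  // convert to the [-1, 1] range
    n = normalize(mat3(model) * n);                                // rotate by the model matrix
    float diffuse = max(dot(lightDir, n), 0.0);                    // use in shading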

If you want continuous normals, then you need to make sure that the charts of texture space that you use agree at their boundaries, i.e. at the limits of the texture coordinates.

Mathematically, that means: let U and V be regions of R^2 (two charts of your texture), let S be the map from texture coordinates to points on the shape, and let f be the map from texture coordinates to the normal field N of your shape. Then for any sequences (x_n) \subset U and (y_n) \subset V:

If lim S(x_n) = lim S(y_n), then lim f(x_n) = lim f(y_n).

In plain English: if the coordinates in your charts map to positions that are close on the shape, then the normals they map to should also be close in the normal space.

TL;DR: do not blend in the fragment shader. This is something that should be done by the normal map itself when it's baked, not by you when rendering.

Handling the tangent space

You have two options. Option 1: you pass the tangent T and the normal N to the shader. In that case the binormal is B = T × N, and the basis {T, N, B} gives you the true space where normals need to be expressed.

Assume that in tangent space x is side, y is forward and z is up. Your transformed normal then becomes xB + yT + zN.
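As a sketch in fragment-shader code (`normalMap` and `uv` are placeholder names; T and N are the interpolated world-space vectors passed from the vertex shader):

    vec3 n = texture(normalMap, uv).rgb * 2.0 - 1.0;           // normal-map normal (x, y, z)
    vec3 B = cross(T, N);                                      // binormal completes the basis
    vec3 worldNormal = normalize(n.x * B + n.y * T + n.z * N); // weighted sum of the basis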

Option 2: if you do not pass the tangent, you must first create an arbitrary vector that is orthogonal to the normal, then use this as the tangent.
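For example, a sketch of constructing such a tangent (the helper axis is arbitrary; it only needs to not be parallel to N):

    vec3 helper = abs(N.y) < 0.99 ? vec3(0.0, 1.0, 0.0) : vec3(1.0, 0.0, 0.0);
    vec3 T = normalize(cross(helper, N)); // orthogonal to N by construction
    vec3 B = cross(T, N);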

(Note: N is the model normal, while (x, y, z) is the normal-map normal.)

Makogan
  • I'm sorry, but I still don't understand what I should do. I'm not trying to create the normal map, but I do have to calculate the tangent space. Are you saying that I should calculate simple per-face normals and tangents, and the normal map will do the rest of the work, since I just need the tangent space? Because normal maps are usually in tangent space. – Fra Nov 04 '20 at 16:20
  • My bad, I misunderstood your problem. I have added an edit; let me know if that is helpful. – Makogan Nov 04 '20 at 16:28
  • I realized that my question was expressed poorly; that's my fault. I'm going to edit my question as well. Thank you for putting in your time. – Fra Nov 04 '20 at 16:36