
I finally got my GBuffer working (well, not really), but now I have some strange issues with it and I can't figure out why.

When I draw the normal texture to the screen, the normals always seem to point toward me (the blue channel always faces the camera). I don't know how to explain it correctly, so here are some screenshots:

(I think this is why my lighting pass looks so strange.)

Here is how I create the GBuffer:

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);

// generate texture object
glGenTextures(GBUFFER_NUM_TEXTURES, textures);
for (unsigned int i = 0; i < 4; i++)
{
    glBindTexture(GL_TEXTURE_2D, textures[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, 0);
    glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, textures[i], 0);

    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
}

// generate depth texture object
glGenTextures(1, &depthStencilTexture);
glBindTexture(GL_TEXTURE_2D, depthStencilTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthStencilTexture, 0);

// generate output texture object
glGenTextures(1, &outputTexture);
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, width, height, 0, GL_RGB, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT4, GL_TEXTURE_2D, outputTexture, 0);

GLenum Status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
assert(Status == GL_FRAMEBUFFER_COMPLETE);

glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);

Here is the geometry pass:

glEnable(GL_DEPTH_TEST);
glDepthMask(true);
glCullFace(GL_BACK);

glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);

GLenum DrawBuffers[] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2, GL_COLOR_ATTACHMENT3};
glDrawBuffers(GBUFFER_NUM_TEXTURES, DrawBuffers);

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glLoadIdentity();
camera.Render();

geoProgram->Use();

GLfloat mat[16];

glPushMatrix();
glTranslatef(0,0,-20);
glRotatef(rot, 1.0, 0, 0);
glGetFloatv(GL_MODELVIEW_MATRIX,mat);
glUniformMatrix4fv(worldMatrixLocation, 1, GL_FALSE, mat);
glutSolidCube(5);
glPopMatrix();

glPushMatrix();
glTranslatef(0,0,0);
glGetFloatv(GL_MODELVIEW_MATRIX,mat);
glUniformMatrix4fv(worldMatrixLocation, 1, GL_FALSE, mat);
gluSphere(sphere, 3.0, 20, 20);
glPopMatrix();

glDepthMask(false);
glDisable(GL_DEPTH_TEST);

And here is the geometry pass shader:

[Vertex]
varying vec3 normal;
varying vec4 position;
uniform mat4 worldMatrix;

void main( void )
{       
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    position = worldMatrix * gl_Vertex;
    normal = (worldMatrix * vec4(gl_Normal, 0.0)).xyz;
    gl_TexCoord[0]=gl_MultiTexCoord0;
}


[Fragment]
varying vec3 normal;
varying vec4 position;

void main( void )
{
    gl_FragData[0] = vec4(0.5, 0.5, 0.5, 1);//gl_Color;
    gl_FragData[1] = position;
    gl_FragData[2] = vec4(normalize(normal),0);
    gl_FragData[3] = vec4(gl_TexCoord[0].st, 0, 0);
}

Sorry for the long question / code fragment, but I don't know what to do next. I compared everything with other GBuffer implementations but couldn't find the error.

Edit:

OK, it seems you are right: the problem is not the GBuffer but the lighting pass. I have played around with it a lot but can't get it working :(

Here is the lighting pass:

[vs]
void main( void )
{
   gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

[fs]
uniform sampler2D colorMap;
uniform sampler2D positionMap;
uniform sampler2D normalMap;
uniform sampler2D texcoordMap;

uniform vec2 screenSize;
uniform vec3 pointLightPostion;
uniform vec3 pointLightColor;
uniform float pointLightRadius;


void main( void )
{
    float lightDiffuse = 0.5;
    float lightSpecular = 0.7;
    vec3 lightAttenuation = vec3(1.4, 0.045, 0.00075);

    vec2 TexCoord = gl_FragCoord.xy / screenSize.xy;
    vec3 WorldPos = texture2D(positionMap, TexCoord).xyz;
    vec3 Color = texture(colorMap, TexCoord).xyz;
    vec3 normal = texture(normalMap, TexCoord).xyz;
    normal = normalize(normal);

    vec3 lightVector = WorldPos - pointLightPostion;
    float dist = length(lightVector);
    lightVector = normalize(lightVector);

    float nDotL = max(dot(normal, lightVector), 0.0);
    vec3 halfVector = normalize(lightVector - WorldPos);
    float nDotHV = max(dot(normal, halfVector), 0.0);

    vec3 lightColor = pointLightColor;
    vec3 diffuse =  lightDiffuse * nDotL;
    vec3 specular = lightSpecular * pow(nDotHV, 1.0) * nDotL;
    lightColor += diffuse + specular;

    float attenuation = clamp(1.0 / (lightAttenuation.x + lightAttenuation.y * dist + lightAttenuation.z * dist * dist), 0.0, 1.0);


    gl_FragColor = vec4(vec3(Color * lightColor * attenuation), 1.0);
}

If necessary I could also post the stencil pass and/or all the other processing.

  • Top right is texture positions and the bottom left is albedo. Currently I have set texture positions only for the sphere, and the albedo is just a solid color. – C0dR Sep 13 '13 at 06:45
  • Okay, everything looks normal to me then. What's the actual problem? By the way, if you want to reduce the number of buffers you have to render to (saving memory and increasing frame rate), you should write the albedo coefficient to the alpha channel of the normal buffer. And don't bother storing the normal/albedo in FP16, RGBA8 should be adequate for everything except for position. – Andon M. Coleman Sep 13 '13 at 06:59
  • The normals are always pointing at you because they're in view space, if that was your question? You can always get them in world space by not multiplying them by the "worldMatrix". But then you'll need to bias and scale them, because some of the normals will point in negative directions. – Andon M. Coleman Sep 13 '13 at 07:01
  • Even with the pictures I still haven't got what problem you actually have (leaving aside what the right images actually encode). The normal image looks pretty reasonable to me. – Christian Rau Sep 13 '13 at 09:18
  • Probably "the problem why my lighting pass is looking pretty strange" is in the lighting pass, because your geometry pass seems fine to everybody. Be careful with the spaces you are using: you use the same matrix for both position and normal, so they are both in view space (which is good). So in your lighting pass everything must be in view space (the positions of the lights, for example). – darius Sep 13 '13 at 09:28
  • OK, so as far as I understand, I should either put the lights into view space or the normals into world space? Well, I tried it without multiplying by worldMatrix, but that didn't help. – C0dR Sep 13 '13 at 13:22
  • Didn't help with what? Please show us your output and the shader you're using to compute lighting. Otherwise this is just a wild goose chase :) – Andon M. Coleman Sep 14 '13 at 03:31
  • Edited the post. Hope you can help me x.x – C0dR Sep 14 '13 at 06:13
  • @C0dR `WorldPos`, `normal`, and `PointLightPosition` must all be in the same space, which from your geometry pass is view space. You should post a screenshot of your problem, not just the shader code. We don't know what you mean when you say that your lighting pass looks "strange". Also, try `lightColor *= diffuse + specular`. – darius Sep 14 '13 at 08:28
  • @darius OK, so I modified the vertex shader like this: `varying vec4 viewSpace; ... viewSpace = gl_ModelViewMatrixInverse * gl_Vertex; ...` and multiplied `WorldPos`, `normal` and `PointLightPosition` by `viewSpace`. But now I just get a black screen. – C0dR Sep 14 '13 at 16:50
  • @C0dR Inverse? Why inverse? – darius Sep 14 '13 at 16:56
  • @darius Had that in mind, don't know why. So just `gl_ModelViewMatrix * gl_Vertex`? That also gives just a black screen. – C0dR Sep 14 '13 at 17:12
  • @C0dR I don't know; it's a bit tricky to understand from this little. From your old screenshots you already had at least the normals in view space. But again, if you don't post a screenshot of your problem, it's really a shot in the dark from here. – darius Sep 14 '13 at 18:19
  • OK, I uploaded my whole solution: http://www.xup.in/dl,18404592/deferredRendering.rar/ Hope you can help me. – C0dR Sep 14 '13 at 18:57
  • Please also note that in OpenGL ES the first parameter to glFramebufferTexture2D must be GL_FRAMEBUFFER - see https://www.khronos.org/opengles/sdk/docs/man/xhtml/glFramebufferTexture2D.xml – Martin Gerhardy Jun 13 '16 at 08:43

2 Answers


I know this got a bit old, but I want you to know that after much research I got everything working! I just wanted to post this here in case anyone has a similar or the same problem.

Actually, the position map was totally wrong (well, my lighting model too, but that's another story :) ). As you already said, I was working with the fixed-function pipeline (this was really stupid). The actual mistake was that I was getting the "model matrix" with

glGetFloatv(GL_MODELVIEW_MATRIX,mat);
glUniformMatrix4fv(worldMatrixLocation, 1, GL_FALSE, mat);

which in the fixed-function pipeline is actually the view matrix multiplied by the model matrix. So I changed everything to custom model-matrix calculations and voilà, the position map was correct. (You could also keep the fixed-function pipeline and set up a custom model matrix with matching values and pass that into the shader as the model matrix, but that's just dirty.) Thanks to you I got on the right track and now understand matrix calculations, shaders, etc. MUCH MUCH better!

By the way, the GBuffer now looks like this (the texture-position buffer is actually needless :) ):

  • Well done ;). The texture-position buffer is indeed needless. The next step in learning deferred techniques is not to store the position at all but to reconstruct it from the depth. It saves bandwidth that you can spend on more frames or more info for your materials. – darius Nov 04 '13 at 13:38
  • Did that now. I'm calculating the view-space position from depth and rewrote the lighting model so it uses view space instead of world space. – C0dR Jan 27 '14 at 20:53

Looking at your solution from the link you posted in the comments, the first thing that strikes my eye is this: in the lighting-pass vertex shader, you do

viewSpace = gl_ModelViewMatrix * gl_Vertex;

Then in the fragment shader you do

vec3 lightPos = pointLightPostion * viewSpace;
vec3 WorldPos = texture2D(positionMap, TexCoord) * viewSpace;
vec3 normal = texture2D(normalMap, TexCoord) * viewSpace;

This makes no sense at all.
Now, you are doing deferred shading, which is a technique typical of modern OpenGL (3.2+) that doesn't perform well on ancient hardware, and I think you used stuff from this tutorial, which is also modern OpenGL, so why do you use glPushMatrix and that kind of old stuff? Too bad; I never learned old OpenGL, so I'm not always sure that I understand your code correctly.
By the way, back to the geometry pass. In the vertex shader, you do

position = gl_ModelViewMatrix * gl_Vertex;
normal = (modelMatrix * vec4(gl_Normal,0.0)).xyz;

but then you have position in view space and normal in model space (if the modelMatrix you pass to the shader really is the model matrix; from your screenshot the normals seem to be in view space). Also, be careful: if you want to store the normals in an unsigned texture format, you'll have to bias and scale them, normal = 0.5*(modelMatrix * vec4(gl_Normal, 0.0)).xyz + 0.5;, because some of them will point in negative directions. I'd just go for

position = gl_ModelViewMatrix * gl_Vertex;
normal = (gl_ModelViewMatrix * vec4(gl_Normal,0.0)).xyz;

Remember, the important thing is that you have both position and normal in the same space. You can use either world space or view space, but then stick to your choice. In the lighting pass, just do

vec3 WorldPos = texture2D(positionMap, TexCoord).rgb;
vec3 normal = texture2D(normalMap, TexCoord).rgb;
vec3 lightVector = WorldPos - pointLightPostion;

and make sure that pointLightPostion is in the same space you decided on, by transforming it in your application, on the CPU side, and then passing it to OpenGL already transformed.

Also, I don't understand why you do

lightColor += diffuse + specular;

instead of

lightColor *= diffuse + specular;

That way you'll have an emissive component in your lighting with the color of your light, and the diffuse and specular without it. It doesn't seem like a nice choice, especially in deferred shading, where you can easily perform an ambient pass on the whole frame.

Hope I helped. Too bad I don't use GLUT, so I can't build your code.

EDIT
To transform pointLightPostion (which I assume is already in world space) into view space, just do

pointLightPostion = glm::vec3(ViewMatrix * glm::vec4(pointLightPostion, 1.0f));
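For anyone not using GLM, the same CPU-side transform can be hand-rolled. A minimal sketch (the helper names are mine, and the view matrix here is translation-only for brevity; a real one also encodes the camera rotation):

```c
/* Minimal column-major mat4 * vec4, matching the OpenGL convention m[col*4 + row]. */
static void mat4_mul_vec4(const float m[16], const float v[4], float out[4]) {
    for (int row = 0; row < 4; ++row) {
        out[row] = 0.0f;
        for (int col = 0; col < 4; ++col)
            out[row] += m[col * 4 + row] * v[col];
    }
}

/* View matrix for an axis-aligned camera at (ex, ey, ez) looking down -Z:
   no rotation, just the inverse of the camera translation. */
static void view_for_eye(float m[16], float ex, float ey, float ez) {
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0] = m[5] = m[10] = m[15] = 1.0f;
    m[12] = -ex; m[13] = -ey; m[14] = -ez;
}

/* World-space light position -> view space, done once per frame on the CPU,
   before uploading the result with glUniform3fv. w = 1 because it is a position. */
static void light_to_view_space(const float view[16], const float world[3],
                                float out[4]) {
    const float p[4] = { world[0], world[1], world[2], 1.0f };
    mat4_mul_vec4(view, p, out);
}
```

For example, a camera at (0, 0, 5) sees a light at the world origin at view-space (0, 0, -5), which is what the lighting shader should receive.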
  • OK, so if I do as you recommended, I have the normals and positions in view space. So how do I get `pointLightPosition` into view space? – C0dR Sep 16 '13 at 04:01
  • @C0dR Now, that's pretty basic: you should learn [this stuff](http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/). You totally need it for OpenGL. Going to edit my answer with a snippet for your specific case. – darius Sep 16 '13 at 07:53
  • OK, thank you very much. I will look at the link you sent, because it still doesn't work. I guess it's still too early for me for deferred rendering :( – C0dR Sep 16 '13 at 21:12
  • But I found out how to get the normals into world space. Well, I was able to understand much more than before, thanks to you. Anyway, I can't get those damn point lights working... – C0dR Sep 16 '13 at 21:28
  • @C0dR Yeah, deferred rendering can be tricky. I'm sorry I can't help you more right now. But first things first: I suggest you take your time and learn how to avoid the deprecated stuff (glPushMatrix and so on), and strengthen your math. Work through some tutorials: it won't take much time and it will be totally worth it. – darius Sep 17 '13 at 08:19