
I am trying to use deferred shading to implement SSAO, and I am having problems accessing my textures in the deferred fragment shader. The code is C++/Qt5 and uses Coin3D to generate the rest of the UI (but this shouldn't really matter here).

The fragment shader of the deferred pass is:

#version 150 compatibility 

uniform sampler2D color;
uniform sampler2D position;
uniform sampler2D normal;

uniform vec3 dim;
uniform vec3 camPos;
uniform vec3 camDir;

void main()
{
    // screen position
    vec2 t = gl_TexCoord[0].st;

    // the color
    vec4 c = texture2D(color, t);

    gl_FragColor = c + vec4(1.0, t.x, t.y, 1.0);
}

The code for running the deferred pass is:

_geometryBuffer.Unbind();

// push state
{
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();

    glPushAttrib(GL_DEPTH_BUFFER_BIT | 
                 GL_COLOR_BUFFER_BIT |
                 GL_LIGHTING_BIT | 
                 GL_SCISSOR_BIT | 
                 GL_POLYGON_BIT |
                 GL_CURRENT_BIT);
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_ALPHA_TEST);
    glDisable(GL_LIGHTING);
    glDisable(GL_COLOR_MATERIAL);
    glDisable(GL_SCISSOR_TEST);
    glDisable(GL_CULL_FACE);
}

// bind shader
// /!\ IMPORTANT to do before specifying locations
_deferredShader->bind();

_CheckGLErrors("deferred");

// specify positions
_deferredShader->setUniformValue("camPos", ...);
_deferredShader->setUniformValue("camDir", ...);
_geometryBuffer.Bind(GBuffer::TEXTURE_TYPE_NORMAL, 2);
_deferredShader->setUniformValue("normal", GLint(2));
_geometryBuffer.Bind(GBuffer::TEXTURE_TYPE_POSITION, 1);
_deferredShader->setUniformValue("position",  GLint(1));
_geometryBuffer.Bind(GBuffer::TEXTURE_TYPE_DIFFUSE, 0);
_deferredShader->setUniformValue("color",  GLint(0));

_CheckGLErrors("bind");

// draw screen quad
{
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0);
    glColor3f(0, 0, 0);
    glVertex2f(-1, -1);

    glTexCoord2f(1, 0);
    glColor3f(0, 0, 0);
    glVertex2f( 1, -1);

    glTexCoord2f(1, 1);
    glColor3f(0, 0, 0);
    glVertex2f( 1,  1);

    glTexCoord2f(0, 1);
    glColor3f(0, 0, 0);
    glVertex2f(-1,  1);
    glEnd();
}

_deferredShader->release();

// for debug
_geometryBuffer.Unbind(2);
_geometryBuffer.Unbind(1);
_geometryBuffer.Unbind(0);
_geometryBuffer.DeferredPassBegin();
_geometryBuffer.DeferredPassDebug();

// pop state
{
    glPopAttrib();

    glMatrixMode(GL_PROJECTION);
    glPopMatrix();

    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
}

I know that the textures have been correctly processed in the geometry buffer creation because I can dump them into files and get the expected result.
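
For reference, dumping a texture to a file can be done with something along these lines (a sketch only; glGetTexImage reads the texture back and the file-writing part is omitted):

std::vector<float> pixels(512 * 512 * 4);
glBindTexture(GL_TEXTURE_2D, _textures[i]);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, pixels.data());
// ... write 'pixels' to an image file and inspect it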

The deferred pass doesn't work: the shader compiles correctly, but I get the following result on screen:

[Screenshot: bad result]

And the last part of my code (DeferredPassBegin/Debug) draws the FBO to the screen (as shown in the screenshot) as proof that the GBuffer is correct.

The current result seems to mean that the textures are not correctly bound to their respective uniforms, but I know the content is valid since dumping the textures to files gives the same images as the debug blit shown above.

My binding functions in GBuffer are:

void GBuffer::Unbind()
{
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
}

void GBuffer::Bind(TextureType type, uint32_t idx)
{
    glActiveTexture(GL_TEXTURE0 + idx);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, _textures[static_cast<uint32_t>(type)]);
}

void GBuffer::Unbind(uint32_t idx)
{
    glActiveTexture(GL_TEXTURE0 + idx);
    glDisable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, 0);
}

Finally, the textures are 512×512, and I created them in my GBuffer with:

WindowWidth = WindowHeight = 512;
// Create the FBO
glGenFramebuffers(1, &_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, _fbo);

const uint32_t NUM = static_cast<uint32_t>(NUM_TEXTURES);

// Create the gbuffer textures
glGenTextures(NUM, _textures);
glGenTextures(1, &_depthTexture);

for (unsigned int i = 0 ; i < NUM; i++) {
   glBindTexture(GL_TEXTURE_2D, _textures[i]);
   glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, WindowWidth, WindowHeight, 0, GL_RGBA, GL_FLOAT, NULL);
   glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i + _firstIndex, GL_TEXTURE_2D, _textures[i], 0);
}

// depth
glBindTexture(GL_TEXTURE_2D, _depthTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, WindowWidth, WindowHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, _depthTexture, 0);

GLenum buffers[NUM];
for(uint32_t i = 0; i < NUM; ++i){
    buffers[i] = GLenum(GL_COLOR_ATTACHMENT0 + i + _firstIndex);
}
glDrawBuffers(NUM, buffers);

GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
    printf("FB error, status: 0x%x\n", status);
    return _valid = false;
}

// unbind textures
glBindTexture(GL_TEXTURE_2D, 0);

// restore default FBO
glBindFramebuffer(GL_FRAMEBUFFER, 0);

How can I debug further at this stage? I know the texture data is valid, but I can't seem to bind it to the shader correctly (yet I have other shaders that use textures loaded from files, and those work fine).
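
A sanity check I could add right after _deferredShader->bind() (a sketch, assuming the raw GL program handle is reachable through Qt's programId()) would be to query the sampler uniforms and the texture actually bound on unit 0:

GLuint prog = _deferredShader->programId();
GLint locColor    = glGetUniformLocation(prog, "color");
GLint locPosition = glGetUniformLocation(prog, "position");
GLint locNormal   = glGetUniformLocation(prog, "normal");
// -1 would mean the uniform is inactive (misspelled or optimized out)
printf("uniforms: color=%d position=%d normal=%d\n", locColor, locPosition, locNormal);

GLint boundTex = 0;
glActiveTexture(GL_TEXTURE0);
glGetIntegerv(GL_TEXTURE_BINDING_2D, &boundTex);
printf("texture on unit 0: %d (glGetError: 0x%x)\n", boundTex, glGetError());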

--- Edit 1 ---

As requested, here is the code for DeferredPassBegin/Debug (mostly adapted from this tutorial):

void GBuffer::DeferredPassBegin() {
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, _fbo); 
}

void GBuffer::DeferredPassDebug() {
    GLsizei HalfWidth = GLsizei(_texWidth / 2.0f);
    GLsizei HalfHeight = GLsizei(_texHeight / 2.0f);

    SetReadBuffer(TEXTURE_TYPE_POSITION);
    glBlitFramebuffer(0, 0, _texWidth, _texHeight,
                    0, 0, HalfWidth, HalfHeight, GL_COLOR_BUFFER_BIT, GL_LINEAR);

    SetReadBuffer(TEXTURE_TYPE_DIFFUSE);
    glBlitFramebuffer(0, 0, _texWidth, _texHeight,
                    0, HalfHeight, HalfWidth, _texHeight, GL_COLOR_BUFFER_BIT, GL_LINEAR);

    SetReadBuffer(TEXTURE_TYPE_NORMAL);
    glBlitFramebuffer(0, 0, _texWidth, _texHeight,
                    HalfWidth, HalfHeight, _texWidth, _texHeight, GL_COLOR_BUFFER_BIT, GL_LINEAR); 
}
  • Just so you know, `glBindFramebuffer(GL_FRAMEBUFFER, 0);` is equivalent to the two calls that come after it in `GBuffer:Unbind()`. Anytime you use `GL_FRAMEBUFFER`, it applies to both the read and draw buffer target. Also, you do not enable or disable `GL_TEXTURE_2D` in the programmable pipeline, that's only for fixed-function shading. In a core profile, enabling that would actually create an invalid enum error. – Andon M. Coleman Feb 23 '15 at 01:06
  • Ok, I expected GL_FRAMEBUFFER included both READ and DRAW, but I was not sure (especially given that the constants don't seem related in value, GL_FRAMEBUFFER != GL_READ_FRAMEBUFFER | GL_DRAW_FRAMEBUFFER). – Alexandre Kaspar Feb 23 '15 at 01:34
  • Yeah, it's not a bitfield so that does not work. If it were, those names would end in `_BIT`. It is actually a little bit unusual for GL, but this behavior is documented in the manual page for `glBindFramebuffer (...)` - similar to `GL_FRONT_AND_BACK`. Can you show the code for `DeferredPassBegin` and `DeferredPassDebug`? I have a pretty good idea what those are _supposed_ to do, but they're not in your question. – Andon M. Coleman Feb 23 '15 at 01:39
  • Edited with the code, though this part should not interact with the one that doesn't work since the shader is disabled before that. It's here mostly to show the FBO texture content (I expect the white bunny from the color component to appear on the background, but right now I only get the texture coordinate gradient, so `color=0` in the fragment, but the texture data shows that `color!=0` when I dump it to file). – Alexandre Kaspar Feb 23 '15 at 02:14
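
Following up on the first comment, the binding helpers could be trimmed down to something like this (an untested sketch: GL_FRAMEBUFFER already covers both the read and draw targets, and enabling GL_TEXTURE_2D only matters for fixed-function texturing):

void GBuffer::Unbind()
{
    glBindFramebuffer(GL_FRAMEBUFFER, 0); // unbinds both READ and DRAW targets
}

void GBuffer::Bind(TextureType type, uint32_t idx)
{
    glActiveTexture(GL_TEXTURE0 + idx);
    glBindTexture(GL_TEXTURE_2D, _textures[static_cast<uint32_t>(type)]);
}

void GBuffer::Unbind(uint32_t idx)
{
    glActiveTexture(GL_TEXTURE0 + idx);
    glBindTexture(GL_TEXTURE_2D, 0);
}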

1 Answer


Arghk!!!

So I had assumed that the texture parameters were optional, but after looking at some other code I tried specifying them explicitly. When generating the FBO textures, I now use:

for (unsigned int i = 0 ; i < NUM; i++) {
    glBindTexture(GL_TEXTURE_2D, _textures[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, WindowWidth, WindowHeight, 0, GL_RGBA, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i + _firstIndex, GL_TEXTURE_2D, _textures[i], 0);
}

And with this change, I get the expected result (outputting only c in the fragment shader, and similarly correct results when I switch to visualizing the normal / position).

Conclusion: one must specify the texture parameters for deferred shading to work (at least with the graphics setup of my application / machine).

[Screenshot: correct bunny]

  • Ah, I believe I can shed some light on this. By default, the minification filter is `GL_NEAREST_MIPMAP_LINEAR`, which means that when you try to texture map this, since it does not have mipmaps, you have an ***incomplete*** texture. What you did in the code you posted is change the minification filter to remove mipmap filtering. You could have also called `glGenerateMipmap (GL_TEXTURE_2D)`, but avoiding mipmap filtering altogether is much more sensible. – Andon M. Coleman Feb 23 '15 at 03:00
  • Why would the default settings lead to an incomplete state? Isn't that a bad default choice? What is the reason behind it? If mipmaps are expected to be used by default, then shouldn't they be on by default? – Alexandre Kaspar Feb 23 '15 at 14:30
  • I can honestly say I have no idea why that is the default. It leads to so many problems. Equally weird is that textures in OpenGL default to having 1000 LODs until you go in and change that (it should be set to exactly **1** for a non-mipmapped texture). But you learn to live with OpenGL's quirks. – Andon M. Coleman Feb 23 '15 at 15:56
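
To illustrate the alternative mentioned in the first comment above, the mipmap-based fix would look roughly like this (a sketch only; the filter change shown in the answer remains the simpler route):

// keep the default mipmap-based minification filter, but make the
// texture mipmap-complete by allocating its mipmap chain
glBindTexture(GL_TEXTURE_2D, _textures[i]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, WindowWidth, WindowHeight, 0, GL_RGBA, GL_FLOAT, NULL);
glGenerateMipmap(GL_TEXTURE_2D);
// note: the mipmaps would also need regenerating after every geometry pass,
// which is why dropping mipmap filtering is the more sensible fix here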