
I'm following the tutorial by John Chapman (http://john-chapman-graphics.blogspot.nl/2013/01/ssao-tutorial.html) to implement SSAO in a deferred renderer. The input buffers to the SSAO shaders are:

  • World-space positions with linearized depth as the w-component
  • World-space normal vectors
  • A 4x4 noise texture
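
For context, the two relevant G-buffer attachments are allocated roughly as follows (a simplified sketch; the texture names mirror the shader uniforms and the exact formats in my setup may differ):

// world-space position (xyz) + linearized depth (w)
glBindTexture(GL_TEXTURE_2D, texPosDepth);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, 800, 600, 0, GL_RGBA, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texPosDepth, 0);

// world-space normal (xyz, stored in [0,1]) + specular intensity (w)
glBindTexture(GL_TEXTURE_2D, texNormalSpec);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, 800, 600, 0, GL_RGBA, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, texNormalSpec, 0);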

I'll first list the complete shader and then briefly walk through the steps:

#version 330 core
in VS_OUT {
    vec2 TexCoords;
} fs_in;

uniform sampler2D texPosDepth;
uniform sampler2D texNormalSpec;
uniform sampler2D texNoise;


uniform vec3 samples[64];

uniform mat4 projection;
uniform mat4 view;
uniform mat3 viewNormal; // transpose(inverse(mat3(view)))

const vec2 noiseScale = vec2(800.0f/4.0f, 600.0f/4.0f);
const float radius = 5.0;

void main( void )
{
    float linearDepth = texture(texPosDepth, fs_in.TexCoords).w;

    // Fragment's view space position and normal
    vec3 fragPos_World = texture(texPosDepth, fs_in.TexCoords).xyz;
    vec3 origin = vec3(view * vec4(fragPos_World, 1.0));
    vec3 normal = texture(texNormalSpec, fs_in.TexCoords).xyz;
    normal = normalize(normal * 2.0 - 1.0);
    normal = normalize(viewNormal * normal); // Normal from world to view-space
    // Use change-of-basis matrix to reorient sample kernel around origin's normal
    vec3 rvec = texture(texNoise, fs_in.TexCoords * noiseScale).xyz;
    vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
    vec3 bitangent = cross(normal, tangent);
    mat3 tbn = mat3(tangent, bitangent, normal);

    // Loop through the sample kernel
    float occlusion = 0.0;

    for(int i = 0; i < 64; ++i)
    {
        // get sample position
        vec3 sample = tbn * samples[i]; // From tangent to view-space
        sample = sample * radius + origin; 

        // project sample position (to sample texture) (to get position on screen/texture)
        vec4 offset = vec4(sample, 1.0);
        offset = projection * offset;
        offset.xy /= offset.w;
        offset.xy = offset.xy * 0.5 + 0.5;

        // get sample depth
        float sampleDepth = texture(texPosDepth, offset.xy).w;

        // range check & accumulate
        // float rangeCheck = abs(origin.z - sampleDepth) < radius ? 1.0 : 0.0;
        occlusion += (sampleDepth <= sample.z ? 1.0 : 0.0);           
    }
    occlusion = 1.0 - (occlusion / 64.0f);

    gl_FragColor = vec4(vec3(occlusion), 1.0);
}

The result, however, is not pleasing: the occlusion buffer is mostly all white and doesn't show any occlusion. But if I move really close to an object, I can see some weird noise-like results, as you can see below:

[Image: weird SSAO visual results]

This is obviously not correct. I've done a fair share of debugging and believe all the relevant variables are correctly passed around (they all visualize as colors). I do the calculations in view-space.

I'll briefly walk through the steps (and choices) I've taken, in case any of you can spot where something goes wrong.

view-space positions/normals

John Chapman retrieves the view-space position using a view ray and a linearized depth value. Since I use a deferred renderer that already has the world-space positions per fragment, I simply take those and multiply them by the view matrix to get them into view-space.

I take a similar approach for the normal vectors: I read the world-space normals from a buffer texture, transform them to the [-1,1] range, and multiply them by transpose(inverse(mat3(view))).
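
In glm terms, the viewNormal uniform used above is computed and uploaded roughly like this (a sketch; ssaoShader stands in for my shader program handle):

// normal matrix that brings world-space normals into view-space
glm::mat3 viewNormal = glm::transpose(glm::inverse(glm::mat3(view)));
glUniformMatrix3fv(glGetUniformLocation(ssaoShader, "viewNormal"),
                   1, GL_FALSE, glm::value_ptr(viewNormal));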

The view-space positions and normals are visualized below:

[Image: view-space positions and normals visualized]

This looks correct to me.

Orient hemisphere around normal

The steps to create the tbn matrix are the same as described in John Chapman's tutorial. I create the noise texture as follows:

std::vector<glm::vec3> ssaoNoise;
for (GLuint i = 0; i < noise_size; i++)
{
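    // random vector in the x/y plane (z = 0), used to rotate the sample kernel around the normal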
    glm::vec3 noise(randomFloats(generator) * 2.0 - 1.0, randomFloats(generator) * 2.0 - 1.0, 0.0f); 
    noise = glm::normalize(noise);
    ssaoNoise.push_back(noise);
}
...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, 4, 4, 0, GL_RGB, GL_FLOAT, &ssaoNoise[0]);

I can visualize the noise in the fragment shader so that seems to work.
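
For completeness: the 4x4 noise texture is set to repeat, so that the fs_in.TexCoords * noiseScale lookup tiles it across the screen (a sketch of the parameter calls):

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);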

sample depths

I transform all samples from tangent to view-space (the samples are random in [-1,1] on the x/y axes and in [0,1] on the z-axis) and translate them to the fragment's current view-space position (origin).
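
The kernel itself is generated the same way as in the tutorial, roughly as follows (a sketch, reusing randomFloats/generator from the noise code above):

std::vector<glm::vec3> ssaoKernel;
for (GLuint i = 0; i < 64; ++i)
{
    // random point in the hemisphere over +z (x, y in [-1,1], z in [0,1])
    glm::vec3 sample(randomFloats(generator) * 2.0 - 1.0,
                     randomFloats(generator) * 2.0 - 1.0,
                     randomFloats(generator));
    sample = glm::normalize(sample);
    sample *= randomFloats(generator);
    // scale so that samples cluster towards the origin
    float scale = float(i) / 64.0f;
    scale = 0.1f + scale * scale * (1.0f - 0.1f); // lerp(0.1, 1.0, scale * scale)
    sample *= scale;
    ssaoKernel.push_back(sample);
}
// ssaoKernel is then uploaded to the samples[64] uniform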

I then sample from the linearized depth buffer (visualized below when looking closely at an object):

[Image: linearized depth buffer]

Finally, I compare the sampled depth values to the current fragment's depth value and accumulate occlusion. Note that I do not perform a range check, since I don't believe that is the cause of this behavior and I'd rather keep things as minimal as possible for now.

I don't know what is causing this behavior. I believe the problem is somewhere in sampling the depth values. As far as I can tell, I am working in the right coordinate system, the linearized depth values are in view-space as well, and all variables seem to be set properly.
