
I am trying to replicate the Sascha Willems SSAO example, using the LearnOpenGL SSAO tutorial as a resource. However, my SSAO output only partially covers models at certain angles/distances, and there is also a very strong self-occlusion effect when the camera is close to an object.

On the left is my renderer, and on the right side is the Sascha Willems SSAO Example:

EDIT: There is some strange artifacting on the Correct images from RenderDoc. Sorry about that.

Some notes about my renderer variables:

  • Position+Depth image is using VK_FORMAT_R32G32B32A32_SFLOAT format and looks correct in RenderDoc. [1] [2]
  • Normal image is using VK_FORMAT_R8G8B8A8_UNORM format and looks correct in RenderDoc. [1]
  • Position+Depth and Normal images are using a VkSampler with VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE.
  • SSAO image is VK_FORMAT_R8_UNORM and is being written correctly by the shader. [1]
  • SSAO Noise image is using VK_FORMAT_R32G32B32A32_SFLOAT format and looks correct in RenderDoc. [1]
  • SSAO Noise image is using a VkSampler with VK_SAMPLER_ADDRESS_MODE_REPEAT (a sampler sketch follows this list).
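
For reference, the noise sampler is set up roughly like the sketch below (not my exact code; the filter/mipmap values and the names device and createSsaoNoiseSampler are assumptions, only the address mode matches the notes above):

#include <vulkan/vulkan.h>

// Sketch: sampler for the SSAO noise image, using REPEAT so the small noise
// texture tiles across the screen. The Position+Depth and Normal samplers use
// VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE instead.
VkSampler createSsaoNoiseSampler(VkDevice device)
{
    VkSamplerCreateInfo samplerInfo{};
    samplerInfo.sType        = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO;
    samplerInfo.magFilter    = VK_FILTER_NEAREST;
    samplerInfo.minFilter    = VK_FILTER_NEAREST;
    samplerInfo.mipmapMode   = VK_SAMPLER_MIPMAP_MODE_NEAREST;
    samplerInfo.addressModeU = VK_SAMPLER_ADDRESS_MODE_REPEAT;
    samplerInfo.addressModeV = VK_SAMPLER_ADDRESS_MODE_REPEAT;
    samplerInfo.addressModeW = VK_SAMPLER_ADDRESS_MODE_REPEAT;

    VkSampler sampler = VK_NULL_HANDLE;
    vkCreateSampler(device, &samplerInfo, nullptr, &sampler);
    return sampler;
}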

SSAO Noise

// Random Generator
std::default_random_engine rndEngine(static_cast<unsigned>(glfwGetTime()));
std::uniform_real_distribution<float> rndDist(0.0f, 1.0f);

// SSAO random noise
std::vector<glm::vec4> ssaoNoise(SSAO_NOISE_DIM * SSAO_NOISE_DIM);
for (uint32_t i = 0; i < static_cast<uint32_t>(ssaoNoise.size()); i++)
{
    ssaoNoise[i] = glm::vec4(rndDist(rndEngine) * 2.0f - 1.0f, rndDist(rndEngine) * 2.0f - 1.0f, 0.0f, 0.0f);
}

SSAO Kernels

// Function for SSAOKernel generation
float lerp(float a, float b, float f)
{
    return a + f * (b - a);
}

// SSAO sample kernel
std::vector<glm::vec4> ssaoKernel(SSAO_KERNEL_SIZE);
for (uint32_t i = 0; i < SSAO_KERNEL_SIZE; i++)
{
    glm::vec3 sample(rndDist(rndEngine) * 2.0 - 1.0, rndDist(rndEngine) * 2.0 - 1.0, rndDist(rndEngine));
    sample = glm::normalize(sample);
    sample *= rndDist(rndEngine);
    float scale = float(i) / float(SSAO_KERNEL_SIZE);
    scale = lerp(0.1f, 1.0f, scale * scale);
    ssaoKernel[i] = glm::vec4(sample * scale, 0.0f);
}

SSAO Kernel XY values are between -1.0 and 1.0, and Z values are between 0.0 and 1.0:

ssaoKernel XYZ[0]: X: -0.0428458 Y: 0.0578492 Z: 0.0569087
ssaoKernel XYZ[1]: X: 0.0191572 Y: 0.0442375 Z: 0.00108795
ssaoKernel XYZ[2]: X: 0.00155709 Y: 0.0287552 Z: 0.024916
ssaoKernel XYZ[3]: X: -0.0169349 Y: -0.0298343 Z: 0.0272303
ssaoKernel XYZ[4]: X: 0.0469432 Y: 0.0348599 Z: 0.0573885
(...)
ssaoKernel XYZ[31]: X: -0.104106 Y: -0.434528 Z: 0.321963
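
For completeness, the kernel is copied into the uniform buffer backing the SSAOKernel block (binding = 3 in ssao.frag). A minimal sketch of that upload, assuming a host-visible/host-coherent buffer whose memory handle is ssaoKernelMemory (the buffer creation itself is omitted and the names are placeholders):

#include <cstring>

// Sketch: upload the generated kernel to the uniform buffer that backs the
// SSAOKernel UBO. 'device' and 'ssaoKernelMemory' are assumed names.
VkDeviceSize kernelSize = ssaoKernel.size() * sizeof(glm::vec4);
void* data = nullptr;
vkMapMemory(device, ssaoKernelMemory, 0, kernelSize, 0, &data);
memcpy(data, ssaoKernel.data(), static_cast<size_t>(kernelSize));
vkUnmapMemory(device, ssaoKernelMemory);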

GLSL shaders

model.vert

mat3 normalMatrix = transpose(inverse(mat3(ubo.view * ubo.model)));
outNormalViewSpace = normalMatrix * inNormal;
outPositionViewSpace = vec3(ubo.view * ubo.model * vec4(inPosition, 1.0));

model.frag

// These are identical to the camera
float near = 0.1; 
float far  = 100.0; 
  
float LinearizeDepth(float depth) 
{
    float z = depth * 2.0 - 1.0;
    return (2.0 * near * far) / (far + near - z * (far - near));    
}

(...)

outNormalViewSpace = vec4(normalize(inNormalViewSpace) * 0.5 + 0.5, 1.0);
outPositionDepth = vec4(inPositionViewSpace, LinearizeDepth(gl_FragCoord.z));

fullscreen.vert

// Fullscreen triangle: gl_VertexIndex 0/1/2 produces UVs (0,0), (2,0), (0,2),
// so a single triangle covers the whole screen.
outUV = vec2((gl_VertexIndex << 1) & 2, gl_VertexIndex & 2);
gl_Position = vec4(outUV * 2.0f - 1.0f, 0.0f, 1.0f);

ssao.frag

#version 450

layout (location = 0) in vec2 inUV;

layout (constant_id = 1) const int SSAO_KERNEL_SIZE = 32;
layout (constant_id = 2) const float SSAO_RADIUS = 0.5;

layout (binding = 0) uniform sampler2D samplerPositionDepth;
layout (binding = 1) uniform sampler2D samplerNormal;
layout (binding = 2) uniform sampler2D samplerSSAONoise;

layout (binding = 3) uniform SSAOKernel
{
    vec4 samples[SSAO_KERNEL_SIZE];
} ssaoKernel;

layout( push_constant ) uniform UniformBufferObject {
    mat4 projection;
} ubo;

layout (location = 0) out float outSSAO;

void main() 
{
    //
    // SSAO Post Processing (Pre-Blur)
    //

    // Get a random vector using a noise lookup
    ivec2 texDim = textureSize(samplerPositionDepth, 0); 
    ivec2 noiseDim = textureSize(samplerSSAONoise, 0);
    const vec2 noiseUV = vec2(float(texDim.x) / float(noiseDim.x), float(texDim.y) / (noiseDim.y)) * inUV;   
    vec3 randomVec = texture(samplerSSAONoise, noiseUV).xyz * 2.0 - 1.0;

    // Get G-Buffer values
    vec3 fragPos = texture(samplerPositionDepth, inUV).rgb;
    vec3 normal = normalize(texture(samplerNormal, inUV).rgb * 2.0 - 1.0);

    // Create TBN matrix
    vec3 tangent = normalize(randomVec - normal * dot(randomVec, normal));
    vec3 bitangent = cross(tangent, normal);
    mat3 TBN = mat3(tangent, bitangent, normal);

    // Calculate occlusion value
    float occlusion = 0.0f;
    for(int i = 0; i < SSAO_KERNEL_SIZE; i++)
    {       
        vec3 samplePos = TBN * ssaoKernel.samples[i].xyz;
        samplePos = fragPos + samplePos * SSAO_RADIUS; 
        
        // project
        vec4 offset = vec4(samplePos, 1.0f);
        offset = ubo.projection * offset; 
        offset.xyz /= offset.w; 
        offset.xyz = offset.xyz * 0.5f + 0.5f;  
        
        float sampleDepth = -texture(samplerPositionDepth, offset.xy).w;

        // Range check
        float rangeCheck = smoothstep(0.0f, 1.0f, SSAO_RADIUS / abs(fragPos.z - sampleDepth));
        occlusion += (sampleDepth >= samplePos.z ? 1.0f : 0.0f) * rangeCheck;  
    }
    occlusion = 1.0 - (occlusion / float(SSAO_KERNEL_SIZE));
    
    outSSAO = occlusion;
}

There has to be a wrong setting or improper calculation somewhere, but I can't quite put my finger on it. Feel free to request additional code snippets if something pertinent is missing.

Any help is greatly appreciated, thank you!

Comments
  • I suspect there is a problem in depth usage. `LinearizeDepth` doesn't look right. Check [this question](https://stackoverflow.com/questions/51108596/linearize-depth). – Ramil Kudashev Apr 05 '19 at 09:07
  • I'd suggest calculating linear depth from the inverse projection matrix instead. – JustSomeGuy Apr 05 '19 at 09:13
  • Also, storing position in the G-buffer is a waste of bandwidth and space; you can calculate it using the screen-space position, the G-buffer size, the inverse projection matrix, and the depth from the depth buffer. It would look something like `float2 xyNDC = FragCoordToNDC(positionSS, attachmentSize); float4 intermediatePosition = camera.inverseProjection * float4(xyNDC.x, xyNDC.y, depth, 1); float3 positionVS = intermediatePosition.xyz / intermediatePosition.w;` where `positionSS` is the screen-space position, `depth` is the depth sample, and `positionVS` is the view-space position (a GLSL sketch of this follows the comments). – JustSomeGuy Apr 05 '19 at 09:17
  • @mlkn You were right, thank you! Posted an answer to the original question. – Axiom Apr 05 '19 at 10:25
  • @EgorShkorov Yes, you are correct; it sounds like using the depth buffer to reconstruct position is much more efficient and uses significantly less memory. However, I wanted to get SSAO working with the position output before attempting it. I will give your function a try, thanks for the tip! – Axiom Apr 05 '19 at 10:28
  • Just check the Vulkan specification to get an idea of how to convert screen space to NDC. – JustSomeGuy Apr 05 '19 at 10:34
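
For anyone following the reconstruction suggestion above, a GLSL sketch of the idea (not part of the original code; it assumes the raw [0, 1] depth-buffer value is sampled and that an inverse projection matrix is available, e.g. added to the push constants; the Y-flip convention may need adjusting for your setup):

// Sketch: rebuild the view-space position from UV + raw depth instead of
// storing positions in the G-buffer. Assumes Vulkan NDC, where depth is
// already in [0, 1] and only x/y need the [-1, 1] remap.
vec3 reconstructViewPos(vec2 uv, float depth, mat4 inverseProjection)
{
    vec2 ndcXY = uv * 2.0 - 1.0;                 // screen UV -> NDC x/y
    vec4 clipPos = vec4(ndcXY, depth, 1.0);      // depth used as-is in Vulkan
    vec4 viewPos = inverseProjection * clipPos;  // unproject
    return viewPos.xyz / viewPos.w;              // perspective divide
}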

1 Answer


Credit goes to mlkn for pointing out in the comments that the `LinearizeDepth` function did not look right. He was correct: there was an extra, unnecessary `* 2.0 - 1.0` step that did not belong. Thank you mlkn! :)

This was the original, incorrect LinearizeDepth function:

float LinearizeDepth(float depth) 
{
    float z = depth * 2.0 - 1.0;
    return (2.0 * near * far) / (far + near - z * (far - near));    
}

By removing the first line (the `* 2.0 - 1.0` remap converts depth to OpenGL's default [-1, 1] NDC range, which doesn't apply to Vulkan, where depth is already in [0, 1]) and changing the function to this:

float LinearizeDepth(float depth) 
{
    return (2.0 * near * far) / (far + near - depth * (far - near));    
}

My output immediately changed to this, which appears to be correct: Correct SSAO
