I am trying to implement a deferred screen-space decal system in OpenGL, following the article "Drawing Stuff On Other Stuff With Deferred Screenspace Decals": http://martindevans.me/game-development/2015/02/27/Drawing-Stuff-On-Other-Stuff-With-Deferred-Screenspace-Decals/.

A red-shaded cube is drawn on top of the scene, with the depth mask set to false, so that it conforms to a wall. Image link (cube without bounds): https://gyazo.com/8487947bd4afb08d8d0431551057ad6f

The depth buffer of the wall, together with some vertex shader outputs, is used to calculate the object-space position of the wall within the dimensions of the cube. The bounds check makes sure that every pixel of the cube for which the wall's object-space position falls outside the cube is discarded.

The problem is that the bounds check is not working as it should; the cube disappears completely.

Potential faults

I have checked that the depth buffer works correctly by visualizing it in the lighting pass, and it seems fine. The depth buffer is stored in a color attachment of the G-buffer with the internal format GL_RGB32F. Image link (lighting-pass depth buffer visualization of a faraway wall): https://gyazo.com/69920a532ca27aa9f57478cb57e0c84c
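For reference, here is a minimal sketch of a geometry-pass fragment shader that writes such a depth attachment. The output name gDepthOut and the attachment index are assumptions, not code from the question, as is the choice to store the window-space depth gl_FragCoord.z; that is the convention the depth * 2.0 - 1.0 reconstruction in the answer below relies on:

#version 330 core

// Sketch only: the name gDepthOut and attachment index 2 are assumptions.
// Stores the window-space depth gl_FragCoord.z (range [0, 1]) in the
// red channel of the GL_RGB32F color attachment sampled as gDepth.
layout (location = 2) out vec4 gDepthOut;

void main()
{
    gDepthOut = vec4(vec3(gl_FragCoord.z), 1.0);
}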

Decal Shader Code

Vertex Shader

#version 330 core
layout (location = 0) in vec3 aPos;

uniform mat4 model, view, projection;

out vec4 PositionVS; // view-space position
out vec4 PositionCS; // clip-space position

void main()
{
    vec4 InputPosition = vec4(aPos, 1.0);
    PositionVS = view * model * InputPosition;
    PositionCS = projection * PositionVS;
    gl_Position = PositionCS;
}

Fragment Shader

#version 330 core
in vec4 PositionVS;
in vec4 PositionCS;

uniform sampler2D gDepth;   // G-buffer depth texture
uniform vec2 resolution;    // screen resolution in pixels
uniform float farClip;      // distance to the far plane
uniform mat4 invView;       // inverse view matrix
uniform mat4 invModel;      // inverse model matrix

layout (location = 0) out vec4 gAlbedoSpec;

void main()
{
    // Position on the screen
    vec2 screenPos = PositionCS.xy / PositionCS.w;

    // Convert into a texture coordinate
    vec2 texCoord = vec2((1 + screenPos.x) / 2 + (0.5 / resolution.x),
                         (1 - screenPos.y) / 2 + (0.5 / resolution.y));

    // Sampled value from the depth buffer
    vec4 sampledDepth = texture(gDepth, texCoord);

    // View ray
    vec3 viewRay = PositionVS.xyz * (farClip / -PositionVS.z);

    // Wall position in view space
    vec3 viewPosition = viewRay * sampledDepth.z;

    // Transformation from view space to world space
    vec3 WorldPos = (invView * vec4(viewPosition, 1)).xyz;

    // Transformation from world space to object space
    vec3 objectPosition = (invModel * vec4(WorldPos, 1)).xyz;

    // Bounds check; discard pixels outside the wall in object space
    if (abs(objectPosition.x) > 0.5) discard;
    else if (abs(objectPosition.y) > 0.5) discard;
    else if (abs(objectPosition.z) > 0.5) discard;

    // Color to the G-buffer
    gAlbedoSpec = vec4(1, 0, 0, 1);
}

Code description

invView and invModel are the inverses of the view and model matrices, respectively. The matrix inversion is done on the CPU, and the results are sent as uniforms to the fragment shader. farClip is the distance to the camera's far plane (set to 3000 here). gDepth is the G-buffer's depth texture.

The problem

The part of the wall enclosed by the cube should be shaded red; as the image below shows, it clearly is not.

Image link (cube with bounds): https://gyazo.com/ab6d0db2483a969db932d2480a5acd08

My guess is that the problem lies in how the view-space position is transformed to the object-space position, but I cannot figure it out!

  • What is `gDepth`? Is it the depth buffer? The depth buffer has only one channel, the "red" channel (`.x` or `.r`): `float sampledDepth = texture(gDepth, texCoord).x;` – Rabbid76 Jan 16 '19 at 19:45
  • Thanks! You are correct, and it should be ".r" instead of ".z". It did not help :P But I guess I am one fault shorter! – David Håland Jan 17 '19 at 15:56

1 Answer


You are confusing chalk and cheese. PositionCS is a clip-space position and can be converted to a normalized device space position by a perspective divide:

vec3 ndcPos = PositionCS.xyz / PositionCS.w;

sampledDepth is a depth value (by default in the range [0, 1]) and is obtained by reading the "red" color channel (.r, .x) of the depth buffer texture. The depth can be transformed to a normalized device space z-coordinate by depth * 2.0 - 1.0:

vec2 texCoord = ndcPos.xy * 0.5 + 0.5;
   // (+ 0.5/resolution.xy) is not necessary if the texture filter is GL_NEAREST

float sampledDepth = texture(gDepth, texCoord).x;
float sampleNdcZ   = sampledDepth * 2.0 - 1.0;

With a perspective projection, all points in normalized device space that share the same x and y coordinates lie on a single ray that starts at the view position.
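To see why (a short derivation added here for illustration, with $f = \cot(\mathrm{fovy}/2)$ and aspect ratio $a$): for a view-space point $(x_v, y_v, z_v)$, a standard symmetric perspective projection produces $\mathrm{clip}_w = -z_v$, so

$$\mathrm{ndc}_x = \frac{f}{a} \cdot \frac{x_v}{-z_v}, \qquad \mathrm{ndc}_y = f \cdot \frac{y_v}{-z_v}$$

Both depend only on the ratios $x_v/z_v$ and $y_v/z_v$, which are constant along any ray through the eye at the origin, so varying the NDC z-coordinate merely slides the point along that ray.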

This means that if the depth buffer gDepth was generated with the same view and projection matrices as ndcPos (PositionCS), then you can substitute ndcPos.z with the corresponding NDC z-coordinate from the depth buffer (sampleNdcZ), and the point is still on the same view ray.
ndcPos.z and sampleNdcZ are comparable values in the same reference system.

vec3 ndcSample = vec3(ndcPos.xy, sampleNdcZ);

This coordinate can be transformed to a view-space coordinate by the inverse projection matrix followed by a perspective divide.
Note that if NDC points on the same view ray are transformed to view space, their xy coordinates will differ; the transformation is not linear because of the division by w (* 1/.w). See also OpenGL - Mouse coordinates to Space coordinates.

uniform mat4 invProj; // = inverse(projection)
vec4 hViewPos     = invProj * vec4(ndcSample, 1.0);
vec3 viewPosition = hViewPos.xyz / hViewPos.w;

This can be further transformed to world space by the inverse view matrix, and to object space by the inverse model matrix:

vec3 WorldPos       = (invView * vec4(viewPosition, 1.0)).xyz;
vec3 objectPosition = (invModel * vec4(WorldPos, 1.0)).xyz;
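
Putting all the steps together, here is a minimal sketch of the complete corrected fragment shader. It only assembles the code from this answer with the bounds check and output from the question; the invProj uniform is new and must be uploaded from the CPU like invView and invModel:

#version 330 core
in vec4 PositionCS;           // clip-space position from the vertex shader

uniform sampler2D gDepth;     // G-buffer depth texture
uniform mat4 invProj;         // inverse projection matrix
uniform mat4 invView;         // inverse view matrix
uniform mat4 invModel;        // inverse model matrix

layout (location = 0) out vec4 gAlbedoSpec;

void main()
{
    // Perspective divide: clip space -> normalized device space
    vec3 ndcPos = PositionCS.xyz / PositionCS.w;

    // NDC xy in [-1, 1] -> texture coordinate in [0, 1]
    vec2 texCoord = ndcPos.xy * 0.5 + 0.5;

    // Depth in [0, 1] from the buffer -> NDC z in [-1, 1]
    float sampledDepth = texture(gDepth, texCoord).x;
    vec3 ndcSample = vec3(ndcPos.xy, sampledDepth * 2.0 - 1.0);

    // Unproject: normalized device space -> view space
    vec4 hViewPos     = invProj * vec4(ndcSample, 1.0);
    vec3 viewPosition = hViewPos.xyz / hViewPos.w;

    // View space -> world space -> decal object space
    vec3 WorldPos       = (invView  * vec4(viewPosition, 1.0)).xyz;
    vec3 objectPosition = (invModel * vec4(WorldPos,     1.0)).xyz;

    // Discard fragments whose reconstructed position is outside the unit cube
    if (any(greaterThan(abs(objectPosition), vec3(0.5)))) discard;

    // Write the decal color to the G-buffer
    gAlbedoSpec = vec4(1.0, 0.0, 0.0, 1.0);
}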

  • Thank you so much, I humbly bow before your great knowledge! Link to the correctly red-shaded wall: https://gyazo.com/a4b3ba20c1381658e8bda733ab44ffd7 – David Håland Jan 17 '19 at 20:59