
I've managed to implement a logarithmic depth buffer in OpenGL, mainly courtesy of articles from Outerra (you can read them here, here, and here). However, I'm having some issues, and I'm not sure whether these issues are inherent to using a logarithmic depth buffer or whether there's some workaround I can't think of.

Just to start off, this is how I calculate logarithmic depth within the vertex shader:

gl_Position = MVP * vec4(inPosition, 1.0);                            // standard clip-space transform
gl_Position.z = log2(max(ZNEAR, 1.0 + gl_Position.w)) * FCOEF - 1.0;  // replace clip-space z with the logarithmic value
flogz = 1.0 + gl_Position.w;                                          // pass 1 + w to the fragment shader for the per-fragment fix-up

And this is how I fix depth values in the fragment shader:

gl_FragDepth = log2(flogz) * HALF_FCOEF;  // recompute the depth per fragment to avoid interpolation error

Where ZNEAR = 0.0001, ZFAR = 1000000.0, FCOEF = 2.0 / log2(ZFAR + 1.0), and HALF_FCOEF = 0.5 * FCOEF. The constant C from the Outerra articles is 1.0 in my case, to simplify the code and reduce calculations.
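For reference, here is a minimal sketch of how the vertex-shader side might be put together with those constants (nothing beyond what is described above; the #version line and the declarations are just boilerplate):

#version 330 core

// Constants as defined above.
const float ZNEAR = 0.0001;
const float ZFAR  = 1000000.0;
const float FCOEF = 2.0 / log2(ZFAR + 1.0);

uniform mat4 MVP;

in vec3 inPosition;

out float flogz;   // 1 + w, fixed up per fragment in the fragment shader

void main()
{
    gl_Position   = MVP * vec4(inPosition, 1.0);
    gl_Position.z = log2(max(ZNEAR, 1.0 + gl_Position.w)) * FCOEF - 1.0;
    flogz         = 1.0 + gl_Position.w;
}

The fragment shader then declares in float flogz; and const float HALF_FCOEF = 0.5 * FCOEF; and writes gl_FragDepth as shown above.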

For starters, I'm extremely pleased with the level of precision I get. With a conventional depth buffer (znear = 0.1, zfar = 1000.0), I got quite a bit of z-fighting towards the far end of the view distance. Now, with a much larger znear:zfar range, I've put a second ground plane 0.01 units below the first, and I cannot find any z-fighting no matter how far I zoom the camera out (I get a little z-fighting when the planes are only 0.0001 units (0.1 mm) apart, but meh).

I do have some issues/concerns, however.

1) I get more near-plane clipping than I did with my normal depth buffer, and it looks ugly. It happens in cases where, logically, it really shouldn't. Here are a couple of screenshots of what I mean:

Screenshot: clipping the ground.

Screenshot: clipping a mesh.
Both of these cases are things that I did not experience with my normal depth buffer, and I'd rather not see (especially the former). EDIT: Problem 1 is officially solved by using glEnable(GL_DEPTH_CLAMP).

2) In order to get this to work, I need to write to gl_FragDepth. I tried not doing so, but the results were unacceptable. Writing to gl_FragDepth means that my graphics card can't do early-z optimizations. This will inevitably drive me up the wall, so I want to fix it as soon as I can.

3) I need to be able to retrieve the value stored in the depth buffer (I already have a framebuffer and texture for this) and then convert it to a linear view-space coordinate. I don't really know where to start with this; the way I did it before involved the inverse projection matrix, but I can't really do that here. Any advice?

Haydn V. Harach

3 Answers


Near-plane clipping happens independently of depth testing; it is the result of clipping against the clip-space volume. In modern OpenGL you can use depth clamping to make things look nice again. See http://opengl.datenwolf.net/gltut/html/Positioning/Tut05%20Depth%20Clamping.html#d0e5707

datenwolf
  • I hereby declare problem #1 officially solved! Thank you very much! Do I have to worry about depth clamping reducing the available precision? – Haydn V. Harach Mar 13 '14 at 00:53
  • @HaydnV.Harach: No, depth clamping does not limit precision. It just forces all fragments that would be "nearer" than the near clip plane to lie exactly on the clip plane. Of course this clashes with depth testing, since every fragment closer than near will then pass the depth test (if less-equal is used) or just the first one (if just less is used). – datenwolf Mar 13 '14 at 01:24
  • @HaydnV.Harach: Oh, and you no longer have to write to gl_FragDepth with clamping. Regarding 3): when you read the depth values back and linearize them, everything that was clipped will come back lying on the near plane. – datenwolf Mar 13 '14 at 01:26
  • would that mess up my deferred shading renderer that relies on the depth buffer to reconstruct view-space position? – Haydn V. Harach Mar 13 '14 at 01:39
  • I tried rendering without writing to gl_FragDepth but I'm still getting the very unacceptable errors. – Haydn V. Harach Mar 13 '14 at 02:15
  • The link is unavailable. – Tara Feb 07 '16 at 05:00
  • @Dudeson: thanks for the heads up. Since the original site went down a couple of months ago I made a mirror. I updated the question with the link on my mirror. – datenwolf Feb 07 '16 at 11:06
  • @datenwolf: Thanks a lot! – Tara Feb 07 '16 at 19:43

1) In the equation you used: gl_Position.z = log2(max(ZNEAR, 1.0 + gl_Position.w)) * FCOEF - 1.0; ZNEAR should not appear there, because it is unrelated to this formula. The constant is only there to keep the log2 argument from reaching zero, e.g. you can use 1e-6 instead.
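For example, the suggested change would read (1e-6 here is just a tiny epsilon; FCOEF stays the same):

gl_Position.z = log2(max(1e-6, 1.0 + gl_Position.w)) * FCOEF - 1.0;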

But otherwise the depth clamping will solve the issue.

2) You can avoid using gl_FragDepth only with adaptive tessellation, which keeps the interpolation error in bounds. For example, in Outerra the terrain is adaptively tessellated, so a visible error on the terrain never occurs. But the fragment depth write is needed on objects when zooming in close, because long screen-space triangles develop a large discrepancy between the linearly interpolated depth and the correct logarithmic value.
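For illustration, a sketch of the kind of vertex-only setup this refers to (no gl_FragDepth write; the clip-space z is pre-multiplied by w so the perspective divide restores the logarithmic value at each vertex, and the interpolation error described above applies between vertices):

// Vertex-only logarithmic depth (sketch). Pre-multiplying by w means the hardware's
// perspective divide yields the logarithmic value exactly at each vertex; between
// vertices, depth is still interpolated linearly in screen space, which is where the
// error on long triangles comes from.
gl_Position = MVP * vec4(inPosition, 1.0);
gl_Position.z = (log2(max(1e-6, 1.0 + gl_Position.w)) * FCOEF - 1.0) * gl_Position.w;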

Note that the latest AMD drivers now support the NV_depth_buffer_float extension, so it's possible to use the reversed floating-point depth buffer setup instead. As far as I know, it's not yet supported on Intel GPUs.

3) The conversion to the view space depth is described here: https://stackoverflow.com/a/18187212/2435594

  • 1) I tried using 1e-6, 1e-1, 1, etc., instead of ZNEAR, but I pretty much get the same problem no matter what I put in. Ironically, 1e-6 gives me more error than 1e-1 does. I just used ZNEAR because it's a constant I defined, and I generally prefer constants to magic numbers. 2) Good to note, but I can't really tessellate all of my geometry. Walls and such tend to be large polygons, and I can't expect all of my users to have tessellation shaders available. I considered using the reversed floating-point depth buffer, but it's only viable if I can use it with a stencil buffer (cont.) – Haydn V. Harach Mar 13 '14 at 18:03
  • without wasting 24 bits. 3) Thank you! – Haydn V. Harach Mar 13 '14 at 18:04
  • regarding 3), the formula posted did work for me. When I rendered both (`fragColor = vec4(reconstructedDepth, positionBuffer.z, 0.0, 1.0);`) I get uniform yellow across the majority of the scene, which is what I expect to see with correct results, EXCEPT for things which are very close to the camera, which appear more red. I suspect that depth clamping might have something to do with this. – Haydn V. Harach Mar 13 '14 at 18:22
  • Looks like I spoke too soon; when I compare it more properly (`0.5 + (reconstructedDepth - positionBuffer.z)`), the scene looks very white (meaning the reconstructed depth has a much higher value than the actual depth). – Haydn V. Harach Mar 13 '14 at 19:08
  • On all modern cards the depth and stencil data are separate, and the OpenGL identifier that suggests the padding is historic & now misleading. – camenomizoratojoakizunewake Mar 14 '14 at 06:22
  • Did you change the sign on the reconstructedDepth? – camenomizoratojoakizunewake Mar 14 '14 at 06:24
  • I didn't change the sign on reconstructedDepth, but I did change the sign on positionBuffer.z (I set it to `vec3(positionBuffer.xy, -(positionBuffer.z + 1.0))`, which produces something I can actually see, since z values are normally negative). – Haydn V. Harach Mar 14 '14 at 07:19
  • That +1.0 there suggests you are expecting/comparing clip-space values, right? But the reverse provides a camera-space depth, i.e. the physical depth in world-space units. – camenomizoratojoakizunewake Mar 14 '14 at 12:04
  • To be honest, the +1.0 suggests that I tinkered with it until I produced something visible. – Haydn V. Harach Mar 14 '14 at 19:10

Maybe a little late to answer. In any case, to reconstruct Z using the log2 version:

realDepth = pow(2,(LogDepthValue + 1.0)/Fcoef) - 1.0;
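A minimal fragment-shader sketch of that reconstruction (uDepthTex and vTexCoord are placeholder names; it assumes the depth texture contains the gl_FragDepth value written in the question, and that FCOEF/Fcoef is the same constant used when rendering):

#version 330 core

uniform sampler2D uDepthTex;   // placeholder: the depth texture written earlier
uniform float FCOEF;           // same Fcoef constant as when rendering (could also be a const)

in vec2 vTexCoord;
out vec4 fragColor;

void main()
{
    float storedDepth   = texture(uDepthTex, vTexCoord).r;                // gl_FragDepth value in [0, 1]
    float logDepthValue = storedDepth * 2.0 - 1.0;                        // remap to the [-1, 1] value the formula expects
    float realDepth     = pow(2.0, (logDepthValue + 1.0) / FCOEF) - 1.0;  // = (1 + w) - 1 = w
    // For a standard projection matrix w == -z_view, so the view-space z is -realDepth.
    fragColor = vec4(vec3(realDepth / 1000.0), 1.0);                      // visualization only; the scale is arbitrary
}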
Oxzy Mot