
OpenGL spec:


It says: "However, depth values for polygons must be interpolated by (14.10)."

Why? Are the z coordinates depth values in camera space? If so, shouldn't we interpolate them with perspective-correct barycentric coordinates (like equation 14.9)?

Update:
So the z coordinates are NDC coordinates (already divided by w). I have a small demo that implements a rasterizer. When I use linear interpolation of the NDC z coordinates, the result is a bit unusual (image below), while perspective-correct interpolation of camera-space z coordinates gives a correct result.

[screenshot of the rendered result]
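For reference, a minimal Python sketch of the two interpolation schemes being compared (the vertex data here is hypothetical, and `w` is the clip-space w of each vertex, which for a standard perspective projection equals the camera-space depth up to sign):

```python
def lerp_ndc_z(bary, ndc_z):
    """Plain screen-space barycentric interpolation of NDC z
    (what eq. (14.10) prescribes for depth)."""
    a, b, c = bary
    return a * ndc_z[0] + b * ndc_z[1] + c * ndc_z[2]

def perspective_correct_z(bary, cam_z, w):
    """Perspective-correct interpolation of camera-space z (eq. (14.9)
    style): interpolate z/w and 1/w with screen-space barycentrics,
    then divide."""
    a, b, c = bary
    num = a * cam_z[0] / w[0] + b * cam_z[1] / w[1] + c * cam_z[2] / w[2]
    den = a / w[0] + b / w[1] + c / w[2]
    return num / den
```

When all three w values are equal, the two schemes coincide; they diverge exactly when the triangle spans different depths.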

This is the perspective projection matrix I use: [image of the projection matrix]

1 Answer


Why? Are the z coordinates depth values in camera space? If so, shouldn't we interpolate them with perspective-correct barycentric coordinates (like equation 14.9)?

No, they are not. They are in window space, meaning they have already been divided by w. It is correct that if you wanted to interpolate camera-space z, you would have to apply perspective correction. But for NDC and window-space z this would be wrong: after all, the perspective transformation (as achieved by the perspective projection matrix and the perspective divide) still maps straight lines to straight lines, and flat triangles to flat triangles. That's why we use the hyperbolically distorted z values as depth in the first place. This is also a property that is exploited for the hierarchical depth test optimization. Have a look at my answer here for some more details, including a few diagrams.
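A toy numerical check of this claim (Python sketch with made-up near/far planes and vertex depths): linearly interpolating NDC z in window space gives exactly the same value as perspective-correct interpolation of camera-space z followed by the depth mapping.

```python
# Standard GL depth mapping for camera-space z < 0, clip-space w = -z:
#   ndc_z(z) = (f+n)/(f-n) + 2fn / ((f-n) * z)
n, f = 0.1, 100.0          # hypothetical near/far planes

def ndc_z(z_cam):
    """Map camera-space z (negative) to NDC z in [-1, 1]."""
    return (f + n) / (f - n) + (2.0 * f * n) / ((f - n) * z_cam)

z0, z1 = -2.0, -50.0       # camera-space depths of an edge's endpoints
w0, w1 = -z0, -z1          # clip-space w for this projection
t = 0.37                   # interpolation parameter in *window* space

# (a) linear interpolation of NDC z, as eq. (14.10) prescribes
lin = (1 - t) * ndc_z(z0) + t * ndc_z(z1)

# (b) perspective-correct interpolation of camera z, then the depth mapping
zc = ((1 - t) * z0 / w0 + t * z1 / w1) / ((1 - t) / w0 + t / w1)
pc = ndc_z(zc)

assert abs(lin - pc) < 1e-9   # both routes agree for any t
```

This works because ndc_z is of the form a + b/z, so the hyperbolic part of the depth mapping cancels exactly against the perspective divide, leaving a function that is affine across each flat primitive in window space.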

derhass
  • Have a look at the updated question. – Jiacheng Wang Nov 01 '22 at 13:41
  • @JiachengWang: It is not clear what you're trying to do here. The NDC z values are hyperbolically distorted (as I explained in more detail in the linked answer). If you want to visualize these, then the resulting image will mostly have the maximum value, as almost the whole depth range of your frustum will be mapped to the `[0.99..., 1]` range. If you want to visualize linear depth, you can either perspective-correct interpolate camera-space z, or you interpolate window-space z linearly and transform the result back to linear depth by applying the inverse of the depth mapping function. – derhass Nov 01 '22 at 16:16
  • Like you said, I interpolate the window-space z linearly, then I put the interpolated z value directly into the z-buffer. Is this wrong? – Jiacheng Wang Nov 02 '22 at 00:45
  • Repo: https://github.com/Games-Beginner/pa3 – Jiacheng Wang Nov 02 '22 at 01:22
  • @JiachengWang "Like you said, I interpolate the window-space z linearly, then I put the interpolated z value directly in z-buffer. Is this wrong?" No, that's exactly how the depth buffer is supposed to work. – derhass Nov 02 '22 at 19:28
  • No, you seem to have a completely different question than the one you initially posted. If you want debugging help on your software rasterizer's depth test implementation, you can open a specific question for that. But you should go the full way of including a [_minimal_, _complete_ example](https://stackoverflow.com/help/minimal-reproducible-example) showing the issue. Questions just posting a link to your full code base aren't helpful, and are actually off-topic on this site. – derhass Nov 02 '22 at 19:32
  • You said "almost the whole depth range of your frustum will be mapped to the [0.99..., 1] range", so I guess that perspective-corrected interpolation of camera-space z has more precision issues (e.g. z-fighting) than linear interpolation of window-space z, right? – Jiacheng Wang Nov 03 '22 at 03:15
  • I found the problem with my code! Just like I said in the last comment, it's the precision issue. When I convert all floats to doubles, the linear interpolation of screen-space z coordinates works fine! – Jiacheng Wang Nov 03 '22 at 05:46
  • So as you said, OpenGL uses linear interpolation of screen-space z coordinates; then what does OpenGL do to avoid the precision issue? Or, why doesn't OpenGL use perspective-corrected interpolation of the camera-space z coordinates (which would have fewer precision issues)? Maybe this is off topic, but I want to know about this. Thank you. – Jiacheng Wang Nov 03 '22 at 05:50
  • Floating point precision is usually enough for the depth interpolation. The default depth buffer format is even 24-bit integer. Unless you set some extreme values for your projection parameters, you won't run into issues here. So either your implementation is still buggy, your parameters are too extreme, or your implementation is numerically very unstable; you shouldn't need doubles. GPUs rely on linear z interpolation for all sorts of optimizations, especially hierarchical z-buffers. A floating-point depth buffer with reversed-z mapping can increase the usable precision by quite a big deal. – derhass Nov 03 '22 at 19:03
  • So without reversed-z (which is my case), float is not enough, right? – Jiacheng Wang Nov 04 '22 at 00:13
  • @JiachengWang What makes you think you're using reversed z? Nothing you have posted so far points into that direction. But that is also besides the point. Standard projection is also fine with single precision floats, especially for such a toy scene. There is absolutely no reason why you would need double precision for that, apart from potential bugs in your code. – derhass Nov 04 '22 at 00:39
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/249309/discussion-between-jiacheng-wang-and-derhass). – Jiacheng Wang Nov 04 '22 at 00:48
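The precision point raised in the comments can be sketched numerically: IEEE-754 float32 values are spaced very coarsely near 1.0 (where a standard depth mapping crowds most of the frustum) and extremely finely near 0.0 (where reversed-z places the far plane). A small Python sketch, using the stdlib `struct` module only to round through float32:

```python
import struct

def f32(x):
    """Round a Python float to the nearest IEEE-754 float32."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Spacing of representable float32 depth values near the ends of [0, 1]:
step_near_one  = 1.0 - f32(1.0 - 2.0 ** -24)   # gap below 1.0, ~6e-8
step_near_zero = f32(2.0 ** -149)              # smallest positive (subnormal) float32

# Near 1.0 the spacing matches a 24-bit integer buffer; near 0.0 it is
# astronomically finer, which is what a reversed-z mapping exploits.
assert step_near_one > step_near_zero
```

This is only an illustration of the representable-value spacing, not a full depth-precision analysis; the hyperbolic shape of the depth mapping also matters.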