
I have a Metal shader (written as an SCNProgram for an ARKit app) that takes a depth map of the current scene captured from smoothedSceneDepth. I would like to use the captured depth information to discard parts of my virtual objects that are behind a real-world object. However, I am having trouble getting the expected fragment depth in my shader.

Here's my basic fragment shader:

#include <metal_stdlib>
using namespace metal;

struct ColorInOut {
    float4 position [[ position ]];
    float2 depthTexCoords;
};

fragment float4 fragmentShader(
    const ColorInOut in [[ stage_in ]],
    depth2d<float, access::sample> sceneDepthTexture [[ texture(1) ]]
) {
    constexpr sampler textureSampler(mag_filter::linear, min_filter::linear);
    
    // Closest = 0.0m, farthest = 5.0m (lidar max)
    // Seems to be in meters?
    const float lidarDepth = sceneDepthTexture.sample(textureSampler, in.depthTexCoords);
 
    float fragDepth = // ??? somehow get z distance to the current fragment in meters ???

    // Compare the distances
    const float zMargin = 0.1;
    if (lidarDepth < fragDepth - zMargin) {
        discard_fragment();
    }

    ...
}

My understanding was that `position.z` in a fragment shader should be in the range closest = 0 to farthest = 1. However, when I tried converting this back to real-world distances using the current camera planes, the results seem off:

const float zNear = 0.001;
const float zFar = 1000;

float fragDepth = in.position.z * (zFar - zNear);

When I debugged the shader using `return float4(fragDepth, 0, 0, 1);`, the red channel is brightest when I am closest to the object and then falls off as I back away. Even if I use `fragDepth = 1 - fragDepth`, the depth seems to differ from `lidarDepth`.

Here's a screenshot using `1 - fragDepth`:

[screenshot: depth visualization using 1 - fragDepth]

(I also tried using the mapping from this answer but wasn't able to get it working)
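
For reference, the kind of non-linear mapping I mean looks roughly like this (a hypothetical sketch assuming a standard forward-Z perspective projection that maps eye-space depth to Metal's [0, 1] range, near = 0 and far = 1; I don't know whether SceneKit's actual projection matches this):

// Hypothetical: linearize a [0, 1] depth-buffer value back to eye-space meters,
// assuming a standard forward-Z perspective projection (near -> 0, far -> 1).
float linearizeDepth(float depth, float zNear, float zFar) {
    return (zNear * zFar) / (zFar - depth * (zFar - zNear));
}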

So my questions are:

  • What coordinate system is `in.position.z` in?

  • How can I transform `in.position.z` into a depth value I can compare against the captured depth information I already have? (Or vice versa.)

Matt Bierner
  • « Even if I use `fragDepth = 1 - fragDepth`, the depth seems to differ from `lidarDepth` » how different are they? Do you have a screenshot? Have you tried using `SCNSceneBuffer`'s `inverseProjectionTransform`? – mnuages Dec 10 '20 at 11:48
  • There's a fairly hard edge where the depth goes from all black to middle red. I've added a screenshot of this. If `fragDepth` is in meters, this should be much more gradual. The core problem is that I don't know what coordinate space `in.position.z` is in though because I would not expect that `1 - fragDepth` would be required at all – Matt Bierner Dec 11 '20 at 03:06

1 Answer


OK, I haven't entirely convinced myself that the following solution is correct, but it seems to work for my very basic application:

In the vertex shader, I write out the position in eye space:

// in vertex shader
out.fragEyePosition = scn_node.modelViewTransform * float4(in.position, 1.0);
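
For context, here's a fuller sketch of the vertex function (a minimal sketch: `VertexInput`, the buffer index, and the `depthTexCoords` handling are placeholders for my setup; `scn_node` is the per-node buffer that SceneKit populates by field name for an SCNProgram):

#include <metal_stdlib>
#include <SceneKit/scn_metal>
using namespace metal;

// Per-node uniforms; SceneKit fills these in by field name for SCNPrograms.
struct NodeBuffer {
    float4x4 modelViewProjectionTransform;
    float4x4 modelViewTransform;
};

struct VertexInput {
    float3 position [[ attribute(SCNVertexSemanticPosition) ]];
};

struct ColorInOut {
    float4 position [[ position ]];
    float4 fragEyePosition; // eye-space position, interpolated per fragment
    float2 depthTexCoords;
};

vertex ColorInOut vertexShader(
    VertexInput in [[ stage_in ]],
    constant NodeBuffer& scn_node [[ buffer(1) ]]
) {
    ColorInOut out;
    out.position = scn_node.modelViewProjectionTransform * float4(in.position, 1.0);
    // Keep the eye-space position so the fragment stage can read -z in meters.
    out.fragEyePosition = scn_node.modelViewTransform * float4(in.position, 1.0);
    // depthTexCoords setup omitted; it depends on how the depth map is aligned.
    out.depthTexCoords = float2(0.0);
    return out;
}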

In the fragment shader, I can then use -in.fragEyePosition.z for the depth test:

// in fragment shader

float depth = -in.fragEyePosition.z;

const float zMargin = 0.1;
if (lidarDepth - depth < -zMargin) {
    discard_fragment();
}

...
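
Putting the pieces together, a minimal sketch of the whole fragment function under the same assumptions (the texture index and sampler settings come from my question; the return color is a placeholder):

fragment float4 fragmentShader(
    ColorInOut in [[ stage_in ]],
    depth2d<float, access::sample> sceneDepthTexture [[ texture(1) ]]
) {
    constexpr sampler textureSampler(mag_filter::linear, min_filter::linear);

    // LiDAR depth in meters at this fragment's screen position.
    const float lidarDepth = sceneDepthTexture.sample(textureSampler, in.depthTexCoords);

    // Eye space looks down -z, so -z is the forward distance from the camera,
    // in the same units (meters) as the LiDAR depth map.
    const float depth = -in.fragEyePosition.z;

    // Discard fragments that sit behind the real-world surface.
    const float zMargin = 0.1;
    if (lidarDepth - depth < -zMargin) {
        discard_fragment();
    }

    return float4(1.0, 0.0, 0.0, 1.0); // placeholder color
}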

I'd appreciate an explanation of why this works and why my previous attempt using `.position` did not (or a correction if the solution I have here has some issues).

Matt Bierner