
I am trying to calculate depth relative to the object.
Here is a good solution for retrieving depth relative to the camera: Depth as distance to camera plane in GLSL

varying float distToCamera;

void main()
{
    vec4 cs_position = gl_ModelViewMatrix * gl_Vertex;
    distToCamera = -cs_position.z;                      // eye-space depth (distance to the camera plane)
    gl_Position = gl_ProjectionMatrix * cs_position;
}

With this example the depth is relative to the camera.
But I would like to get the depth relative to the object: the same depth values whether I am near the object or far from it.

Here is an example of what I am trying to achieve. On the left you can see that the depth is relative to the camera. On the right, even when the camera moves back from the object, the depth remains the same because it depends on the object.

[GIF: left, depth relative to the camera; right, depth unchanged as the camera moves back from the object]

  • Very vaguely, I get the impression that what you are actually thinking of is *linear* depth, where the distribution of precision doesn't vary with distance. You can linearize the perspective depth buffer pretty simply, or just use the `w` coordinate as was done many years ago before perspective Z-buffering became the de-facto standard. – Andon M. Coleman Oct 12 '14 at 20:13
  • The formulation is odd - "same depth and value if I am near from the object or if I am far" - sounds like a constant value. Maybe if you wrote what you need it for. Btw you do not need to split the modelview and projection matrices - gl_Position.w ends up containing the depth from the camera. – camenomizoratojoakizunewake Oct 12 '14 at 21:25
  • Please do not double-post (http://gamedev.stackexchange.com/questions/85787/shader-calculate-normalized-depth) – Kromster Oct 13 '14 at 05:45
  • Sorry if the formulation is quite misstated. I made a small gif to illustrate what I am trying to achieve. – MaT Oct 13 '14 at 06:39

2 Answers


The distance from a fragment to the camera always depends on the position of the camera. It is simply impossible to make the value constant if it is not a constant.

I'd advise you to rework the requirements for what you are developing. You have to clarify for yourself what depth you need; it seems that it should not depend on the camera position.

I would advise you to introduce a plane that serves as a reference for calculating the depth. The parameters specifying the position of the plane in space can be passed to the shader as uniforms.

How do you calculate the distance from a point to the plane? The depth can be calculated as the length of the perpendicular dropped from the fragment onto the plane. Say we have an arbitrary point p that lies on the plane and a normal n to that plane. The distance would be:

d = dot(f, n) - dot(p, n)

where f is the position of the fragment.

A simple fragment shader that does this calculation is listed below:

uniform vec3 u_point;       // any point lying on the reference plane
uniform vec3 u_normal;      // unit normal of the reference plane
uniform float u_unitLength; // distance that maps to one unit of brightness

varying vec4 v_worldPosition;   // fragment position, passed from the vertex shader

void main( void )
{
    float refPoint = dot(u_point, u_normal);
    float testPoint = dot(v_worldPosition.xyz, u_normal);
    float depth = testPoint - refPoint;        // signed distance from the plane
    vec3 color = vec3(depth / u_unitLength);   // encode the depth as grayscale
    gl_FragColor = vec4( color, 1.0 );
}

Be aware that you need to pass the fragment position as the varying v_worldPosition from the vertex shader to the fragment shader. I've written a simple example for demonstration.
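
For illustration, here is a minimal vertex-shader sketch for that varying. It assumes the application supplies the object-to-world matrix as a uniform named u_modelMatrix (an assumption of this sketch, not part of the answer), since the built-in gl_ModelViewMatrix already includes the camera transform:

uniform mat4 u_modelMatrix;   // assumed object-to-world matrix supplied by the application

varying vec4 v_worldPosition;

void main( void )
{
    v_worldPosition = u_modelMatrix * gl_Vertex;             // world-space position for the fragment shader
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;  // regular clip-space output
}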

Some optimizations are possible. Instead of computing dot(p, n) in the shader, you can precompute it. For more details read this.

So it is better to pass the coefficients of the plane equation in general form (ax + by + cz + d = 0), rather than in point-normal form. The optimized shader would be:

uniform vec4 u_abcd;        // plane equation coefficients (a, b, c, d)
uniform float u_unitLength; // distance that maps to one unit of brightness

varying vec4 v_worldPosition;   // fragment position, passed from the vertex shader

void main( void )
{
    float depth = dot(u_abcd.xyz, v_worldPosition.xyz) + u_abcd.w;  // signed distance from the plane
    vec3 color = vec3(depth / u_unitLength);
    gl_FragColor = vec4( color, 1.0 );
}

Here is an example using the optimized shader.
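
For reference, here is a small sketch (my addition, in GLSL syntax) of how the general-form coefficients follow from the point-normal inputs; it is meant to be evaluated once in application code and uploaded as u_abcd, not run per fragment:

// (a, b, c) = n and d = -dot(p, n), so that dot(u_abcd.xyz, f) + u_abcd.w = dot(f, n) - dot(p, n)
vec4 planeCoefficients(vec3 p, vec3 n)
{
    return vec4(n, -dot(p, n));
}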

You can rotate the plane with your camera while keeping it at the same distance from the object. There is an example. The result is just what you demonstrated in your gif animation.

  • I like the plane idea; unfortunately you are changing the uniforms outside the shader. Is it possible to adapt the normal value according to the camera inside the shader? – MaT Nov 06 '14 at 13:50

I believe what you're after is a depth relative to the object, rather than to the camera. To find an eye-space Z distance from the object's origin...

vec4 esVert = gl_ModelViewMatrix * gl_Vertex;
vec4 esObjOrigin = gl_ModelViewMatrix * vec4(0.0, 0.0, 0.0, 1.0);
distToCamera = -esVert.z;                       // eye-space depth of the vertex
distToOrigin = -esObjOrigin.z;                  // eye-space depth of the object's origin
originToVertexZ = distToOrigin - distToCamera;  // depth relative to the object

Now, originToVertexZ is relative to the object's position and not the camera so it won't change unless you rotate the camera.


The eye-space origin distance, esObjOrigin.z, could be precomputed and passed in (avoiding the extra matrix multiply). That is, compute -(modelViewMatrix * vec4(0, 0, 0, 1)).z in the application code and pass it in as a uniform variable.
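
A sketch of how that might look, assuming the precomputed value is uploaded as a uniform named u_distToOrigin (the name is illustrative, not from the answer):

uniform float u_distToOrigin;   // assumed uniform: -(modelViewMatrix * vec4(0,0,0,1)).z computed by the application

varying float originToVertexZ;

void main( void )
{
    vec4 esVert = gl_ModelViewMatrix * gl_Vertex;
    originToVertexZ = u_distToOrigin - (-esVert.z);  // depth relative to the object's origin
    gl_Position = gl_ProjectionMatrix * esVert;
}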

Depending on the application, you might want to precompute a bounding sphere, providing a known upper and lower bound for originToVertexZ. For example, if you always want the smooth zero-to-one transition shown in your example, find bounds = max(length(vertex)) over all vertices in the object and then compute normalizedZ = 0.5 * originToVertexZ / bounds + 0.5.
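
A fragment-shader sketch of that normalization, assuming u_bounds holds the precomputed max(length(vertex)) and originToVertexZ is the varying from the vertex shader above:

uniform float u_bounds;          // assumed uniform: precomputed bounding radius of the object

varying float originToVertexZ;

void main( void )
{
    float normalizedZ = 0.5 * originToVertexZ / u_bounds + 0.5;  // remap [-bounds, +bounds] to [0, 1]
    gl_FragColor = vec4( vec3(normalizedZ), 1.0 );
}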

  • This is exactly what I want, thank you very much. But could you elaborate. _The eye-space origin, esObjOrigin.z, could be precomputed and passed in. Depending on the application you might want to precompute a bounding sphere, providing a known upper and lower bound for originToVertexZ._ – MaT Oct 13 '14 at 11:33
  • @MaT: Please update your question to clarify the problem. As it is written now, it asks a different thing than the one you accept as "This is exactly what I want". – Kromster Oct 15 '14 at 04:27
  • @KromStern I agree in this case, but on occasion leaving a question worded strangely can help others who don't know the exact terms to search for. – jozxyqk Oct 15 '14 at 04:36
  • @jozxyqk: With the OP, it is not about terminology (which indeed may be expressed in common words more commonly). OP description is just very vague and misleading. – Kromster Oct 15 '14 at 04:44
  • @KromStern I'll reformulate my question but I first tried to formulate it with my own words as I didn't know the right terminology. – MaT Oct 15 '14 at 06:38
  • @MaT: It's not about terminology, it all can be expressed in common words. – Kromster Oct 15 '14 at 07:54