
I have a two-stage volume renderer that uses the DVR (direct volume rendering) method, but the depth buffer is not correct for the volume, because I use a box for the raycasting.

So I have a box, and I need to calculate correct depth values according to the volume data.

I have this in the vertex shader:

out float DEPTH;
...
// perspective divide: the vertex's depth in normalized device coordinates [-1, 1]
DEPTH = gl_Position.z / gl_Position.w;

And in the fragment shader:

// map NDC depth [-1, 1] to window-space depth with glDepthRange(0.0, 1.0)
gl_FragDepth = (1.0 - 0.0) * 0.5 * DEPTH + (1.0 + 0.0) * 0.5;

These values are correct for the box. My main question is: how can I add an offset to DEPTH to get correct depth values for the volume?

I also tried (distanceToAdd / 100 + DEPTH), but it is wrong.
Any ideas?

  • Are you using a linear or logarithmic depth buffer? –  Aug 03 '17 at 05:42
  • Most likely not a duplicate - this is about volume rendering with raycasting, not simple polygons. What I'm confused about is what you actually mean by depth. Since you are rendering a volume where points might be half-transparent, it is difficult to assign a depth to any point that is rendered on your cube. Do you mean the number of ray marching steps you need to do? Or do you mean the length of the segment you have to march along? Or the coordinates at which you enter and exit your cube? – Tobias Ribizel Aug 03 '17 at 05:45
  • @frank yes, it is linear – masoud khanlo Aug 03 '17 at 05:46
  • @TobiasRibizel In fact we have the ray termination position, so I can get the ray length, and I want to add the ray length to the depth buffer to make it correct – masoud khanlo Aug 03 '17 at 05:49
  • @masoudkhanlo Why do you want to do that though? For a volume with semi-transparent parts, there is no defined *depth* of a single pixel, because every pixel contains 'light' from many points along the ray (unless you are doing a [maximum intensity projection](https://en.wikipedia.org/wiki/Maximum_intensity_projection)). There are two possible solutions: Either turn z writing off altogether and draw the volumes last or use an arbitrary depth within the bounds of the cube (normally the minimum depth, i.e. the front). Note that two intersecting volumes would be incredibly difficult to render. – Tobias Ribizel Aug 03 '17 at 05:55
  • @Tobias I have to show intersections between volumes and other meshes, but volumes always behave like a box! – masoud khanlo Aug 03 '17 at 06:31
  • As I said, volume-volume intersection is quite complicated and I wouldn't try to implement it using shaders. If you want volume-mesh intersection, a simple solution would be to first render the solid objects and in a second pass render the cubes where you clamp the ray steps to the area in front of the solid meshes. Is this what you want? – Tobias Ribizel Aug 03 '17 at 06:49
  • @Tobias Yes, I only need volume-mesh intersection. But since I actually have a box with a texture on it, the intersection between the volume and a plane looks like a box-plane intersection – masoud khanlo Aug 03 '17 at 06:57

1 Answer


After some clarifications, the main question seems to be: *How can solid meshes and volumes be combined so that, where they intersect, the volume is only rendered up to the surface of the solid object?*

To answer this, we need to render the scene in two passes:

  1. Render all solid objects into a frame buffer object (FBO) with an attached depth buffer
  2. Render all volumes with modified step count and initial color.

I'll explain the details in the following sections.

Modified step count

Normally, you compute two points for your ray marching algorithm: the near and far intersections of the ray with your cube. This can be accomplished by transforming the ray x = o + t*d into cube space and taking the minimum value t_min and maximum value t_max of t. If you want solid objects to intersect your volume, you need to make sure that these points are adjusted correctly.
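
As an illustration, here is a minimal GLSL sketch of such an intersection test, assuming the cube is the unit cube [0, 1]³ in its own space and that rayOrigin and rayDir have already been transformed into cube space (these names are placeholders, not taken from your code):

// Slab-based ray/box intersection in cube space.
// Writes the entry and exit distances t_min and t_max along the ray
// and returns true if the unit cube [0,1]^3 is hit in front of the origin.
bool intersectCube(vec3 rayOrigin, vec3 rayDir, out float t_min, out float t_max)
{
    vec3 invDir = 1.0 / rayDir;                  // assumes no component of rayDir is exactly zero
    vec3 t0 = (vec3(0.0) - rayOrigin) * invDir;  // distances to the three "lower" slab planes
    vec3 t1 = (vec3(1.0) - rayOrigin) * invDir;  // distances to the three "upper" slab planes
    vec3 tNear = min(t0, t1);
    vec3 tFar  = max(t0, t1);
    t_min = max(max(tNear.x, tNear.y), tNear.z);
    t_max = min(min(tFar.x, tFar.y), tFar.z);
    return t_max >= max(t_min, 0.0);             // miss if the exit lies behind the entry or behind the eye
}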

  • The near point normally doesn't have to be adjusted, since the pixel would not be rendered if the front of the cube was behind the surface of the solid object.
  • The far point needs to be adjusted based on the depth value of the current pixel:
    • Compute the world space coordinates of the object's surface at the current pixel (by applying the inverse projection and view transform)
    • Compute the cube space coordinates p of the object's surface by applying the cube's inverse world transformation.
    • Compute t_surf such that p = o + t_surf * d, i.e. calculate the distance at which the ray reaches the solid surface
    • Take the minimum of t_max and t_surf and make that your new maximum distance t_max from the ray origin. This way, you ignore all parts of the volume that lie behind the solid surface (see the sketch after this list).
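
A minimal GLSL sketch of this far-point adjustment, assuming the depth attachment of the solid-geometry FBO is bound as solidDepthTex, the default [0, 1] depth range is used, and invViewProj and invCubeModel are the inverses of the projection*view matrix and the cube's model matrix (all of these names are assumptions, not taken from your code):

uniform sampler2D solidDepthTex;   // depth attachment of the solid-geometry FBO
uniform mat4 invViewProj;          // inverse of projection * view
uniform mat4 invCubeModel;         // inverse of the cube's model (world) matrix
uniform vec2 viewportSize;

// Clamp t_max so that the ray marching stops at the solid surface stored in the depth buffer.
float clampToSolidSurface(vec3 rayOriginCube, vec3 rayDirCube, float t_max)
{
    vec2 uv = gl_FragCoord.xy / viewportSize;
    float depth = texture(solidDepthTex, uv).r;            // window-space depth in [0, 1]

    // window depth -> NDC -> world space -> cube space
    vec4 ndc   = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
    vec4 world = invViewProj * ndc;
    world /= world.w;
    vec3 p = (invCubeModel * world).xyz;                   // surface point in cube space

    // p = o + t_surf * d  =>  t_surf = dot(p - o, d) / dot(d, d)
    float t_surf = dot(p - rayOriginCube, rayDirCube) / dot(rayDirCube, rayDirCube);

    return min(t_max, t_surf);
}

If nothing was rendered at a pixel, the sampled depth is 1.0 and the reconstructed point lies on the far plane, so (as long as the far plane is behind the cube) t_surf exceeds t_max and the clamp has no effect.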

Modified initial color

With the current modification, the volume is correctly culled at the solid surface, but we would still only see a black background behind it, not the solid surface's color shining through.

To solve this, simply stick to back-to-front compositing and set the initial value to the color of the surface (instead of just leaving it black as in the normal case). This way, if the volume is completely transparent at a certain point, you see the solid surface; if it is somewhat opaque, it gets mixed with the volume in front of the surface.
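
A rough GLSL sketch of this compositing loop, assuming the color attachment of the solid pass is bound as solidColorTex and that the volume lookup already yields a classified sample (emission in rgb, opacity in a); in a real DVR renderer you would apply your transfer function at that point instead (again, all names are placeholders):

uniform sampler2D solidColorTex;   // color attachment of the solid-geometry FBO
uniform sampler3D volumeTex;       // placeholder for the (already classified) volume data

// Back-to-front compositing along the ray, seeded with the solid surface color.
vec3 compositeBackToFront(vec3 rayOriginCube, vec3 rayDirCube,
                          float t_min, float t_max, int numSteps, vec2 uv)
{
    vec3 color = texture(solidColorTex, uv).rgb;   // initial value: the surface behind the volume
    float dt = (t_max - t_min) / float(numSteps);

    for (int i = numSteps - 1; i >= 0; --i)        // march from the far end towards the eye
    {
        vec3 pos = rayOriginCube + (t_min + (float(i) + 0.5) * dt) * rayDirCube;
        vec4 src = texture(volumeTex, pos);        // rgb = emission, a = opacity at this sample
        color = src.a * src.rgb + (1.0 - src.a) * color;
    }
    return color;
}

Equivalently, with front-to-back compositing you would accumulate color and opacity as usual and blend the surface color in at the very end, weighted by the remaining transparency (1 - accumulated alpha).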

Tobias Ribizel