
I understand how you would do this with a 2D buffer: draw two triangles forming a quad that fully covers the 2D buffer space, so that the fragment shader runs once for every pixel in the buffer.

Question: How would this work for a 3D buffer?

You could draw a pair of triangles for each cross-section (Z slice) of the 3D buffer. However, if you had a texture that was 1x1x256, that would mean drawing 256 * 2 triangles, two per slice, just to iterate over all of the pixels. I know this is an extreme case and there are ways of optimizing this solution. However, I feel like there is a more elegant solution that I am missing.
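For reference, the per-slice approach sketched above looks like this on the shader side: one full-screen quad is drawn per Z slice, with the slice index supplied as a uniform. This is a hedged sketch, not code from the original post; the names `volume` and `sliceIndex` are illustrative.

```glsl
#version 330 core
out vec4 FragColor;

uniform sampler3D volume;
uniform float sliceIndex; // which Z slice this draw call targets, in [0, depth-1]

void main()
{
    ivec3 size = textureSize(volume, 0);
    // gl_FragCoord.xy is already at pixel centers; center the slice with +0.5
    vec3 texCoord = vec3(gl_FragCoord.xy, sliceIndex + 0.5) / vec3(size);
    vec4 value = texture(volume, texCoord);
    FragColor = value; // per-texel work would go here
}
```

The application would bind the FBO to slice `i`, set `sliceIndex` to `i`, and draw the quad, repeating for each slice.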

What I am trying to do: I am making a 3D fluid solver that iterates through each pixel of a 3D texture and computes its velocity, density, etc. I am trying to do this in the fragment shader because I am targeting OpenGL 3.0, which does not support compute shaders.

#version 330 core
out vec4 FragColor;

uniform sampler3D volume;

void main()
{
    // compute the fluid density, velocity, and center of mass
    // (placeholder values; the actual solver math goes here)
    float density = 0.0;
    vec2  velocity = vec2(0.0);
    float centerOfMass = 0.0;

    // output the values to the 3D buffer in different color channels:
    FragColor = vec4(density, velocity.xy, centerOfMass);
}
capslpop
  • How do you plan to produce some form of result from this process? Like, you have a texture, each fragment reads a value from that texture and does... what? What does it do with it? Where does the result of that process go? Are you using Image Load/Store to write the value to some other texture? Are you writing the value to a fragment shader output? The answer to these questions matter. – Nicol Bolas Nov 12 '21 at 23:04
  • @NicolBolas Since I am using OpenGL 3.0, I don't have the option of using compute shaders, so I would be writing this as a fragment shader. Also, I added some background on what I am trying to accomplish. – capslpop Nov 13 '21 at 01:30
  • @capslpop that is done by using `for` loops for example see https://stackoverflow.com/a/34708022/2521214 in your case you would have 3 nested loops going through the coordinates range and fetching your texel from volume ... – Spektre Nov 13 '21 at 11:05

1 Answer


At some point in the fragment shader, you're going to write some statement of the form:

vec4 value = texture(my_texture, TexCoords);

Where TexCoords is the location in my_texture that maps to some particular value in the source texture. But that mapping is entirely up to you. Nobody's making you use gl_FragCoord.xy / textureSize(my_texture, 0).xy. You could just as easily use vec3(gl_FragCoord.x, Y_value, gl_FragCoord.y) / vec3(textureSize(my_texture, 0)), which puts the Y component of the fragment location in the Z dimension of the texture. Y_value in this case is a value passed in from outside that tells the shader which vertical slice of the 3D texture to use.
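As a concrete sketch of that remapping (the uniform names here are illustrative; `Y_value` would be set by the application once per slice):

```glsl
uniform sampler3D my_texture;
uniform float Y_value; // vertical slice index, set from the application

void main()
{
    // swap the fragment's Y into the texture's Z dimension;
    // +0.5 centers the lookup on the chosen slice
    vec3 coord = vec3(gl_FragCoord.x, Y_value + 0.5, gl_FragCoord.y)
               / vec3(textureSize(my_texture, 0));
    vec4 value = texture(my_texture, coord);
    // ... use value ...
}
```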

Of course, whatever mapping you use to fetch the data must also be used when you write the data. If you're writing via fragment shader outputs, that poses a problem. A 3D texture can only be attached to an FBO as either a single 2D slice or as a layered set of 2D slices, with these slices always being along the Z dimension of the image. So even if you try to read in slices along the Y dimension, it has to be written in Z slices. So you'd be moving around the location of the data, which makes this non-viable.

If you're using image load/store, then you have no problem. You can just write to the appropriate texel (indeed, you can read from it as an image using integer coordinates, so there's no need to divide by the texture's size).
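Image load/store requires GL 4.2 or the ARB_shader_image_load_store extension, so it is not available in plain GL 3.0, but for completeness the write side looks roughly like this (binding point, format, and names are illustrative):

```glsl
#version 420 core
layout(binding = 0, rgba32f) uniform image3D volume_img;
uniform int Y_value; // vertical slice index, set from the application

void main()
{
    // integer texel coordinates: no division by texture size needed
    ivec3 texel = ivec3(int(gl_FragCoord.x), Y_value, int(gl_FragCoord.y));
    vec4 value = imageLoad(volume_img, texel);
    // ... update value ...
    imageStore(volume_img, texel, value);
}
```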

Nicol Bolas