
I am rendering an object onto the screen. For further calculations, I need the visible XYZ coordinates and the normals, which are calculated in the vertex shader, for every pixel.

Is it possible to acquire those values?

The closest I could think of is to use offscreen rendering (How to render offscreen on OpenGL?) to render each coordinate separately (I would have to render six times, which is not very efficient). For that, I would have to split a float into byte values. Is it possible to use something like

(value & 0x0000FF00) >> 8

in the vertex shader?

Edit: My question was not clear.

Additional info: I want to retrieve the XYZ world coordinates and the corresponding normals for every pixel (e.g. X = -0.2, Y = 0.5, Z = 1.3; NX = -0.1, NY = 0.8, NZ = 0.1).

So far my pipeline is very similar to what "Boris" has posted in his answer.

themrx

3 Answers


What you're trying to do is render to a so-called Geometry Buffer (G-Buffer). It's a common step in Deferred Rendering, so I strongly suggest you look at DR tutorials to see how it's done. Google finds a lot, but this one was featured on the opengl.org news section: http://ogldev.atspace.co.uk/www/tutorial35/tutorial35.html

datenwolf

If you set varying variables to those values in the vertex shader, then in the fragment shader you'll get those values interpolated for each pixel.

Note that in modern OpenGL (GLSL 1.30 and later), in and out variables are used instead of varying.

Vasaka

In order to obtain the information on the CPU, you indeed have to render to textures using a Framebuffer Object (FBO). You can attach several texture images to a single FBO by using several GL_COLOR_ATTACHMENTi, and draw into all of them at once in your fragment shader. This is called Multiple Render Targets (MRT). A typical use is deferred shading, according to datenwolf and Wikipedia (I never did that personally, but I did use MRT, for GPU picking).

Vertex shader

First, you need to transfer the information from the vertex shader to the fragment shader.

#version 330

uniform mat4 pvmMatrix;
in vec3 position;
in vec3 normal;

out vec3 fragPosition; 
out vec3 fragNormal;        

void main()
{
    fragPosition = position; 
    fragNormal = normal; 
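    // Note: 'position' and 'normal' are passed through in object space
    // here; if your model has its own transform, multiply by the model
    // matrix (and its inverse-transpose for the normal) to get true
    // world-space values.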
    gl_Position = pvmMatrix * vec4(position, 1.0);
}

Fragment shader

Then, you can render this information in the fragment shader by specifying different output.

#version 330

in vec3 fragPosition; 
in vec3 fragNormal;    

layout(location=0)out vec3 mrtPosition;
layout(location=1)out vec3 mrtNormal;

void main()
{
    mrtPosition = fragPosition;
    mrtNormal = fragNormal;
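    // Note: interpolated normals are generally not unit length anymore;
    // you may want to write normalize(fragNormal) instead.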
}

layout(location=0) specifies that the render target will be GL_COLOR_ATTACHMENT0 (location=1 is for GL_COLOR_ATTACHMENT1, and so on). The name you give to the variable doesn't matter.

Creating the FBO

In your C++ code, you have to set up an FBO with the multiple render targets. For brevity, I will not detail everything that is common to any FBO; use the first link for this.
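
As a minimal reminder (a sketch, with illustrative names; see the linked question for the full setup), the common part boils down to creating and binding the FBO:

// Create the framebuffer object that will receive the render targets;
// all the attachment calls below operate on the currently bound FBO.
GLuint myFBO;
glGenFramebuffers(1, &myFBO);
glBindFramebuffer(GL_FRAMEBUFFER, myFBO);

What is important here is: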

Generate several textures:

// The texture for the position
GLuint texPosition;
glGenTextures(1, &texPosition);
glBindTexture(GL_TEXTURE_2D, texPosition);
// [...] do the glTexParameterf(...) that suits your needs
glTexImage2D(GL_TEXTURE_2D, 0, 
             GL_RGB32F, // this should match your fragment shader output. 
                        // I used vec3, hence a 3-component 32-bit float
                        // You can use something else for different information
             viewportWidth, viewportHeight, 0, 
             GL_RGB, GL_FLOAT, // Again, this should match
             0); 

// the texture for the normal
GLuint texNormal;
glGenTextures(1, &texNormal);
// [...] same as above

Note how you can use different datatypes for the information you want to store in your texture. Here, I used vec3 in the fragment shader, hence I have to use GL_RGB32F for the texture creation. Check the glTexImage2D documentation for an exhaustive list of internal formats you can use (for instance, I use unsigned int to perform GPU picking).
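
For instance, an unsigned-integer render target, such as one might use for GPU picking, could be created like this (a sketch, not part of the original setup; the matching fragment shader output would then be a uint):

// Hypothetical picking target: one 32-bit unsigned integer per pixel.
GLuint texPicking;
glGenTextures(1, &texPicking);
glBindTexture(GL_TEXTURE_2D, texPicking);
// Integer textures require GL_NEAREST filtering.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0,
             GL_R32UI,                        // one-component 32-bit unsigned int
             viewportWidth, viewportHeight, 0,
             GL_RED_INTEGER, GL_UNSIGNED_INT, // integer format/type must match
             0);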

Attach these textures to different color attachments of your FBO:

glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, 
                       GL_TEXTURE_2D, texPosition, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, 
                       GL_TEXTURE_2D, texNormal, 0);

Of course, you may want to add a depth attachment, and all other "classical" things you would do with a FBO.
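
For instance, a depth attachment can be added with a renderbuffer, which might look like this (a sketch, assuming the FBO is still bound and depthRbo is a name of your choosing):

// A depth buffer so that hidden surfaces are correctly discarded.
GLuint depthRbo;
glGenRenderbuffers(1, &depthRbo);
glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24,
                      viewportWidth, viewportHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRbo);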

Drawing

Next step, you have to draw your scene using the FBO.

// set rendering destination to FBO
glBindFramebuffer(GL_FRAMEBUFFER, myFBO);

// Set which GL_COLOR_ATTACHMENTi are the targets of the fragment shader
GLenum buffers_to_render[] = {GL_COLOR_ATTACHMENT0,GL_COLOR_ATTACHMENT1};
glDrawBuffers(2,buffers_to_render); // `2` because there are two buffers

// Draw your scene
drawScene();
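
Note that before calling drawScene(), it is good practice to check that the FBO is complete and to clear the buffers, so that pixels your object does not cover hold known values (a sketch):

// The FBO must be "complete" (all attachments consistent) to render into it.
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle the error: an attachment is missing or has an unsupported format
}
// Clear all attached color buffers and the depth buffer.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);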

At this point, all the information you are looking for is stored in the appropriate format (three-component 32-bit floats, no need to convert to byte values as in your question), in the form of textures on the GPU.

Retrieving the data on CPU

Finally, you can get the data in the textures back from the GPU to the CPU by using the glRead... functions.

float * positions = new float[3*width*height];
float * normals =   new float[3*width*height];
glBindFramebuffer(GL_FRAMEBUFFER, 0);         // unbind the FBO for drawing (and reading)
glBindFramebuffer(GL_READ_FRAMEBUFFER, myFBO); // re-bind the FBO for reading only
glReadBuffer(GL_COLOR_ATTACHMENT0);           // set which attachment to read from
glReadPixels(0, 0, width, height,             // read the pixels
             GL_RGB, GL_FLOAT,                // use the right format here also
             positions);
glReadBuffer(GL_COLOR_ATTACHMENT1);           // repeat for the normals
glReadPixels(0, 0, width, height,            
             GL_RGB, GL_FLOAT,           
             normals);
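
As a hypothetical usage example, the world position and normal of a given pixel (x, y) can then be looked up like this (keep in mind that glReadPixels returns rows bottom-up, starting at the lower-left corner):

// Each pixel occupies 3 consecutive floats; row y = 0 is the bottom row.
int idx = 3 * (y * width + x);
float posX = positions[idx + 0];
float posY = positions[idx + 1];
float posZ = positions[idx + 2];
float nrmX = normals[idx + 0];
float nrmY = normals[idx + 1];
float nrmZ = normals[idx + 2];

Don't forget to delete[] both arrays once you are done with them.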

And you should be good to go :-)

Boris Dalstein
  • this is pretty much how far I got. How do I return the fragPositionScene and fragNormal values now? I cannot just write them to the fragColor. I don't want to do any computations with them, just retrieve the values (e.g. X = -0.2, Y = 0.5, Z = 1.3; NX = -0.1, NY = 0.8, NZ = 0.1) – themrx Jun 28 '13 at 08:06
  • @themrx What do you mean by retrieve the value? On the CPU? – Boris Dalstein Jun 28 '13 at 08:16
  • Yes, I want to use the points as a point cloud to perform further calculations on the CPU – themrx Jun 28 '13 at 08:30
  • @themrx: I will answer how to do it (indeed, you have no other choice than using an FBO for offscreen rendering, but depending on your requirements there is no need for 6 of them), but may I first ask why you want to do this at all? Because I really don't see how this can be useful, and in that case this implementation choice is probably not the right way to tackle the high-level problem you are trying to solve. – Boris Dalstein Jun 28 '13 at 08:45
  • I want to verify an algorithm which we developed for use with Kinect Fusion data. Right now I aim to simulate the input and observe the output of our algorithm, which requires RGB, 3D points, and normals as input – themrx Jun 28 '13 at 09:09
  • @themrx: see my edit, I think this will give you all the information you need for a very good start; then look around on the internet for how to use FBOs in general and for the specifications of the different functions I present here. Also, the link given by datenwolf looks good :-) – Boris Dalstein Jun 28 '13 at 21:27