I want to render a 2D image from a 3D model using the usual model, view and perspective projection transforms. However, for each pixel/fragment in the output image, I also want to store the index of the mesh vertex that is physically closest to the point where the ray from the camera centre through that pixel intersects the mesh.
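The rendering side itself is just the standard pipeline, i.e. something like the following (the uniform names uModel, uView and uProj are my own placeholders):

```glsl
#version 430 core

// Minimal vertex shader for the standard model/view/projection pipeline.
layout(location = 0) in vec3 aPosition;

uniform mat4 uModel;  // model -> world
uniform mat4 uView;   // world -> eye
uniform mat4 uProj;   // eye   -> clip (perspective)

void main() {
    gl_Position = uProj * uView * uModel * vec4(aPosition, 1.0);
}
```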
I have a good understanding of the maths involved and could build appropriate ray-tracing code 'longhand' to get this result, but I wanted to see whether it is possible to achieve in OpenGL, e.g. via a fragment shader.
I'm not an OpenGL expert, but my initial reading suggests one possible approach: set up a render target that supports integral values (to store the indices), pass the entire set of mesh coordinates to the fragment shader as a uniform, and then, for each fragment, back-transform gl_FragCoord to model space and search for the nearest vertex.
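Roughly, the fragment shader I have in mind would look something like this (untested; every name here is a placeholder of my own, and since 10,000 vertices won't fit in a plain uniform array I've assumed an SSBO instead):

```glsl
#version 430 core

// Model-space vertex positions for the whole mesh
// (vec4 rather than vec3 for std430 alignment).
layout(std430, binding = 0) readonly buffer MeshVerts {
    vec4 positions[];
};

uniform mat4 uInvMVP;    // inverse of (projection * view * model)
uniform vec4 uViewport;  // x, y, width, height, as passed to glViewport

// Integral render target: the framebuffer colour attachment would need an
// integer format such as GL_R32UI.
layout(location = 0) out uint outIndex;

void main() {
    // Window coordinates -> normalised device coordinates.
    vec4 ndc;
    ndc.xy = 2.0 * (gl_FragCoord.xy - uViewport.xy) / uViewport.zw - 1.0;
    ndc.z  = 2.0 * gl_FragCoord.z - 1.0;
    ndc.w  = 1.0;

    // NDC -> model space (perspective divide after the inverse transform).
    vec4 model = uInvMVP * ndc;
    vec3 p = model.xyz / model.w;

    // Brute-force nearest-vertex search over the entire mesh.
    uint  best  = 0u;
    float bestD = 1e30;
    uint  n     = uint(positions.length());
    for (uint i = 0u; i < n; ++i) {
        float d = distance(p, positions[i].xyz);
        if (d < bestD) {
            bestD = d;
            best  = i;
        }
    }
    outIndex = best;
}
```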
My concern is that this would perform hideously: my mesh has about 10,000 vertices, so every fragment would do a brute-force search over all of them, which at e.g. 1920x1080 is on the order of 2 x 10^10 distance tests per frame.
My question is: does this seem like a poor use case for OpenGL? If not, is my approach reasonable, and if it isn't, what would you suggest instead?
Edit: While the indicated answer does contain the kernel of a solution to this question, it is not in any way a duplicate; they are different questions with different answers that merely share a common element (ray tracing). Someone searching for an answer to this question is highly unlikely to find the proposed duplicate.