
Almost all the answers I've found involve multiplying a vector of normalised device coordinates by an inverse(projection * view) matrix; however, every example I've tried produces at least two invalid results:

  1. No variation in worldray.xy as ndc.z varies, preventing me from generating a direction vector from points on the near/far planes
  2. An invalid worldray.z

Can someone provide working generation of world ray from mouse coordinates?

Edit: I've added the code I'm using. If I use inverse, z is completely off from where I expect it to be; at least with affineInverse I get an accurate z for the near point.

mat4 projection = perspective(radians(fov), (Floating)width / (Floating)height, 0.0001f, 10000.f);
vec3 position = { 0, 0, -2 };
vec3 direction = { 0, 0, 1 };
vec3 center = position + direction;
mat4 view = lookAt(position, center, up);
vec2 ndc = {
 -1.0f + 2.0f * mouse.x / width,
  1.0f + -2.0f * mouse.y / height
};
vec4 near = { ndc.x, ndc.y, 0, 1 };
vec4 far = { ndc.x, ndc.y, -1, 1 };
mat4 invP = inverse(projection);
mat4 invV = inverse(view);
vec4 ray_eye_near = invP * near;
ray_eye_near.z = near.z;
vec4 ray_world_near = invV * ray_eye_near;
ray_world_near /= ray_world_near.w;
printf("ray_world_near x: %f, y: %f, z: %f, w: %f\n\r", ray_world_near.x, ray_world_near.y, ray_world_near.z, ray_world_near.w);
vec4 ray_eye_far = invP * far;
ray_eye_far.z = far.z;
vec4 ray_world_far = invV * ray_eye_far;
ray_world_far /= ray_world_far.w;
printf("ray_world_far  x: %f, y: %f, z: %f, w: %f\n\r", ray_world_far.x, ray_world_far.y, ray_world_far.z, ray_world_far.w);

Here is a screenshot of what I'm experiencing: world ray around triangle

Edit 2: These are the numbers I get if using inverse instead of affineInverse and dividing by w: world ray around triangle using inverse

  • Is this for 3D or 2D? – Irelia Apr 04 '22 at 03:35
  • This is for 3D. – Śaeun acreáť Apr 04 '22 at 03:39
  • 2
    see [Computing&Plotting 3D Points on Surface of a Mesh from mouse position](https://stackoverflow.com/a/71382169/2521214) and [GLSL 3D Mesh back raytracer](https://stackoverflow.com/a/45140313/2521214) see the vertex shader ... Also how is this question different to your previous one https://stackoverflow.com/q/71631833/2521214 ? 1. why would `world.xy` change with `ndc.z` ? the `z` from depth buffer affect the ray hit position but not the ray position nor direction... 2. what do you mean invalid `worldray.z`? – Spektre Apr 04 '22 at 06:48
  • 1
    Does this answer your question? [Usable World Ray from 2D Mouse Coordinates?](https://stackoverflow.com/questions/71631833/usable-world-ray-from-2d-mouse-coordinates) – Spektre Apr 04 '22 at 06:48

1 Answer


This is the function I use to generate a normalized ray from screen space into the scene:

vec3 rayCast(double xpos, double ypos, mat4 view, mat4 projection, unsigned SCR_WIDTH, unsigned SCR_HEIGHT) {
    // converts a position from the 2d xpos, ypos to a normalized 3d direction
    float x = (2.0f * xpos) / SCR_WIDTH - 1.0f;
    float y = 1.0f - (2.0f * ypos) / SCR_HEIGHT;
    // or (2.0f * ypos) / SCR_HEIGHT - 1.0f; depending on how you calculate ypos/lastY
    float z = 1.0f;
    vec3 ray_nds = vec3(x, y, z);
    vec4 ray_clip = vec4(ray_nds.x, ray_nds.y, -1.0f, 1.0f);
    // eye space to clip space is a multiply by projection, so
    // clip space to eye space is the inverse projection
    vec4 ray_eye = inverse(projection) * ray_clip;
    // keep only xy; set z to -1 (forwards) and w to 0 so this is a direction, not a point
    ray_eye = vec4(ray_eye.x, ray_eye.y, -1.0f, 0.0f);
    // world space to eye space is a multiply by view, so
    // eye space to world space is the inverse view
    vec4 inv_ray_wor = (inverse(view) * ray_eye);
    vec3 ray_wor = vec3(inv_ray_wor.x, inv_ray_wor.y, inv_ray_wor.z);
    ray_wor = normalize(ray_wor);
    return ray_wor;
}

For example,

// at the last update of the mouse cursor, `lastX, lastY`
vec3 rayMouse = rayCast(lastX, lastY, viewMatrix, projectionMatrix, SCR_WIDTH, SCR_HEIGHT);

This will give you back a normalized ray, from which you can get a parametric position along the ray into the scene with glm::vec3 worldPos = cameraPos + t * rayMouse. For example, when t=1, worldPos is 1 unit along the mouse ray into the scene; you can use a line rendering class to better see what is happening.
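For instance, a minimal sketch (not part of the original answer; it assumes a ground plane at y = 0) of using that parametric form to find the point under the cursor on the ground plane:

// intersect the mouse ray with the ground plane y = 0:
// solve cameraPos.y + t * rayMouse.y = 0 for t
if (glm::abs(rayMouse.y) > 1e-6f) {               // ray is not parallel to the plane
    float t = -cameraPos.y / rayMouse.y;
    if (t > 0.0f) {                               // hit is in front of the camera
        glm::vec3 hit = cameraPos + t * rayMouse; // world-space point under the cursor
        // e.g. place a dragged object at `hit`
    }
}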

Note: glm::unProject can be used to achieve the same result:

glm::vec3 worldPos = glm::unProject(glm::vec3(lastX, lastY, 1.0f),
                                    viewMatrix, projectionMatrix,
                                    glm::vec4(0, 0, SCR_WIDTH, SCR_HEIGHT));
glm::vec3 rayMouse = glm::normalize(worldPos-cameraPos);
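A variant of the same idea (a sketch, not from the original answer) is to unProject the cursor at two depths and take the difference, which gives the ray without referencing cameraPos; the same caveat about how lastY is measured applies:

glm::vec4 viewport(0.0f, 0.0f, (float)SCR_WIDTH, (float)SCR_HEIGHT);
// unproject the cursor at the near (winZ = 0) and far (winZ = 1) depths
glm::vec3 nearWorld = glm::unProject(glm::vec3(lastX, lastY, 0.0f), viewMatrix, projectionMatrix, viewport);
glm::vec3 farWorld  = glm::unProject(glm::vec3(lastX, lastY, 1.0f), viewMatrix, projectionMatrix, viewport);
glm::vec3 rayMouse  = glm::normalize(farWorld - nearWorld); // direction through the cursor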

Note: these functions cannot be used to get an exact world-space position of a fragment at the mouse coordinates. For that you have three options, AFAIK:

  1. You can use the above calculated ray with a ray-triangle intersection function to check every triangle for an intersection (see the first sketch below), or, if an approximate result is enough, you could try ray-sphere or ray-bounding-box intersections.
  2. Reconstruct the world-space position from the depth buffer in the fragment shader; this is most commonly used for world-space lighting if you have a deferred renderer.
  3. You could use glReadPixels to read the depth value at the mouse/texture coordinate, which you can convert back from NDC to world space (see the second sketch below).

Extra: 4. If you are doing object picking, you can get pixel-perfect GPU mouse picking by rendering each object in the scene with a unique colour tag into a buffer and using glReadPixels to identify the object from its tag.
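For option 1, a minimal Möller-Trumbore style ray-triangle test could look like the sketch below (not from the original answer; the epsilon and parameter names are my own). GLM also ships glm::intersectRayTriangle in <glm/gtx/intersect.hpp>, though its exact signature varies between versions.

// returns true and writes the hit distance along `dir` if the ray hits triangle (v0, v1, v2)
bool rayTriangle(const glm::vec3& origin, const glm::vec3& dir,
                 const glm::vec3& v0, const glm::vec3& v1, const glm::vec3& v2,
                 float& tOut) {
    const float EPS = 1e-7f;
    glm::vec3 e1 = v1 - v0, e2 = v2 - v0;
    glm::vec3 p = glm::cross(dir, e2);
    float det = glm::dot(e1, p);
    if (glm::abs(det) < EPS) return false;       // ray is parallel to the triangle plane
    float invDet = 1.0f / det;
    glm::vec3 s = origin - v0;
    float u = glm::dot(s, p) * invDet;
    if (u < 0.0f || u > 1.0f) return false;
    glm::vec3 q = glm::cross(s, e1);
    float v = glm::dot(dir, q) * invDet;
    if (v < 0.0f || u + v > 1.0f) return false;
    float t = glm::dot(e2, q) * invDet;
    if (t <= EPS) return false;                  // intersection is behind the ray origin
    tOut = t;
    return true;
}

You would call it as rayTriangle(cameraPos, rayMouse, a, b, c, t) for every triangle and keep the smallest positive t.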
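For option 3, a sketch of reading the depth under the cursor and unprojecting it (assuming the depth is still in the default framebuffer and that lastY is measured from the top of the window):

float depth = 0.0f;
// glReadPixels uses window coordinates with the origin at the bottom-left
int winX = (int)lastX;
int winY = (int)(SCR_HEIGHT - lastY - 1);
glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
// unproject (winX, winY, depth) back to a world-space position
glm::vec3 worldPos = glm::unProject(glm::vec3((float)winX, (float)winY, depth),
                                    viewMatrix, projectionMatrix,
                                    glm::vec4(0, 0, SCR_WIDTH, SCR_HEIGHT));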

I typically use option 1 for 3D math workflows and find it more than sufficient for things like object picking, dragging, drawing 3D lines, etc.

jackw11111