
My goal is to implement a way of selecting objects in a scene by mouse, for a game. My theory is that all I need to do is take the points (mouseX, mouseY, -1) and (mouseX, mouseY, 1) in Normalized Device Coordinates, 'un'transform them through the view matrix first (so both points still overlap from the camera's perspective), then 'un'transform them through the perspective matrix, so I end up with points I can use for a simple raycast against the models' bounding boxes. (I might have mixed up the order of 'un'transformations, correct me if I'm wrong!)
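
To make the idea concrete, here is a rough, untested sketch of what I have in mind. The names projection, view, screenWidth and screenHeight are placeholders for things I already have stored; it folds both 'un'transformations into a single multiply by glm::inverse(projection * view), which sidesteps the ordering question:

#include <glm/glm.hpp>

struct PickRay {
    glm::vec3 origin;
    glm::vec3 direction;
};

PickRay buildPickRay(float mouseX, float mouseY,
                     float screenWidth, float screenHeight,
                     const glm::mat4& projection, const glm::mat4& view)
{
    // Window coordinates -> NDC; y is flipped because window y grows downward.
    float ndcX = 2.0f * mouseX / screenWidth - 1.0f;
    float ndcY = 1.0f - 2.0f * mouseY / screenHeight;

    // One point on the near plane (z = -1) and one on the far plane (z = 1).
    glm::vec4 nearPoint(ndcX, ndcY, -1.0f, 1.0f);
    glm::vec4 farPoint (ndcX, ndcY,  1.0f, 1.0f);

    // Undo projection and view in one step, then do the perspective divide.
    glm::mat4 invViewProj = glm::inverse(projection * view);
    glm::vec4 nearWorld = invViewProj * nearPoint;
    glm::vec4 farWorld  = invViewProj * farPoint;
    nearWorld /= nearWorld.w;
    farWorld  /= farWorld.w;

    PickRay ray;
    ray.origin    = glm::vec3(nearWorld);
    ray.direction = glm::normalize(glm::vec3(farWorld) - glm::vec3(nearWorld));
    return ray;
}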

I believe I can convert in the opposite direction, from a world-space point to NDC, using this function (I am still writing the test scene, so I haven't had a chance to verify that it is correct yet):

glm::vec3 Camera::convertPerspectivePointToDeviceCoordinates(glm::vec3 worldPoint){
    // Transform the world-space point into clip space with the stored transform.
    glm::vec4 vector = m_finalViewTransform * glm::vec4(worldPoint, 1.0f);
    // Perspective divide: clip space -> normalized device coordinates.
    vector.x /= vector.w;
    vector.y /= vector.w;
    vector.z /= vector.w;
    return glm::vec3(vector.x, vector.y, vector.z);
}

But how to do the reverse is puzzling me... how would one do that? An answer that takes advantage of glm for matrix math would be appreciated, but I can work from just the math as well.

Also, I'm using OpenGL without the fixed-function pipeline, if that changes anything. I have the view and projection matrices stored, if needed.

    Just use the inverted transformation matrix. `glm::inverse(m_finalViewTransform)`. Maybe checkout this question: http://stackoverflow.com/questions/7692988/opengl-math-projecting-screen-space-to-world-space-coords-solved – dari Dec 07 '14 at 19:56
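
A minimal sketch of what dari's comment suggests, assuming m_finalViewTransform is the combined projection * view matrix (which the forward conversion above seems to imply). The function name is made up to mirror the one above; the inverse of the perspective divide is handled by dividing by w again after the multiply:

glm::vec3 Camera::convertDeviceCoordinatesToPerspectivePoint(glm::vec3 devicePoint){
    // Invert the combined projection * view transform (could be cached).
    glm::mat4 inverseTransform = glm::inverse(m_finalViewTransform);
    // Treat the NDC point as a clip-space point with w = 1 and transform it back.
    glm::vec4 vector = inverseTransform * glm::vec4(devicePoint, 1.0f);
    // The result carries a non-unit w, so divide it out to get the world-space point.
    vector /= vector.w;
    return glm::vec3(vector);
}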
