
I have a problem with my ray generation that I do not understand: the direction of my ray is computed incorrectly. I ported this code to Vulkan from DirectX 11, where it works fine, so I was surprised I could not get it working:

// Unproject a point on the far plane (depth 1) back to world space.
vec4 farPos = inverseViewProj * vec4(screenPos, 1, 1);
farPos /= farPos.w;

// Cast the ray from the camera position towards the unprojected point.
r.Origin = camPos.xyz;
r.Direction = normalize(farPos.xyz - camPos.xyz);

Yet this code works perfectly:

// Unproject points on both the near plane (depth 0) and the far plane (depth 1).
vec4 nearPos = inverseViewProj * vec4(screenPos, 0, 1);
nearPos /= nearPos.w;
vec4 farPos = inverseViewProj * vec4(screenPos, 1, 1);
farPos /= farPos.w;

// Cast the ray along the segment between the two unprojected points.
r.Origin = camPos.xyz;
r.Direction = normalize(farPos.xyz - nearPos.xyz);

[Edit] Matrix and camera positions are set like this:

// Clip-space correction for Vulkan: flip Y and map Z from [-1, 1] to [0, 1].
// (GLM matrix constructors are column-major; each line below is one column.)
const glm::mat4 clip(1.0f,  0.0f, 0.0f, 0.0f,
                     0.0f, -1.0f, 0.0f, 0.0f,
                     0.0f,  0.0f, 0.5f, 0.0f,
                     0.0f,  0.0f, 0.5f, 1.0f);
projMatrix = clip * glm::perspectiveFov(FieldOfView, float(ViewWidth), float(ViewHeight), NearZ, FarZ);
viewMatrix = glm::inverse(glm::translate(glm::toMat4(Rotation), -Position));

buffer.inverseViewProjMatrix = glm::inverse(projMatrix * viewMatrix);
buffer.camPos = viewMatrix[3];

[Edit2] What I see on screen is correct if I start at the origin. However, if I move left, for example, it looks as if I am moving right; all my rays seem to be perturbed. In some cases, strafing the camera looks as if I am orbiting a different point in space. I assume the camera position does not coincide with the center of projection of my perspective matrix, yet I cannot figure out why.

I think I am misunderstanding something basic. What am I missing?

Selmar
  • At first glance, I don't see an obvious error up there. Could you maybe describe in a bit more detail how exactly "does not work" manifests in your case? I assume you use the same matrices successfully elsewhere, e.g., to draw geometry in the same application? Are you aware that in Vulkan, as opposed to Direct3D, the clip-space y-axis points down? – Michael Kenzel Oct 14 '18 at 20:53
  • Apart from that, I'd suggest that you use `nearPos` as your ray origin rather than the camera position (see the sketch after these comments). Doing so has the advantage that your code will work not only with centered perspective projections but with any projection that can be described by a 4×4 matrix. Furthermore, your rays will only hit objects that are actually in front of the near plane, just as if they had been clipped the way the rasterization pipeline would do it… – Michael Kenzel Oct 14 '18 at 20:58
  • I have not used the same matrices in another way, except for a fractal raytracer; there may very well be something incorrect. I am aware of the inversion, I'll update the question with the projection matrix code. If I use the nearPos, then calculating the ray direction as I do there wouldn't work, would it? I fancied avoiding the second matrix multiplication. On the other hand, it's probably more efficiently done without matrices anyway... – Selmar Oct 14 '18 at 21:01
  • As far as I know, in Vulkan, clip-space depth goes from 0 to 1. I correct for this in the projection matrix; see the updated question. Note that it made no difference with or without this correction. – Selmar Oct 14 '18 at 21:06
  • Ah ok, well, if you don't have to match output produced by the rasterization pipeline, then you could simply skip the matrix stuff altogether and compute the ray setup more directly, e.g., using the eye position and the basis vectors of the camera system. That would most likely be a little more efficient. The question, however, is whether a couple fewer floating-point operations in the one-time ray setup will really make a measurable difference when we're talking about raytracing in a fragment shader… – Michael Kenzel Oct 14 '18 at 21:07
  • The camera position in world space is not the translation of the view matrix; it is the translation of the inverse view matrix. Note `glm::inverse(viewMatrix) * glm::vec4(0,0,0,1);`, where `glm::vec4(0,0,0,1)` is the camera position in view space. – Rabbid76 Oct 14 '18 at 21:10
  • @Rabbid76 I will take a good look at that. Does that mean I am building my view matrix incorrectly? – Selmar Oct 14 '18 at 21:13
  • @MichaelKenzel, I completely agree with you! However, I want to understand what I am not getting right. (It is compute, by the way). – Selmar Oct 14 '18 at 21:13
  • @Selmar The view matrix is the matrix that transforms from world space to view space. It is the inverse of the matrix that contains the orientation and position of the camera (cf. `glm::lookAt`). Note it transforms from the world reference system to the camera reference system, i.e., a transformation by the inverse camera orientation and position. – Rabbid76 Oct 14 '18 at 21:15
  • Try `buffer.camPos = glm::inverse(viewMatrix)[3];` – Rabbid76 Oct 14 '18 at 21:20
  • I found the problem. Thanks a lot for pointing me in the right direction. I was building my view matrix incorrectly. Oops. I'll post an answer. – Selmar Oct 14 '18 at 21:44
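
A minimal sketch of the near-plane ray setup Michael Kenzel suggests in the comments, written with GLM for illustration (the GLSL version is the same without the `glm::` prefixes; `screenPos`, `inverseViewProj`, and `r` are assumed to be as in the question's snippets):

// Unproject both depth extremes; this works for any projection that is
// described by an invertible 4x4 matrix.
glm::vec4 nearPos = inverseViewProj * glm::vec4(screenPos, 0.0f, 1.0f);
nearPos /= nearPos.w;
glm::vec4 farPos = inverseViewProj * glm::vec4(screenPos, 1.0f, 1.0f);
farPos /= farPos.w;

// Start the ray on the near plane rather than at the camera position; rays
// then only hit geometry in front of the near plane, as after clipping.
r.Origin = glm::vec3(nearPos);
r.Direction = glm::normalize(glm::vec3(farPos - nearPos));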

1 Answer


Thanks to the comments, I have found the problem: I was building my view matrix incorrectly, in exactly the same way as in this post:

glm::inverse(glm::translate(glm::toMat4(Rotation), -Position));

This is equivalent to translating first and then rotating, which of course leads to something unwanted. In addition, Position was negated, and camPos was obtained from the last column of the view matrix instead of the inverse view matrix, which is wrong.
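
For reference, a minimal sketch of the corrected setup (one way to write it, assuming `Rotation` is a `glm::quat` and `Position` is the camera's world-space position, as in the question):

// Camera (world) matrix: rotate the camera first, then translate it into place.
const glm::mat4 camera = glm::translate(glm::mat4(1.0f), Position) * glm::toMat4(Rotation);

// The view matrix transforms from world space to view space, i.e. it is the
// inverse of the camera matrix.
viewMatrix = glm::inverse(camera);

// The world-space camera position is the translation column of the camera
// matrix (the inverse view matrix), not of the view matrix itself.
buffer.camPos = camera[3];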

It was not noticeable with my fractal raycaster simply because I never moved far away from the origin; that, and the fact that there is no point of reference in such an environment.

Selmar