
I am following this course to learn computer graphics and write my first ray tracer.

I already have some visible results, but they seem to be too large.

The overall algorithm the course outlines is this:

Image Raytrace (Camera cam, Scene scene, int width, int height)
{
    Image image = new Image (width, height) ;
    for (int i = 0 ; i < height ; i++)
        for (int j = 0 ; j < width ; j++) {
            Ray ray = RayThruPixel (cam, i, j) ;
            Intersection hit = Intersect (ray, scene) ;
            image[i][j] = FindColor (hit) ;
        }
    return image ;
} 
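In the C++/GLM setting of the code below, that loop corresponds to roughly this (a sketch; Image, Intersection, and FindColor stand in for my actual types and functions, and note that i indexes rows, so it plays the role of y):

Image Raytrace(Camera* cam, Scene* scene, int width, int height) {
    Image image(width, height);
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            Ray* ray = RayThruPixel(cam, /* x= */ j, /* y= */ i);
            Intersection hit = Intersect(ray, scene);
            image[i][j] = FindColor(hit);
            delete ray;
        }
    }
    return image;
}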

I perform all calculations in camera space (where the camera sits at (0, 0, 0)). Thus RayThruPixel returns a ray in camera coordinates, Intersect returns an intersection point also in camera coordinates, and the image pixel array is a direct mapping of the intersection results.

The image below is the rendering of a sphere at world coordinates (0, 0, -40000) with radius 0.15, with the camera at world coordinates (0, 0, 2) looking towards (0, 0, 0). I would normally expect the sphere to be a lot smaller given its small radius and far-away Z coordinate.

[image: rendered sphere, filling far more of the frame than expected]

The same thing happens when rendering triangles. In the image below I have two triangles that form a square, but the result is far too zoomed in. The triangles have coordinates between -1 and 1, and the camera is looking from world coordinates (0, 0, 4).

[image: rendered red square, far too zoomed in]

This is what the square is expected to look like:

[image: expected rendering of the red square]

Here is the code snippet I use to determine the collision with the sphere. I'm not sure whether I should divide the radius by the z coordinate here; without the division, the circle is even larger:

Sphere* sphere = dynamic_cast<Sphere*>(object);
float t;
vec3 p0 = ray->origin;
vec3 p1 = ray->direction;
float a = glm::dot(p1, p1);
vec3 center2 = vec3(modelview * object->transform * glm::vec4(sphere->center, 1.0f)); // camera coords
float b = 2 * glm::dot(p1, (p0 - center2));
float radius = sphere->radius / center2.z; // the division in question; see above
float c = glm::dot((p0 - center2), (p0 - center2)) - radius * radius;
float D = b * b - 4 * a * c;
if (D > 0) {
    // two roots
    float sqrtD = glm::sqrt(D);
    float root1 = (-b + sqrtD) / (2 * a);
    float root2 = (-b - sqrtD) / (2 * a);
    if (root1 > 0 && root2 > 0) {
        t = glm::min(root1, root2);
        found = true;
    }
    else if (root2 < 0 && root1 >= 0) {
        t = root1;
        found = true;
    }
    else {
        // should not happen: implies that both roots are negative
    }
}
else if (D == 0) {
    // one root
    float root = -b / (2 * a);
    t = root;
    found = true;
}
else if (D < 0) {
    // no roots
    // continue;
}
if (found) {
    hitVector = p0 + p1 * t;
    hitNormal = glm::normalize(hitVector - center2);
}
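For comparison, here is the textbook quadratic form without the division (a self-contained sketch, assuming the ray and the sphere center are expressed in the same coordinate space):

#include <glm/glm.hpp>
using glm::vec3;

// standard ray-sphere test: solve |p0 + t * p1 - center|^2 = radius^2
// returns true and the nearest positive t when the ray hits the sphere
bool intersectSphere(const vec3& p0, const vec3& p1,
                     const vec3& center, float radius, float& t) {
    const vec3 oc = p0 - center;
    const float a = glm::dot(p1, p1);
    const float b = 2.0f * glm::dot(p1, oc);
    const float c = glm::dot(oc, oc) - radius * radius;
    const float D = b * b - 4.0f * a * c;
    if (D < 0.0f) return false;            // no real roots: the ray misses
    const float sqrtD = glm::sqrt(D);
    const float t0 = (-b - sqrtD) / (2.0f * a);
    const float t1 = (-b + sqrtD) / (2.0f * a);
    t = (t0 > 0.0f) ? t0 : t1;             // prefer the closer hit in front
    return t > 0.0f;
}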

Here I generate the ray going through the relevant pixel:

Ray* RayThruPixel(Camera* camera, int x, int y) {
    const vec3 a = eye - center;
    const vec3 b = up;
    const vec3 w = glm::normalize(a);
    const vec3 u = glm::normalize(glm::cross(b, w));
    const vec3 v = glm::cross(w, u);
    const float aspect = ((float)width) / height;
    float fovyrad = glm::radians(camera->fovy);
    const float fovx = 2 * atan(tan(fovyrad * 0.5) * aspect);
    const float alpha = tan(fovx * 0.5) * (x - (width * 0.5)) / (width * 0.5);
    const float beta = tan(fovyrad * 0.5) * ((height * 0.5) - y) / (height * 0.5);

    return new Ray(/* origin= */    vec3(modelview * vec4(eye, 1.0f)),
                   /* direction= */ glm::normalize(vec3(modelview * glm::normalize(vec4(alpha * u + beta * v - w, 1.0f)))));
}

And intersection with a triangle:

Triangle* triangle = dynamic_cast<Triangle*>(object);
// vertices in camera coords
vec3 vertex1 = vec3(modelview * object->transform * vec4(*vertices[triangle->index1], 1.0f)); 
vec3 vertex2 = vec3(modelview * object->transform * vec4(*vertices[triangle->index2], 1.0f));
vec3 vertex3 = vec3(modelview * object->transform * vec4(*vertices[triangle->index3], 1.0f));

vec3 N = glm::normalize(glm::cross(vertex2 - vertex1, vertex3 - vertex1));
float D = -glm::dot(N, vertex1);
float m = glm::dot(N, ray->direction);
if (glm::abs(m) < 1e-8f) {
    // no intersection because the ray is parallel to the plane
}
else {
    float t = -(glm::dot(N, ray->origin) + D) / m;
    if (t < 0) {
        // no intersection because the ray points away from the triangle's plane
    }
    else {
        vec3 Phit = ray->origin + t * ray->direction;
        // inside-outside test: Phit must lie on the inner side of all three edges
        vec3 edge1 = vertex2 - vertex1;
        vec3 edge2 = vertex3 - vertex2;
        vec3 edge3 = vertex1 - vertex3;
        vec3 c1 = Phit - vertex1;
        vec3 c2 = Phit - vertex2;
        vec3 c3 = Phit - vertex3;
        if (glm::dot(N, glm::cross(edge1, c1)) > 0
                && glm::dot(N, glm::cross(edge2, c2)) > 0
                && glm::dot(N, glm::cross(edge3, c3)) > 0) {
            found = true;
            hitVector = Phit;
            hitNormal = N;
        }
    }
}

Given that the output image is a circle, and that the same problem happens with triangles as well, my guess is that the problem isn't in the intersection logic itself but in the coordinate spaces or transformations. Could calculating everything in camera space be causing this?

    What is the FOV value you are setting to the camera? – codetiger Jan 03 '23 at 03:57
  • hope `fovx,fovyrad` are both in [rad]... You can cross check the ray generation with my [raytrace through 3D mesh](https://stackoverflow.com/a/45140313/2521214) ... no you do not divide by `z` because the ray itself is already in correct direction so its pixel position is not changing with distance – Spektre Jan 03 '23 at 09:25
  • @codetiger for the sphere I'm using 60 degrees and for the red square it's 30 degrees – user2566395 Jan 04 '23 at 02:30
  • @Spektre Yes, fovy is in degrees and fovyrad is in radians. Since fovx is inferred from fovyrad, it should be in radians as well. – user2566395 Jan 04 '23 at 02:30
  • Interestingly for the red square example, if I tweak the camera FOV to 107.5 degrees, I get a result very close to the desired result at 30 degrees. Could this be because in the original solution they shoot the rays through the middle of the pixels? I haven't updated my code to do that yet, but I wouldn't expect such a big difference just from that. – user2566395 Jan 04 '23 at 02:40
  • @user2566395 too lazy to analyze your code but I do not think its related to pixel center position ... I would guess its more likely issue with camera focal point position like you have it shifted towards viewing direction or not at all instead of backwards meaning your objects are visualy closer by focal length or twice as much ... – Spektre Jan 04 '23 at 09:55

1 Answer


I eventually figured it out myself. I first noticed the problem was here:

return new Ray(/* origin= */     vec3(modelview * vec4(eye, 1.0f)),
               /* direction= */  glm::normalize(vec3( modelview *
                                glm::normalize(vec4(alpha * u + beta * v - w, 1.0f)))));

When I removed the direction vector transformation, leaving it at just glm::normalize(alpha * u + beta * v - w), the problem disappeared: the square rendered correctly. I was prepared to accept that as the answer, although I wasn't completely sure why it worked.
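In code, the change amounts to this (a sketch; at this point the origin was still being transformed as before):

return new Ray(/* origin= */    vec3(modelview * vec4(eye, 1.0f)),
               /* direction= */ glm::normalize(alpha * u + beta * v - w));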

Then I noticed that after applying transformations to the objects, the camera was no longer positioned correctly, which makes sense: the rays were not being pointed in the right direction.

I realized that my entire approach of doing the calculations in camera space was wrong. If I still wanted to use this approach, the rays would have to be transformed differently, which would involve some math I wasn't ready to deal with.

I instead changed my approach to do the transformations and intersections in world space, using camera space only at the lighting stage. We have to use camera space at some point, since we want to actually look in the direction of the object we are rendering.
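With that change, the ray generation simplifies to something like this (a sketch reusing the same member variables as the question's RayThruPixel; it also drops the fovx detour, since tan(fovx / 2) equals tan(fovyrad / 2) * aspect, and casts the ray through the pixel center):

Ray* RayThruPixel(Camera* camera, int x, int y) {
    // camera basis in world space, built exactly as before
    const vec3 w = glm::normalize(eye - center);
    const vec3 u = glm::normalize(glm::cross(up, w));
    const vec3 v = glm::cross(w, u);
    const float aspect = ((float)width) / height;
    const float fovyrad = glm::radians(camera->fovy);
    // sample the center of the pixel at (x + 0.5, y + 0.5)
    const float alpha = glm::tan(fovyrad * 0.5f) * aspect * ((x + 0.5f) - width * 0.5f) / (width * 0.5f);
    const float beta  = glm::tan(fovyrad * 0.5f) * (height * 0.5f - (y + 0.5f)) / (height * 0.5f);
    // origin and direction both stay in world space: no modelview multiply
    return new Ray(eye, glm::normalize(alpha * u + beta * v - w));
}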
