
I'm writing a ray tracer for the Linux terminal in C++. As a first step I decided to describe the sphere; here are the class and the intersection algorithm:

class Sphere
{
public:
    float radius;
    vector3 center;
    
    bool is_intersect(vector3 camera, vector3 ray)
    {
        // vector from the camera to the sphere center
        vector3 v = center - camera;

        // magnitude of that vector
        float abs_v = v.length();

        // projection of v onto the ray direction (ray must be normalized in main)
        float pr_v_on_ray = ray.dot_product(v);

        // squared distance from the center to the ray's line (Pythagorean theorem)
        float l2 = abs_v * abs_v - pr_v_on_ray * pr_v_on_ray;

        return l2 - radius * radius <= 0;
    }
};

[image: algorithm diagram]
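
In formula form, this is all that is_intersect computes: with the ray direction normalized and v = center - camera, the squared distance from the sphere center to the ray's line follows from the Pythagorean theorem:

    l² = |v|² - (v · ray)²
    hit  ⇔  l² - radius² ≤ 0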

vector2 and vector3 are self-written types for 2D and 3D vectors with all the standard vector operations (normalization, magnitude, dot product, and so on).
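
In case it helps, here is a minimal sketch of the vector3 operations the snippets rely on (my real vector2/vector3 classes have more operations; this only mirrors the calls used here):

#include <cmath>

// Minimal sketch only; the real vector3 is my own type with more operations.
struct vector3
{
    float x, y, z;

    vector3(float x = 0, float y = 0, float z = 0) : x(x), y(y), z(z) {}

    vector3 operator-(const vector3& o) const { return vector3(x - o.x, y - o.y, z - o.z); }

    float dot_product(const vector3& o) const { return x * o.x + y * o.y + z * o.z; }

    float length() const { return std::sqrt(dot_product(*this)); }

    vector3 normalize() const   // returns a unit-length copy
    {
        float len = length();
        return vector3(x / len, y / len, z / len);
    }
};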

I create a sphere with center (0,0,0) and some radius, and everything works:

    // because terminal character cells are not square
    float distortion = (8.0f / 16.0f) * ((float)width / (float)height);

    Sphere sphere = {0.5, vector3(0,0,0)};

    for (int i = 0; i < width; ++i)
    {
        for (int j = 0; j < height; ++j)
        {
            vector2 xy = (vector2((float)i, (float)j) / vector2(width, height))
                * vector2(2,2) - vector2(1,1); // x,y ∈ [-1.0; 1.0]
            xy.x *= distortion;

            vector3 camera = vector3(0,0,1);
            // ray from the camera through this pixel
            vector3 ray = vector3(xy.x, xy.y, -1).normalize();

            if (sphere.is_intersect(camera, ray)) mvaddch(j, i, '@');
        }
    }

[image: result 1, sphere rendered correctly]

But when I change the coordinates of the center, distortion appears:

Sphere sphere = {0.5, vector3(-0.5,-0.5,0)};

[image: result 2, distorted sphere]

  1. Do I understand the ray "shooting" algorithm correctly? If I need to shoot a ray from point (1,2,3) to point (5,2,1), then the ray's coordinates are (5-1, 2-2, 1-3) = (4,0,-2)? (A small worked snippet follows these questions.)

I understand that ray.x and ray.y run over all the pixels on the screen, but what about ray.z?

  2. I don't understand how the camera's coordinates work. (x,y,z) is an offset relative to the origin, and if I change z the size of the sphere's projection changes, which works; but if I change x or y everything goes wrong. How can I look at my sphere from all 6 sides? (I will add rotation matrices once I understand how the camera works.)

  3. What causes the distortion when I change the coordinates of the center of the sphere?
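
To make question 1 concrete, this is the computation I mean (same vector3 type as above, numbers from the example):

    // ray from A = (1,2,3) toward B = (5,2,1): direction is B - A
    vector3 A(1, 2, 3);
    vector3 B(5, 2, 1);
    vector3 dir = B - A;               // (4, 0, -2)
    vector3 ray = dir.normalize();     // normalized before passing to is_intersect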

My final goal is a camera that rotates around the sphere. (I will add lighting later.)

Sorry for my bad English, thank you for your patience.

  • Looking at your ray direction calculation, you have a very wide (90 degrees) field of view. This type of off-center distortion can be expected in planar projections. It is similar to fish-eye lensing. Experiment with narrowing the field of view by either scaling the `[-1, 1]` planar range of your ray to be smaller, or scaling the `-1` z-component of the ray to be larger. Either way, you'll need to move the camera further from the object as you reduce the field of view, if you want it to appear the same relative size. – paddy Nov 30 '21 at 20:50
  • OK, I reduced the FOV a little by scaling vector2 xy (dividing by 2) and moving the camera back to (0,0,2), and it helps a bit. But I don't understand why ray.z is -1; what is it? To me, the ray from the camera to (x, y) should be (x - camera.x, y - camera.y, -camera.z). Is that wrong? – Владимир Лео Nov 30 '21 at 21:23
  • That is the z-direction of the ray. You have put your camera's focal point at (0,0,1) and are aiming it in the negative direction (_i.e._ back toward the origin looking toward -Z). That means the view frustum at the origin covers the rectangle defined by [-1, 1]. You are free to set up the frustum in any way you choose, but doing it this way happens to be a convenient construction. Once you have your ray direction, you can transform the camera origin anywhere you wish. Likewise you can transform the view direction to point anywhere. This is most easily done with a transformation matrix. – paddy Nov 30 '21 at 21:47
  • And if I change camera.x, for example to (2,0,0) to look at the sphere from another side, the whole picture breaks down; I guess the ray then needs to be (-camera.x, xy.x, xy.y)? – Владимир Лео Nov 30 '21 at 21:50
  • That's causing the view to be sheared. You should separate the notion of the camera's position from the notion of a ray focal point. Make the focal point _always_ in the center and behind the view plane. Then, separately, you can choose your camera's position. As you discovered, collapsing these two separate concepts causes problems. For a more versatile view construction, read about affine transformations and perspective projection matrices. – paddy Nov 30 '21 at 21:55
  • Oh, I'm beginning to understand. Thank you – Владимир Лео Dec 01 '21 at 00:38
  • Using the camera in ray-tracing intersection predicates is not a good idea; it is better to transform your ray into global world coordinates and use just the ray (for speed's sake). Also see [GLSL 3D Mesh back raytracer](https://stackoverflow.com/a/45140313/2521214), especially the vertex shader, where the transformation takes place. – Spektre Dec 01 '21 at 09:09
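
A minimal sketch of what paddy's comments suggest, assuming it replaces the camera/ray lines inside the inner loop of the question (fov_scale and cam_pos are illustrative names, not from the original code): the ray direction is built with a fixed focal point centered behind the view plane, and the camera position is chosen separately as the ray origin.

    float fov_scale = 0.5f;              // smaller value = narrower field of view
    vector3 cam_pos = vector3(0, 0, 2);  // camera position, chosen independently of the ray setup

    // focal point stays centered behind the view plane; only the ray origin moves
    vector3 ray = vector3(xy.x * fov_scale, xy.y * fov_scale, -1).normalize();

    if (sphere.is_intersect(cam_pos, ray)) mvaddch(j, i, '@');

    // To look at the sphere from another side, rotate this ray (and place cam_pos
    // accordingly) with a camera orientation / rotation matrix, rather than feeding
    // the camera offset into the ray components directly.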
