I'm currently building a basic raytracer and need to figure out which way of handling the intersections would be best performance-wise.
In the method where I check for an intersection between the ray and an object, I return a struct with the distance the ray traveled to the hit, the position vector of the hit, and the normal vector; if there is no intersection, the distance is set to -1 instead.
In the next step I have to find the shortest distance among all intersections while excluding the ones with a negative distance.
I even thought about having two structs, a minimal one for misses and the full one for hits, to reduce the amount of space needed, but I doubt this would really make a difference.
My options so far: first go over the array of intersections and exclude the ones with negative distances, then find the shortest distance among the remaining ones via a sorting algorithm (probably insertion sort, since it's quick to implement).
Or combine both into one algorithm and test in each step whether the distance is negative (roughly sketched after the code below).
#include <string.h> /* for memcpy */

typedef float Point3f[3];
typedef struct {
    float distance;
    Point3f point;
    Point3f normal;
} Intersection;
// Ray and Object stand in here for my actual parameter types
Intersection intersectObject(Ray ray, Object object) {
    Intersection intersection;
    //...
    if (hit) {
        intersection.distance = distance;
        // arrays can't be assigned with '=', so copy them
        memcpy(intersection.point, point, sizeof(Point3f));
        memcpy(intersection.normal, normal, sizeof(Point3f));
    } else {
        intersection.distance = -1.0f; // sentinel: no hit
    }
    return intersection;
}
//loop over screen pixels
Intersection* intersections;
int amountIntersections = 0;
//loop over all objects
//here I would handle the intersections
if (amountIntersections) {
    //cast additional rays
}
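The combined version (option 2) would essentially boil down to a single pass; here is a rough sketch, reusing intersections/amountIntersections from above:

// one pass: skip misses (negative distance) and remember the closest hit
int closest = -1; // index of the nearest valid hit, -1 while none found
for (int i = 0; i < amountIntersections; i++) {
    if (intersections[i].distance < 0.0f)
        continue; // miss, excluded
    if (closest == -1 || intersections[i].distance < intersections[closest].distance)
        closest = i;
}
if (closest != -1) {
    //cast additional rays from intersections[closest]
}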
I can't really figure out which way would be best, since this code will be called a lot of times. The intersection array will probably either be a dynamic array with amountIntersections as its length variable, or a fixed-size array sized for the maximum expected number of intersections, in which case misses would sit in it with negative distances.
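For the fixed-size variant I picture roughly the following; MAX_OBJECTS, objects, objectCount and ray are assumed names, not code I have yet:

#define MAX_OBJECTS 64 // assumed upper bound on intersections per ray

Intersection intersections[MAX_OBJECTS]; // reused for every ray, no malloc/free
int amountIntersections;

// per ray:
amountIntersections = 0;
for (int o = 0; o < objectCount; o++) {
    // misses stay in the array with distance -1 and get skipped later
    intersections[amountIntersections++] = intersectObject(ray, objects[o]);
}

Struct assignment copies the embedded Point3f arrays, so no memcpy is needed at this point.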