Still unsolved: the issue lies in the paralaxtrace2 method, in how the Ray object is updated between propagating through the two lenses.
I'm trying to build a ray tracer in Python using two classes that inherit from a base class OpticalElement: SphericalRefraction (covering plane, convex and concave lenses) and OutputPlane (a single, infinitely large plane). Each of these classes has an intercept method (to calculate where a ray with a given point and direction intersects the lens) and a refract method (to calculate the ray's new direction vector from its most recent direction). Each optical element also has a propagate_ray method that appends the intercept() point to the ray as its newest point, and the refract() direction as its newest direction. The ray itself just stores 1D arrays with x, y, z elements, one list for points and one for directions, e.g.:
import numpy as np

class Ray:
    def __init__(self, p=[0.0, 0.0, 0.0], k=[0.0, 0.0, 0.0]):
        self._points = [np.array(p)]
        self._directions = [np.array(k) / np.sqrt(sum(n**2 for n in k))]
        self.checklength()

    def p(self):
        return self._points[-1]

    def k(self):
        return self._directions[-1]
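(My Ray.append method isn't shown above; for reference, a minimal version consistent with how propagate_ray uses it would look roughly like this. This is a hypothetical sketch, not my actual method.)

```python
import numpy as np

class Ray:
    """Minimal sketch of the Ray class, with an append method consistent
    with the usage ray.append(p, k2) in propagate_ray (assumed, since the
    original append is not shown)."""
    def __init__(self, p=(0.0, 0.0, 0.0), k=(0.0, 0.0, 1.0)):
        self._points = [np.array(p, dtype=float)]
        k = np.array(k, dtype=float)
        self._directions = [k / np.linalg.norm(k)]

    def p(self):
        return self._points[-1]

    def k(self):
        return self._directions[-1]

    def append(self, p, k):
        # record the new intercept point and the new (normalised) direction
        self._points.append(np.array(p, dtype=float))
        k = np.array(k, dtype=float)
        self._directions.append(k / np.linalg.norm(k))

r = Ray([0.1, 0.2, 0.0], [0.0, 0.0, 1.0])
r.append([0.1, 0.2, 50.0], [0.0, -0.01, 1.0])
```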
class SphericalRefraction(OpticalElement):
    def __init__(self, z0=0.0, c=0.0, n1=1.0, n2=1.0, ar=0.0):
        self.z0 = z0
        self.c = c
        self.n1 = n1
        self.n2 = n2
        self.ar = ar
        self.R = self.radius()
        self.s = self.surface()
        self.centre = self.centre()
    def intercept(self, ray):
        # ar_z = distance from the aperture-radius z intercept
        # to the centre of the sphere
        ar_z = np.sqrt(self.R**2 - self.ar**2)
        r = ray.p() - self.centre
        r_mag = np.sqrt(sum(n**2 for n in r))
        rdotk = np.dot(r, ray.k())
        if (rdotk**2 - r_mag**2 + self.R**2) < 0:
            return None
        else:
            l1 = -rdotk + np.sqrt(rdotk**2 - r_mag**2 + self.R**2)
            l2 = -rdotk - np.sqrt(rdotk**2 - r_mag**2 + self.R**2)
            lplane = (self.z0 - ray.p()[2]) / ray.k()[2]
            if self.s == "convex":
                if (rdotk**2 - r_mag**2 + self.R**2) == 0:
                    if self.centre[2] - ar_z >= (ray.p() + -rdotk*ray.k())[2]:
                        return ray.p() + -rdotk*ray.k()
    def refract(self, ray):
        n_unit = self.unitsurfacenormal(ray)
        k1 = ray.k()
        ref = self.n1 / self.n2
        ndotk1 = np.dot(n_unit, k1)
        if np.sin(np.arccos(ndotk1)) > (1 / ref):
            return None
        else:
            return ref*k1 - (ref*ndotk1 - np.sqrt(1 - (ref**2)*(1 - ndotk1**2)))*n_unit
    def propagate_ray(self, ray):
        if self.intercept(ray) is None or self.refract(ray) is None:
            return "Terminated"
        else:
            p = self.intercept(ray)
            k2 = self.refract(ray)
            ray.append(p, k2)
            return "Final Point: %s" % (ray.p()) + " and Final Direction: %s" % (ray.k())
When I pass two rays through one SphericalRefraction and one OutputPlane, I use this method:
def paralaxtrace(self, Ray, SphericalRefraction, OutputPlane):
    SphericalRefraction.propagate_ray(self)
    SphericalRefraction.propagate_ray(Ray)
    OutputPlane.propagate_ray(self)
    OutputPlane.propagate_ray(Ray)
    self.plotparalax(Ray)
I get a graph looking like this, for example:
I've implemented a version of this method that passes the rays through two SphericalRefraction objects and one OutputPlane, but for some reason the ray doesn't update between the two SphericalRefraction elements:
def paralaxtrace2(self, Ray, sr1, sr2, OutputPlane):
    sr1.propagate_ray(self)
    sr1.propagate_ray(Ray)
    sr2.propagate_ray(self)
    sr2.propagate_ray(Ray)
    OutputPlane.propagate_ray(self)
    OutputPlane.propagate_ray(Ray)
    self.plotparalax(Ray)
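One way to check whether the ray really is updated between the two lenses is to assert on (or print) the number of stored points after each propagate_ray call; every element should add exactly one point. A minimal self-contained sketch of that check, using a stand-in flat element rather than my actual classes:

```python
import numpy as np

class Ray:
    """Stripped-down Ray: just a growing list of points and directions."""
    def __init__(self, p, k):
        self._points = [np.array(p, dtype=float)]
        self._directions = [np.array(k, dtype=float)]

    def p(self):
        return self._points[-1]

    def k(self):
        return self._directions[-1]

    def append(self, p, k):
        self._points.append(np.array(p, dtype=float))
        self._directions.append(np.array(k, dtype=float))

class FlatPlane:
    """Stand-in optical element (hypothetical): intersects the ray with
    the plane z = z0 and leaves the direction unchanged."""
    def __init__(self, z0):
        self.z0 = z0

    def propagate_ray(self, ray):
        l = (self.z0 - ray.p()[2]) / ray.k()[2]
        ray.append(ray.p() + l * ray.k(), ray.k())

ray = Ray([0.1, 0.2, 0.0], [0.0, 0.0, 1.0])
for element in (FlatPlane(50), FlatPlane(60), FlatPlane(100)):
    element.propagate_ray(ray)
    print(len(ray._points), ray.p())  # point count should grow by 1 each time
```

If the point count stops growing after the second element in the real code, then either intercept/refract is returning None there (so propagate_ray returns "Terminated" without appending) or append itself isn't being reached.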
As you can see, the intercept/refract methods always use ray.p(), i.e. the newest point, but for some reason the new point and direction aren't actually appended when the ray intersects and refracts at the second spherical element. The graph looks exactly the same as the one above.
Am I passing the objects incorrectly? Is there another issue? If you need more of my code, please let me know; I've included the bare minimum needed to understand the problem.
Edit:
In the console:
>>> import raytracer as rt
>>> lense1 = rt.SphericalRefraction(50, .02, 1, 1.5168, 49.749)
>>> lense2 = rt.SphericalRefraction(60, .02, 1, 1.5168, 49.749)
>>> ray = rt.Ray([.1, .2, 0], [0, 0, 1])
>>> ray.paralaxtrace2(rt.Ray([-0.1, -0.2, 0], [0, 0, 1]), lense1, lense2, rt.OutputPlane(100))
x, y of 'ray' = [0.0, 50.000500002500019, 100.0] [0.20000000000000001, 0.20000000000000001, -0.13186017048586818]
x, y of passed ray: [0.0, 50.000500002500019, 100.0] [-0.20000000000000001, -0.20000000000000001, 0.13186017048586818]
For this, I get the graph above. Since the second convex lens is at z = 60, it should converge the rays even more; instead it looks like nothing happens.
Edit 2:
The issue doesn't seem to be the mutable default argument in Ray; I still get the same result. It seems to have more to do with adding another lens to the function as an argument: between the propagations through each lens, the coordinates aren't updated. Could this be a mutable-default-argument problem in the lens class?
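For reference, the mutable-default pitfall only bites when the shared default object itself is stored and mutated; since Ray.__init__ copies the default list into a fresh np.array, the shared default should be harmless there, which matches what I'm seeing. A minimal sketch contrasting the two cases (hypothetical classes, not my raytracer):

```python
import numpy as np

class Safe:
    def __init__(self, p=[0.0, 0.0, 0.0]):  # shared default list...
        self._points = [np.array(p)]        # ...but copied immediately by np.array

class Unsafe:
    def __init__(self, points=[]):          # shared default list...
        self.points = points                # ...stored directly: classic bug

a, b = Safe(), Safe()
a._points.append(np.array([1.0, 1.0, 1.0]))
print(len(b._points))  # 1 -- b is unaffected, a mutated its own instance list

u, v = Unsafe(), Unsafe()
u.points.append(1)
print(len(v.points))   # 1 -- v sees u's append, both hold the same default list
```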