I have created a function to rotate a vector by a quaternion:
import numpy as np

def QVrotate_toLocal(Quaternion, Vector):
    # Quaternion: NumSamples x [w, x, y, z], e.g. shape (20000000, 4), values in range 0..1
    # Vector:     NumSamples x [x, y, z],    e.g. shape (20000000, 3), values in range -100..100
    # All numbers are float64.
    Quaternion[:, 2] *= -1                    # flip the sign of the y component
    x, y, z = QuatVectorRotate(Quaternion, Vector)
    norm = np.linalg.norm(Quaternion, axis=1)
    x *= (1 / norm)
    y *= (1 / norm)
    z *= (1 / norm)
    return np.stack([x, y, z], axis=1)
Everything within QuatVectorRotate is addition and multiplication of (20000000,1) NumPy arrays; a sketch of the expansion it follows is below.
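QuatVectorRotate itself is not shown; for context, it follows the standard Hamilton-product expansion of v' = q v q*, roughly like this (a sketch only; the intermediate names a..d are illustrative, chosen to match the lines I refer to below):

def QuatVectorRotate(Quaternion, Vector):
    # Sketch of the standard expansion of v' = q * (0, v) * conj(q),
    # done column-by-column so every step is elementwise add/multiply.
    w, qx, qy, qz = Quaternion.T
    vx, vy, vz = Vector.T
    # a..d: the Hamilton product q * (0, vx, vy, vz)
    a = -qx*vx - qy*vy - qz*vz
    b =  w*vx + qy*vz - qz*vy
    c =  w*vy + qz*vx - qx*vz
    d =  w*vz + qx*vy - qy*vx
    # multiply by conj(q) = (w, -qx, -qy, -qz) and keep the vector part
    x = -a*qx + b*w - c*qz + d*qy
    y = -a*qy + b*qz + c*w - d*qx
    z = -a*qz - b*qy + c*qx + d*w
    return x, y, z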
For the data I have (20 million samples each for the quaternion and vector arrays), every time I run the code the solution alternates between a (known) correct solution and a very incorrect one, never deviating from the pattern correct, incorrect, correct, incorrect, ...
This kind of numerical oscillation in unchanging code usually means an ill-conditioned matrix is being operated on, floating-point precision is being exhausted, or there is a silent memory overflow somewhere.
There is little linear algebra in my code, and I have checked that the norm line is stable from run to run. The problem seems to be happening somewhere in the lines a= ... to d= ... inside QuatVectorRotate.
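The check on the norm line was roughly this (illustrative, not my exact harness; Q and V are the 20-million-row arrays described above):

before = np.linalg.norm(Q, axis=1)
QVrotate_toLocal(Q, V)
after = np.linalg.norm(Q, axis=1)
print(np.array_equal(before, after))   # True every time: the norms are stable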
This led me to believe that, given these large arrays, I was running out of memory somewhere along the line. That could still be the issue, but I don't believe it is: I have 16 GB of memory, and usage never goes above 75% while the code runs. But again, I do not know enough about memory allocation to definitively rule this out. I also attempted to force garbage collection at the beginning and end of the function, to no avail.
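The garbage-collection attempt looked roughly like this (shown at the call site for brevity; in my actual attempt the gc.collect() calls sat at the top and bottom of the function body):

import gc

gc.collect()                      # force a collection before the hot call
result = QVrotate_toLocal(Q, V)   # Q, V as above
gc.collect()                      # and again afterwards; behavior unchanged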
Any ideas would be appreciated.
EDIT:
I just reproduced this issue with the following data, and the same behavior was observed.
Q = np.random.random((20000000, 4))
V = np.random.random((20000000, 3))
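Re-running the function on the same in-memory arrays alternates exactly as described above; this driver is a sketch of how I observe it:

out1 = QVrotate_toLocal(Q, V)
out2 = QVrotate_toLocal(Q, V)
out3 = QVrotate_toLocal(Q, V)
print(np.allclose(out1, out2))   # False: consecutive results disagree
print(np.allclose(out1, out3))   # True: every other result matches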