It looks like calculating the cross product of an array of vectors explicitly is a lot faster than using np.cross. I've tried both vector-first and vector-last layouts, and neither seems to make a difference, even though that was proposed in an answer to a similar question. Am I using np.cross wrong, or is it just slower?
The explicit calculation takes about 60 ns per cross product on my laptop. Is that roughly as fast as it's going to get? If so, there doesn't seem to be any reason to reach for Cython, PyPy, or a custom ufunc yet.
I also see references to using einsum for this, but I don't really understand how to apply it and suspect it would not be faster; my rough attempt at that approach is sketched after the benchmark output below.
import numpy as np
import timeit

a = np.random.random(size=300000).reshape(100000,3) # vector last
b = np.random.random(size=300000).reshape(100000,3)
c, d = a.swapaxes(0, 1), b.swapaxes(0, 1) # vector first
def npcross_vlast(): return np.cross(a, b)                                    # vectors along the last axis (default)
def npcross_vfirst(): return np.cross(c, d, axisa=0, axisb=0)                 # inputs vector-first, output vector-last
def npcross_vfirst_axisc(): return np.cross(c, d, axisa=0, axisb=0, axisc=0)  # inputs and output vector-first
def explicitcross_vlast():
    e = np.zeros_like(a)
    e[:,0] = a[:,1]*b[:,2] - a[:,2]*b[:,1]
    e[:,1] = a[:,2]*b[:,0] - a[:,0]*b[:,2]
    e[:,2] = a[:,0]*b[:,1] - a[:,1]*b[:,0]
    return e
def explicitcross_vfirst():
    e = np.zeros_like(c)
    e[0,:] = c[1,:]*d[2,:] - c[2,:]*d[1,:]
    e[1,:] = c[2,:]*d[0,:] - c[0,:]*d[2,:]
    e[2,:] = c[0,:]*d[1,:] - c[1,:]*d[0,:]
    return e
print "explicit"
print timeit.timeit(explicitcross_vlast, number=10)
print timeit.timeit(explicitcross_vfirst, number=10)
print "np.cross"
print timeit.timeit(npcross_vlast, number=10)
print timeit.timeit(npcross_vfirst, number=10)
print timeit.timeit(npcross_vfirst_axisc, number=10)
print all([npcross_vlast()[7,i] == npcross_vfirst()[7,i] ==
           npcross_vfirst_axisc()[i,7] == explicitcross_vlast()[7,i] ==
           explicitcross_vfirst()[i,7] for i in range(3)]) # spot-check one element across all variants
The output is:

explicit
0.0582590103149
0.0560920238495
np.cross
0.399816989899
0.412983894348
0.411231040955
True
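
For completeness, here is my rough understanding of the einsum approach people refer to: contract both vector arrays against a Levi-Civita tensor. The eps array and the subscript string below are my own guess at how to write it, and I haven't checked whether it is actually any faster than the explicit version.

import numpy as np

# Levi-Civita tensor: +1 for even permutations of (0,1,2), -1 for odd permutations, 0 otherwise
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

def einsum_cross_vlast(a, b):
    # result[n, i] = sum over j, k of eps[i, j, k] * a[n, j] * b[n, k]
    return np.einsum('ijk,nj,nk->ni', eps, a, b)

If I have the index convention right, einsum_cross_vlast(a, b) should agree with explicitcross_vlast() up to floating-point rounding.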