I have a script that imports a module, geometry, and this module slows my script down to an extreme degree: my script generates a bitmap, and for 16 million pixels it would take 100+ hours. Here is the problematic module:
'''
Created on 2 Feb. 2014
@author: gary
'''
# module name is: geometry.py
import numpy as np
import numpy.linalg as la
import tetgen

def barycentric_coords(vertices, point):
    T = (np.array(vertices[:-1]) - vertices[-1]).T
    v = np.dot(la.inv(T), np.array(point) - vertices[-1])
    v.resize(len(vertices))
    v[-1] = 1 - v.sum()
    return v

def tetgen_of_hull(points):
    tg_all = tetgen.TetGen(points)
    hull_i = set().union(*tg_all.hull)
    hull_points = [points[i] for i in hull_i]
    tg_hull = tetgen.TetGen(hull_points)
    return tg_hull, hull_i

def containing_tet(tg, point):
    for tet in tg.tets:
        verts = [tg.points[j] for j in tet]
        bcoords = barycentric_coords(verts, point)
        if (bcoords >= 0).all():
            return tet, bcoords
    return None, None
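For reference, here is a minimal, self-contained check of what barycentric_coords computes (the function is copied here so the snippet runs without tetgen); with the unit tetrahedron, T is the identity, so the result is just the point's own coordinates plus the complement:

```python
import numpy as np
import numpy.linalg as la

def barycentric_coords(vertices, point):
    # Same function as in geometry.py, copied so this runs without tetgen.
    T = (np.array(vertices[:-1]) - vertices[-1]).T
    v = np.dot(la.inv(T), np.array(point) - vertices[-1])
    v.resize(len(vertices))
    v[-1] = 1 - v.sum()
    return v

# Unit tetrahedron: T is the 3x3 identity, so the first three barycentric
# coordinates equal the point itself and the fourth is 1 minus their sum.
verts = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.], [0., 0., 0.]])
coords = barycentric_coords(verts, np.array([0.25, 0.25, 0.25]))
print(coords)  # → [0.25 0.25 0.25 0.25]
```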
Here are the stats cProfile gives for my script, which uses the functions above; this is clearly where the time is spent:
 ncalls  tottime  percall  cumtime  percall filename:lineno(function)
1291716   45.576    0.000  171.672    0.000 geometry.py:10(barycentric_coords)
6460649   31.617    0.000   31.617    0.000 {numpy.core.multiarray.array}
2583432   15.979    0.000   15.979    0.000 {method 'reduce' of 'numpy.ufunc' objects}
   2031   12.032    0.006  193.333    0.095 geometry.py:26(containing_tet)
1291716   10.944    0.000   58.323    0.000 linalg.py:244(solve)
1291716    7.075    0.000    7.075    0.000 {numpy.linalg.lapack_lite.dgesv}
1291716    5.750    0.000    9.865    0.000 linalg.py:99(_commonType)
2583432    5.659    0.000    5.659    0.000 {numpy.core.multiarray._fastCopyAndTranspose}
1291716    5.526    0.000    7.299    0.000 twodim_base.py:169(eye)
1291716    5.492    0.000   12.791    0.000 numeric.py:1884(identity)
So here is my question: numpy seems to be quite slow at computing barycentric coordinates this way. Would it be worth rewriting this part in C++, or is there some other way to optimize it in Python?
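One direction I have been considering is to stop calling barycentric_coords once per tetrahedron per point, and instead precompute the inverse of T for every tetrahedron once, then test a point against all tetrahedra in a single batched NumPy operation. A rough, untested sketch of the idea; the shapes assumed for the point/index arrays (mirroring tg.points and tg.tets) and all the variable names here are my own assumptions:

```python
import numpy as np

# Hypothetical stand-ins for tg.points (N, 3) and tg.tets (M, 4).
rng = np.random.default_rng(0)
points = rng.random((8, 3))
tets = np.array([[0, 1, 2, 3], [4, 5, 6, 7]])

# Precompute, once per tetrahedron, the inverse of the 3x3 matrix T
# whose columns are the edge vectors (v0 - v3, v1 - v3, v2 - v3).
verts = points[tets]                                    # (M, 4, 3)
T = (verts[:, :3] - verts[:, 3:4]).transpose(0, 2, 1)   # (M, 3, 3)
T_inv = np.linalg.inv(T)                                # batched inverse, (M, 3, 3)

def all_barycentric(p):
    """Barycentric coords of one point w.r.t. every tetrahedron: (M, 4)."""
    v = np.einsum('mij,mj->mi', T_inv, p - verts[:, 3])  # (M, 3)
    return np.concatenate([v, 1.0 - v.sum(axis=1, keepdims=True)], axis=1)

b = all_barycentric(np.array([0.2, 0.3, 0.1]))
inside = np.nonzero((b >= 0).all(axis=1))[0]  # indices of containing tets
```

This replaces roughly 1.3 million separate la.inv and np.array calls (the dominant entries in the profile) with one batched inverse up front and one einsum per point, at the cost of storing an (M, 3, 3) array of inverses.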