I am writing software in Python that lets the user run simulations involving simple additions, multiplications, and interpolation in an arbitrary number of dimensions. In the process, the function has to repeatedly (and sequentially) add, multiply, and interpolate on an N-dimensional grid. This has to happen in a loop, because each iteration depends on the results of the previous one (the iterations are not parallelizable). In the 1-dimensional case, it looks like this:
import numpy as np

# f is the interpolant built from the user-supplied grid and data (see below)
T = 1000000000
b = 0
for t in range(T):
    # each iteration depends on the previous one, so the loop is strictly sequential
    b = b * f(b) + np.random.randn()
    b = b + f(b)
where f in the code above is an interpolant created from the data and grid supplied by the user. The same interpolant is used in every iteration.
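For concreteness, in the N-dimensional case the interpolant would be built with scipy roughly as follows; the grid and values here are just placeholders standing in for the user-supplied data:

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder grid and values standing in for the user-supplied data;
# in general the grid can have an arbitrary number of dimensions.
x = np.linspace(0.0, 1.0, 50)
y = np.linspace(0.0, 1.0, 50)
values = np.random.rand(50, 50)

# The same interpolant is reused in every iteration.
f = RegularGridInterpolator((x, y), values)

# A single evaluation at a 2-D point, the analogue of f(b) above.
f(np.array([0.5, 0.5]))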
The first thing that comes to mind is to use numba to speed up the loop. However, to interpolate on an N-dimensional grid I have to use scipy.interpolate.RegularGridInterpolator (as in the snippet above), and using it prevents me from running numba in nopython mode. I know that there is this package that allows faster interpolations, but I'm not sure whether it can be used in nopython mode.
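To make the conflict concrete, here is a minimal sketch of the kind of jitted loop I have in mind (the function name and arguments are just for illustration); it cannot be compiled in nopython mode because numba does not know how to type the scipy interpolator object:

import numpy as np
from numba import njit

@njit  # nopython mode is the default for njit
def simulate(T, f):
    b = 0.0
    for t in range(T):
        b = b * f(b) + np.random.randn()
        b = b + f(b)
    return b

# Compilation is lazy, so the failure only shows up at the first call:
# simulate(1000, f) raises a TypingError because numba cannot determine
# a type for the RegularGridInterpolator object passed in as f.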
Also, I'm wondering if there are other options to get around this bottleneck. Any help is greatly appreciated.