I have to use the `curve_fit` function from `scipy.optimize` over a large set of data (5,000,000 fits). Basically I've created a 2D array: the first dimension is the number of fits to perform, the second dimension is the number of points used for each fit.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0, 1, 2, 3, 4])
for d in np.ndindex(data.shape[0]):
    try:
        popt, pcov = curve_fit(func, t, np.squeeze(data[d, :]), p0=[1000, 100])
    except RuntimeError:
        print("Error - curve_fit failed")
Multiprocessing can be used to speed up the full process, but it is still quite slow. Is there a way to use curve_fit in a "vectorized" manner?
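For context, here is one direction I've considered: if `func` happens to be a single exponential of the form A * exp(-t / tau) (that is an assumption here, consistent with p0=[1000, 100] but not stated above), the fit can be linearized by taking the log of the data, and a single `np.linalg.lstsq` call then solves all curves at once. This is only a sketch, and it changes the noise weighting compared to a true nonlinear fit:

```python
import numpy as np

# Hypothetical model: y = A * exp(-t / tau).
# Taking logs gives log(y) = log(A) - t / tau, which is linear in
# [log(A), 1/tau], so every curve can be solved in one lstsq call.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])

# Synthetic stand-in for the real (n_fits, n_points) data array.
rng = np.random.default_rng(0)
A_true = rng.uniform(500.0, 1500.0, size=10000)
tau_true = rng.uniform(50.0, 150.0, size=10000)
data = A_true[:, None] * np.exp(-t[None, :] / tau_true[:, None])

# Design matrix [1, -t] is shared by every curve.
M = np.column_stack([np.ones_like(t), -t])

# Solve M @ [log(A), 1/tau] = log(y) for all curves simultaneously;
# the right-hand side is (n_points, n_fits).
coeffs, *_ = np.linalg.lstsq(M, np.log(data).T, rcond=None)
A_fit = np.exp(coeffs[0])
tau_fit = 1.0 / coeffs[1]
```

For noisy data the log-transform over-weights small values, so this is best treated as a fast way to get good initial guesses (or a full replacement only when the noise is small).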