I was trying to measure the time taken to evaluate a mathematical function, both the NumPy way and the plain Python way. First, I measured how long each takes to evaluate the function over an array of points.
import numpy as np
import scipy as sp
import scipy.integrate as si

def func1(x):
    return np.piecewise(x, [x <= 2, x > 2],
                        [lambda x: 2*x + pow(2, x),
                         lambda x: -(pow(x, 2) + 2)])

def func2(x):
    if x <= 2:
        return 2*x + pow(2, x)
    else:
        return -(pow(x, 2) + 2)

data = np.linspace(-10, 10, 1000)

%timeit data1 = func1(data)

data2 = []
%timeit for i in range(np.size(data)): data2.append(func2(data[i]))
The result above was as I expected: the NumPy version was much faster than the plain Python loop.
35.2 µs ± 110 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
771 µs ± 10.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
However, something strange happens with scipy.integrate.quad: the integration is much faster with the plain Python function. Why is that?
%timeit si.quad(func1, -10, 10)
%timeit si.quad(func2, -10, 10)
5.59 ms ± 25.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
187 µs ± 39.3 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
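One way to probe this (a minimal sketch, not part of the original measurements — quad evaluates the integrand one scalar at a time, so the per-scalar-call cost is what matters; the point 0.5 and the repeat count are arbitrary choices of mine):

```python
import timeit
import numpy as np

def func1(x):
    # NumPy piecewise version, as in the question
    return np.piecewise(x, [x <= 2, x > 2],
                        [lambda x: 2*x + pow(2, x),
                         lambda x: -(pow(x, 2) + 2)])

def func2(x):
    # Plain Python version, as in the question
    if x <= 2:
        return 2*x + pow(2, x)
    else:
        return -(pow(x, 2) + 2)

n = 10_000
t1 = timeit.timeit(lambda: func1(0.5), number=n) / n  # per-call time, seconds
t2 = timeit.timeit(lambda: func2(0.5), number=n) / n

print(f"func1 per scalar call: {t1 * 1e6:.2f} µs")
print(f"func2 per scalar call: {t2 * 1e6:.2f} µs")
```

If np.piecewise carries a large fixed overhead per call, it would dominate when the input is a single scalar rather than a 1000-element array, which could account for the quad timings above.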