I compared the speed of the FFT computed with several methods in Python and MATLAB. The results look a little odd to me, and I am not sure whether I set the comparison up correctly. The Python code is as follows:
from scipy import fft
import pyfftw
import numpy as np
from timeit import Timer
a = pyfftw.empty_aligned((256, 256), dtype='complex128')  # aligned, uninitialized buffer
# FFT using scipy.fft
t = Timer(lambda: fft.fft2(a))
print('Time with scipy.fft: %1.3f seconds' % t.timeit(number=1000))
# FFT using pyfftw's drop-in replacement for scipy.fft
t = Timer(lambda: pyfftw.interfaces.scipy_fft.fft2(a))
pyfftw.interfaces.cache.enable()  # cache plans; enabled before the timed calls run
print('Time with pyfftw improved scipy: %1.3f seconds' % t.timeit(number=1000))
# FFT using a pre-planned pyfftw.FFTW object
a = pyfftw.empty_aligned((256, 256), dtype='complex128')
b = pyfftw.empty_aligned((256, 256), dtype='complex128')
fft_object = pyfftw.FFTW(a, b)  # the plan is built once here and reused below
ar = np.random.randn(256, 256)
ai = np.random.randn(256, 256)
a[:] = ar + 1j*ai  # fill the input after planning, since planning may overwrite it
t = Timer(lambda: fft_object())
print('Time with pyfftw: %1.3f seconds' % t.timeit(number=1000))
with the outputs:
Time with scipy.fft: 1.416 seconds
Time with pyfftw improved scipy: 1.305 seconds
Time with pyfftw: 0.122 seconds
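In case it is relevant to whether the comparison is valid, a quick check along these lines (just a sketch, separate from the timed runs above) could confirm whether the three Python calls return the same result for the same input:

# Sketch of a sanity check (not part of the timings above): compare the three
# Python results on one shared, pre-filled input array.
import numpy as np
import pyfftw
from scipy import fft

a = pyfftw.empty_aligned((256, 256), dtype='complex128')
b = pyfftw.empty_aligned((256, 256), dtype='complex128')
fft_object = pyfftw.FFTW(a, b)  # plan first; planning may overwrite a
a[:] = np.random.randn(256, 256) + 1j*np.random.randn(256, 256)

ref = fft.fft2(a)
print(np.allclose(ref, pyfftw.interfaces.scipy_fft.fft2(a)))
print(np.allclose(ref, fft_object()))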
The MATLAB code is as follows:
a = zeros(256,256);
a = a + 1i*a;       % make the all-zero input complex
tic;
for n = 1:1000
    fft2(a);
end
toc;
which reports an elapsed time of 0.721065 seconds. The timings differ markedly between pyfftw and scipy, and between pyfftw and MATLAB. Did I conduct the comparison correctly, and why are the differences so large?
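If it matters, a variation along these lines is what I had in mind as a possible follow-up (just a sketch; the pre-filled input and the workers=-1 argument to scipy.fft.fft2 are my assumptions about a fairer setup, not something I have timed):

# Sketch of a possibly fairer setup (untimed): the same pre-filled input for
# every call, the plan cache enabled before timing, and scipy allowed to use
# all CPU cores via workers=-1.
from scipy import fft
import pyfftw
import numpy as np
from timeit import Timer

a = pyfftw.empty_aligned((256, 256), dtype='complex128')
a[:] = np.random.randn(256, 256) + 1j*np.random.randn(256, 256)
pyfftw.interfaces.cache.enable()

t = Timer(lambda: fft.fft2(a, workers=-1))
print('scipy.fft with workers=-1: %1.3f seconds' % t.timeit(number=1000))

t = Timer(lambda: pyfftw.interfaces.scipy_fft.fft2(a))
print('pyfftw scipy interface: %1.3f seconds' % t.timeit(number=1000))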