So I have this program that loops through about 2000+ data files, performing a Fourier transform on each, plotting the transform, and then saving the figure. The longer the program runs, the slower it seems to get. Is there any way to make it run faster or cleaner with a simple change to the code below?
Previously I had the Fourier transform defined as a function, but I read somewhere here that Python has high function-call overhead, so I did away with the function and now run everything inline. I also read that clf() keeps a steady log of previous figures that can grow quite large and slow things down when you loop through a lot of plots, so I've changed it to close(). Were these good changes as well?
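To check whether either change actually matters, it may help to profile a few iterations before and after; a minimal sketch using the standard library's cProfile (the work() function here is a hypothetical stand-in for one iteration of the real loop):

```python
import cProfile
import io
import pstats

def work():
    # Hypothetical stand-in for one iteration of the file loop.
    s = 0.0
    for i in range(100000):
        s += i ** 0.5
    return s

pr = cProfile.Profile()
pr.enable()
work()
pr.disable()

# Print the five most expensive calls by cumulative time.
buf = io.StringIO()
pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
print(report)
```

If function-call overhead or figure handling really is the bottleneck, it will show up near the top of the cumulative-time listing.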
from numpy import *
from pylab import *

for filename in filelist:
    t, f = loadtxt(filename, unpack=True)
    dt = t[1] - t[0]
    fou = absolute(fft.fft(f))
    frq = absolute(fft.fftfreq(len(t), dt))
    ymax = median(fou) * 30
    figure(figsize=(15, 7))
    plot(frq, fou, 'k')
    xlim(0, 400)
    ylim(0, ymax)
    iname = filename.replace('.dat', '.png')
    savefig(iname, dpi=80)
    close()
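One variation that might avoid the creeping slowdown is to create the figure once and only swap the line's data inside the loop, instead of building and destroying a figure per file. A minimal sketch using matplotlib's object-oriented API with the non-interactive Agg backend; the temporary .dat files here are fabricated stand-ins for the real data files, purely so the example is self-contained:

```python
import os
import tempfile

import numpy as np
import matplotlib
matplotlib.use("Agg")  # no GUI state accumulates between saves
import matplotlib.pyplot as plt

# --- fabricate a couple of sample .dat files (stand-ins for the real data) ---
tmpdir = tempfile.mkdtemp()
filelist = []
for i in range(2):
    t = np.linspace(0.0, 1.0, 1024)
    rng = np.random.default_rng(i)
    f = np.sin(2 * np.pi * 50 * (i + 1) * t) + 0.1 * rng.standard_normal(t.size)
    path = os.path.join(tmpdir, "sample%d.dat" % i)
    np.savetxt(path, np.column_stack([t, f]))
    filelist.append(path)

# --- create the figure and line ONCE; update only the data in the loop ---
fig, ax = plt.subplots(figsize=(15, 7))
line, = ax.plot([], [], 'k')
ax.set_xlim(0, 400)

for filename in filelist:
    t, f = np.loadtxt(filename, unpack=True)
    dt = t[1] - t[0]
    fou = np.absolute(np.fft.fft(f))
    frq = np.absolute(np.fft.fftfreq(len(t), dt))
    line.set_data(frq, fou)                 # replace the data, keep the artists
    ax.set_ylim(0, np.median(fou) * 30)
    fig.savefig(filename.replace('.dat', '.png'), dpi=80)

plt.close(fig)
```

Since no new figures are ever created, there is nothing for matplotlib's figure registry to accumulate, so memory use should stay flat regardless of how many files are processed.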