I am trying to speed up a command line Python script I wrote. While profiling it, I noticed that the time spent loading the libraries imported at the top varies considerably. The modules I am currently importing are:
import os
import urllib2
import datetime
import dateutil.rrule as rr
import netCDF4
from pylab import *
import glob
import cookielib
import traceback
import base64
from scipy import interpolate
import dateutil.parser as dparser
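To see which of these dominates, I have been timing the suspect imports individually with something along these lines (just time.time() from the standard library; shown here only for the three modules I suspect are heaviest):

import time

t0 = time.time()
import netCDF4
print("netCDF4: %.2fs" % (time.time() - t0))

t0 = time.time()
from scipy import interpolate
print("scipy.interpolate: %.2fs" % (time.time() - t0))

t0 = time.time()
from pylab import *
print("pylab: %.2fs" % (time.time() - t0))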
When I time the full set of imports from the command line, the results vary widely, from a minimum of ~2.7 seconds (which already seems fairly long just to load the modules above) up to ~8 seconds:
$ time python -c "import mycode"
real 0m2.763s
user 0m0.239s
sys 0m0.104s
$ time python -c "import mycode"
real 0m8.852s
user 0m0.224s
sys 0m0.157s
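To get a better feel for the spread, I have also been repeating the measurement from Python itself, roughly like this (a quick sketch; after the first iteration the OS disk cache is warm, so the first run is the interesting one):

import subprocess
import time

timings = []
for _ in range(5):
    start = time.time()
    # same as running: time python -c "import mycode"
    subprocess.call(["python", "-c", "import mycode"])
    timings.append(time.time() - start)

print("min %.2fs, max %.2fs" % (min(timings), max(timings)))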
So, two questions:
- is there a way to reduce the time needed to perform these imports (for example by deferring the heavy ones, as sketched below)?
- if not, is there a way to make sure the import time stays near the lower end of the range above?
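For the first question, the only idea I have so far is to defer the heaviest imports (scipy, netCDF4, pylab) into the functions that actually use them, so the script starts quickly and only pays the cost when those features are needed; I am not sure whether that is considered good practice, hence the question. A rough sketch (interpolate_field is a made-up example, not my real code):

# cheap imports can stay at module level
import os
import glob
import datetime


def interpolate_field(x, y, new_x):
    # scipy is only loaded the first time this function is called;
    # later calls reuse the cached module from sys.modules
    from scipy import interpolate
    f = interpolate.interp1d(x, y)
    return f(new_x)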