It depends on how complex the modules in question are, how many you're importing, etc.
In local tests, I copied collections.py to my local directory, then tested with:
time python -B -E -S -c 'import collections; print(collections)'
to nail down the rough end-to-end cost of the collections module alone without cached bytecode, then ran the same command without -B (so it would create and use bytecode caches). The difference was around 5 ms: 36 ms with -B, 31 ms without -B on the second and subsequent runs.
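If you want to reproduce that comparison programmatically rather than with the shell's time, a sketch like the following works (my own illustration, not part of the original measurements; it assumes the copied collections.py sits in the current directory, so the child interpreter picks it up and writes its bytecode cache to ./__pycache__):

import shutil
import subprocess
import sys
import time

def best_import_time(flags, runs=5):
    # Launch a fresh interpreter per run and keep the fastest wall-clock
    # time, mirroring `time python <flags> -E -S -c 'import collections'`.
    best = float('inf')
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run([sys.executable, *flags, '-E', '-S', '-c',
                        'import collections'], check=True)
        best = min(best, time.perf_counter() - start)
    return best

shutil.rmtree('__pycache__', ignore_errors=True)  # make sure we start cold
no_cache = best_import_time(['-B'])  # -B: never writes .pyc, and the cache
                                     # was just cleared, so every run compiles
best_import_time([], runs=1)         # one warm-up run to populate __pycache__
cached = best_import_time([])        # subsequent runs reuse the cached bytecode
print(f'-B (no bytecode cache): {no_cache * 1000:.1f} ms')
print(f'with bytecode cache:    {cached * 1000:.1f} ms')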
For a more fine-grained test that compiles the source explicitly, without invoking any of the other import machinery, IPython's %timeit magic gives:
>>> with open('collections.py') as f: data = f.read()
...
>>> %timeit -r5 compile(data, 'collections.py', 'exec', 0, 1)
100 loops, best of 5: 2.9 ms per loop
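(For reference: the trailing 0, 1 arguments are compile's flags and dont_inherit parameters, so the timing isn't skewed by compiler flags or future statements inherited from the calling code.)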
That's omitting the rest of the import machinery's work, just recompiling the same source over and over, and it comes in around 3 ms per compile, which is consistent with the ~5 ms end-to-end difference. If you're importing a hundred source modules (not entirely unreasonable, counting all the cascading imports a handful of explicit imports trigger), saving 1-5 ms each can make a meaningful difference for short program runs.
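To get a feel for how that scales, here's a rough sketch (again my own illustration, not from the original tests; exact numbers will vary by machine and Python version) that compiles every stdlib source file once and totals the per-module compile cost:

import os
import sysconfig
import time

stdlib = sysconfig.get_path('stdlib')
total = 0.0
count = 0
for root, _dirs, files in os.walk(stdlib):
    for name in files:
        if not name.endswith('.py'):
            continue
        path = os.path.join(root, name)
        try:
            with open(path, encoding='utf-8') as f:
                source = f.read()
        except (OSError, UnicodeDecodeError):
            continue
        try:
            start = time.perf_counter()
            compile(source, path, 'exec', 0, 1)
        except (SyntaxError, ValueError):
            continue  # skip deliberately-broken test fixtures
        total += time.perf_counter() - start
        count += 1
print(f'{count} modules: {total * 1000:.0f} ms total, '
      f'{total / count * 1000:.2f} ms average per module')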