No. First and foremost, you can't pickle modules; you'll get an error:
>>> import pickle, re
>>> pickle.dump(re, open('/tmp/re.p', 'wb'))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
_pickle.PicklingError: Can't pickle <class 'module'>: attribute lookup module on builtins failed
More conceptually, even if you could serialize a module, you'd only be increasing the amount of work Python has to do.
Normally, when you say `import module`, Python has to:
- Find the location of the module (usually a file on the file system)
- Parse the source code into bytecode in memory (and, if possible, store that bytecode as a `.pyc` file), or load an existing `.pyc` into memory directly
- Execute any code that is supposed to run when the module first loads
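The three steps above can be poked at directly with the standard library's `importlib` machinery (these are real `importlib.util` APIs; `json` is used here just as a convenient example of a pure-Python module):

```python
import importlib.util

# Step 1: find the module's location (usually a file on the filesystem)
spec = importlib.util.find_spec("json")
print(spec.origin)  # path to json/__init__.py

# Step 2: Python caches the compiled bytecode next to the source;
# this computes where the cached .pyc for that source file lives
print(importlib.util.cache_from_source(spec.origin))

# Step 3: executing the module's top-level code happens on first import
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)  # runs json's module-level code
print(module.dumps({"a": 1}))    # the module is now usable
```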
If you were to pickle a module in some way, you would essentially be replacing step 2 with your own half-baked solution:
- Find the location of the pickle (usually a file on the file system)
- Unpickle it back into a Python module
- Execute any code that is supposed to run when the module first loads
We can safely assume that unpickling would be slower than loading Python's built-in bytecode format; if it weren't, Python would use pickling under the covers anyway.
More to the point, parsing a Python file is not (very) expensive and will hardly take any time at all. Any real slowdown would occur in step 3, and we haven't changed that. You might ask whether there's some way to skip step 3 with pickling, but in the general case there is not, because there's no way to guarantee that a module doesn't make changes to the rest of the environment when it runs.
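To make the step-3 point concrete, here's a small sketch of a module whose top-level code mutates the surrounding environment at import time (the throwaway `sideeffect` module and the `DEMO_FLAG` variable are made up for the demo). Any scheme that restores a cached copy of the module without re-running this code would leave the environment in the wrong state:

```python
import os
import sys
import tempfile

# Write a tiny module whose top-level code has a side effect
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "sideeffect.py"), "w") as f:
    f.write("import os\nos.environ['DEMO_FLAG'] = 'set at import time'\n")

sys.path.insert(0, tmp)
import sideeffect  # step 3: running the module's code changes os.environ

print(os.environ["DEMO_FLAG"])  # 'set at import time'
```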
Now, you might know something special about the Shapely module in particular that lets you say "all the work Shapely does when imported could safely be cached between runs". In that case, the right course of action is to contribute such caching behavior to the library: cache the data Shapely is loading, not the code Python is importing.
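That "cache the data, not the code" pattern looks roughly like this sketch, where `load_expensive_data` is a hypothetical stand-in for whatever expensive work a library does on startup (the cache path is likewise made up):

```python
import os
import pickle
import tempfile

CACHE = os.path.join(tempfile.gettempdir(), "data_cache.p")

def load_expensive_data():
    # Hypothetical stand-in for the slow work a library does at import time
    return {"grid": [[x * y for x in range(100)] for y in range(100)]}

def get_data():
    if os.path.exists(CACHE):
        with open(CACHE, "rb") as f:
            return pickle.load(f)  # fast path: plain data unpickles fine
    data = load_expensive_data()
    with open(CACHE, "wb") as f:
        pickle.dump(data, f)       # cache the result for the next run
    return data

print(get_data()["grid"][2][3])  # 6
```

Pickling works here because the cached object is ordinary data (dicts, lists, numbers), which is exactly what `pickle` is designed for, unlike modules.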