I have a Python codebase I'm using for research. The codebase imports libraries like NumPy and PyTorch, but also some custom tooling (i.e. other Python packages I've written and want to use). These custom packages are installed with `pip install -e .` into a virtual environment.
My workflow is to launch a long-running job (a week or so) and then continue to modify or refactor the codebase in parallel. I'm becoming increasingly suspicious (paranoid?) that some (not all) of these modifications are changing the runtime behavior of the already-running job.

Unfortunately, I have not been able to isolate this into a concrete example. Instead, I feel like Python is gaslighting me with inexplicable results.
Is there something about Python's garbage collector or import system, in combination with editable installs, that causes some modules to be reloaded? Or are modules perhaps not all loaded upfront, but lazily?
I have explicitly seen this behavior when making large changes. For example:

- run an "experiment" script
- the "experiment" script imports the package `tools`
- while the experiment script is running, update `tools` from v1.0 to v2.0
- an assert in the "experiment" script that checks `tools.__version__ == "1.0"` causes the code to crash
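To make the suspicion concrete, here is a minimal, self-contained sketch of what I think is happening. The `tools` package below is a throwaway stand-in for my editable-installed package (names are made up for illustration): a module that is already imported stays cached in `sys.modules` and never re-reads the file, but anything imported *after* I edit the tree on disk picks up the new code.

```python
import os
import sys
import tempfile

# Hypothetical stand-in for the editable-installed 'tools' package.
pkg_root = tempfile.mkdtemp()
tools_dir = os.path.join(pkg_root, "tools")
os.makedirs(tools_dir)
with open(os.path.join(tools_dir, "__init__.py"), "w") as f:
    f.write('__version__ = "1.0"\n')

sys.path.insert(0, pkg_root)
import tools  # first import: loaded from disk, cached in sys.modules
assert tools.__version__ == "1.0"

# Simulate "refactoring while the job runs": bump the package on disk.
with open(os.path.join(tools_dir, "__init__.py"), "w") as f:
    f.write('__version__ = "2.0"\n')

# A repeated import is served from the sys.modules cache, not re-read:
import tools
assert tools.__version__ == "1.0"  # still the in-memory v1.0 copy

# But a submodule imported only *after* the edit is read from the new tree:
with open(os.path.join(tools_dir, "extra.py"), "w") as f:
    f.write('FLAG = "new"\n')
from tools import extra
assert extra.FLAG == "new"  # lazily imported code reflects the edited package
```

So if my long-running job defers any imports (inside functions, behind `if` branches, via plugins), those deferred imports would execute the v2.0 source against a process whose earlier-imported modules are still v1.0, which would explain the version-assert crash.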