I'm currently using Jupyter notebooks with Python and pipenv virtual environments. I'm using the following code as a solution for loading the virtual environment inside my Jupyter notebook:
import sys
sys.path = [
    './.venv/lib/python37.zip',
    './.venv/lib/python3.7',
    './.venv/lib/python3.7/lib-dynload',
    './.venv/lib/python3.7/site-packages',
] + sys.path
Is this bad practice? If so, what are the problems, risks, side effects, etc.?
(Note: this works because I've configured pipenv to store each virtual environment in its project directory via echo "export PIPENV_VENV_IN_PROJECT=1" >> ~/.bash_profile.)
How did I get here?
My goal is to use Jupyter notebooks for numerous Python (and R) projects, each project with its own virtual environment. I've encountered two unsatisfactory solutions for achieving this goal:
1. Install Jupyter in every virtual environment.
- Slow installation process
- Requires reconfiguring extensions every time
- Wastes disk space as multiple versions are installed over time
- Fills virtual environment with irrelevant packages
2. Create an IPyKernel in every virtual environment.
- Requires installing IPyKernel in the virtual environment, which installs most of Jupyter anyway
- Again, filling the environment with irrelevant packages
- Requires an additional command to register the new kernel with Jupyter
- Requires the user to name each kernel manually
- Requires the user to select the appropriate kernel each time they open a notebook
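For reference, the second approach typically looks something like this per project (the kernel name my-project is a placeholder):

```shell
# Inside the project's virtual environment: install ipykernel,
# which pulls in much of the Jupyter stack as dependencies...
pipenv install ipykernel
# ...then register the environment as a named kernel with Jupyter.
pipenv run python -m ipykernel install --user --name my-project
```

After registration, you still have to pick "my-project" from the kernel list whenever you open one of that project's notebooks.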
My third solution is to use the pipenv configuration above so that each virtual environment lives inside its project's working directory. That way, a single generic kernel can prepend the same relative paths to sys.path and pick up whichever environment belongs to the current project. Is this a terrible idea?
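One way to make the hack at the top less brittle, sketched here as an assumption rather than anything pipenv or Jupyter provides, is to derive the paths from the running interpreter instead of hardcoding python3.7:

```python
import sys
from pathlib import Path


def prepend_venv_paths(venv="./.venv"):
    """Hypothetical helper: prepend a project-local pipenv environment
    to sys.path, deriving the Python version at runtime so the same
    generic kernel keeps working across Python upgrades."""
    major, minor = sys.version_info[:2]
    lib = Path(venv) / "lib"
    root = lib / f"python{major}.{minor}"
    paths = [
        str(lib / f"python{major}{minor}.zip"),
        str(root),
        str(root / "lib-dynload"),
        str(root / "site-packages"),
    ]
    # Prepend so the project's packages shadow the kernel's own.
    sys.path = paths + sys.path
    return paths
```

Calling prepend_venv_paths() in a notebook's first cell mirrors the original snippet, but note it does not remove the underlying risks: the kernel's interpreter must still be binary-compatible with the environment, and packages already imported by the kernel will shadow the project's versions.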