Copying the virtual env you have created in your local filesystem doesn't guarantee that it will work in your Docker container. As @Zac Anger suggested, your machine and your Docker image may not share the same Python configuration, OS, etc. This also defeats the purpose of using Docker images to contain everything needed to run an application in an isolated filesystem.
Your best bet may be to keep using `pip`, but increase the timeout to a range that gives your server enough time to download all the required packages (e.g. 100 secs; the default timeout is 15 secs). You can set this timeout, in seconds, in either of the following ways (see the sketch below):
- Setting the `PIP_DEFAULT_TIMEOUT` environment variable
- Adding the `--timeout` option when running `pip install`
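For illustration, here is a minimal Dockerfile sketch of both approaches; the 100-second value is just an example, and the `requirements.txt` path assumes the file has already been copied into the image:

```dockerfile
# Option 1: set the timeout for every pip invocation in the image
ENV PIP_DEFAULT_TIMEOUT=100

# Option 2: pass the timeout only to this one install command
RUN pip install --timeout 100 -r requirements.txt
```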
Also, to speed up the build of your Docker image, you can take advantage of Docker's layer cache so the dependency-installation step is not re-executed on every build, as long as your `requirements.txt` hasn't changed. Assuming your `requirements.txt` file is located in your build context root, copy it into the image first so the install step is cached and only re-runs when that file changes. Your Dockerfile can now look like this:
```dockerfile
FROM python:3.10-slim

# Set pip default timeout to 100 secs, also disable the version check
ENV PIP_DEFAULT_TIMEOUT=100 \
    PIP_DISABLE_PIP_VERSION_CHECK=1

WORKDIR /app/

# Copy the requirements file first so this layer is cached
COPY ./requirements.txt /app/requirements.txt

# Install packages with pip
RUN pip install -r /app/requirements.txt

# Now copy your app's source code
COPY . /app

CMD ["python", "app.py"]
```
As a final note: since you keep virtual envs in your local filesystem, consider adding a `.dockerignore` to prevent them from being copied into your image. If your virtual env is named `envcache`, create a `.dockerignore` file in your build context root with this line:

```
envcache
```
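If other local artifacts end up in your build context, the same file can exclude them as well; the extra entries below are only common suggestions, not requirements:

```
envcache
__pycache__/
*.pyc
.git
```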