
Docker Compose gives a ModuleNotFoundError: No module named 'django' error. The packages installed with pip install -r requirements.txt are missing when the container is started via docker-compose, yet running the image any other way shows they are installed. Why does this happen only with docker-compose?

Compose

version: '3.8'

services:
  web:
    build: ./
    user: python
    volumes:
      - ./:/app
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev

Dockerfile

# Base image  
FROM python:3.9.6

ENV HOME=/app

# create directory for the app user
RUN mkdir -p $HOME

# set work directory
WORKDIR $HOME
 
# install psycopg2 dependencies
RUN apt-get update \
    && apt-get -y install libpq-dev gcc \
    && pip install psycopg2 \
    && apt-get -y install gunicorn3

RUN pip install --upgrade pip
ADD requirements*.txt .
RUN pip install -r requirements.txt
COPY python . .

ENTRYPOINT ["/app/entrypoint.sh"]

EXPOSE 8000

Problem:

I have created the following Dockerfile, which runs in production and even runs locally outside of docker-compose without any issues, i.e. the following works with no errors: docker run -p 8000:8000 web/lastest.

However, when I run this via docker-compose it fails to find my installed pip packages.

For example:

  • docker-compose build (successful)
  • docker-compose up

Error

web_1  | ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?
web_1  | [2022-01-04 14:55:05 +0000] [1] [INFO] Starting gunicorn 20.1.0
web_1  | [2022-01-04 14:55:05 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
web_1  | [2022-01-04 14:55:05 +0000] [1] [INFO] Using worker: sync
web_1  | [2022-01-04 14:55:05 +0000] [8] [INFO] Booting worker with pid: 8
web_1  | [2022-01-04 14:55:05 +0000] [8] [ERROR] Exception in worker process
web_1  | Traceback (most recent call last):
web_1  |   File "/usr/lib/python3/dist-packages/gunicorn/arbiter.py", line 589, in spawn_worker
web_1  |     worker.init_process()
web_1  |   File "/usr/lib/python3/dist-packages/gunicorn/workers/base.py", line 134, in init_process
web_1  |     self.load_wsgi()
web_1  |   File "/usr/lib/python3/dist-packages/gunicorn/workers/base.py", line 146, in load_wsgi
web_1  |     self.wsgi = self.app.wsgi()
web_1  |   File "/usr/lib/python3/dist-packages/gunicorn/app/base.py", line 67, in wsgi
web_1  |     self.callable = self.load()
web_1  |   File "/usr/lib/python3/dist-packages/gunicorn/app/wsgiapp.py", line 58, in load
web_1  |     return self.load_wsgiapp()
web_1  |   File "/usr/lib/python3/dist-packages/gunicorn/app/wsgiapp.py", line 48, in load_wsgiapp
web_1  |     return util.import_app(self.app_uri)
web_1  |   File "/usr/lib/python3/dist-packages/gunicorn/util.py", line 384, in import_app
web_1  |     mod = importlib.import_module(module)
web_1  |   File "/usr/lib/python3.9/importlib/__init__.py", line 127, in import_module
web_1  |     return _bootstrap._gcd_import(name[level:], package, level)
web_1  |   File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
web_1  |   File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
web_1  |   File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
web_1  |   File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
web_1  |   File "<frozen importlib._bootstrap_external>", line 790, in exec_module
web_1  |   File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
web_1  |   File "/app/app/wsgi.py", line 12, in <module>
web_1  |     from django.core.wsgi import get_wsgi_application
web_1  | ModuleNotFoundError: No module named 'django' 

Running which python outputs /usr/local/bin/python both when running the image directly and when using docker-compose.

Running docker run -it 43d991d65c02 /bin/bash, I can see and run Django. Only when running via docker-compose is Django not installed. Why?

MarkK
  • 968
  • 2
  • 14
  • 30
  • 3
    You override the `/app` directory in the container with a volume mount at runtime. – Turing85 Jan 04 '22 at 15:14
  • Well, that was silly of me. I didn't know volumes would hide the packages installed in the container's filesystem. Happy to accept your answer – MarkK Jan 04 '22 at 15:16
  • Shouldn't `Django` be in the requirements.txt? – Beppe C Jan 04 '22 at 15:18
  • 1
    That depends where the packages are installed. If we mount a directory through a volume mount, the container-directory configured is "overriden". If this container-directory contains the installed modules, then yes, they are gone. Could you check whether a) removing the volume mount or b) re-running `pip install -r requirements.txt` on container-startup fixes the issue? – Turing85 Jan 04 '22 at 15:18
  • Thanks @Turing85, when I remove volumes this now works. I just wanted to make changes locally and update the running container. I guess I misunderstood what it does – MarkK Jan 04 '22 at 15:22

2 Answers


In the containerfile presented, we work in the container-directory /app. But at runtime, we mount a volume to /app. Hence, all content generated during image build and stored in /app is hidden by the volume mount. If the dependencies were installed in /app, they are overridden by the mounted directory at runtime.

To fix this issue, two possibilities come to my mind:

  1. We can remove the volume mount. This will, however, deprive us of the capability of "hot reloading".

  2. We can re-run pip install -r requirements.txt at container startup, before starting the application. This means adding the line pip install -r requirements.txt to the entrypoint.sh script.
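A minimal sketch of option 2, assuming an entrypoint.sh that ends by starting gunicorn (the module path app.wsgi is inferred from the traceback's /app/app/wsgi.py; the exact gunicorn invocation in the original script is not shown in the question):

```shell
#!/bin/sh
set -e

# Re-install dependencies at startup, because the volume mount over /app
# hides whatever was installed there at build time.
pip install -r requirements.txt

# Start the application (module path inferred from the traceback).
exec gunicorn app.wsgi:application --bind 0.0.0.0:8000
```

As noted in the comments below, this adds the cost of a pip install to every container start, so it is only suitable for local development.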

Turing85
  • 18,217
  • 7
  • 33
  • 58
  • Is there a 3rd option to have the requirements installed in a different place during the docker build process so the volume does not overwrite them? Or maybe to run the docker-compose `command` to install but also run the CMD in the main docker file. I do not have the option of running pip install in the startup script, as this adds a lot of overhead – MarkK Jan 04 '22 at 15:28
  • I am no python developer and thus not familiar with pip. Have to pass on this one. From the looks of it `--target` is what you're looking for. See [this question](https://stackoverflow.com/questions/2915471/install-a-python-package-into-a-different-directory-using-pip) for details. But I do not know if your application will then automagically pick up the dependencies from the configured directory at startup. – Turing85 Jan 04 '22 at 15:28
  • I guess the better question for a 3rd option is: can I run the docker-compose `command` and still have it run the Dockerfile CMD? I think you cannot have two – MarkK Jan 04 '22 at 15:31
  • Anyway, you have answered my original question so I will accept. Thank you for your help – MarkK Jan 04 '22 at 15:32
  • A container can have only one entrypoint, this is correct. If we want to execute multiple commands on container startup, we normally do this by executing a script at startup that then, in turn, executes multiple commands sequentially. – Turing85 Jan 04 '22 at 15:33
  • 1
    You can change WORKDIR to another folder and then import requirements to your app from that location – Aleksey Vaganov Jan 04 '22 at 15:34
  • Doesn't re-running `pip install -r requirements.txt` at each container startup consume additional time? – alper May 24 '22 at 10:02
  • Yes, it does. The setup in and of itself is strange. As I have commented, the `docker-compose.yml` overrides the `app` folder (and thus the previously fetched dependencies). I'd normally recommend having a separate image for local development. – Turing85 May 24 '22 at 18:32
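Following the comments above, a sketch of the "3rd option": install the dependencies outside /app at build time, so the volume mount cannot shadow them. The /deps directory and PYTHONPATH value here are illustrative, not part of the original setup:

```dockerfile
FROM python:3.9.6

# Install dependencies into a directory that is NOT covered by the volume mount.
COPY requirements.txt /tmp/requirements.txt
RUN pip install --target=/deps -r /tmp/requirements.txt

# Make Python look in /deps in addition to the normal site-packages.
ENV PYTHONPATH=/deps

WORKDIR /app
```

With this layout, mounting ./ to /app at runtime hides only the application source, which is exactly what the bind mount is meant to replace.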

When docker-compose mounts the volume onto the /app folder, the folder's previous contents become hidden and the mounted directory's structure overrides the previous one.
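A common compose-level workaround that keeps hot reloading while protecting a build-time subdirectory from this shadowing is an anonymous volume nested under the bind mount. The /app/.venv path below is hypothetical; it only helps if the dependencies actually live under /app in the image:

```yaml
services:
  web:
    build: ./
    volumes:
      - ./:/app        # bind mount hides the image's /app contents
      - /app/.venv     # anonymous volume preserves this subdirectory from the image
```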

Aleksey Vaganov
  • 487
  • 3
  • 9