Using Pipenv with Docker is causing some issues in my Django project.

I've installed Django locally with Pipenv, which generates a Pipfile and Pipfile.lock, and then used startproject to start a new Django project.
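
Roughly, the local setup looked something like this (the project name config here is just an illustration, not necessarily what I used):

$ pipenv install django
$ pipenv shell
(env) $ django-admin startproject config .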

Then I add a Dockerfile.

# Dockerfile
FROM python:3.7-slim

ENV PYTHONUNBUFFERED 1

WORKDIR /code
COPY . /code

# Install dependencies system-wide from the Pipfile/Pipfile.lock copied in above
RUN pip install pipenv
RUN pipenv install --system

And a docker-compose.yml file.

# docker-compose.yml
version: '3'

services:
  web:
    build: .
    command: sh -c "python /code/manage.py migrate --noinput && python /code/manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    ports:
      - 8000:8000

Then I run docker-compose up --build to build the image and start the container. Everything works.

Now here's the issue... I want to add a new package, let's say psycopg2, so I can use PostgreSQL.

So... I update my docker-compose.yml to add PostgreSQL.

# docker-compose.yml
version: '3'

services:
  db:
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
  web:
    build: .
    command: sh -c "python /code/manage.py migrate --noinput && python /code/manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    ports:
      - 8000:8000
    depends_on:
      - db

volumes:
  postgres_data:

And update the DATABASES config in settings.py.

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'postgres',
        'USER': 'postgres',
        'HOST': 'db',
        'PORT': 5432
    }
}
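
(For reference, HOST is 'db' because that's the service name in docker-compose.yml. Once the stack is up, the database side can be sanity-checked directly; this assumes the stock postgres image, where the default superuser is postgres:)

$ docker-compose exec db psql -U postgres -c '\l'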

Now if I install psycopg2-binary locally, like pipenv install psycopg2-binary, this "should" sync with Docker. But I get "No module named 'psycopg2'" errors.

Ok so maybe I need to install it directly within Docker:

$ docker-compose exec web pipenv install psycopg2-binary

Nope, same error.

Maybe I need to generate the lock file within Docker?

$ docker-compose exec web pipenv lock

Again, no. So the issue is the state of Pipenv... I feel like I'm close but just not quite grasping something here.
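
For anyone debugging along, a quick way to see what the web container actually has installed is to try the import inside the running container:

$ docker-compose exec web python -c "import psycopg2; print(psycopg2.__version__)"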

Anyone see the error?

wsvincent
  • did you run `docker-compose exec web pipenv install psycopg2-binary` to install the package and `docker-compose up` to start? – Siyu Nov 20 '18 at 20:11
  • Yes and then `docker-compose up`, no build needed. Still same error which is very strange to me. – wsvincent Nov 20 '18 at 20:14
  • Did you try to install psycopg2-binary in your Dockerfile and then rebuild with `docker-compose build web`? – Julio Daniel Reyes Nov 20 '18 at 20:37
  • Yes, but that didn't work for me either sadly. I think the issue is around not rebuilding the image when there is a local code change, like a new software package. Still trying to find the elegant solution... – wsvincent Nov 20 '18 at 20:47
  • try using `pipenv install --system --deploy --ignore-pipfile` https://stackoverflow.com/questions/46503947/how-to-get-pipenv-running-in-docker – MjZac Nov 21 '18 at 07:41
  • Thanks for that link. I've tried that too...same issues. I'm pretty sure the issue is related to the image: https://docs.docker.com/storage/storagedriver/#images-and-layers. Specifically image layers where the container layer is writable but the image is not. So running a command in the container won't persist on the base image. I can toss everything and rebuild the image from scratch which works but as to updates...I'm still figuring out the elegant approach. – wsvincent Nov 21 '18 at 15:13
  • In other words, with `volumes` in the `docker-compose.yml`, filesystem changes sync between local and Docker. Adding a new software package needs to update the image. If I install it in Docker via a command, that change will not persist. It will be in the container layer. The trick is to update the underlying image... so I could add the software package to `Dockerfile` or could toss the underlying image and start from scratch, which "would" rebuild the entire image including the desired software package. Is there a way to do this without wiping the entire initial image? – wsvincent Nov 21 '18 at 16:01
  • Why don't you want to rebuild the image? Here is Docker's best-practices list https://docs.docker.com/develop/develop-images/dockerfile_best-practices where you can find ways to optimize your Docker build process, like multi-stage builds. However, your project is not big enough for optimization and it is easier to rebuild the image with the command `docker-compose up --force-recreate` and let Docker manage it. – Alex Nov 21 '18 at 17:05
  • Cool let me check that out! I think the fear was around nuking the database (stored in the container) inadvertently, not necessarily the image. – wsvincent Nov 21 '18 at 17:22
  • One way is to add the package to requirements.txt and run `docker-compose up --build` – Aleem Jul 01 '20 at 13:48
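
Update: putting the comments together, the approach they converge on is to treat the Pipfile as the source of truth and rebuild the image whenever it changes, so the package lands in an image layer rather than the throwaway container layer. A sketch (assuming the Dockerfile above; the named postgres_data volume survives the rebuild, so the database is not lost):

$ pipenv install psycopg2-binary   # updates Pipfile and Pipfile.lock on the host
$ docker-compose up --build        # rebuilds the image; RUN pipenv install --system picks up the new package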
