
Many similar questions have been asked on here, such as this, this, this and this, but none of the solutions there solve my problem. Please don't close this question.

Problem:

I am running Django with nginx and Postgres on Docker. Secret information is stored in an .env file. My Postgres data is not persisting across docker-compose up/start and docker-compose down/stop/restart.

This is my docker-compose file:

version: '3.7'

services:
  web:
    build: ./app
    command: gunicorn umngane_project.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - ./app/:/usr/src/app/
    expose:
      - 8000
    environment:
      - SECRET_KEY=${SECRET}
      - SQL_ENGINE=django.db.backends.postgresql
      - SQL_DATABASE=postgres
      - SQL_USER=${POSTGRESQLUSER}
      - SQL_PASSWORD=${POSTGRESQLPASSWORD}
      - SQL_HOST=db
      - SQL_PORT=5432
      - SU_NAME=${SU_NAME}
      - SU_EMAIL=${SU_EMAIL}
      - SU_PASSWORD=${SU_PASSWORD}
    depends_on:
      - db
  db:
    image: postgres:11.2-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/usr/src/app/assets
    ports:
      - 1337:80
    depends_on:
      - web

volumes:
  postgres_data:
    external: true # I tried running without this and the result is the same
  static_volume:

My entrypoint script is this:

python manage.py flush --no-input
python manage.py makemigrations
python manage.py migrate
python manage.py createsuperuser --user "${SU_NAME}" --email "${SU_EMAIL}" --password "${SU_PASSWORD}"
python manage.py collectstatic --no-input

exec "$@"

where createsuperuser is a custom module that creates a superuser in the application.

This setup is not persisting the information in postgres_data.

Additional information:

Before doing anything, I check to see that there is no volume named postgres_data using docker volume ls and get just that.

At which point I run docker-compose up -d/docker-compose up -d --build and everything works out fine with no errors.

I run docker inspect postgres_data and it shows "CreatedAt": "X1"

I am able to login as the superuser. I proceed to create admin users, logout as the superuser and then login as any of the admin users with no problem. I run docker exec -it postgres_data psql -U <postgres_user> to make sure the admin users are in the database and find just that.

At which point I proceed to run docker-compose down/docker-compose stop with no problem. I run docker volume ls and it shows that postgres_data is still there.

I run docker inspect postgres_data and it shows "CreatedAt": "X2"

To test that everything works as expected I run docker-compose up -d/docker-compose up -d --build/docker-compose start/docker-compose restart.

I run docker inspect postgres_data and it shows "CreatedAt": "X3"

At which point I proceed to try and login as an admin user and am not able to. I run docker exec -it postgres_data psql -U <postgres_user> again but this time only see the superuser, no admin users.

(Explanation: I am here using the forward slash to show all the different things I tried on different attempts. I tried every combination of commands shown here.)

dot64dot
  • You have "python manage.py flush --no-input" in your entrypoint. Every time the container recreates, this will run and remove all your data. Let me know if this solves your problem and I'll create an answer out of it. – Trent Mar 21 '19 at 22:52
  • instead of just putting "postgres_data", put the entire path of your host finishing with "postgres_data" – Felipe Toledo Mar 22 '19 at 02:17
  • @Trent that solved my problem – dot64dot Mar 22 '19 at 12:03
  • @FelipeToledo if i understand you correctly, you are saying i should create a local path as in the answer given by @Bogsan? – dot64dot Mar 22 '19 at 12:04
  • @dot64dot - added my comment as an answer. – Trent Mar 24 '19 at 04:27

2 Answers


The issue is that you run "flush" in your entrypoint script, which clears the database. The entrypoint runs whenever the container boots or is recreated, so your data is wiped on every restart.
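A minimal sketch of a corrected entrypoint, assuming the same custom `createsuperuser` command from the question; the `.initialized` marker file is my own invention to gate one-time setup so the superuser isn't re-created on every boot:

```shell
#!/bin/sh
# Safe operations: migrate and collectstatic are idempotent, so they can
# run on every boot. Note there is no `flush` anywhere -- that command
# deletes all rows from the database.
python manage.py migrate --no-input
python manage.py collectstatic --no-input

# One-time setup, guarded by a marker file on a persistent path
# (hypothetical location; adjust to a volume-backed directory).
INIT_MARKER="/usr/src/app/.initialized"
if [ ! -f "$INIT_MARKER" ]; then
    python manage.py createsuperuser --user "${SU_NAME}" --email "${SU_EMAIL}" --password "${SU_PASSWORD}"
    touch "$INIT_MARKER"
fi

exec "$@"
```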

Trent

One way of having persistent data is to bind-mount an actual path on the host instead of using a named volume:

...
  db:
    image: postgres:11.2-alpine
    volumes:
      - "/local/path/to/postgres/data:/var/lib/postgresql/data/"
...

This maps the container's postgres data directory to a path you specify on the host, so the data lives directly on disk unless purposely deleted. Note that a named docker volume also persists across container removal; it is only deleted explicitly, e.g. with docker volume rm or docker-compose down -v.
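For reference, the named-volume lifecycle with the standard docker CLI is roughly (volume name taken from the question's compose file; cannot run without a Docker daemon):

```shell
docker-compose down              # removes containers; named volumes are kept
docker volume ls                 # postgres_data is still listed
docker-compose down -v           # -v additionally removes volumes declared in the file
docker volume rm postgres_data   # or remove a single volume by hand
```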

Bogsan
  • What are the security implications of this? If I spin my system up on a server does that mean I will have an extra point of failure? – dot64dot Mar 22 '19 at 12:08
  • As far as I'm concerned, as long as access to your server is protected, there shouldn't be any difference between this and using volumes. An attacker with access to your machine will be as able to read the database folder as to gain access to a container. Bottom line, I think this is as safe as your initial method, using volumes. – Bogsan Mar 22 '19 at 13:47
  • Thanks for the detailed response, much appreciated – dot64dot Mar 24 '19 at 17:40