I have a docker-compose project with two containers running NGINX and Gunicorn with my Django files. I also have a database outside of Docker, in AWS RDS. My question is similar to this one, but that question is about a database that lives inside docker-compose; mine is outside.
So, if I were to open a bash terminal in my container and run `python manage.py makemigrations`, the problem is that the migration files in the Django project, for example `/my-django-project/my-app/migrations/001-xxx.py`, would get out of sync with the database table that records which migrations have been applied. This happens because my containers can shut down and be replaced by new ones at any time, so the migration files would not be preserved.
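To make the failure mode concrete, this is the workflow I mean (a sketch; the `web` service name matches my compose file below):

```shell
# Open a shell in the running web container and generate migrations.
docker compose exec web python manage.py makemigrations
# The new files land only in the container's writable layer, then get applied:
docker compose exec web python manage.py migrate
# When the container is replaced, those files are gone, but the rows in the
# django_migrations table on RDS remain -- the two are now out of sync.
```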
My ideas are to either:

- Use a `volume` inside docker-compose, but since the migrations folders are spread out over all the Django apps, that could be hard to achieve.
- Handle migrations outside of Docker, which would require some kind of "master" project where the migration files are stored. This does not seem like a good idea, since the whole project would then depend on some local files existing.
I'm looking for suggestions on good practices for handling migrations in this setup.
EDIT:
Here is my docker-compose.yml. I'm running this locally with `docker-compose up`, and in production on AWS ECS with `docker compose up`. I left out some aws-cloudformation config, which should not matter, I think.
docker-compose.yml
```yaml
version: '3'
services:
  web:
    image: <secret>.dkr.ecr.eu-west-3.amazonaws.com/api-v2/django:${IMAGE_TAG}
    build:
      context: .
      dockerfile: ./Dockerfile
    networks:
      - demoapp
    environment:
      - DEBUG=${DEBUG}
      - SECRET_KEY=${SECRET_KEY}
  nginx:
    image: <secret>.dkr.ecr.eu-west-3.amazonaws.com/api-v2/nginx:${IMAGE_TAG}
    build:
      context: .
      dockerfile: ./nginx.Dockerfile
    ports:
      - 80:80
    depends_on:
      - web
    networks:
      - demoapp
```