I have a VM instance where I run the following Docker containers:
- django
- caddy
- postgres
- redis
My project is structured as follows:
project_root
|--- production.yml
|--- .envs
|    |--- .production
|         |--- .django
|         |--- .postgres
|--- ... more Django apps
|--- compose
     |--- production
          |--- django
          |    |--- Dockerfile
          |    |--- entrypoint
          |    |--- start
          |--- postgres
               |--- Dockerfile
The /compose/production/django/Dockerfile is as follows:
FROM python:3.6-alpine
ENV PYTHONUNBUFFERED 1
RUN apk update \
# psycopg2 dependencies
&& apk add --virtual build-deps gcc python3-dev musl-dev \
&& apk add postgresql-dev \
# Pillow dependencies
&& apk add jpeg-dev zlib-dev freetype-dev lcms2-dev openjpeg-dev tiff-dev tk-dev tcl-dev \
# CFFI dependencies
&& apk add libffi-dev py-cffi \
# git
&& apk add --no-cache git
RUN addgroup -S django \
&& adduser -S -G django django
# Requirements are installed here to ensure they will be cached.
COPY ./requirements /requirements
RUN pip install --no-cache-dir -r /requirements/project.production.txt \
&& rm -rf /requirements
COPY ./compose/production/django/entrypoint /entrypoint
RUN sed -i 's/\r//' /entrypoint
RUN chmod +x /entrypoint
RUN chown django /entrypoint
COPY ./compose/production/django/start /start
RUN sed -i 's/\r//' /start
RUN chmod +x /start
RUN chown django /start
COPY . /app
RUN chown -R django /app
USER django
WORKDIR /app
ENTRYPOINT ["/entrypoint"]
The /compose/production/django/entrypoint is as follows:
#!/bin/sh
set -o errexit
set -o pipefail
set -o nounset
# N.B. If only .env files supported variable expansion...
export CELERY_BROKER_URL="${REDIS_URL}"
if [ -z "${POSTGRES_USER}" ]; then
    base_postgres_image_default_user='postgres'
    export POSTGRES_USER="${base_postgres_image_default_user}"
fi
export DATABASE_URL="postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}"

postgres_ready() {
python << END
import sys
import psycopg2
try:
    psycopg2.connect(
        dbname="${POSTGRES_DB}",
        user="${POSTGRES_USER}",
        password="${POSTGRES_PASSWORD}",
        host="${POSTGRES_HOST}",
        port="${POSTGRES_PORT}",
    )
except psycopg2.OperationalError:
    sys.exit(-1)
sys.exit(0)
END
}

until postgres_ready; do
  >&2 echo 'Waiting for PostgreSQL to become available...'
  sleep 1
done
>&2 echo 'PostgreSQL is available'
exec "$@"
The /compose/production/django/start is as follows:
#!/bin/sh
set -o errexit
set -o pipefail
set -o nounset
python /app/manage.py collectstatic --noinput
/usr/local/bin/gunicorn config.wsgi --bind 0.0.0.0:5000 --chdir=/app
The /.envs/.production/.postgres is as follows:
# PostgreSQL
# ------------------------------------------------------------------------------
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_DB=some_db
POSTGRES_USER=super_user_123
POSTGRES_PASSWORD=not_gonna_tell_you
The production.yml is as follows:
version: '3'

volumes:
  production_postgres_data: {}
  production_postgres_data_backups: {}
  production_caddy: {}

services:
  django:
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    image: project_production_django
    depends_on:
      - postgres
      - redis
    env_file:
      - ./.envs/.production/.django
      - ./.envs/.production/.postgres
    command: /start

  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: project_production_postgres
    volumes:
      - production_postgres_data:/var/lib/postgresql/data
      - production_postgres_data_backups:/backups
    env_file:
      - ./.envs/.production/.postgres

  caddy:
    build:
      context: .
      dockerfile: ./compose/production/caddy/Dockerfile
    image: project_caddy
    depends_on:
      - django
    volumes:
      - production_caddy:/root/.caddy
    env_file:
      - ./.envs/.production/.caddy
    ports:
      - "0.0.0.0:80:80"
      - "0.0.0.0:443:443"

  redis:
    image: redis:3.2
Everything works when I run these Docker containers on the VM instance in the cloud.
Objective
I need the Django app to access a remote database instead of the postgres container on the Docker network. That remote database sits outside the VM instance where these Docker containers run.
What I tried
First, I set things up so that the VM instance (i.e. the Docker host) can reach this remote database at localhost:15432.
Important note
This remote database is only reachable through a reverse SSH tunnel, so on the VM instance I must use exactly localhost:15432.
This part is 100% working: I can access the remote database at that address and port from the VM instance, i.e. the Docker host.
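For context, the tunnel is established roughly like the sketch below; the command is illustrative and the real user and host names are different:

# Illustrative sketch only: run from the machine hosting the remote database,
# it forwards that machine's local port 5432 back to port 15432 on my VM.
ssh -N -R 15432:localhost:5432 vm_user@my_vm_public_ip

# By default, sshd binds such a remote-forwarded port to 127.0.0.1 on the VM,
# which is why the database is reachable there only as localhost:15432.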
Then I needed the django container to reach that same address. This is where I ran into issues.
I tried the following changes inside the .postgres file under .envs. They all failed:
- I changed .postgres to use the IP of the Docker host, which I gather is typically 172.17.0.1 (see the first sketch after this list).
- I also tried using localhost directly.
- I researched and read about putting the container on the host network instead of the default bridge, but I could not understand how to make the change, so I did not go beyond reading the Docker docs about it (the second sketch below shows what I think the change would look like).
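For concreteness, the attempted .postgres variants looked roughly like this; this is a sketch, and both attempts assume the port also has to change from 5432 to 15432 to match the tunnel. Only the host line differed between attempts:

# Attempt 1: the Docker host's bridge IP (typically 172.17.0.1, as I gather)
POSTGRES_HOST=172.17.0.1
POSTGRES_PORT=15432
POSTGRES_DB=some_db
POSTGRES_USER=super_user_123
POSTGRES_PASSWORD=not_gonna_tell_you

# Attempt 2: same as above, but with plain localhost
POSTGRES_HOST=localhost
POSTGRES_PORT=15432

And this is what I think the host-networking change I read about would look like for the django service in production.yml. I have not applied it; it is only my reading of the docs, and I am not sure how it interacts with the other services:

# Sketch only -- not applied. My understanding of host networking from the Docker docs:
django:
  build:
    context: .
    dockerfile: ./compose/production/django/Dockerfile
  image: project_production_django
  network_mode: host   # the container would share the VM's network stack, so localhost:15432 would be the VM's port
  env_file:
    - ./.envs/.production/.django
    - ./.envs/.production/.postgres
  command: /start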
There is a chance I will need the django container to access BOTH the remote database and the postgres container. For now, though, I would be happy just to have the django container reach the remote database directly, without removing the postgres container entirely.
How do I do so?