
I have a simple docker-compose.yml & associated Dockerfiles that give me a simple dev and prod environment for a nginx-uvicorn-django-postgres stack. I want to add an optional 'backup' container that just runs cron to periodically connect to the 'postgres' container.

# backup container - derived from [this blog][1]
ARG DOCKER_REPO
ARG ALPINE_DOCKER_IMAGE  # ALPINE
ARG ALPINE_DOCKER_TAG    # LATEST
FROM ${DOCKER_REPO}${ALPINE_DOCKER_IMAGE}:${ALPINE_DOCKER_TAG}

ARG DB_PASSWORD
ARG DB_HOST     # "db"
ARG DB_PORT     # "5432"
ARG DB_NAME     # "ken"
ARG DB_USERNAME # "postgres"

ENV PGPASSWORD=${DB_PASSWORD} HOST=${DB_HOST} PORT=${DB_PORT} PSQL_DB_NAME=${DB_NAME} \
    USERNAME=${DB_USERNAME}

RUN printenv

RUN  mkdir /output && \
     mkdir /output/backups  && \
     mkdir /scripts  && \
     chmod a+x /scripts
COPY ./scripts/ /scripts/
COPY ./scripts/in_docker/pg_dump.sh /etc/periodic/15min/${DB_NAME}_15
COPY ./scripts/in_docker/pg_dump.sh /etc/periodic/daily/${DB_NAME}_day
COPY ./scripts/in_docker/pg_dump.sh /etc/periodic/weekly/${DB_NAME}_week
COPY ./scripts/in_docker/pg_dump.sh /etc/periodic/monthly/${DB_NAME}_month

RUN apk update && \
    apk upgrade && \
    apk add --no-cache postgresql-client && \
    chmod a+x /etc/periodic/15min/${DB_NAME}_15 && \
    chmod a+x /etc/periodic/daily/${DB_NAME}_day && \
    chmod a+x /etc/periodic/weekly/${DB_NAME}_week && \
    chmod a+x /etc/periodic/monthly/${DB_NAME}_month
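
The pg_dump.sh script itself isn't shown in the question. As context, here is a minimal sketch of what a script dropped into /etc/periodic might look like, assuming it relies on the ENV values set above and writes into the /output/backups directory created earlier (the file naming is illustrative, not the author's actual script):

#!/bin/sh
# Illustrative sketch of scripts/in_docker/pg_dump.sh - not the author's actual script.
# Relies on PGPASSWORD, HOST, PORT, USERNAME and PSQL_DB_NAME set via ENV in the Dockerfile.
set -eu

STAMP="$(date +%Y%m%d_%H%M%S)"
OUTFILE="/output/backups/${PSQL_DB_NAME}_${STAMP}.sql.gz"

# pg_dump picks the password up from PGPASSWORD, so no interactive prompt is needed under cron.
pg_dump --host="$HOST" --port="$PORT" --username="$USERNAME" --dbname="$PSQL_DB_NAME" \
    | gzip > "$OUTFILE"

For the /etc/periodic copies to ever run, busybox crond also has to be the container's foreground process (for example crond -f -l 8 as the image CMD or the compose command); that part is not shown in the Dockerfile above.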

The django container is derived from the official Python image and connects (through psycopg2) with values (as ENV variables) for host, dbname, username, password, and port. The 'backup' container has these same values, but I get this error from the command line:

> pg_dump --host="$HOST" --port="$PORT" --username="$USERNAME" --dbname="$PSQL_DB_NAME"
> pg_dump: error: could not translate host name "db" to address: Name does not resolve

Is Alpine missing something relevant that is present in the official Python image?
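
For what it's worth, stock Alpine does ship the basics needed to test this: busybox provides nslookup (and normally getent), and resolution inside the container goes through Docker's embedded DNS server at 127.0.0.11. A quick check from inside the running backup container - a sketch, assuming the ken_backup container name used elsewhere in the question:

# Open a shell in the running backup container
docker exec -it ken_backup sh

# Inside the container: Docker's embedded DNS should appear as nameserver 127.0.0.11
cat /etc/resolv.conf

# Try to resolve the compose service name both ways busybox offers
getent hosts db
nslookup db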

Edit: I am running this with a system of shell scripts that take care of housekeeping for the different configurations, so

> ./ken.sh dev_server

will set up the environment variables and then run docker-compose for the project and its containers. The docker-compose.yml doesn't explicitly create a network.

I don't know what "db" should resolve to beyond just 'db://' - it's what the django container gets, and it is able to resolve a connection to the 'db' service.
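
(On a compose-created network, 'db' isn't a URL scheme at all; it is a DNS name that Docker's embedded resolver maps to the db container's address on that network, which is why the django container can pass it straight to psycopg2 as the host. One way to see the two containers' views side by side - a sketch using the web and backup service names from the dev_server script below:

# What the working django container resolves "db" to
docker-compose -p "${PROJECT_NAME}" exec web getent hosts db

# The same lookup from the backup container, for comparison
docker-compose -p "${PROJECT_NAME}" exec backup getent hosts db
)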

services:
  db:
    image: ${DOCKER_REPO}${DB_DOCKER_IMAGE}:${DB_DOCKER_TAG} # postgres:14
    container_name: ${PROJECT_NAME}_db
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      - PGPASSWORD
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USERNAME}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    command: ["postgres", "-c", "log_statement=all"]
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -h db"]
      interval: 2s
      timeout: 5s
      retries: 25

This is the 'dev_server' function run by the parent ken.sh script:

function dev_server() {
    trap cleanup EXIT
    wait_and_launch_browser &

    docker-compose -p "${PROJECT_NAME}" up -d --build db nginx web pgadmin backup

    echo "Generate static files and copy them into static and file volumes."
    source ./scripts/generate_static_files.sh

    docker-compose -p "${PROJECT_NAME}" logs -f web nginx backup
}
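
One side effect of this script worth noting: the -p "${PROJECT_NAME}" flag means compose creates (or reuses) a default network named ${PROJECT_NAME}_default and attaches every service it starts to it, which is consistent with the ken_default network found in the update below. That can be confirmed from the host - a short sketch:

# The project's default network and the containers compose started on it
docker network ls --filter "name=${PROJECT_NAME}_default"
docker-compose -p "${PROJECT_NAME}" ps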

Update: I worked through "Reasons why docker containers can't talk to each other" and found that all the containers are on a ken_default network, with addresses from 170.20.0.2 to 170.20.0.6.

I can run docker exec ken_backup ping ken_db -c2, but not ping from db to backup, because the db container doesn't include ping.

From a shell on backup I cannot ping ken_db - ken_db doesn't resolve, nor does 'db'.

I can't make much of that and I'm not sure what to try next.
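
One next step that usually pins this down is to compare, from the host, which containers Docker actually has attached to that network and what names each carries - a sketch, assuming the ken_default network and ken_backup container names observed above:

# Every container attached to the project network, with its IP address
docker network inspect ken_default

# Which networks (and aliases) the backup container itself is attached to
docker inspect --format '{{json .NetworkSettings.Networks}}' ken_backup

If the backup container turns out to be on a different network than db (or only on the default bridge), name lookups between them will fail even though both containers are running.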

Atcrank
  • Alpine images are small precisely because they don't have libraries and tools with extended features, but that shouldn't affect Docker networking at all. How are you actually running the containers? Who or what runs this `pg_dump` command, and what should the `db` host name resolve to and why? – David Maze Jan 31 '23 at 02:42
  • For a container to look up another container by name, it must be attached to the same Docker network. If you're setting things up with `docker-compose` this happens automatically, but if you're running `docker run` command lines, you need to (a) create a network, (b) ensure the postgres container is attached to that network, and then (c) do the same for the backup container. – larsks Jan 31 '23 at 02:55
  • I've made some edits to clarify: running this with docker-compose; the pg_dump command is to be run through a shell script by cron in the alpine-based backup container, with the credentials as environment variables in the container. – Atcrank Jan 31 '23 at 03:14

1 Answer


You are running the backup container as a separate service.

Docker Compose creates a unique default network for each project (each docker-compose.yml file).

You need to get the DB and your backup container on the same docker network.

See this post
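
If the two containers really are on different networks, one way to test that theory without editing the compose file is to attach the running backup container to the db's network by hand - a sketch, assuming the ken_default / ken_backup names from the question:

# Attach the already-running backup container to the project's default network
docker network connect ken_default ken_backup

# Then retry the lookup from inside it
docker exec ken_backup getent hosts db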

s_qw23
  • I don't think this is the answer. I worked through this set of tips and can show that all the containers are on the 'ken_default' network, with sensible ip addresses. https://maximorlov.com/4-reasons-why-your-docker-containers-cant-talk-to-each-other/ The answer you've linked is for someone with two docker-compose.yml files that needed to tell docker they have a network in common. – Atcrank Jan 31 '23 at 23:09
  • After your edit I saw that you ping db. The container name is ${PROJECT_NAME}_db. So you should ping ${PROJECT_NAME}_db from backup – s_qw23 Feb 01 '23 at 18:46
  • thanks, yes. I had tried it both ways and thought that the 'db' not working was more meaningful. Neither 'db' nor 'ken_db' resolves from inside the 'backup' container (from the shell or as a shell script); both resolve when using the 'docker exec ken_backup ping' form. – Atcrank Feb 01 '23 at 23:03