
docker-compose fails to build the web component because it cannot connect to the previously created db component

Mac OSX 10.13.6, conda 4.5.11, Python 3.6.8, Docker version 18.09.1, docker-compose version 1.23.2

Django 1.8.3 gets installed via requirements.txt from the Dockerfile. I am not at liberty to upgrade.

Several very similar discussions on SO did not help (like this one: Docker-compose with django could not translate host name "db" to address: Name or service not known).

I have a docker-compose.yml with a network and two components:

version: '3'
networks:
  bridge:
   driver: bridge
services:
  db:
    image: postgres:10
    container_name: myapp-db
    volumes:
      - ./postgres_data:/var/lib/postgresql/data/
    ports:
      - "5432:5432"
    environment:
     POSTGRES_DB: actionability-master
     POSTGRES_PASSWORD: postgres
     POSTGRES_USER: postgres
    networks:
      - bridge


  web:
    restart: unless-stopped
    container_name: myapp-web
    build: .
    command: /start_gunicorn.sh
    ports:
      - "8080:8080"
    environment:
      PRODUCTION: 'true'
    networks:
      - bridge

In my settings.py I have this DATABASES section:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': INSTANCE_NAME,
        'USER': 'postgres',
        'PASSWORD': 'postgres',
        'HOST': 'db',
        'PORT': '5432'
    },

}

When I run $ docker-compose up -d, the first image (db) gets created and its container gets started. I can see it running and listening on port 5432 with docker ps and lsof. The same thing happens if I remove the web: component from the docker-compose.yml file.

Now, the second component (web) has a Dockerfile that contains these two lines (among many others):

RUN python manage.py makemigrations myapp
RUN python manage.py migrate

The "migrate" like dies with this error:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/django/db/backends/base/base.py", line 130, in ensure_connection
    self.connect()
  File "/usr/local/lib/python3.6/site-packages/django/db/backends/base/base.py", line 119, in connect
    self.connection = self.get_new_connection(conn_params)
  File "/usr/local/lib/python3.6/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 176, in get_new_connection
    connection = Database.connect(**conn_params)
  File "/usr/local/lib/python3.6/site-packages/psycopg2/__init__.py", line 130, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: Connection refused
    Is the server running on host "db" (37.34.32.51) and accepting TCP/IP connections on port 5432?

I tried several tweaks:

  • changed version to '3.7'
  • added to the db section:

    expose:
      - "5432"

  • added to the web component:

    depends_on:
      - "db"

  • added to the web component:

    links:
      - "db"

  • set PRODUCTION to false:

    environment:
      PRODUCTION: 'false'

  • changed HOST in settings.py to the container name, the image name, tags, the container id, 'localhost', '127.0.0.1', etc. The error is the same, just mentioning the new HOST name instead of 'db'.
  • ran outside of the conda env
  • ran with the --build switch (docker-compose up -d --build)
  • did docker system prune and ran again

All the same error.


UPDATE: After a suggestion that docker-compose does not work this way, I tried to split it into two separate tasks. First I build the myapp-db container and make sure it's running on the correct port:

$ docker container ps


CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                      NAMES
c110e8361cda        postgres:10         "docker-entrypoint.s…"   4 hours ago         Up 4 hours          0.0.0.0:5432->5432/tcp     myapp-db

Then I build myapp-web:

docker build -t myapp-web .

The same error still happens. So, why is it not finding the db container NOW?
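
UPDATE 2: As the comments below explain, RUN instructions execute at image build time; the build container is not attached to the compose network and the db container may not even be running, so the migrations can never reach it there. A minimal sketch of the fix is to drop the two RUN lines and run the migrations from the startup script instead (the script name mirrors my start_gunicorn.sh, the pg_isready wait assumes postgresql-client is installed in the web image, and the gunicorn module path is a placeholder):

#!/bin/sh
# start_gunicorn.sh (sketch): runs at container start, when "db" is resolvable
# on the compose network, instead of at build time.

# Wait until Postgres accepts connections; pg_isready comes from postgresql-client.
until pg_isready -h db -p 5432 -U postgres; do
  echo "waiting for postgres..."
  sleep 1
done

# Apply the migrations that used to be RUN in the Dockerfile.
python manage.py makemigrations myapp
python manage.py migrate

# Hand over to the app server (module path is a placeholder for my project).
exec gunicorn myapp.wsgi:application --bind 0.0.0.0:8080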

  • `the second component (web) has Dockerfile that contains these two lines`, is this a Dockerfile in the running web container, or the Dockerfile *of* the web container? – bluescores Jan 24 '19 at 15:25
  • Your RUN commands are always executed at BUILD time, not at run time. Therefore your database will not exist when you build the image, since there is no database. So you cannot do it this way – Sven Hakvoort Jan 24 '19 at 15:25
  • Thanks @SvenHakvoort, I think I got it. docker-compose does not work this way: the fact that the db component comes before the web component in docker-compose.yml does not mean its container is already up and running when the web container is being built, right? – SwissNavy Jan 24 '19 at 17:03
  • @bluescores it's the Dockerfile of the web component that gets its image built and its container started (if it worked :-) ) by docker-compose – SwissNavy Jan 24 '19 at 17:11

3 Answers

3

I use the depends_on list to start the db container before the web container and the links to ensure that the host names can be resolved.

I add the following to the web service:

services:
  db:
    # ...
  web:
    links:
    - "db:db" # resolve the hostname "db" with the ip of the db container
    depends_on:
    - db # start db before web

Example
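
Note that depends_on only controls start order: it does not wait for Postgres to finish initializing, and it has no effect during docker build, so RUN python manage.py migrate in the Dockerfile will still fail. Once the migrations are moved to the startup command, a quick check that the hostname db actually resolves inside the running web container (a sketch, assuming the service names above) is:

# Start both services, rebuilding the web image first.
docker-compose up -d --build

# Resolve "db" from inside the running web container; this should print the db
# container's IP on the compose network if the link is in place.
docker-compose exec web python -c "import socket; print(socket.gethostbyname('db'))"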

1

You can try this configuration with network aliases.

version: '3.5'
services:

  db:
    image: postgres:10
    container_name: myapp-db
    volumes:
      - ./postgres_data:/var/lib/postgresql/data/
    expose:
      - "5432"
    environment:
       POSTGRES_DB: actionability-master
       POSTGRES_PASSWORD: postgres
       POSTGRES_USER: postgres
    networks:
      services-network:
        aliases:
         - db

  web:
    restart: unless-stopped
    container_name: myapp-web
    build: .
    command: /start_gunicorn.sh
    ports:
      - "8080:8080"
    environment:
      PRODUCTION: 'true'
    depends_on:
      - db
    networks:
      services-network:
        aliases:
         - web

networks:
   services-network:
     name: services-network
     driver: bridge
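
Whichever network setup you choose, the migrations still have to run after the containers are up, not during the image build. A minimal sketch of doing that by hand with this compose file (service names as above; it assumes manage.py is in the web image's working directory):

# Build the web image and start both services on services-network.
docker-compose up -d --build

# Run the migrations inside the running web container, where "db" resolves.
docker-compose exec web python manage.py makemigrations myapp
docker-compose exec web python manage.py migrate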
Masht Metti
0

I know this may seem really basic, but why not consider a modestly more sophisticated naming convention, and then simply add those names to your DNS service on the host, or at least the A records?

Then the system can reach the database by name or by IP. Just a thought.

Great choices and combination of platform and languages btw, Django + PostgreSQL are my favorite. I would also recommend Bootstrap, especially if you are looking for rapid tool deployment capabilities.

Fallenour
  • You mean instead of 'db' and 'web' call them something more descriptive? Sorry, I don't see how that helps with the fact that the web does not see the db? The `'HOST': 'db'` entry in settings.py should be resolvable. – SwissNavy Jan 24 '19 at 18:02
  • The systems still have to be able to communicate with one another. Maybe it's an IP issue, but a DNS-based approach may resolve it. I have seen that before in other issues with docker in the past, including other posts here. – Fallenour Jan 24 '19 at 22:24
  • As an additional note, I'd recommend making your databases physical nodes instead of docker containers. Databases take a performance hit when dockerized. It'll be better for them performance-wise long term if you simply make them physical clusters and run them Master-Slave or Master-Master. – Fallenour Jan 24 '19 at 22:25