
I'm building an app running on Node.js with PostgreSQL, using Sequelize as the ORM. To avoid installing a real Postgres daemon and Node.js on my own machine, I'm using containers with docker-compose.

When I run `docker-compose up` it starts the Postgres database

database system is ready to accept connections

and the Node.js server, but the server can't connect to the database:

Error: connect ECONNREFUSED 127.0.0.1:5432

If I run the server without containers (with Node.js and Postgres installed directly on my machine), it works.

But I want it to work correctly with containers. I don't understand what I'm doing wrong.

Here is the `docker-compose.yml` file:

web:
  image: node
  command: npm start
  ports:
    - "8000:4242"
  links:
    - db
  working_dir: /src
  environment:
    SEQ_DB: mydatabase
    SEQ_USER: username
    SEQ_PW: pgpassword
    PORT: 4242
    DATABASE_URL: postgres://username:pgpassword@127.0.0.1:5432/mydatabase
  volumes:
    - ./:/src
db:
  image: postgres
  ports:
  - "5432:5432"
  environment:
    POSTGRES_USER: username
    POSTGRES_PASSWORD: pgpassword

Could someone help me please?

(someone who likes docker :) )

Stainz42

8 Answers


Your DATABASE_URL refers to 127.0.0.1, which is the loopback adapter (more here). This means "connect to myself".

When running both applications (without using Docker) on the same host, they are both addressable on the same adapter (also known as localhost).

When running both applications in containers, they are no longer both on localhost as before. Instead, you need to point the web container at the db container's IP address on the docker0 adapter - which docker-compose sets up for you.

Change:

127.0.0.1 to CONTAINER_NAME (e.g. db)

Example:

DATABASE_URL: postgres://username:pgpassword@127.0.0.1:5432/mydatabase

to

DATABASE_URL: postgres://username:pgpassword@db:5432/mydatabase

This works thanks to Docker links: the web container has a file (/etc/hosts) with a db entry pointing to the IP that the db container is on. This is the first place a system (in this case, the container) will look when trying to resolve hostnames.
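In a Node app, the same change can be illustrated programmatically. Here is a minimal sketch using Node's built-in `URL` class; the helper name is hypothetical and the compose fix itself is just editing `DATABASE_URL` by hand:

```javascript
// Sketch (hypothetical helper): rewrite the host part of a connection URL
// so it points at the compose service name instead of the loopback address.
function pointAtService(databaseUrl, serviceName) {
  const url = new URL(databaseUrl);
  url.hostname = serviceName; // "127.0.0.1" -> "db", resolved via Docker's DNS
  return url.toString();
}

console.log(pointAtService(
  'postgres://username:pgpassword@127.0.0.1:5432/mydatabase', 'db'));
// → postgres://username:pgpassword@db:5432/mydatabase
```

Only the hostname changes; the credentials, port, and database name stay the same.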

Andy
  • Thanks Andy, that's what I was seeking. Have a nice day! – Stainz42 Oct 28 '15 at 19:59
  • I am a noob/beginner with postgres/node. Where exactly do I need to make the change? – nerdess Mar 31 '16 at 17:59
  • @nerdess `DATABASE_URL` is in the question's `docker-compose.yml` - that's what needs changing. You reference it in your application code with `process.env['DATABASE_URL']`. – Andy Apr 01 '16 at 00:28
  • What if, on a Windows 10 machine, your Node app is running in Docker, but your Postgres is not? – Alessandro Aug 09 '18 at 15:28
  • @Andy changing that I get `Error: getaddrinfo ENOTFOUND db db:5432` – Dani Dec 13 '18 at 09:34
  • It helped me a lot! Thank you! – szczepaniakdominik Aug 12 '20 at 08:28
  • @Dani Did you modify the `docker-compose.yml` or another file? – mdmundo Apr 08 '21 at 15:21
  • Adding the database container name from `docker-compose.yml` as the host worked for me as well. `localhost` is indeed not used when the app runs in a Docker environment. Thanks for your support. – Niyongabo Eric Jun 07 '21 at 18:00
  • For those who ran into the same issue @Dani mentioned, make sure your environment variables are configured correctly. In my case I'd misconfigured my Dockerfile so that Docker couldn't resolve the hostname. – ziggahunhow Oct 11 '22 at 16:08
  • `docker compose build` fixed my issue. `docker compose` doesn't rebuild your `Dockerfile` by default; see https://github.com/docker/compose/issues/1487. I was running an old build with `127.0.0.1` hardcoded. – Stokedout May 08 '23 at 19:47
  • The best answer out there. In newer docker compose versions, maybe also remember to add `container_name:` to each container, to be sure. – Hvitis Jun 27 '23 at 12:26

For future readers: if you're using Docker Desktop for Mac, use host.docker.internal instead of localhost or 127.0.0.1, as suggested in the docs. I came across the same connection refused problem: the backend api service couldn't connect to Postgres using localhost/127.0.0.1. Below are my docker-compose.yml and environment variables for reference:

version: "2"

services:
  api:
    container_name: "be"
    image: <image_name>:latest
    ports:
      - "8000:8000"
    environment:
      DB_HOST: host.docker.internal
      DB_USER: <your_user>
      DB_PASS: <your_pass>
    networks: 
      - mynw

  db:
    container_name: "psql"
    image: postgres
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: <your_postgres_db_name>
      POSTGRES_USER: <your_postgres_user>
      POSTGRES_PASSWORD: <your_postgres_pass>
    volumes:
      - ~/dbdata:/var/lib/postgresql/data
    networks:
      - mynw
Abu Shumon

I had two containers, one called postgresdb and another called node.

I changed my node queries.js from:

const pool = new Pool({
    user: 'postgres',
    host: 'localhost',
    database: 'users',
    password: 'password',
    port: 5432,
})

To

const pool = new Pool({
    user: 'postgres',
    host: 'postgresdb',
    database: 'users',
    password: 'password',
    port: 5432,
})

All I had to do was change the host to my container name ("postgresdb"), and that fixed it for me. I'm sure this can be done better, but I just learned Docker Compose and Node.js in the last two days.

Philip Jay Fry

If you pass the database vars separately, you can set the database host:

DB_HOST=<POSTGRES_SERVICE_NAME> # in your case "db", the service name from your docker-compose file
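Wired into application code, the separate vars might be assembled like this. This is a sketch; the helper name and the `'db'` default are assumptions, not from the answer:

```javascript
// Sketch (hypothetical helper): build a Sequelize-style options object from
// separate env vars, defaulting the host to the compose service name "db".
function dbConfig(env) {
  return {
    database: env.SEQ_DB,
    username: env.SEQ_USER,
    password: env.SEQ_PW,
    host: env.DB_HOST || 'db', // compose service name, not 127.0.0.1
    dialect: 'postgres',
  };
}
```

Sequelize accepts an options object like this one (e.g. `new Sequelize(dbConfig(process.env))`); the point here is only how the host is chosen.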
MEDZ

If none of the other solutions worked for you, consider manually wrapping the pg `Pool.connect()` call with a retry on `ECONNREFUSED`:

const pgPool = new Pool(pgConfig);
const pgPoolWrapper = {
    async connect() {
        for (let nRetry = 1; ; nRetry++) {
            try {
                const client = await pgPool.connect();
                if (nRetry > 1) {
                    console.info('Now successfully connected to Postgres');
                }
                return client;
            } catch (e) {
                if (e.toString().includes('ECONNREFUSED') && nRetry < 5) {
                    console.info('ECONNREFUSED connecting to Postgres, ' +
                        'maybe container is not ready yet, will retry ' + nRetry);
                    // Wait 1 second
                    await new Promise(resolve => setTimeout(resolve, 1000));
                } else {
                    throw e;
                }
            }
        }
    }
};

(See this issue in node-postgres for tracking.)
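The same idea can be factored into a generic helper with the connect function injected, so it can be exercised without a live database. This is a sketch; the helper name and options are assumptions, not from the answer above:

```javascript
// Generic retry sketch: connectFn is any async function that may fail with
// ECONNREFUSED while the database container is still starting up.
async function connectWithRetry(connectFn, { retries = 5, delayMs = 1000 } = {}) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await connectFn();
    } catch (e) {
      // Only retry connection-refused errors, and only a bounded number of times.
      if (!String(e).includes('ECONNREFUSED') || attempt >= retries) throw e;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

With pg this might be called as `connectWithRetry(() => pgPool.connect())`.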

leventov
  • Literally nothing worked until I tried this, and it instantly worked! Thank you!!!! – adarian Mar 18 '20 at 00:10
  • For people coming to this error from Stephen Grider's Docker course: use this after the PGClient connection in the server's index.js file. It works. – Sandy M Jul 06 '23 at 05:26

As mentioned here.

Each container can now look up the hostname web or db and get back the appropriate container’s IP address. For example, web’s application code could connect to the URL postgres://db:5432 and start using the Postgres database.

It is important to note the distinction between HOST_PORT and CONTAINER_PORT. In the docs' example, for db, the HOST_PORT is 8001 and the container port is 5432 (the Postgres default). Networked service-to-service communication uses the CONTAINER_PORT. When HOST_PORT is defined, the service is accessible outside the swarm as well.

Within the web container, your connection string to db would look like postgres://db:5432, and from the host machine, the connection string would look like postgres://{DOCKER_IP}:8001.

So DATABASE_URL should be postgres://username:pgpassword@db:5432/mydatabase
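To make the distinction concrete, the same credentials yield two different URLs depending on where you connect from. A sketch; the helper is hypothetical:

```javascript
// Hypothetical helper: assemble a Postgres URL from its parts.
function pgUrl({ user, pass, host, port, db }) {
  return `postgres://${user}:${pass}@${host}:${port}/${db}`;
}

const creds = { user: 'username', pass: 'pgpassword', db: 'mydatabase' };

// Inside the compose network: service name + CONTAINER_PORT.
console.log(pgUrl({ ...creds, host: 'db', port: 5432 }));
// → postgres://username:pgpassword@db:5432/mydatabase

// From the host machine: the Docker host address + the published HOST_PORT.
console.log(pgUrl({ ...creds, host: 'localhost', port: 8001 }));
// → postgres://username:pgpassword@localhost:8001/mydatabase
```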

mdmundo

Here is a small variation on handling this.

As Andy says in his answer:

  • "you need to point the web container to the db container's"

And taking into consideration the official docker-compose documentation on links:

  • "Links are not required to enable services to communicate - by default, any service can reach any other service at that service's name."

Because of that, you can keep your docker-compose.yml this way:

docker-compose.yml

version: "3"
services:
    web:
      image: node
      command: npm start
      ports:
         - "8000:4242"
      # links:
      #   - db
      working_dir: /src
      environment:
        SEQ_DB: mydatabase
        SEQ_USER: username
        SEQ_PW: pgpassword
        PORT: 4242
        # DATABASE_URL: postgres://username:pgpassword@127.0.0.1:5432/mydatabase
        DATABASE_URL: "postgres://username:pgpassword@db:5432/mydatabase"
      volumes:
          - ./:/src
    db:
      image: postgres
      ports:
          - "5432:5432"
      environment:
        POSTGRES_USER: username
        POSTGRES_PASSWORD: pgpassword

That said, keeping the links is a nice way to be explicit while coding, so your approach is fine too.

Franco Gil

I was walking through this page looking for a solution to my problem, and this is the change that fixed it. Change this:

Change this:

  db:
    image: postgres:14.1-alpine
    container_name: doca_db
    ports:
      - "54320:5432" # <---------- HERE
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - psql:/var/lib/postgresql/data:Z

to this:

  db:
    image: postgres:14.1-alpine
    container_name: doca_db
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - psql:/var/lib/postgresql/data:Z