
This error is similar to ECONNREFUSED, but the setup is different, so I'm asking a separate question here.

Here is the docker-compose.yml file:

version: '3'

services:
  server:
    build:
      context: .
    volumes:
      # Mounts the project directory on the host to /app inside the container,
      # allowing you to modify the code without having to rebuild the image.
      - .:/app
      # Just specify a path and let the Engine create a volume.
      # Data present in the base image at the specified mount point will be copied
      # over to the new volume upon volume initialization.
      # node_modules from this new volume will be used and not from your local dev env.
      - /app/node_modules/

    # Expose ports (HOST:CONTAINER)
    ports:
      - "4040:4040"

    # Set environment variables from this file
    env_file:
      - .env

    # Overwrite any env var defined in .env file (if required)
    environment:
      - NODE_ENV=development

    # Link to containers in another service.
    # Links also express dependency between services in the same way as depends_on,
    # so they determine the order of service startup.
    links:
      - postgres
  postgres:
    image: "postgres:9.6"
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: 123456
      POSTGRES_USER: postgres
      POSTGRES_DB: postgres

Here is the database.json file I use to store the database configuration:

{
  "development": {
    "username": "postgres",
    "password": "123456",
    "database": "mydb",
    "host": "127.0.0.1",
    "dialect": "postgres",
    "pool": {
      "max": 100,
      "min": 0,
      "idle": 10000
    }
  },
  "test": {
    "username": "postgres",
    "password": "123456",
    "database": "mytestdb",
    "host": "127.0.0.1",
    "dialect": "postgres"
  },
  "production": {
    "username": "postgres",
    "password": "123456",
    "database": "mydb",
    "host": "127.0.0.1",
    "dialect": "postgres"
  }
}

And I use Sequelize to connect to the DB:

import Sequelize from 'sequelize'
import database from '../../config/database.json'

// Pick the config block for the current environment.
const dbConfig = database[process.env.NODE_ENV || 'development']

const sequelize = new Sequelize(dbConfig.database, dbConfig.username, dbConfig.password, dbConfig)

I know that when the applications run in containers they are no longer both on localhost, so I have to change the host. But how do I change it here? I worked around it by setting the host to postgres. That works, but it's not the solution I'm looking for.

By the way, how can I create the database here? I get:

postgres_1 | FATAL: database "starflow" does not exist
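
This FATAL message means the app is asking for a database named starflow that was never created; the compose file above only creates the default postgres database. One minimal fix (a sketch, assuming the app really expects a database called starflow) is to let the official image create it on first startup via POSTGRES_DB:

```yaml
  postgres:
    image: "postgres:9.6"
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: 123456
      POSTGRES_USER: postgres
      # The image's entrypoint creates this database on first run
      # (only while the data directory is still empty).
      POSTGRES_DB: starflow
```

Note this only takes effect on a fresh data volume; an already-initialized volume keeps its existing databases.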

ivan.sim
Toan Tran
  • So what solution do you want? Changing `host` to `postgres` is the best way – callmemath Aug 08 '17 at 17:46
  • Do I need to create a new environment for Docker? I only want to use development for running the server locally. – Toan Tran Aug 08 '17 at 17:47
  • Docker runs in a new environment itself. And you need to configure that environment when the container runs for the first time. – Ayushya Aug 08 '17 at 17:55

2 Answers


There are two things you need to do. First, move your app onto the DB's network so that the DB is reachable on localhost. This requires adding a network_mode to your service. See the updated YAML:

version: '3'

services:
  server:
    build:
      context: .
    volumes:
      # Mounts the project directory on the host to /app inside the container,
      # allowing you to modify the code without having to rebuild the image.
      - .:/app
      # Just specify a path and let the Engine create a volume.
      # Data present in the base image at the specified mount point will be copied
      # over to the new volume upon volume initialization.
      # node_modules from this new volume will be used and not from your local dev env.
      - /app/node_modules/

    # Expose ports (HOST:CONTAINER)
    # ports:
    #   - "4040:4040"
    
    network_mode: service:postgres

    # Set environment variables from this file
    env_file:
      - .env

    # Overwrite any env var defined in .env file (if required)
    environment:
      - NODE_ENV=development

    # Link to containers in another service.
    # Links also express dependency between services in the same way as depends_on,
    # so they determine the order of service startup.
    links:
      - postgres
  postgres:
    image: "postgres:9.6"
    ports:
      - "5432:5432"
      - "4040:4040"
    environment:
      POSTGRES_PASSWORD: 123456
      POSTGRES_USER: postgres
      POSTGRES_DB: postgres

Note that the ports are moved to the service that provides the network. We run the server service on the postgres network; this way each can reach the other on localhost, and no change is needed in your environment config.
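
To make the effect concrete: with the shared network namespace, a connection string built from the unchanged development block of database.json still points at 127.0.0.1 (a minimal sketch; the URL is just for illustration, not the answer's code):

```javascript
// With network_mode: service:postgres, the app and the DB share one network
// namespace, so the development config keeps working unchanged.
const config = {
  username: 'postgres',
  password: '123456',
  database: 'mydb',
  host: '127.0.0.1', // still localhost: it now resolves inside the shared namespace
  dialect: 'postgres',
}

// Build the connection URL Sequelize would effectively use.
const url = `postgres://${config.username}@${config.host}:5432/${config.database}`
console.log(url) // prints postgres://postgres@127.0.0.1:5432/mydb
```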

This is recommended only in development or testing environments, not in production. If you are building a Docker deployment that will be used in production, DON'T use this approach.

Next, to customize the postgres image so that it creates a different database, follow the image's documentation below.

How to extend this image

If you would like to do additional initialization in an image derived from this one, add one or more *.sql, *.sql.gz, or *.sh scripts under /docker-entrypoint-initdb.d (creating the directory if necessary). After the entrypoint calls initdb to create the default postgres user and database, it will run any *.sql files and source any *.sh scripts found in that directory to do further initialization before starting the service.

For example, to add an additional user and database, add the following to /docker-entrypoint-initdb.d/init-user-db.sh:

#!/bin/bash
set -e

psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<-EOSQL
    CREATE USER docker;
    CREATE DATABASE docker;
    GRANT ALL PRIVILEGES ON DATABASE docker TO docker;
EOSQL
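
To wire such a script into the compose setup above, you can bind-mount it into the init directory (a sketch; the host path ./init-user-db.sh is an assumption):

```yaml
  postgres:
    image: "postgres:9.6"
    environment:
      POSTGRES_PASSWORD: 123456
      POSTGRES_USER: postgres
      POSTGRES_DB: postgres
    volumes:
      # Scripts in this directory run on first start only,
      # while the data directory is still empty.
      - ./init-user-db.sh:/docker-entrypoint-initdb.d/init-user-db.sh
```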

For more details refer to https://hub.docker.com/_/postgres/

Tarun Lalwani
0

Your host shouldn't have to change across environments. It should be set to the name of your pgsql service as defined in your docker-compose.yml, in this case postgres.

That said, if you want to avoid hard-coding any environment-specific parameters in your database.json file, you can split them into separate files and extend your docker-compose.yml with an additional environment-specific Compose file.

For example, you can split your database.json into db-dev.json, db-staging.json and db-prod.json.
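
A db-dev.json might then look like this (a sketch based on the development block above, with host pointing at the postgres service name):

```json
{
  "username": "postgres",
  "password": "123456",
  "database": "mydb",
  "host": "postgres",
  "dialect": "postgres"
}
```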

Then you define environment-specific Compose files that mount the different files. For example,

# dbconfig-dev.yml
services:
  server:
    volumes:
      # Mount the file over the path the app reads its config from,
      # not over the /app directory itself.
      - ./config/db-dev.json:/app/config/database.json

# dbconfig-staging.yml
services:
  server:
    volumes:
      - ./config/db-staging.json:/app/config/database.json

# dbconfig-prod.yml
services:
  server:
    volumes:
      - ./config/db-prod.json:/app/config/database.json

Notice that these Compose files aren't full Compose definitions; they contain only the relevant volumes fragments.

Then you can extend your original docker-compose.yml by doing:

$ docker-compose -f docker-compose.yml -f dbconfig-dev.yml up

You can read more about this in the Compose docs.

ivan.sim