
Hello, I have multiple projects that have their own Dockerfiles and docker-compose.yml files. I am not too familiar with how I would set up the networking between these projects so they could share the same databases and talk to one another. Does anyone have suggestions?

Right now, in one of the projects I am just pulling all the Dockerfiles into a single docker-compose.yml and setting up all the services I need from all the other projects in this one file. I do not think this is ideal, and there is a high level of coupling between the services.

version: "3"

services:
  db:
    image: mysql/mysql-server
    ports:
      - 3306:3306

  mongo:
    image: mongo
    restart: always

  rails_app:
    build:
      context: ${RAILS_APP_PATH}
      dockerfile: Dockerfile
    volumes:
      - ${RAILS_APP_PATH}:/application
    ports:
      - 4000:4000
    depends_on:
      - db
      - mongo
    links:
      - db
      - mongo

  frontend:
    build:
      context: ${FRONTEND_PATH}
    ports:
      - ${EXPOSED_PORT}:${EXPOSED_PORT}
    depends_on:
      - go_services
    links:
      - go_services

  go_services:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - db
      - mongo
      - rails_app
    links:
      - db
      - mongo
      - rails_app
Josh Martin

2 Answers


The trick is to use an external Docker network. Set up the network once, and containers from different Compose projects can talk to each other by their service names.

Set up the network on the host:

docker network create my-net

First compose file

version: '3.9'

services:

  mymongo:
    image: mongo:latest
    restart: unless-stopped
    container_name: mongo
    environment:
      MONGO_INITDB_DATABASE: mymongo
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: password
    volumes:
      - ./database:/data/db
    ports:
      - "27017:27017"

networks:
  default:
    external: true
    name: my-net

Second compose file

version: '3.9'

services:

  ui:
    build:
      context: ./build
      dockerfile: Dockerfile_ui
    image: ui
    restart: "no"
    container_name: ui
    ports:
      - "8005:3000"
    command: ["npm", "start"]

networks:
  default:
    external: true
    name: my-net
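
With both projects attached to my-net, the ui container can reach the database at the service name mymongo (or the container name mongo). As a sketch, you could pass the connection string into the ui service through its environment; the MONGO_URL variable name here is just an illustration, not something the stack above already defines:

```yaml
  ui:
    # ...build, ports, command as above...
    environment:
      # "mongo" resolves through the shared my-net network;
      # the root user authenticates against the admin database
      MONGO_URL: mongodb://root:password@mongo:27017/mymongo?authSource=admin
```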
BertC

You can do this without any special Compose setup, if:

  • each project is self-contained (they do not share databases)
  • the service locations are configurable via environment variables
  • you don't mind communicating via the host

If you're thinking about scaling up this project at all, this approach can look attractive. It will work even if you're running each Compose file on a different host, and it translates well into clustered environments like Kubernetes.

Go ahead and break up your Compose file into several independent ones:

# rails/docker-compose.yml
version: '3.8'
services:
  db:
    image: mysql/mysql-server
  app:
    build: .
    ports: ['4000:4000']
    depends_on: [db]

# go/docker-compose.yml
services:
  mongo:
    image: mongo
  service:
    build: .
    ports: ['8080:8080']
    depends_on: [mongo]
    environment:
      - RAILS_APP_URL

The very last line here passes the RAILS_APP_URL environment variable from the host environment into the container.

You can start the Rails application independently:

docker-compose -f ./rails/docker-compose.yml up -d

You need to find some hostname where the container can call back to the host. On macOS and Windows hosts, Docker provides a special hostname host.docker.internal for this. You can then point the client container at the published port of its server:

export RAILS_APP_URL=http://host.docker.internal:4000
docker-compose -f ./go/docker-compose.yml up
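
On Linux, host.docker.internal isn't defined by default, but on Docker 20.10 and later you can map it to the host's gateway yourself in the client's Compose file:

```yaml
# go/docker-compose.yml (Linux only)
services:
  service:
    extra_hosts:
      - "host.docker.internal:host-gateway"
```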

If you're doing development, you can run the service you're working on locally, run its dependencies in containers, and point the environment variable at the container:

go build -o ./server ./cmd/server
export RAILS_APP_URL=http://localhost:4000
./server

If you want to run this setup on multiple hosts but without using a dedicated cluster manager like Docker Swarm or Kubernetes, set the environment variable to point at the DNS name of the host running the service. If you did want to translate this to Kubernetes, a Helm "chart" would be analogous, containing the Deployment, Service, etc. and dependencies for a single component, and you could configure the other service's URL through Helm values.
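
As a rough sketch of the Helm side, a hypothetical chart could expose the URL as a value (railsAppUrl is an invented name) and inject it as the same environment variable in its Deployment template:

```yaml
# templates/deployment.yaml (excerpt)
        env:
          - name: RAILS_APP_URL
            value: {{ .Values.railsAppUrl | quote }}
```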

David Maze