
I have a bit of a problem with connecting the dots.

I managed to dockerize our legacy app and our newer app, but now I need to make them talk to one another via API calls.

Projects:

  • Project1 = using project1_appnet (bridge driver)
  • Project2 = using project2_appnet (bridge driver)
  • Project3 = using project3_appnet (bridge driver)

On my local machine, I have these 3 projects in 3 separate folders. Each project has its own app, db and cache services.

This is the docker-compose.yml for one of the projects. (They all have nearly the same docker-compose.yml, only with different images and volume paths.)

version: '3'
services:
  app:
    build: ./docker/app
    image: 'cms/app:latest'
    networks:
      - appnet
    volumes:
      - './:/var/www/html:cached'
    ports:
      - '${APP_PORT}:80'
    working_dir: /var/www/html
  cache:
    image: 'redis:alpine'
    networks:
      - appnet
    volumes:
      - 'cachedata:/data'
  db:
    image: 'mysql:5.7'
    environment:
      MYSQL_ROOT_PASSWORD: '${DB_ROOT_PASSWORD}'
      MYSQL_DATABASE: '${DB_DATABASE}'
      MYSQL_USER: '${DB_USER}'
      MYSQL_PASSWORD: '${DB_PASSWORD}'
    ports:
      - '${DB_PORT}:3306'
    networks:
      - appnet
    volumes:
      - 'dbdata:/var/lib/mysql'
networks:
  appnet:
    driver: bridge
volumes:
  dbdata:
    driver: local
  cachedata:
    driver: local

Question:

  • How can I make them talk to one another via API calls? (Both on my local machine for development and in the prod environment)
  • On production the setup will be a bit different: the apps will be on different machines, but still in the same VPC, or possibly even communicating over the public network. What is the setting for that?

Note:

  • I have been looking at links, but apparently it is deprecated in Compose file v3, or at least not really recommended
  • Tried curl from the project1 container to the project2 container, by doing:
root@bc3afb31a5f1:/var/www/html# curl localhost:8050/login
curl: (7) Failed to connect to localhost port 8050: Connection refused
rfpdl
  • Have you tried accessing them by their name as a hostname? So, for example, `app` would access `db` by using hostname `db` and port `3306`. – ahwayakchih Oct 06 '19 at 07:40
  • @ahwayakchih, I would like `app1` to be able to call to `app2` via api call, the `app1` can connect to their own `db` accordingly without any issue. – rfpdl Oct 06 '19 at 07:47
  • maybe something there will help https://stackoverflow.com/q/38088279/6352710 – ahwayakchih Oct 06 '19 at 07:50
  • especially this one: https://stackoverflow.com/a/48024244/6352710 – ahwayakchih Oct 06 '19 at 07:54
  • @ahwayakchih, it does not work: `Network test_network declared as external, but could not be found. Please create the network manually using `docker network create test_network` and try again.` – rfpdl Oct 06 '19 at 08:07
  • I guess it depends on the order in which the instances are launched. But in the question I linked to, there are other, older answers that people seem to confirm as working. – ahwayakchih Oct 06 '19 at 08:10

4 Answers


If your final setup will be that each service will be running on a physically different system, there aren't really any choices. One system can't directly access the Docker network on another system; the only way service 1 will be able to reach service 2 is via its host's DNS name (or IP address) and the published port. Since this will be different in different environments, I'd suggest making that value a configured environment variable.

environment:
  SERVICE_2_URL: 'http://service-2-host.example.com/' # default port 80
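Inside the service-1 container, the application (or a quick shell check) would then reach service 2 through that variable rather than a hard-coded address; the /login path here is just borrowed from the question, and the trailing-slash trim is a cosmetic detail:

```shell
# Hypothetical check from inside the service-1 container.
# SERVICE_2_URL is injected by docker-compose, so the same code works
# unchanged whether it points at a public DNS name or a Docker-internal one.
curl "${SERVICE_2_URL%/}/login"
```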

Once you've settled on that, you can mostly use the same setup for a single-host deployment. If your developer systems use Docker for Mac or Docker for Windows, you should be able to use a special Docker hostname to reach the other service:

environment:
  SERVICE_2_URL: 'http://host.docker.internal:8082/'

(If you use Linux on the desktop, you will have to know some IP address for the host: not localhost, because that means "this container", and not the docker0 interface address, because that will be on a specific network, but something like the host's eth0 address.)
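On more recent Docker Engine versions (20.10+) there is a workaround for the Linux case: you can map the host.docker.internal name to the special host-gateway alias yourself, so the same URL works on all platforms. A sketch of that addition to the compose file:

```yaml
services:
  app:
    extra_hosts:
      # Maps the Docker-for-Mac/Windows hostname onto the Linux host's
      # gateway address, so the same SERVICE_2_URL works everywhere.
      - 'host.docker.internal:host-gateway'
```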

Your other option is to "borrow" the other Docker Compose network as an external network. There is some trickiness if all of your Docker Compose setups have the same names; from some experimentation it seems like the Docker-internal DNS will always resolve to your own Docker Compose file first, and you have to know something like the Compose-assigned container name (which isn't hard to reconstruct and is stable) to reach the other service.

version: '3'
networks:
  app2:
    external:
      name: app2_appnet
services:
  app:
    networks:
      - appnet
      - app2 # the key declared under networks: above
    environment:
      SERVICE_2_URL: 'http://app2_app_1/' # using the service-internal port
      MYSQL_HOST: db # in this docker-compose.yml
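Assuming the project-2 stack is already up (so app2_appnet actually exists), you can sanity-check the cross-project connectivity from the project-1 side; the container and network names here follow Compose's default project_service_N naming and may differ in your setup:

```shell
# The external network is created by `docker-compose up` in project 2
# and must exist before project 1 starts:
docker network ls --filter name=app2_appnet

# From the project-1 app container, resolve and call project 2 by its
# Compose-assigned container name on the service-internal port:
docker-compose exec app curl http://app2_app_1/
```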

(I would suggest using the Docker Compose default network over declaring your own; that will mostly let you delete all of the networks: blocks in the file without any ill effect, but in this specific case you will need to declare networks: [default, app2_default] to connect to both.)
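With the default-network approach, the same file might be sketched like this (app2_default being the network name Compose auto-derives for a project named app2):

```yaml
version: '3'
services:
  app:
    networks:
      - default       # this project's own auto-created network
      - app2_default  # the other project's auto-created network
networks:
  app2_default:
    external: true    # created by `docker-compose up` in the app2 project
```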

You may also consider a multi-host container solution when you're starting to look at this. Kubernetes is kind of heavy-weight, but it will run containers on any node in the cluster (you don't specifically have to worry about placement) and it provides both namespaces and automatic DNS resolution for you; you can just set SERVICE_2_URL: 'http://app.app2/' to point at the other namespace without worrying about these networking details.

David Maze
  • `http://host.docker.internal:{port}` solved my problem for local, and yes for production I will have to change it to the public IP of the servers. Thank you very much – rfpdl Oct 06 '19 at 11:42

If you run this docker-compose locally, then given that app and db are on the same network (appnet), app should be able to talk to db using localhost:${DB_PORT}.

In production, if app and db are on different machines, app would probably need to talk to the database using an IP address or domain name.

Ryan.Bartsch
  • I am not talking about `app` to `db`. But `app1` to `app2` which is in the same machine for development and it will be on a different machine for the production – rfpdl Oct 06 '19 at 07:41
  • @rfpdl have you tried `curl app2:80/login` from `app1` container? Because you seem to be setting it to listen on port `80`. – ahwayakchih Oct 06 '19 at 07:43
  • @ahwayakchih, tried just now, `root@bc3afb31a5f1:/var/www/html# curl cms_app_1:80/login curl: (6) Could not resolve host: cms_app_1` – rfpdl Oct 06 '19 at 07:46
  • do app1 and app2 both share the appnet network? If so, you should be able to use localhost, if not, but they're on the same docker instance, you can use the container name i.e. app2:{port} - as per @ahwayakchih above – Ryan.Bartsch Oct 06 '19 at 07:47
  • @rfpdl yeah, sorry, I only now noticed you want to connect from one docker-composed project to another docker-composed project, not from one instance to another inside the same composed project. Never tried that before :). – ahwayakchih Oct 06 '19 at 07:47
  • @ahwayakchih, no problem man. Thank you for taking time to help. – rfpdl Oct 06 '19 at 07:48
  • Still ok - if app1 and app2 containers are in same docker instance, but have different networks, you can connect using the container name. – Ryan.Bartsch Oct 06 '19 at 07:48
  • @Ryan.Bartsch, they are not in the same docker instance, they have different docker-compose as they are in different folders. – rfpdl Oct 06 '19 at 07:49
  • 1
    If containers share same docker instance and same docker network, they can communicate over localhost (using published or exposed ports). Same docker instance, but different docker network, they can use container name (with published or exposed ports). If they're in different docker instances (i.e. different VMs), you'll need to use ip/dns (with published ports) – Ryan.Bartsch Oct 06 '19 at 07:53

Considering that you are using different machines for the different Docker deployments, you could put them behind a regular webserver (Apache2, Nginx) and then route the traffic from the specific domain to $APP_PORT using a simple vhost. I prefer to do that instead of directly exposing the container to the network. This way you would also be able to host multiple applications on the same machine (if you like to). So I suggest you should not try to connect Docker networks, but "regular" ones.
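A minimal sketch of such a vhost, assuming Nginx runs on the host and the container's $APP_PORT is published as 8050; the domain and port are placeholders:

```nginx
server {
    listen 80;
    server_name app2.example.com;        # placeholder domain for app2

    location / {
        # Forward to the port docker-compose published on this host
        proxy_pass http://127.0.0.1:8050;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```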

dstrants
  • 1
    For now, I just need my local to work, where I can have both app do API with one another. I am still clueless on the setup for Docker on this. I do not understand what you are trying to point out though. – rfpdl Oct 06 '19 at 09:28

I was playing around with inspect and cURL, and I think I found the solution.

Locally:

  • On my local machine, I inspected the container and viewed NetworkSettings.Networks.<network name>.Gateway, which is 172.25.0.1
  • Then I got the exposed port, which is 8050
  • Then I did a curl inside the app1 container, curl 172.25.0.1:8050/login, to check whether app1 can make an HTTP request to the app2 container. OR: docker exec -it project1_app_1 curl 172.25.0.1:8050/login
  • Vice versa, I did curl 172.25.0.1:80 for app2 -> app1. OR: docker exec -it project2_app_1 curl 172.25.0.1:80
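The gateway lookup in the first step can be scripted rather than read off the full inspect output; the container name follows Compose's project_service_N convention and may differ in your setup:

```shell
# Print the gateway of the (single) network this container is attached to:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.Gateway}}{{end}}' project2_app_1
```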

The only issue is that the Gateway value changes when we restart via docker-compose up -d

For production:

I am not that pro with networking and stuff. My estimate for production would be:

Do curl app2-domain.com, which is pointed to the app by the webserver, as they are on their own machines (possibly even behind a load balancer).

rfpdl