
We have been migrating part of our system to a more microservice-oriented architecture, and we have opted to run the new services as Docker containers. Our architecture at the moment is as follows:

a) Several web servers, hosting the still monolithic PHP application

b) Three new VMs, which will run:

  • An "alerting" microservice
  • A MongoDB instance
  • A custom microservice registry (based on Redis)

So, my problem is as follows:

Our current infrastructure is all in the 10.0.0.0/24 range, while Docker spins up its containers in the 172.x.1.x range. How do I get the web servers (on 10.0.0.0/24) to connect to the services registered with the "registry", which sits at 172.17.1.3 (for example)?

I've read up on extensions such as Swarm and Compose, but those don't seem to solve the networking problem.

You might say "well, you're already exposing the relevant port on the alerting service, just connect to that VM's IP address", but the problem is that when the service (i.e. the NodeJS application inside the Docker container) starts up, it registers its exposed port with the service "registry". The registry uses the requesting IP address to build up a sort of path. So the service starts up, and gets registered in the "registry" as 172.17.1.5:3001. If this is the only way, is there a way for the service to find out its host's IP address instead?
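To make the set-up concrete (the image name and registry address below are placeholders; only the port matches the example above), the alerting container is started roughly like this, and whatever address the app sees from inside the container is what ends up registered:

    # Placeholder names/addresses: "alerting-service" image, registry at 10.0.0.40
    docker run -d --name alerting \
        -p 3001:3001 \
        -e REGISTRY_URL=http://10.0.0.40:8080 \
        alerting-service

    # Inside the container the app only sees its own eth0 (e.g. 172.17.1.5),
    # so 172.17.1.5:3001 ends up in the registry instead of <VM IP>:3001.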

Any suggestions? Hope this makes sense!

Thanks for any help!

iLikeBreakfast

1 Answer


The registry uses the requesting IP address to build up a sort of path. So the service starts up, and gets registered in the "registry" as 172.17.1.5:3001. If this is the only way, is there a way for the service to find out its host's IP address instead?

If those containers are running on the same host (one Docker daemon), you don't need a registry, as all containers see each other through a common Docker network (which docker-compose creates by default with version 2 of its file format).
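For instance, a minimal docker-compose.yml (the service and image names are made up) would put both containers on one network, where they can reach each other by service name:

    version: '2'
    services:
      alerting:
        image: alerting-service      # placeholder image name
        ports:
          - "3001:3001"              # published so the 10.0.0.0/24 web servers can reach it
      mongo:
        image: mongo
    # With version 2, compose creates a default network: "alerting" can reach
    # MongoDB simply at mongo:27017, with no registry lookup needed.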

If those containers are running on different hosts (each running its own Docker daemon), you need an additional key-value store in order to give the containers visibility across VMs, allowing you to resolve a container name to the right IP across hosts.
See "How to make Docker container accessible to other network machines through IP?" and this tutorial.

(Diagram: https://i.stack.imgur.com/ahLaf.png)

That could replace your redis registry.
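As a sketch of what that looks like with the built-in overlay driver (the choice of Consul, and all addresses and names, are assumptions): each daemon is pointed at the shared key-value store, and the multi-host network is created once.

    # On each VM, point the Docker daemon at a shared key-value store.
    # (Consul is used here as an example; etcd or ZooKeeper work too.
    #  All addresses and names are placeholders.)
    docker daemon \
        --cluster-store=consul://10.0.0.40:8500 \
        --cluster-advertise=eth0:2376

    # On any one of the VMs, create a multi-host overlay network:
    docker network create -d overlay service-net

    # Containers started on that network, on any of the three VMs,
    # can then reach each other by container name:
    docker run -d --net=service-net --name alerting alerting-service
    docker run -d --net=service-net --name mongo mongo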

VonC
  • Thanks for your answer! I might have mistyped: I didn't mean that the Docker instance gets registered. What I mean is that when the application running inside Docker starts up, it registers itself with a "microservice registry". So from other clients, I can simply call {registry IP}/find?key=alerting, and that will return the IP and port where the client can access the "alerting" application, if that makes sense? I've updated my question accordingly. – iLikeBreakfast Apr 05 '16 at 07:24
  • @iLikeBreakfast my answer stands. Registry or not, the key question here is: are your containers running within one Docker daemon on one machine, or are they running on different hosts, each with its own Docker daemon? – VonC Apr 05 '16 at 07:26
  • I would LIKE them to run on different daemons. I figure it'll be easier to scale if we just get this done right now. So three containers, running on three different VMs. Regardless, how would an outside application connect to the internal container network? Is that even possible? Having `10.0.0.2` connect to container `172.17.1.23`, for example? – iLikeBreakfast Apr 05 '16 at 07:34
  • @iLikeBreakfast different daemons means the key-value store I describe. – VonC Apr 05 '16 at 07:45
  • @iLikeBreakfast "how would an outside application connect to the internal container network?" it uses the VM ip (the one where the container is running). The container must EXPOSE the ports, and the VM must port-forward that port (http://stackoverflow.com/a/36385476/6309) – VonC Apr 05 '16 at 07:51