
I'm trying to create something like this:

[Diagram: Docker Network Architecture — one client container connected to several server containers]

The server containers each have port 8080 exposed, and accept requests from the client, but crucially, they are not allowed to communicate with each other.

The problem here is that the server containers are launched after the client container, so I can't pass container link flags to the client like I used to, since the containers it's supposed to link to don't exist yet.

I've been looking at the newer Docker networking stuff, but I can't use a bridge because I don't want server cross-communication to be possible. It also seems to me like one bridge per server doesn't scale well, and would be difficult to manage within the client container.

Is there some kind of switch-like docker construct that can do this?

Yousef Amar
  • Could you explain a bit more why can't you use link? – Héctor Valverde May 18 '16 at 07:35
  • In addition, could you provide some examples such as docker-compose files or any other script you use to orchestrate all your containers? – Héctor Valverde May 18 '16 at 07:37
  • I can't use link because Servers are launched after the Client, so I can't link the Client to servers that don't exist yet, as the link info is passed on start, and you can't link retrospectively (related: http://stackoverflow.com/questions/25324860/how-to-create-a-bidirectional-link-between-containers). If it was one server, and many clients, I could pass link info to the clients, but I want the reverse. – Yousef Amar May 18 '16 at 08:24
  • I'm using Dockerode to interface with Docker through the Docker Remote API, but same goes for launching containers command line and using the flags. Right now I'm just using a network, but it's not ideal because I don't want the Servers connected to that network to be able to communicate with each other. I'm looking into iptables rules. – Yousef Amar May 18 '16 at 08:28
  • I don't understand why you need to "launch" the client containers before the server. But anyway, you could create the containers first and start them once the servers are available. Note that the `docker run` command is a sequence of `docker create` and `docker start` commands. – Héctor Valverde May 18 '16 at 08:32
  • That's just the nature of my application; the client runs forever, the servers come and go over time. I don't ever actually use `run` since it doesn't play well with private registries, but link/net info is passed through `start` anyway so it makes no difference. – Yousef Amar May 18 '16 at 08:45
  • I see ... do you think a sort of whitelist set in your servers could work? Also you can use bind ips and or ACL rules – Héctor Valverde May 18 '16 at 08:48
  • Actually, nvm my last statement, it can be passed in create too, but unfortunately that won't help because the servers need to be able to start long after the client. – Yousef Amar May 18 '16 at 08:48
  • I'd ideally like the servers to be "platform-independent", so any kind of whitelist would need to be outside, but even in that case it seems that the Docker side might get hairy (one network per server?). – Yousef Amar May 18 '16 at 08:51

2 Answers


It seems like you will need to create multiple bridge networks, one per container. To simplify that, you may want to use docker-compose to specify how the networks and containers should be provisioned, and have the docker-compose tool wire it all up correctly.
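To make the one-network-per-server idea concrete, here is a minimal docker-compose sketch. Image, service, and network names are illustrative; the point is that each server sits on its own bridge, and the client joins all of them:

```yaml
version: "2"

services:
  client:
    image: my-client            # illustrative image name
    networks:
      - net_server1
      - net_server2

  server1:
    image: my-server            # illustrative image name
    expose:
      - "8080"
    networks:
      - net_server1

  server2:
    image: my-server
    expose:
      - "8080"
    networks:
      - net_server2

# One bridge network per server: the client can reach every server
# on port 8080, but servers on different bridges cannot see each other.
networks:
  net_server1:
  net_server2:
```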

One more side note: I think that exposed ports are accessible to all networks. If that's right, you may be able to set all of the server networking to none and rely on the exposed ports to reach the servers.
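For what it's worth, the same per-server wiring can also be done at runtime with the plain Docker CLI, which sidesteps the start-order problem from the question, since `docker network connect` works on already-running containers. Container and network names here are illustrative:

```shell
# One isolated bridge per server.
docker network create --driver bridge net_server1

# The client may already be running; networks can be attached at runtime.
docker network connect net_server1 client

# Start the server on its own bridge: it can reach the client,
# but not servers on other bridges.
docker run -d --name server1 --net net_server1 my-server
```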

Scott Swezey

Hope this is relevant to your use-case - I'm trying to infer context about your actual application from the diagram and comments. I'd recommend going the Service Discovery route. It may involve a small API over a central store (say, Redis or SkyDNS), but it would keep things simple in the long run.

Kubernetes, for instance, uses SkyDNS to do so with DNS. At the end of the day, any orchestration tool of your choice would most likely do something like this out of the box: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns

The idea is simple:

  • Use a DNS container that keeps entries of newly spawned servers
  • Allow the Client Container to query it for a list of servers, e.g. picture a DNS response with a bunch of server-<<ISO Timestamp of Server Creation>> entries
  • Disallow the server containers read-access to this DNS (how to manage this permission configuration without indirection, i.e. without proxying through an endpoint that allows writing into the DNS container but not reading from it, is going to be exotic)

Bonus Edit: I just realised you can use a simpler Redis-like setup to do this, and that DNS might just be overengineering :)
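A minimal sketch of that registry idea, using a plain in-memory dict to stand in for Redis/SkyDNS (all names here are illustrative, and a real setup would expose the write and read paths as separate endpoints with different access rules):

```python
import time

class ServerRegistry:
    """Minimal in-memory stand-in for a Redis/SkyDNS-backed registry.

    Servers call register() (write-only path); the client calls
    list_servers() (read-only path). Keeping these as separate
    endpoints is what stops servers from discovering each other.
    """

    def __init__(self):
        self._servers = {}  # registered name -> server address
        self._count = 0     # suffix to keep same-second names unique

    def register(self, address):
        # Name servers by creation time, as in the DNS idea above;
        # a counter suffix avoids collisions within the same second.
        self._count += 1
        stamp = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
        name = "server-%s-%d" % (stamp, self._count)
        self._servers[name] = address
        return name

    def list_servers(self):
        # Only the client should be allowed to call this.
        return dict(self._servers)


registry = ServerRegistry()
registry.register("172.18.0.2:8080")
registry.register("172.18.0.3:8080")
print(sorted(registry.list_servers().values()))
```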

Angad
  • How does this prevent servers from communicating with each other? I don't imagine you'd proxy all traffic through the DNS container? Why couldn't a server just scan for other servers by itself? – Yousef Amar May 24 '16 at 18:18