
I've created 3 VMs using docker-machine:

docker-machine create -d virtualbox manager1
docker-machine create -d virtualbox worker1
docker-machine create -d virtualbox worker2

These are their IPs:

docker-machine ls
NAME       ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
manager1   -        virtualbox   Running   tcp://192.168.99.102:2376           v1.12.6
worker1    -        virtualbox   Running   tcp://192.168.99.100:2376           v1.13.0-rc5
worker2    -        virtualbox   Running   tcp://192.168.99.101:2376           v1.13.0-rc5

Then docker-machine ssh manager1

and:

docker swarm init --advertise-addr 192.168.99.102:2377

then worker1 and worker2 joined the swarm.
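
For reference, the join step looks roughly like this (a sketch; the actual token is whatever docker swarm init printed on the manager):

docker-machine ssh worker1
docker swarm join --token <worker-token> 192.168.99.102:2377

and the same on worker2.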

Now I've created an overlay network:

docker network create -d overlay skynet

and deployed a service in global mode (1 task per node):

docker service create --name http --network skynet --mode global -p 8200:80 katacoda/docker-http-server

And there is indeed 1 container (task) per node.
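
I checked this on the manager, roughly like so (the service name http is the one created above):

docker service ps http

which lists one running task per node.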

Now I'd like to access my virtual hosts directly, or at least reach a specific service container directly, because I want to build a load balancer for my service with nginx. To do that, in my nginx config file I'd like to point to a specific service container (i.e. I currently have 3 nodes (1 manager and 2 workers) in global mode, so I have 3 tasks running, and I'd like to target one of those 3 containers). How can I do that?

[edit]: I can reach my swarm nodes simply by browsing to VM_IP:SERVICE_PORT, e.g. 192.168.99.102:8200, but the internal load balancing still applies.

I was expecting that, by pointing to a specific swarm node, I would hit the container running on that node, but so far that is not the case.
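
For example, this is the behaviour I see now (a rough sketch; I'm assuming the katacoda image reports which container handled each request):

curl http://192.168.99.102:8200/
curl http://192.168.99.102:8200/
curl http://192.168.99.102:8200/

Repeated requests to the same node IP get answered by different containers, because the routing mesh load-balances internally.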

kenorb
pier92

2 Answers


Adding to the answer @ben-hall provided: Docker 1.13 introduces an advanced syntax for the --publish flag, which includes a mode=host option for publishing service ports (see the pull requests docker#27917 and docker#28943). Using this mode, the ports of the containers (tasks) backing a service are published directly on the host they are running on, bypassing the routing mesh (and thus the load balancer).

Keep in mind that as a consequence, only a single task of a service can run on a node.

On Docker 1.13 and up, the following example creates a myservice service; port 80 of the task is published on port 8080 of the node that the task is deployed on.

docker service create \
  --name=myservice \
  --publish mode=host,target=80,published=8080,protocol=tcp \
  nginx:alpine

Unlike tasks that publish their ports through the routing mesh, docker ps also shows the published ports for tasks that use "host-mode" publishing (see the PORTS column):

CONTAINER ID        IMAGE                                                                           COMMAND                  CREATED              STATUS              PORTS                           NAMES
acca053effcc        nginx@sha256:30e3a72672e4c4f6a5909a69f76f3d8361bd6c00c2605d4bf5fa5298cc6467c2   "nginx -g 'daemon ..."   3 seconds ago        Up 2 seconds        443/tcp, 0.0.0.0:8080->80/tcp   myservice.1.j7nbqov733mlo9hf160ssq8wd
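
As a quick check (a sketch, assuming the task above landed on the node with IP 192.168.99.100), you can hit the host-published port directly and bypass the routing mesh:

curl http://192.168.99.100:8080/

This only reaches the task running on that particular node; if no task is running there, the request simply fails instead of being routed to another node.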
thaJeztah
  • so @thaJeztah if I create N VMs with docker-machine, with 1 manager and N-1 workers, and I run the service in host mode, I can access the tasks directly from the browser, i.e. by browsing IP_VM:80, right? Am I guaranteed that only the i-th task is running on the i-th VM? Thanks for your answer. – pier92 Jan 12 '17 at 11:43
  • Swarm will refuse to deploy a task on a host if port `80` is already in use. If you want such a service available on _all_ nodes, you can use `--mode=global` on the service, which will deploy 1 task per node. You can optionally limit that by setting a [constraint](https://docs.docker.com/engine/reference/commandline/service_create/#/specify-service-constraints---constraint) on the service (e.g. to prevent the service from being deployed on manager nodes) – thaJeztah Jan 13 '17 at 14:34
  • So, @thaJeztah, assuming I have 3 VMs created with docker-machine, i.e. with IPs 192.168.99.100, 192.168.99.101, 192.168.99.102, and from the manager I run: docker service create --name=myservice --mode=global --publish mode=host,target=80,published=8080,protocol=tcp nginx:alpine, then if I point my browser to 192.168.99.100:80 (or 192.168.99.101:80, or 192.168.99.102:80) I can access my service container, right? I'm the guy you answered on the Docker issue on GitHub. – pier92 Jan 13 '17 at 14:52
  • In this case you connect to `192.168.99.xx:8080`, but yes, you can connect to each host and get access to the container without going through the "routing mesh". This also means that if, for whatever reason, a container on a node fails / fails to start, you cannot access the service on that IP address (something that is normally handled through the routing mesh / built-in load balancer). – thaJeztah Jan 15 '17 at 00:55
  • OK, @thaJeztah. So in this case the service's port is 80 and the container's host-published port is 8080. So if I want to access the service directly, I point to localhost:80, right? And to access the container, to :8080. I hope that's right. Thanks. – pier92 Jan 15 '17 at 08:39
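
Putting the comment thread together, the command being discussed is roughly the following (a sketch combining --mode=global with host-mode publishing, as described above):

docker service create \
  --name=myservice \
  --mode=global \
  --publish mode=host,target=80,published=8080,protocol=tcp \
  nginx:alpine

Each node then runs exactly one task, reachable on that node's IP at port 8080.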

Due to the way Swarm Mode works with the IPVS load balancer (discussed at https://www.katacoda.com/courses/docker-orchestration/load-balance-service-discovery-swarm-mode), it's not possible to simply access a single container deployed as a service.

There is an open GitHub issue requesting the ability to configure the load balancer: https://github.com/docker/docker/issues/23813

What you may find helpful is to run a proxy on each node. This could (in theory) be configured to respond only to certain nodes' requests. Two proxies designed around Swarm Mode are:

https://github.com/vfarcic/docker-flow-proxy

https://github.com/tpbowden/swarm-ingress-router

Ben Hall