I've created 3 VMs using docker-machine:
docker-machine create -d virtualbox manager1
docker-machine create -d virtualbox worker1
docker-machine create -d virtualbox worker2
These are their IPs:
docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
manager   -        virtualbox   Running   tcp://192.168.99.102:2376           v1.12.6
worker1   -        virtualbox   Running   tcp://192.168.99.100:2376           v1.13.0-rc5
worker2   -        virtualbox   Running   tcp://192.168.99.101:2376           v1.13.0-rc5
Then I did docker-machine ssh manager1 and ran:
docker swarm init --advertise-addr 192.168.99.102:2377
Then worker1 and worker2 joined the swarm.
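(For reference, the join on each worker looked roughly like this; the actual token, printed by docker swarm join-token worker on the manager, is omitted here:)
docker-machine ssh worker1
docker swarm join --token <worker-token> 192.168.99.102:2377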
Now I've created an overlay network:
docker network create -d overlay skynet
and deployed a service in global mode (1 task per node):
docker service create --name http --network skynet --mode global -p 8200:80 katacoda/docker-http-server
And there is indeed 1 container (task) per node.
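I checked this by listing the service's tasks on the manager, which shows one task on each of the three nodes:
docker service ps http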
Now I'd like to access a virtual host directly, or at least to browse a specific container of my service, because I'd like to build a load balancer for my service with nginx. To do that, in my nginx conf file I'd like to point to one specific service container (i.e. I now have 3 nodes (1 manager and 2 workers) in global mode, so 3 running tasks, and I'd like to pick exactly one of those 3 containers). How can I do that?
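What I have in mind, roughly, is picking the task's container on a chosen node and finding its address to use in the nginx conf. A sketch of what I mean (assuming the target task runs on worker1; the container ID is whatever docker ps reports there):
docker-machine ssh worker1
docker ps --filter name=http
docker inspect --format '{{ .NetworkSettings.Networks.skynet.IPAddress }}' <container-id>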
[edit]: I can reach my swarm nodes simply by browsing to VM_IP:SERVICE_PORT, e.g. 192.168.99.102:8200, but the internal load balancing is still there.
I was thinking that, if I pointed to a specific swarm node, I would hit the container running on that specific node. But so far that's not what happens.
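For what it's worth, I can see which container lives on a given node (e.g. the manager) with:
docker-machine ssh manager1 "docker ps --filter name=http"
but, as said above, hitting 192.168.99.102:8200 is still load-balanced across all three tasks.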