
I have a problem with rolling deployments of docker containers behind a load balancer.

Here are the contents of my docker-compose.yml file:

nginx:
    image: nginx_image
    links:
        - node1:node1
        - node2:node2
        - node3:node3
    ports:
        - "80:80"
node1:
    image: nodeapi_image
    ports:
        - "8001"
node2:
    image: nodeapi_image
    ports:
        - "8001"
node3:
    image: nodeapi_image
    ports:
        - "8001"

and here is my nginx.conf:

worker_processes 4;

events { worker_connections 1024; }

http {

  upstream node-app {
        least_conn;
        server node1:8001 weight=10 max_fails=3 fail_timeout=30s;
        server node2:8001 weight=10 max_fails=3 fail_timeout=30s;
        server node3:8001 weight=10 max_fails=3 fail_timeout=30s;
  }

  server {
        listen 80;
        listen 443 ssl;

        # ssl    on;
        ssl_certificate     /etc/nginx/ssl/imago.io.chain.crt;
        ssl_certificate_key /etc/nginx/ssl/imago.io.key;

        location / {
          proxy_pass http://node-app;
          proxy_http_version 1.1;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection 'upgrade';
          proxy_set_header Host $host;
          proxy_cache_bypass $http_upgrade;
        }
  }
}

If I want to deploy a newly built image, I have to stop a node container, remove it, and recreate it with the new image. The problem is that the new container gets a new IP, and the nginx container doesn't know about it. So once I recreate the last of the three containers behind the load balancer, the app stops serving, because all the IPs in the nginx container's /etc/hosts and environment variables are out of date.

I could SSH into each container, update its code by pulling from the git repo, and restart the process, but that just seems wrong to me. What is the right way to do this?

08Dc91wk
aschmid00

1 Answer


There is an easier way to achieve this. Take the following docker-compose.yml file as an example:

lb:
    image: tutum/haproxy
    links:
        - app
    ports:
        - "80:80"
app:
    image: tutum/hello-world

This docker compose file describes two services:

  • lb: a load balancer which uses the tutum/haproxy image
  • app: a sample webapp listening on port 80

If you start those services naïvely with docker-compose up -d, you will end up with only 2 containers (the load balancer and the web app).

But if you run docker-compose scale app=3 and then run docker-compose up -d again, you will end up with 4 load-balanced containers.
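With that compose file, a rolling redeploy after building a new image could be sketched as the following commands (an illustration only; the flags are from the docker-compose 1.x CLI, and recreating lb still causes a brief interruption):

```shell
# Pull the freshly built image for the app service
docker-compose pull app

# Recreate the app containers from the new image, without touching lb
docker-compose up -d --no-deps app

# Make sure we are back to 3 instances
docker-compose scale app=3

# Recreate lb so haproxy picks up the links to the new container IPs
docker-compose up -d lb
```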

The key player here is the tutum/haproxy docker image which is able to discover the different containers it is linked to.


A similar solution is to use Jason Wilder's nginx-proxy image, which has the advantage of discovering new nodes live, so you won't have to restart the lb service.

lb:
    image: jwilder/nginx-proxy
    volumes:
        - /var/run/docker.sock:/tmp/docker.sock:ro
    ports:
        - "80:80"
app:
    image: tutum/hello-world
    environment:
        VIRTUAL_HOST: www.mysite.com

The VIRTUAL_HOST environment variable must be set to the domain name that resolves to the IP address of your docker host.
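Since nginx-proxy routes requests by Host header, you can check the setup from the docker host itself; www.mysite.com here is the hypothetical domain from the example above:

```shell
# Send a request with the expected Host header to the proxy on port 80
curl -H "Host: www.mysite.com" http://localhost/

# Scaling the app adds backends without restarting lb:
# nginx-proxy watches the docker socket and regenerates its config live
docker-compose scale app=4
```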


Another one is to use Traefik

lb:
  image: traefik
  command: --docker
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock

app:
  image: tutum/hello-world
  labels:
    traefik.frontend.rule: Host:www.mysite.com

The traefik.frontend.rule label must define a Traefik rule set to the domain name that resolves to the IP address of your docker host.

Traefik also offers different load balancing strategies and circuit breakers.
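For example, with Traefik 1.x both of these can be enabled per backend through container labels; the label names below follow the Traefik 1.x Docker provider documentation, and the values are illustrative:

```yaml
app:
  image: tutum/hello-world
  labels:
    traefik.frontend.rule: Host:www.mysite.com
    # use dynamic round-robin instead of the default weighted round-robin
    traefik.backend.loadbalancer.method: drr
    # stop sending traffic to a backend when over half its requests fail
    traefik.backend.circuitbreaker.expression: NetworkErrorRatio() > 0.5
```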

Thomasleveil
  • Ok, I see. While I like your answer, it seems like the tutum/haproxy container would only work if you use Tutum. The other approach looks nice and I will give it a shot. In the meantime I wrote some code that does the trick by updating /etc/hosts on the load balancer and reloading nginx or haproxy once done. – aschmid00 Aug 19 '15 at 14:48
  • tutum/haproxy works everywhere. The difference is that it will update live only if hosted on the Tutum hosting service. – Thomasleveil Aug 19 '15 at 15:02
  • You just run `docker-compose up -d lb` and the load balancer container will be recreated and will notice the new app containers. – Thomasleveil Aug 19 '15 at 16:27
  • Ok, but that defeats the purpose of having a running deployment without downtime. It basically equals killing everything and recreating it. – aschmid00 Aug 19 '15 at 17:02
  • Yes it does: a short downtime, but still a downtime. If you use the solution with `jwilder/nginx-proxy`, there is no downtime. – Thomasleveil Aug 19 '15 at 17:19
  • While I created a script to make this happen, I accepted the answer. I am using Joyent Triton and have to check whether the nginx-proxy approach would work there. Thanks. – aschmid00 Aug 21 '15 at 16:46
  • Both of these solutions require that all nodes and the LB run on the same host. – allingeek Nov 10 '15 at 18:47
  • @allingeek not anymore, see https://docs.docker.com/engine/userguide/networking/get-started-overlay/ which is available since Docker 1.9 – Thomasleveil Nov 12 '15 at 10:54