
I have a webserver that requires a websocket connection in production. I deploy it using docker-compose with nginx as a proxy, so my compose file looks like this:

version: '2'
services:
   app:
     restart: always

   nginx:
     restart: always
     ports:
       - "80:80"

Now if I scale the "app" service to multiple instances, docker-compose performs round robin on each call to the internal DNS name "app".

Is there a way to tell the docker-compose load balancer to apply sticky sessions?

Alternatively, is there a way to solve this with nginx?


A possible solution that I don't like: multiple definitions of the app service.

version: '2'
services:
   app1:
     restart: always

   app2:
     restart: always

   nginx:
     restart: always
     ports:
       - "80:80"

(And then in the nginx config file I can define sticky sessions between app1 and app2, roughly as sketched below.)
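A sketch of what that nginx config could look like (the upstream name, the container port 80, and the websocket headers are my assumptions here):

upstream app_backend {
    ip_hash;                                   # pin each client IP to one backend
    server app1:80;                            # assumed container port
    server app2:80;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_backend;
        proxy_http_version 1.1;                # required for websocket upgrades
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}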


Best result I got from searching: https://github.com/docker/dockercloud-haproxy

But this requires me to add another service (maybe replacing nginx?), and its docs are pretty sparse on sticky sessions.

I wish docker would just allow configuring this with a simple line in the compose file.

Thanks!

orshachar

1 Answer


Take a look at jwilder/nginx-proxy. This image provides an nginx reverse proxy that listens for containers defining the VIRTUAL_HOST variable and automatically updates its configuration when those containers are created or removed. tpcwang's fork adds a container-level USE_IP_HASH option that turns on nginx's ip_hash directive to enable sticky sessions.

Consider the following Compose file:

nginx:
  image: tpcwang/nginx-proxy
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
app:
  image: tutum/hello-world
  environment:
    - VIRTUAL_HOST=<your_ip_or_domain_name>
    - USE_IP_HASH=1

Let's get it up and running and then scale app to three instances:

docker-compose up -d
docker-compose scale app=3

If you check the nginx configuration file you'll see something like this:

docker-compose exec nginx cat /etc/nginx/conf.d/default.conf

...
upstream 172.16.102.132 {
    ip_hash;
            # desktop_app_3
            server 172.17.0.7:80;
            # desktop_app_2
            server 172.17.0.6:80;
            # desktop_app_1
            server 172.17.0.4:80;
}
server {
    server_name 172.16.102.132;
    listen 80 ;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass http://172.16.102.132;
    }
}

The nginx container has automatically detected the three instances and has updated its configuration to route requests to all of them using sticky sessions.

If we try to access the app, we can see that it reports the same hostname on every refresh. If we remove the USE_IP_HASH environment variable, the hostname changes between requests; that is, the nginx proxy is using round robin to balance our requests.
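A quick way to verify this from the command line (assuming the proxy is reachable at the VIRTUAL_HOST you configured; tutum/hello-world includes the container hostname in its response):

for i in 1 2 3 4 5; do
  curl -s http://<your_ip_or_domain_name>/ | grep -i hostname
done

With USE_IP_HASH=1 every iteration should print the same container name; without it the hostname should rotate across the three app instances.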

agmangas
  • Thanks for your answer! One should also mention the main downside of ip_hash sticky sessions: in a development environment, where the client is normally only the developer's box, all requests will reach the same server node. A solution could be to run the clients in docker containers, scaled to the same number as (or more than) the server instances... – andreas Jul 03 '17 at 08:23
  • What if you have replicas of nginx itself? I am trying to remove the single point of failure. – The Fool Oct 31 '20 at 11:43
  • Using an nginx sticky load balancer with ip_hash is not really a good idea in docker networks. nginx will see the internal docker network address of the host as the incoming address, and since ip_hash hashes the network address (not the full IP address) of the incoming connection, every connection will be redirected to the same task of your target service, regardless of how many tasks you have running in parallel. – EnlightMe Feb 07 '21 at 18:56