
I have the following setup in docker-compose:

  • nginx for proxying to the frontend and backend, and for serving static content
  • backend app on port 8080 (Spring Boot)
  • frontend app on port 4000 (Node, for SSR)
  • MySQL, used by the backend

The frontend can be updated relatively quickly using

docker-compose up -d --no-deps frontend

Unfortunately, the backend takes about 1 minute to start.

Is there an easy way to achieve lower downtime without having to change the current setup too much? I like how simple it is right now.

I would imagine something like this (roughly sketched in pseudo-commands below the list):

  1. Start a new instance of the backend
  2. Wait until it starts (either on a timer or via a health check)
  3. Stop the previously running instance
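As rough pseudo-commands (nothing here is working code; $OLD_BACKEND_CONTAINER and the health URL are just placeholders, the latter assuming something like Spring Boot Actuator is enabled):

# 1. start a second backend instance next to the running one
docker-compose up -d --no-deps --scale backend=2 --no-recreate backend
# 2. wait until it is up (a fixed timer, or polling some health endpoint)
until curl -fs http://localhost:8080/actuator/health > /dev/null; do sleep 5; done
# 3. stop and remove the previous instance, then reset the scale
docker rm -f $OLD_BACKEND_CONTAINER
docker-compose up -d --no-deps --scale backend=1 --no-recreate backend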
CodeFox
Marcin Kunert

2 Answers

17

Swarm is the right solution to go with, but this is still (somewhat painfully) doable with docker-compose.

First, make sure your proxy can do service discovery. You can't rely on container_name (just as you can't use it in Swarm), because you will be running more than one container of the same service. Proxies like Traefik or nginx-proxy use labels to do this.
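For example, nginx-proxy watches the Docker socket and regenerates its upstream config whenever containers start or stop; a rough sketch of the idea (my-backend-image and api.example.com are placeholders):

# nginx-proxy rewrites its config automatically as containers
# carrying a VIRTUAL_HOST environment variable come and go
docker run -d --name nginx-proxy -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
# every container started with VIRTUAL_HOST becomes an upstream,
# so a second instance of the same service is picked up without touching nginx
docker run -d -e VIRTUAL_HOST=api.example.com my-backend-image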

Then run docker-compose up -d --scale backend=2 --no-recreate. This creates a new container from the new image without touching the running one.

Once it's up and running, docker kill old_container, then docker-compose up -d --scale backend=1 --no-recreate just to reset the scale.
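To tell when the new container is actually up and running, one option is to poll Docker's health status before touching the old one. A small sketch, assuming the backend image defines a healthcheck and that $NEW_CONTAINER holds the new container's ID (both of which are assumptions, not part of the setup above):

# wait until Docker reports the new backend container as healthy;
# this requires a HEALTHCHECK in the image (or a healthcheck: block in compose),
# and $NEW_CONTAINER is assumed to already hold the new container's ID
until [ "$(docker inspect -f '{{.State.Health.Status}}' "$NEW_CONTAINER")" = "healthy" ]; do
  sleep 5
done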


EDIT 1

docker kill old_container should be docker rm -f old_container

EDIT 2

To handle even and odd runs (after each deploy either backend_1 or backend_2 may be the one left running), we always want to remove the oldest container:

docker rm -f $(docker ps --format "{{.ID}}  {{.Names}}  {{.CreatedAt}}" | grep backend | sort -k 3 | head -1 | awk '{print $1}')
Siyu
  • I've tried this and unfortunately it doesn't work. Scaling up works perfectly, but killing the container causes problems. The service stops responding entirely instead of forwarding the traffic to the running container. It is a shame that scaling down to 1 container always stops the newer container instead of the older one. – Marcin Kunert Jan 21 '19 at 11:40
  • @MarcinKunert I think the issue is in your nginx config. How did you configure your reverse proxy? – Siyu Jan 21 '19 at 11:42
  • server { listen 80; location /api { proxy_pass http://backend:8080; } – Marcin Kunert Jan 21 '19 at 11:43
  • Seems that you are correct. I've looked a bit and: https://github.com/jwilder/nginx-proxy might be a way to go about it – Marcin Kunert Jan 21 '19 at 11:49
  • docker-compose does not provide a built-in load balancer, so it won't redirect traffic to the new container automatically. That's why you have to have a proxy with service discovery. – Siyu Jan 21 '19 at 12:36
  • I'm now trying to restart the nginx container and that also seems to do the trick. Unfortunately, while scaling back to 1, Docker decides to kill the only running container and start a new one. – Marcin Kunert Jan 21 '19 at 12:45
  • weird, if that running container was created with the updated image, `docker-compose up -d --scale backend=1 --no-recreate` should not kill it. – Siyu Jan 21 '19 at 12:48
  • Found the issue: I had not only to kill the previous container but also to remove it. The last (at least I hope so) issue now is to handle even and odd runs. Sometimes there will be backend_1 running at the start, but after the next deploy there will be backend_2 left. I've already tried `docker rename`, but it doesn't work and `docker-compose up` creates `backend_3` even though there is no `backend_2` – Marcin Kunert Jan 21 '19 at 12:59
  • You are reading my mind! I was looking for a way to kill the last container. – Marcin Kunert Jan 21 '19 at 13:21
  • It finally works. Instead of looking for the oldest container, I've decided to save the ID of the existing container before starting the script. I've also added a chance for the backend to exit gracefully: `docker kill -s SIGTERM $PREVIOUS_CONTAINER`. Thanks for the help! – Marcin Kunert Jan 21 '19 at 13:41
  • Did you find a way to reset the naming of the containers? I realised that there's the label `"com.docker.compose.container-number": "2"`, which is probably what makes the `backend_*` number keep increasing after each deploy (even if the containers are renamed). But I'm not sure if it's possible to edit that label, any idea? – g-abello Apr 15 '20 at 14:47
12

Here is the script I've ended up using:

# Remember the ID of the backend container that is currently running
PREVIOUS_CONTAINER=$(docker ps --format "table {{.ID}}  {{.Names}}  {{.CreatedAt}}" | grep backend | awk -F "  " '{print $1}')
# Start a second backend container from the new image next to the old one
docker-compose up -d --no-deps --scale backend=2 --no-recreate backend
# Give the new instance time to boot (the backend needs about a minute)
sleep 100
# Ask the old backend to shut down gracefully, then remove it
docker kill -s SIGTERM $PREVIOUS_CONTAINER
sleep 1
docker rm -f $PREVIOUS_CONTAINER
# Scale back down so compose knows there is only one backend again
docker-compose up -d --no-deps --scale backend=1 --no-recreate backend
# Restart nginx so it routes traffic to the new backend container
docker-compose stop http-nginx
docker-compose up -d --no-deps --build http-nginx
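A slightly adapted variant of the same script, folding in the tip from the comments below (use docker-compose ps -q backend to find the running container) and replacing the fixed sleep 100 with a wait on Docker's health status. It assumes the backend image defines a HEALTHCHECK, so treat it as a sketch rather than a drop-in replacement:

# grab the ID of the currently running backend (tip from the comments below)
PREVIOUS_CONTAINER=$(docker-compose ps -q backend)
# start a second backend with the new image
docker-compose up -d --no-deps --scale backend=2 --no-recreate backend
# find the new container (the one that isn't the previous ID) and wait until
# Docker reports it as healthy; requires a HEALTHCHECK in the image
NEW_CONTAINER=$(docker-compose ps -q backend | grep -v "$PREVIOUS_CONTAINER")
until [ "$(docker inspect -f '{{.State.Health.Status}}' "$NEW_CONTAINER")" = "healthy" ]; do
  sleep 5
done
# let the old backend exit gracefully, then remove it and reset the scale
docker kill -s SIGTERM "$PREVIOUS_CONTAINER"
sleep 1
docker rm -f "$PREVIOUS_CONTAINER"
docker-compose up -d --no-deps --scale backend=1 --no-recreate backend
# nginx still has the old IP cached, so restart it as in the original script
docker-compose stop http-nginx
docker-compose up -d --no-deps --build http-nginx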
Marcin Kunert
  • Why do you need to restart `http-nginx`? – Augustin Riedinger Sep 26 '19 at 15:10
  • @AugustinRiedinger Without restarting, nginx is unable to route traffic to the newly started backend. I've read about plugins to do it properly, but restarting was a lot easier for me – Marcin Kunert Sep 27 '19 at 08:35
  • @MarcinKunert this should solve the dns cache problem without restarting nginx https://serverfault.com/a/916786/130859 – Jacer Omri Feb 14 '20 at 15:50
  • A simpler way to get the ID of the currently running container for a service called `backend` is to use `docker-compose ps -q backend`. Also, in case you have other Docker containers running that contain that name, it won't return their container IDs; it will just return the ID of the `backend` from your Docker Compose file :) – Bart van Oort Oct 10 '20 at 23:41
  • @Bart van Oort thanks for the tip! I'll try to incorporate it :) – Marcin Kunert Oct 12 '20 at 10:19