
I have a really simple web application consisting of these containers:

  • Frontend website (Nuxt.js - node app)
  • Backend API (PHP, Symfony)
  • MySQL

Every container has its own Dockerfile, and I can run them all together with Docker Compose. It's really nice and I like the simplicity.
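Roughly, the docker-compose.yml looks like this (simplified; the service names, ports, and MySQL version here are just illustrative):

version: '3'
services:
  frontend:
    build: ./frontend          # Nuxt.js app; its Dockerfile runs "npm build"
    ports: ["3000:3000"]
  api:
    build: ./api               # PHP / Symfony backend
    ports: ["8080:80"]
  mysql:
    image: mysql:5.7
    volumes: ["dbdata:/var/lib/mysql"]
volumes:
  dbdata: {}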

There is a deploy script on my server. It clones the Git monorepo and runs docker-compose:

DIR=$(dirname "$(readlink -f "$0")")
rm -rf "$DIR/app"
git clone git@bitbucket.org:adam/myproject.git "$DIR/app"
cd "$DIR/app" && \
   docker-compose down --remove-orphans && \
   docker-compose up --build -d

But this solution is really slow and causes ~3 minutes of downtime. For this project I can accept a few seconds of downtime; it's not fatal, and I don't need true zero downtime. But 3 minutes is not acceptable.

The most time-consuming part is the "npm build" inside the containers, and it's something that must be run after every change.

What can I do better? Are Swarm or Kubernetes really the only solutions? Can I build the containers while the old app is still running, and after the build just stop the old containers and run the new ones?

Thanks!

Adam
  • Which step is taking more time: the repo cloning, or the restart (docker-compose down and up)? If it is the repo cloning, then cloning it across the servers in your cluster can help. And by downtime, do you mean that this application is already running and a new version takes ~3 minutes to deploy, or does a fresh deployment generally take that much time? – imharindersingh Jan 17 '20 at 17:13
  • How about downloading the repository as a .zip file (https://stackoverflow.com/a/50705461/977593) instead of git clone? git clone fetches all the Git objects, so the more objects you have, the slower it gets; a .zip download does not include the Git database. – slackmart Jan 17 '20 at 17:19
  • Sorry, I did not specify: the worst part is the process of building the frontend application inside the container (npm build). So the most time-consuming step is something that could be prepared before the switch. – Adam Jan 17 '20 at 17:29

4 Answers


If you can structure things so that your images are self-contained, then you can get a fairly short downtime.

I would recommend using a unique tag for your images. A date stamp works well; since you mention you have a monorepo, you could also use the commit ID in that repo as your image tag. In your docker-compose.yml file, use an environment variable for your image names:

version: '3'
services:
  frontend:
    image: myname/frontend:${TAG:-latest}
    ports: [...]
  et: cetera

Do not use volumes: to overwrite the code in your images. Do have your CI system test your images as built, running the exact image you're getting ready to deploy, with no bind mounts or extra artificial test code. The question mentions "npm build" inside containers; run all of these build steps during the docker build phase and specify them in your Dockerfile, so you don't need to run them at deploy time.
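As a rough sketch, a multi-stage Dockerfile for the Nuxt frontend could look like the following; the node:12 base image, file layout, and npm script names are assumptions about your project, not a prescription:

# Build stage: the expensive "npm build" runs here, at image-build time
FROM node:12 AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci                # re-runs only when package*.json changes (layer cache)
COPY . .
RUN npm run build         # the slow step, now paid before deploy, not during it

# Runtime stage: ship only what the built app needs
FROM node:12-slim
WORKDIR /app
COPY --from=build /app ./
EXPOSE 3000
CMD ["npm", "start"]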

When you have a new commit in your repo, build new images. This can happen on a separate system; it can happen in parallel with your running system. If you use a unique tag per image then it's more obvious that you're building a new image that's different from the running image. (In principle you can use a single ...:latest tag but I wouldn't recommend it.)

# Choose a tag; let's pick something based on a timestamp
export TAG=20200117.01

# Build the images
docker-compose build

# Push the images to a repository
# (Recommended; required if you're building somewhere
# other than the deployment system)
docker-compose push

Now you're at a point where you've built new images, but you're still running containers based on the old images. You can tell Docker Compose to update things now. If you docker-compose pull the images up front (or if you built them on the same system) then this just consists of stopping the existing containers and starting new ones. This is the only downtime point.

# Name the tag you want to deploy (same as above)
export TAG=20200117.01

# Pre-pull the images
docker-compose pull

# ==> During every step up to this point the existing system
# ==> is running undisturbed

# Ask Compose to replace the existing containers
# ==> This command is the only one that has any downtime
docker-compose up -d

(Why is the unique tag important? Say a mistake happens, and build 20200117.02 has a critical bug. It's very easy to set the tag back to the earlier 20200117.01 and re-run the deploy, rolling back the deployed system without doing a git revert and rebuilding the code. If you're looking at cluster managers like Kubernetes, a changed tag value is the signal to a Kubernetes Deployment object that something has updated, and it triggers an automatic redeployment.)
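In other words, a rollback is just the deploy step re-run with the previous tag:

# Point TAG back at the last known-good build and redeploy
export TAG=20200117.01
docker-compose pull
docker-compose up -d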

David Maze

The only problem really was the docker-compose down before docker-compose build. I deleted the down command and the downtime is a few seconds now. I had thought that build automatically shut down the running containers before building; I don't know why. Thanks Noé for the idea! I'm an idiot.
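For reference, the deploy commands from the question now reduce to this (the down step is gone, so the build runs while the old containers keep serving traffic):

cd "$DIR/app" && \
   docker-compose build && \
   docker-compose up -d --remove-orphans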

Adam

While I do think that switching to Kubernetes (or maybe Docker Swarm, which I don't have experience with) would be the best option, yes, you can build your Docker images while the old app is running and then restart.

You just need to run the docker-compose build command. See below:

DIR=$(dirname "$(readlink -f "$0")")
rm -rf "$DIR/app"
git clone git@bitbucket.org:adam/myproject.git "$DIR/app"
cd "$DIR/app" && \
   docker-compose build && \
   docker-compose down --remove-orphans && \
   docker-compose up -d
leeman24

This long deploy time can come from multiple things:

  • Your application ignores the stop signal, so docker-compose waits for the containers to terminate before killing them. Check that your containers exit promptly on the stop signal instead of waiting out the kill timeout (see the sketch after this list).
  • Your Dockerfile is ordered badly. Docker has a built-in cache for every step, but if an earlier step changes, it has to redo every step after it. Look carefully at where you copy files; it's often this that breaks the cache.
  • Run docker-compose build before taking the containers down. Be careful about mounted volumes: if Docker can't get the build context, it will fail.
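On the first point, a common culprit is the shell form of CMD in a Dockerfile, which wraps your process in a shell that doesn't forward signals; this is generic Docker behavior, illustrated here with a Node command as an example:

# Shell form: npm runs under /bin/sh, which does not forward SIGTERM,
# so docker-compose waits out the stop grace period and then SIGKILLs.
# CMD npm start

# Exec form: the process is PID 1 and receives SIGTERM directly,
# so the container can stop within a second or two.
CMD ["npm", "start"]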
Noé
  • Sorry, I did not specify: the killing isn't slow. The worst part is the process of building the frontend application inside the container (npm build). So the most time-consuming step is something that could be prepared before the switch. – Adam Jan 17 '20 at 17:30
  • I know about ordering lines in the Dockerfile and about caching. I think that's fine in my case. The problem is the "npm build", and it must be run after every change. – Adam Jan 17 '20 at 17:32
  • Oh okay, then you can just run docker-compose build before your set of commands. – Noé Jan 17 '20 at 17:37
  • Hey! The only problem really was the `docker-compose down` before `docker-compose build`. I deleted the `down` command and the downtime is a few seconds now. I had thought `build` automatically shut down the running containers before building; I don't know why. Thanks for your idea! I'm an idiot. – Adam Jan 17 '20 at 18:12