
I'm incredibly confused between Docker Hub, Cloud, Swarm, Swarm Mode, docker deploy, docker-compose deploy, ...

What is the simplest docker deployment practice for a production website that fits comfortably within the capabilities of a single physical server?

The site in question has a comprehensive docker-compose.yml that starts up some 12 services covering various web servers, webpack builders, and a DB. An environment variable is used to switch between dev and production.

A command-line tool is used to upload Webpack bundles to an S3 bucket, and sourcemaps to Sentry. The bundle hash is used as a release ID and is stored in an environment variable (i.e. the HTML is written with <script src="https://s3.site.com/c578f7cbbf76c117ca56/bundle.js">, where the hash c57... is written into the environment variable file pointed to by each service in docker-compose.yml).
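
Concretely, the wiring looks roughly like this (file, service, and variable names here are made up for illustration):

# The upload tool writes the current bundle hash into an env file, e.g.
#   frontend.env:  BUNDLE_HASH=c578f7cbbf76c117ca56
# and each service in docker-compose.yml reads that file:
services:
  frontend:
    env_file: frontend.env    # the HTML template builds the S3 script URL from BUNDLE_HASH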

I don't need more than one server, nor comprehensive failover strategies. I just want to avoid downtime when deploying code updates. I'm a single developer so I don't need CI or CD.

I understand docker-machine is deprecated. Docker Hub deals with images individually, so I understand I need something that deals with the concept of a "stack", i.e. a set of related services. I also understand that Docker Cloud's stack.yml files don't support the build or env_file keys, so my docker-compose.yml is not directly usable.

(In my docker-compose.yml I have many occurrences of the following pattern:

build:
  context: .
  dockerfile: platforms/frontend/server/Dockerfile

and in the Dockerfile, for example:

COPY platforms/frontend/server /app/platforms/frontend/server

Without the separation of build context and Dockerfile location, the compose file doesn't seem to translate to a stack file).
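
In other words, each affected service entry looks roughly like this (the service name is illustrative):

services:
  frontend-server:
    build:
      context: .                                         # repo root, so the Dockerfile's COPY paths resolve
      dockerfile: platforms/frontend/server/Dockerfile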

Furthermore, I think that Docker Cloud / Swarm are for managing multiple fail-over servers and round-robin routing and so on? I don't think I need any of this.

Finally, I started to realise that docker-compose deploy exists... is this the tool/strategy I'm after?

Dagrada
  • "I just want to avoid downtime when deploying code updates" - as in 0.00 microseconds downtime? – Constantin Galbenu Apr 12 '18 at 06:13
  • Say, less than a second. A session might fail for a small number of users, but if they hit refresh it'll load. At the moment (pre-Dockerization) any update downtime should be negligible, e.g. "git checkout feature && php migrate.php" to switch branches and apply any DB migration. Given that docker-compose takes quite a few seconds (too long) to shut down and start up (if doing the naive ^C git checkout feature && docker-compose up), I understand a swarm may be needed to apply rolling updates to avoid downtime. Is this the simplest approach? – Dagrada Apr 12 '18 at 17:27
  • You can use Docker and still have the same downtime. You could use `docker exec` to do the migration, without using `docker-compose`. You also need to map the files from the host into the container, in addition to having them inside the image; this is needed so that you don't have to restart the containers. – Constantin Galbenu Apr 12 '18 at 17:39
  • I'm already using `docker exec` for the migration, but that doesn't address the code change, i.e. a `git pull` or `git checkout` on the host, or an environment variable change (i.e. updating the webpack release hash as described in the post). I understand images need to be rebuilt, so presumably I want to build them on my dev machine and push them to Hub, or a private registry, or Docker Cloud, then pull these into the production server with a single command or script executed in an ssh session. I'm not clear which is the recommended approach out of all the options, for my basic use case? – Dagrada Apr 12 '18 at 17:42
  • You don't need to rebuild the image. Just put the code on the host and then map it into the container using a volume. Image rebuilding is necessary if the host changes (i.e. when using Docker Swarm). Let me know if I am not clear enough. – Constantin Galbenu Apr 12 '18 at 19:10
  • OK, but what about for the environment variable change? Currently each service in `docker-compose.yml` points to an environment file. When a front-end Javascript release is made, .js files are uploaded to CDN, and a variable inside the env file needs to be updated. Do I not need to stop and start docker-compose in order to pick up the changes to that file? – Dagrada Apr 12 '18 at 19:28
  • Some further testing suggests I do have to run `docker-compose up` in order to pick up the changes. Even `rebuild` and `restart` don't do the job. So this does involve about half a minute of downtime (not great). Surely there is a better way? – Dagrada Apr 12 '18 at 19:43
  • I don't see the relation to Docker. The problem is how to change the environment variables of a running *process*, which happens to be running inside a container. – Constantin Galbenu Apr 12 '18 at 19:45
  • Correct, but the environment variable in question is stored in a file pointed to by docker-compose.yml using the `env_file` key. How do I update this file and have the changes reflected in a running process? (Or even without the file; how do I update the env var in a running process at all?) – Dagrada Apr 12 '18 at 20:40
  • Technically it is possible to change the environment variables of a running process ( https://stackoverflow.com/a/211064/2575224 ) but it is a hack. They are meant to be read only when the program starts. You could use other means to configure the app. One way is to use `docker config`, another is to use a JSON or even a PHP file. – Constantin Galbenu Apr 12 '18 at 20:54
  • OK so what's the correct, typical, non-hacky approach? (Also I noticed that `docker config` is for Swarms, so this again suggests "Swarms is the correct approach" ?) – Dagrada Apr 12 '18 at 22:55
  • I don't think that there is a *typical* approach. You should use a combination of env variables (providing the defaults) with a json/yml/php file (providing overrides in case of new releases) if you need to reuse the same process. – Constantin Galbenu Apr 13 '18 at 04:04

1 Answer


Let me correct some things first, and then I'll get into the expected Docker strategy in this case where you say you "don't need CI/CD", which I assume means you'll manually deploy updates to the server yourself. This workflow won't be what I suggest for a team, but for the "solo dev" it's great and easy.

"I understand docker-machine is deprecated."

Not true. It still gets regular updates, including a release last month. It's not designed for deploying/managing many servers, but if you really only need a single server for a single admin, it can be perfect for creating the instance remotely, installing Docker, and setting up TLS certs for remote access via the docker CLI (docker-machine env <nodename>).
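
For example, a rough sketch (the driver and machine name are placeholders, and cloud credentials are assumed to be configured already):

docker-machine create --driver amazonec2 prod-node   # provisions the VM, installs Docker, sets up TLS
eval "$(docker-machine env prod-node)"               # points the local docker CLI at the remote engine
docker info                                          # now reports on the remote daemon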

"Finally I started to realise docker-compose deploy exists"

That's not a command. Maybe you're thinking of docker stack deploy in Swarm? I also don't recommend docker-compose for a server. It doesn't have production tooling and features. See my AMA post on all the reasons to use a single node Swarm.

Note that docker-compose, the CLI tool for dev and CI/CD, is not the same thing as the docker-compose.yml file format, which I'll discuss in a bit.

"Furthermore, I think that Docker Cloud / Swarm are for managing multiple fail-over servers and round-robin routing and so on? I don't think I need any of this."

Docker Cloud is shutting down in May 2018, so I wouldn't use that to deploy stacks, but Swarm is great in a single node if you don't need node high-availability.

OK, so for your workflow from local dev to this prod server:

  1. Either manually build your image locally and push it to Docker Hub (or another registry), or, my preferred option, store the code in GitHub/Bitbucket and have the image built by Docker Hub on each commit to a specific branch (let's say master). A condensed command sketch appears after this list.

  2. Your docker-compose file is also a stack file. The compose file format has specific sections for "build" (used by a CI/CD server or your local machine workflow) and "deploy" (features used on Swarm); a sketch of such a file appears after this list. You should be building locally, via Docker Hub, or on a custom CI server, not in the Swarm itself. Production tools aren't usually meant for image building.

  3. Once your server is created (with docker-machine), you can use your local docker CLI to manage the remote Docker engine via docker-machine env <name>. You would create a single-node Swarm with docker swarm init and voila, it'll accept compose files (aka stack files). These files are similar to, but not the same format as, the old Docker Cloud stack files.

  4. Now you can docker stack deploy -c compose.yml <stackname> and it'll spin up your services with the envvars you've set, volumes for data, etc.

  5. For updates, you can get zero downtime if you use Docker 17.12 or newer (the latest 18.03 is even better), set order: start-first under update_config in each service's deploy section, and ensure all services have healthchecks defined so Docker truly knows when they are ready for connections.

  6. You can use override YAML files and docker-compose config to flatten many compose files into a single stack deployment.

  7. For service updates you just update the compose file and re-run docker stack deploy; it'll detect the changes.

  8. Be sure you use unique image tags each time so Docker knows which specific SHA to deploy. Don't keep using <imagename>:latest expecting it to know exactly which image that is.
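
Here is a rough sketch of what one service in such a compose/stack file might look like (the service name, image tag, and healthcheck command are illustrative, and order: start-first needs compose file format 3.4 or newer):

version: "3.4"
services:
  web:
    image: youruser/web:c578f7c          # unique tag per release (step 8)
    build:                               # used by docker-compose / Docker Hub builds; ignored by docker stack deploy
      context: .
      dockerfile: platforms/frontend/server/Dockerfile
    deploy:
      replicas: 1
      update_config:
        order: start-first               # start the new task before stopping the old one (step 5)
    healthcheck:                         # lets Swarm route traffic only when the service is really ready
      test: ["CMD", "curl", "-f", "http://localhost/healthz"]
      interval: 10s
      timeout: 3s
      retries: 3

And a condensed sketch of the deploy cycle itself (image, machine, and stack names are placeholders):

# Build locally with a unique tag and push it to a registry (step 1):
docker build -f platforms/frontend/server/Dockerfile -t youruser/web:c578f7c .
docker push youruser/web:c578f7c

# Point the CLI at the server and, the first time only, make it a one-node Swarm (step 3):
eval "$(docker-machine env prod-node)"
docker swarm init

# Optionally flatten base + override compose files into one file (step 6):
docker-compose -f docker-compose.yml -f docker-compose.prod.yml config > stack.yml

# Deploy, and re-deploy after any change; Swarm only applies what differs (steps 4 and 7):
docker stack deploy -c stack.yml mysite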

I hope this helps, and ask more questions in comments and I can update this answer as needed.

Bret Fisher
  • Bret, thanks so much for the comprehensive writeup. You've given me a lot of pointers and material to go through - I will follow-up in due course. Appreciate it! – Dagrada Apr 16 '18 at 19:47
  • Hi Bret, I followed your advice to set up a single node swarm (ignoring the misleading "Compose in production" doc pages!). In fact I have two EC2 instances both running a single node swarm, and an elastic IP for blue-green deployment which gives me reliable ~1 second downtime for code updates which is great! (I did experiment with multi-node swarm for rolling updates but the blue-green setup seemed much better due to being able to test code "almost live". Multi-node can still be used for fail-over and scaling, which practically I don't need). Thanks again for the pointers! – Dagrada Jun 08 '18 at 19:40
  • For completeness, I've got a dedicated "cert" machine running LetsEncrypt in certonly mode with automated route-53 challenge negotiation and certificates written to an EFS volume shared with the two blue/green nodes. Any other required bind volumes are also shared EFS volumes so the blue/green containers are host-neutral. Pretty happy with this setup :) – Dagrada Jun 08 '18 at 19:45