
I have database migrations which I'd like to run before deploying a new version of my app into a Kubernetes cluster. I want these migrations to be run automatically as part of a Continuous Delivery pipeline. The migration will be encapsulated as a container image. What's the best mechanism to achieve this?

Requirements for a solution:

  • be able to determine if a migration failed so that we don't subsequently try to deploy a new version of the app into the cluster.
  • give up if a migration fails - don't keep retrying it.
  • be able to access logs to diagnose failed migrations.

I had assumed that the Jobs functionality in Kubernetes would make this easy, but there appear to be a few challenges; for example, blocking while waiting on the result of a queued-up job seems to require hand-rolled scripts.

Would using "bare pods" be a better approach? If so, how might that work?

Pete Hodgson

3 Answers


blocking while waiting on the result of a queued-up job seems to require hand-rolled scripts

This isn't necessary anymore thanks to the kubectl wait command.

Here's how I'm running db migrations in CI:

kubectl apply -f migration-job.yml
kubectl wait --for=condition=complete --timeout=60s job/migration
kubectl delete job/migration

If either of the first two commands fails or times out, it exits with a non-zero code, which makes the rest of the CI pipeline terminate.
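
If you also want the "be able to access logs to diagnose failed migrations" requirement handled in the same place, the wait step can be wrapped so the Job's logs are printed before the pipeline gives up. A small sketch along the same lines (same job name as above):

kubectl apply -f migration-job.yml
if ! kubectl wait --for=condition=complete --timeout=60s job/migration; then
  # Print the migration pod's output so the failure can be diagnosed from the CI logs.
  kubectl logs job/migration
  exit 1
fi
kubectl delete job/migration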

migration-job.yml describes a Kubernetes Job resource configured with restartPolicy: Never and a reasonably low activeDeadlineSeconds.

You could also use the spec.ttlSecondsAfterFinished attribute instead of manually running kubectl delete, but at the time of writing that's still in alpha and not supported by Google Kubernetes Engine, at least.
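
For reference, a minimal sketch of what such a migration-job.yml could look like. The image name is a placeholder, and backoffLimit: 0 is an extra that matches the "give up if a migration fails" requirement rather than something the commands above depend on:

apiVersion: batch/v1
kind: Job
metadata:
  name: migration                  # matches the job/migration name used by kubectl above
spec:
  backoffLimit: 0                  # don't retry a failed migration
  activeDeadlineSeconds: 60        # kill the job if the migration hangs
  # ttlSecondsAfterFinished: 120   # automatic cleanup; still alpha at the time of writing
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migration
          image: registry.example.com/app-migrations:1.2.3   # placeholder migration image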

Aleksi

You could try to make both the migration jobs and app independent of each other by doing the following:

  • Have the migration job return successfully even when the migration failed. Keep a machine-consumable record somewhere of what the outcome of the migration was. This could be done either explicitly (by, say, writing the latest schema version into some database table field) or implicitly (by, say, assuming that a specific field must have been created along a successful migration job). The migration job would only return an error code if it failed for technical reasons (such as unavailability of the database that the migration should be applied to). This way, you can do the migrations via Kubernetes Jobs and rely on their ability to run to completion eventually.
  • Build the new app version such that it can work with the database in both pre- and post-migration phases. What this means depends on your business requirements: the app could either turn idle until the migration has completed successfully, or it could return different results to its clients depending on the current phase. The key point here is that the app processes the migration outcome that the migration job produced previously and acts accordingly without terminating erroneously (see the readiness-probe sketch right after this list for one way to implement the "turn idle" option).
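
One Kubernetes-level way to get that "turn idle" behavior is to expose the schema check through a readiness probe, so the app's pods simply stay out of the Service until the recorded migration outcome matches what the new version expects. A minimal sketch, assuming a hypothetical check-schema-version script baked into the app image:

# excerpt from the app Deployment's pod spec
containers:
  - name: app
    image: registry.example.com/app:2.0.0    # placeholder app image
    readinessProbe:
      exec:
        # Hypothetical script: exits 0 only once the schema version recorded
        # by the migration job matches what this app version expects.
        command: ["./check-schema-version"]
      periodSeconds: 10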

Combining these two design approaches, you should be able to develop and execute the migration jobs and app independently of each other and not have to introduce any temporal coupling.

Whether this idea is actually reasonable to implement depends on more specific details of your case, such as the complexity of your database migration efforts. The alternative, as you mentioned, is to simply deploy unmanaged pods into the cluster that do the migration. This requires a bit more wiring as you will need to regularly check the result and distinguish between successful and failed outcomes.
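
For completeness, that "bit more wiring" for an unmanaged pod could look roughly like the following hand-rolled polling loop (placeholder image name, arbitrary polling interval):

kubectl run migration --image=registry.example.com/app-migrations:1.2.3 --restart=Never
while true; do
  phase=$(kubectl get pod migration -o jsonpath='{.status.phase}')
  case "$phase" in
    Succeeded) break ;;                            # migration finished, carry on with the deploy
    Failed)    kubectl logs migration; exit 1 ;;   # surface the logs, then abort the pipeline
    *)         sleep 5 ;;                          # still pending/running, keep polling
  esac
done
kubectl delete pod migration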

Timo Reimann

Considering the age of this question, I'm not sure if initContainers were available at the time, but they are super helpful now.

https://kubernetes.io/docs/concepts/workloads/pods/init-containers/

The way I recently set this up was to have a postgres pod and our django application running in the same namespace; the django pod, however, has 3 initContainers:

  1. init-migrations
  2. init-fixtures
  3. init-createsuperUser

What this does is start the django pod and the postgres pod in parallel; the initContainers keep being retried until the postgres pod comes up, and then your migrations should run.
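
A rough sketch of what that django Deployment might look like. The image name, fixture file and manage.py commands are assumptions about the setup rather than something from this answer:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: django
spec:
  replicas: 1
  selector:
    matchLabels:
      app: django
  template:
    metadata:
      labels:
        app: django
    spec:
      initContainers:
        - name: init-migrations
          image: registry.example.com/django-app:latest                    # placeholder image
          command: ["python", "manage.py", "migrate"]
        - name: init-fixtures
          image: registry.example.com/django-app:latest
          command: ["python", "manage.py", "loaddata", "initial.json"]     # placeholder fixture
        - name: init-createsuperUser
          image: registry.example.com/django-app:latest
          command: ["python", "manage.py", "createsuperuser", "--noinput"]
      containers:
        - name: django
          image: registry.example.com/django-app:latest
          ports:
            - containerPort: 8000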

As for the pods perpetually restarting, maybe they've fixed the restartPolicy by now. I'm currently pretty new to kubernetes but this is what I've found works for me.

Recur
  • What if you have multiple replicas of the same pod ... are the init containers run for each one or only once? How do you make sure it's only run once? – Christopher Rivera Oct 25 '18 at 14:34
  • That is a very good question. The check could happen in the application layer, where the application verifies whether migrations/fixtures/superuser creation are already up to date before running them. The other way it could be done is to create a job and, under the spec, use Helm hook annotations with a post-install:
    annotations:
      # This is what defines this resource as a hook. Without this line, the
      # job is considered part of the release.
      "helm.sh/hook": post-install,post-upgrade
      "helm.sh/hook-weight": "0"
      "helm.sh/hook-delete-policy": hook-succeeded
    – Recur Oct 30 '18 at 14:08
  • `initContainers` are indeed run for each replica, so IMO they're not a feasible way to run database migrations. In case of a sudden load spike you don't want to end up running your migrations in 10 parallel containers! – Aleksi Jul 30 '19 at 10:38