
I am using Kubernetes 1.12. I have a service (i.e. a pod) which may have multiple instances (replicas > 1).

My goal is to perform a maintenance task (e.g. create/upgrade a database, generate a certificate, etc.) before any of the service instances come up.

I was considering using an Init Container, but as far as I understand, an Init Container is executed every time an additional replica (pod) is created, and worse, that might happen in parallel. In that case, multiple Init Containers might run in parallel and corrupt my database and everything else.

I need a clear solution to perform a bootstrap maintenance task only once per deployment. How would you suggest doing that?

Illidan
  • What exactly does 'only once' per deployment mean? – Chris Stryczynski Nov 22 '18 at 10:10
  • @ChrisStryczynski: I mean "once per deployment 'apply'", e.g. when I apply a new deployment (which may create new pods or update existing ones), I want to perform a maintenance task. – Illidan Nov 22 '18 at 10:38
  • If you mean each time you run a command `x`, then perhaps a Kubernetes job is the ideal way to have this done? – Chris Stryczynski Nov 22 '18 at 11:03
  • Have you automated your deployment in a CI/CD pipeline? If so, deploy a [Job](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/) that does your maintenance task(s), and [ensure the Job succeeds](https://stackoverflow.com/questions/44686568/kubernetes-tell-when-job-is-complete) or fail the pipeline before deploying your application. – Clorichel Nov 22 '18 at 17:34

2 Answers


I encountered the same problem running db migrations before each deployment. Here's a solution based on a Job resource:

kubectl apply -f migration-job.yml 
kubectl wait --for=condition=complete --timeout=60s job/migration
kubectl delete job/migration
kubectl apply -f deployment.yml

migration-job.yml defines a Job configured with restartPolicy: Never and a reasonably low activeDeadlineSeconds. Using kubectl wait ensures that any error or timeout in the migration Job causes the script to fail and thus prevents applying deployment.yml.
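
For reference, a minimal migration-job.yml along those lines could look roughly like this; the metadata name has to match job/migration in the commands above, while the image and command are placeholders for whatever actually runs your migrations:

apiVersion: batch/v1
kind: Job
metadata:
  name: migration                  # must match job/migration in the kubectl commands
spec:
  activeDeadlineSeconds: 60        # fail the Job if it runs longer than this
  backoffLimit: 0                  # assumption: do not retry a failed migration
  template:
    spec:
      restartPolicy: Never         # never restart the migration pod
      containers:
        - name: migrate
          image: registry.example.com/db-migrate:1.0   # placeholder image
          command: ["sh", "-c", "run-migrations"]      # placeholder command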

Aleksi
  • Yes, eventually this is how I solved it. But instead of deleting the job manually, I let Kubernetes handle that – Illidan Sep 01 '19 at 09:10

One way to retain control over the startup sequence is to use a StatefulSet. With its sequential startup, the next pod will not start until the previous one is done, removing the risk of parallel init.
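
A rough sketch of that, with placeholder names and images; the relevant part is that a StatefulSet with the default podManagementPolicy: OrderedReady starts its pods one at a time, so the init containers never run in parallel:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-service                    # placeholder name
spec:
  serviceName: my-service             # headless Service assumed to exist
  replicas: 3
  podManagementPolicy: OrderedReady   # default: pod N starts only after pod N-1 is Ready
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      initContainers:
        - name: bootstrap
          image: registry.example.com/bootstrap:1.0   # placeholder maintenance image
      containers:
        - name: app
          image: registry.example.com/my-service:1.0  # placeholder app image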

Personally, I would prefer this init to have its own locking mechanism and stick to regular Deployments.
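
As a hedged sketch of that idea, assuming the bootstrap task is a database migration and the migration tool takes its own lock (Flyway, for example, serializes concurrent migrations against the same schema), the init container can simply run the tool from a regular Deployment; the image tag, names and connection details below are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service                    # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      initContainers:
        - name: db-migrate
          image: flyway/flyway:9                      # assumption: official Flyway image
          args: ["migrate"]
          env:
            - name: FLYWAY_URL                        # placeholder connection details
              value: jdbc:postgresql://db:5432/app
      containers:
        - name: app
          image: registry.example.com/my-service:1.0  # placeholder app image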

Remember that you need to take into account not only the first startup on Deployment creation, but also rolling releases, scaling, outages, etc.

Radek 'Goblin' Pieczonka