
Here is what I do:

  1. Deploy a StatefulSet whose pod always exits with an error, so the pod ends up in status CrashLoopBackOff: kubectl apply -f error.yaml
  2. Change error.yaml (echo a => echo b) and redeploy the StatefulSet: kubectl apply -f error.yaml
  3. The pod keeps its error status and is not redeployed immediately; it is only replaced when Kubernetes restarts it after the back-off delay.

Requesting pod status:

$ kubectl get pod errordemo-0
NAME          READY   STATUS             RESTARTS   AGE
errordemo-0   0/1     CrashLoopBackOff   15         59m

error.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: errordemo
  labels:
    app.kubernetes.io/name: errordemo
spec:
  serviceName: errordemo
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: errordemo
  template:
    metadata:
      labels:
        app.kubernetes.io/name: errordemo
    spec:
      containers:
        - name: demox
          image: busybox:1.28.2
          command: ['sh', '-c', 'echo a; sleep 5; exit 1']
      terminationGracePeriodSeconds: 1

Questions

How can I achieve an immediate redeploy even if the pod has an error status? I found these solutions, but I would like a single command to achieve that (in real life I am using Helm and I just want to call helm upgrade for my deployments):

  • Kill the pod before the redeploy
  • Scale down before the redeploy
  • Delete the statefulset before the redeploy
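
Each of those workarounds can be expressed as a single kubectl command against the demo StatefulSet (a sketch only; the resource names come from error.yaml below, and these need a reachable cluster):

```shell
# Workaround sketches for the "errordemo" StatefulSet from error.yaml.
# Each forces the crash-looping pod to be replaced.

# 1. Kill the pod; the controller recreates it from the current template:
kubectl delete pod errordemo-0

# 2. Scale down, redeploy, scale back up:
kubectl scale statefulset errordemo --replicas=0
kubectl apply -f error.yaml
kubectl scale statefulset errordemo --replicas=1

# 3. Delete the StatefulSet entirely, then redeploy:
kubectl delete statefulset errordemo
kubectl apply -f error.yaml
```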

Why doesn't Kubernetes redeploy the pod at once?

  • In my demo example I have to wait until Kubernetes tries to restart the pod after the back-off delay.
  • A pod without an error (e.g. echo a; sleep 10000;) is redeployed immediately. That is why I set terminationGracePeriodSeconds: 1.
  • But in my real deployments (where I use Helm) I have also encountered cases where the pods are never redeployed. Unfortunately I cannot reproduce this behaviour in a simple example.
  • You could add a new annotation to your deploy template, like revision or something similar, and set it to the variable {{ .Release.Revision }}. This should force a deployment with a new revision whenever you run helm upgrade ... Hope this helps; if not, then maybe I didn't get what you mean. – Manuel Mar 14 '21 at 22:47
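  A sketch of that annotation trick in a Helm chart's StatefulSet template (the annotation key name here is just an illustrative choice, not a Kubernetes-defined one):

```yaml
# templates/statefulset.yaml (fragment) - the revision annotation changes
# the pod template on every `helm upgrade`, forcing the controller to
# replace the pod.
spec:
  template:
    metadata:
      annotations:
        deploy/revision: "{{ .Release.Revision }}"
```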

1 Answer


You could set spec.podManagementPolicy: "Parallel"

Parallel pod management tells the StatefulSet controller to launch or terminate all Pods in parallel, and not to wait for Pods to become Running and Ready or completely terminated prior to launching or terminating another Pod.
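
Applied to the error.yaml from the question, that is a one-line addition at the StatefulSet spec level:

```yaml
# error.yaml (fragment): podManagementPolicy sits directly under spec
spec:
  serviceName: errordemo
  replicas: 1
  podManagementPolicy: "Parallel"
```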

Remember that the default podManagementPolicy is OrderedReady

OrderedReady pod management is the default for StatefulSets. It tells the StatefulSet controller to respect the ordering guarantees demonstrated above

And if your application requires ordered updates, then there is nothing you can do about the wait.

  • Thanks, this helps! I've got just one pod in my StatefulSet. I'm using StatefulSets because I need a singleton. – Matthias M Mar 16 '21 at 19:43
  • Just another question: my use case is a strict singleton. Does the Parallel option affect the rule that the pod must be shut down before a new one is created? What does "wait for ... completely terminated" mean exactly? In my tests everything seems to work as expected, i.e. the pod is shut down first before a new one is created. – Matthias M Mar 16 '21 at 19:45
  • In case of `OrderedReady`, when having e.g. 3 pods pod-0, pod-1, pod-2, k8s will remove pod-2 first and make sure it is terminated before it starts removing pod-1. Only when pod-1 has terminated completely does it start terminating pod-0. After they are all terminated, it starts spinning up new pods, one by one. – Matt Mar 22 '21 at 12:12
  • In case of the Parallel policy, k8s still waits for pods to terminate before it starts spinning up new pods in their place, but it does this in parallel. This means that with 3 pods, they can all be terminated at the same time, and as soon as a pod has terminated completely, k8s spins up a new pod in its place. – Matt Mar 22 '21 at 12:16
  • 1
    this is good because starefulsets as the name suggests, are stateful - this means that you most likely are using persistent volumes to persist the state somewhere. In this case when upgrading, you most likely want new pods to inherit this data volumes after the old pds, but in order for it to happen, old pod has to release the volume and this is why k8s waits for old pods to terminate completely, to release volumes so that they can be mounted in new pods. I hope that this makes sens :D – Matt Mar 22 '21 at 12:23