This is what I do:
- Deploy a StatefulSet whose pod always exits with an error, so that the pod ends up in status CrashLoopBackOff:
  kubectl apply -f error.yaml
- Change error.yaml (echo a => echo b, see the line sketched after this list) and redeploy the StatefulSet:
  kubectl apply -f error.yaml
- The pod keeps the error status and is not redeployed immediately; it waits until Kubernetes restarts the pod after the back-off delay.
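The only change in step 2 is the echoed letter in the container command of error.yaml (shown in full below), i.e. roughly:

  command: ['sh', '-c', 'echo b; sleep 5; exit 1']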
Requesting the pod status:

$ kubectl get pod errordemo-0
NAME          READY   STATUS             RESTARTS   AGE
errordemo-0   0/1     CrashLoopBackOff   15         59m
error.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: errordemo
  labels:
    app.kubernetes.io/name: errordemo
spec:
  serviceName: errordemo
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: errordemo
  template:
    metadata:
      labels:
        app.kubernetes.io/name: errordemo
    spec:
      containers:
        - name: demox
          image: busybox:1.28.2
          command: ['sh', '-c', 'echo a; sleep 5; exit 1']
      terminationGracePeriodSeconds: 1
Questions
How can I achieve an immediate redeploy even if the pod has an error status?
I found these workarounds (sketched after the list), but I would like a single command to achieve this (in real life I'm using Helm and I just want to call helm upgrade for my deployments):
- Kill the pod before the redeploy
- Scale down before the redeploy
- Delete the statefulset before the redeploy
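A minimal sketch of these workarounds with plain kubectl, assuming the names from error.yaml above (errordemo / errordemo-0):

  # Kill the failing pod, then re-apply the manifest
  kubectl delete pod errordemo-0
  kubectl apply -f error.yaml

  # Scale down to zero, then re-apply (error.yaml sets replicas back to 1)
  kubectl scale statefulset errordemo --replicas=0
  kubectl apply -f error.yaml

  # Delete the whole StatefulSet, then re-apply
  kubectl delete statefulset errordemo
  kubectl apply -f error.yaml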
Why doesn't Kubernetes redeploy the pod at once?
- In my demo example I have to wait until Kubernetes restarts the pod after the back-off delay.
- A pod without an error (e.g. echo a; sleep 10000;) is restarted immediately; that's why I set terminationGracePeriodSeconds: 1 (a sketch of this variant follows the list).
- But in my real deployments (where I use Helm) I have also seen cases where the pods are never redeployed. Unfortunately I cannot reproduce this behaviour in a simple example.
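For comparison, a minimal sketch of that non-failing variant; only the container command in error.yaml changes, the rest of the manifest stays the same:

  containers:
    - name: demox
      image: busybox:1.28.2
      # never exits, so an apply with a changed template rolls the pod immediately
      command: ['sh', '-c', 'echo a; sleep 10000']
  terminationGracePeriodSeconds: 1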