
When a pod is controlled by a DaemonSet and an error occurs in it, its state becomes CrashLoopBackOff. I want to delete these pods without deleting the DaemonSet.

So I want to scale the DaemonSet to 0, but as far as I know, the DaemonSet spec does not support setting the number of pod replicas.

How can I get there?

litanhua
  • Why do you need to scale to 0? Why not just delete the pods and let it reschedule them? – ProgrammingLlama Dec 26 '18 at 08:59
  • The pods have restarted many, many times and their state is "CrashLoopBackOff"; maybe the developer doesn't care about this application. It still wastes cluster resources and keeps restarting. – litanhua Dec 26 '18 at 09:15
  • What is the reason for failure when you do 'kubectl describe pod ' on one of the pods? Sounds like probes failing. If so, you may be able to get it working by increasing the initialDelaySeconds or changing which REST endpoint is used for the probes (see the sketch after these comments). – Ryan Dawson Dec 26 '18 at 11:19
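
Following up on the probe suggestion in the last comment, here is a minimal sketch of how the probe delay could be raised with a strategic merge patch; the container name and the 60-second value are placeholders, and it assumes the container already defines a livenessProbe:

kubectl -n <namespace> patch daemonset <name-of-daemon-set> -p '{"spec": {"template": {"spec": {"containers": [{"name": "<container-name>", "livenessProbe": {"initialDelaySeconds": 60}}]}}}}'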

3 Answers


In case you don't want to delete the DaemonSet, one possible workaround is to use a temporary nodeSelector with a non-existent label, for example:

kubectl -n <namespace> patch daemonset <name-of-daemon-set> -p '{"spec": {"template": {"spec": {"nodeSelector": {"non-existing": "true"}}}}}'

This will scale the DaemonSet down, because no node carries that label.
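
For example, to check the effect (same placeholders as above), the DaemonSet's desired and current pod counts should drop to 0 and its pods should terminate:

kubectl -n <namespace> get daemonset <name-of-daemon-set>
kubectl -n <namespace> get pods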

And here is the patch to remove the temporary nodeSelector:

kubectl -n <namespace> patch daemonset <name-of-daemon-set> --type json -p='[{"op": "remove", "path": "/spec/template/spec/nodeSelector/non-existing"}]'

This will scale the DaemonSet up again.

Alex Vorona

A DaemonSet ensures that every node runs a copy of a pod, so you can't scale it down the way you would a Deployment. A DaemonSet is managed by the DaemonSet controller, while a Deployment manages its replicas through a ReplicaSet. So you can simply delete the DaemonSet.
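
For example, a possible form of that delete, using the same placeholders as in the question:

kubectl delete daemonset <name-of-daemon-set> -n <namespace>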

If you want to back up the exact DaemonSet definition, you can use the following command, save the output somewhere, and use it again for a later deployment.

kubectl get daemonset <name-of-daemon-set> -n <namespace> -o yaml
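
As a rough sketch of that backup-and-restore flow (the file name daemonset-backup.yaml is just a placeholder; cluster-generated fields such as status, resourceVersion and uid may need to be stripped before re-applying):

kubectl get daemonset <name-of-daemon-set> -n <namespace> -o yaml > daemonset-backup.yaml
# later, after the DaemonSet has been deleted, recreate it from the saved manifest
kubectl apply -f daemonset-backup.yaml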
Hansika Weerasena
  • Yes, I know. I think the Kubernetes workload spec should add a backoffLimit feature; that would solve my problem. – litanhua Dec 26 '18 at 09:32
  • There is only a backoffLimit for Kubernetes Jobs. The main objective of Kubernetes is to increase the availability of your application and give you zero downtime, so Kubernetes expects your software to be well behaved and not buggy; I think that's why they haven't introduced such a feature. Running a crashing application is not recommended in a Kubernetes cluster. But if you don't have control over that software, this answer https://stackoverflow.com/questions/36845492/prevent-back-off-in-kubernetes-crash-loop shows a way to increase the maximum back-off time. – Hansika Weerasena Dec 26 '18 at 09:58
  • Your opinion is very accurate and I have benefited a lot. Thank you very much. – litanhua Dec 26 '18 at 10:19

Just as an addition to Alex Vorona's answer, for scaling to more than 0 nodes:

scale to a single node:

kubectl -n <namespace> patch daemonset <name-of-daemon-set> -p '{"spec": {"template": {"spec": {"nodeSelector": {"kubernetes.io/hostname": "<hostname>"}}}}}'
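
If the exact value of the kubernetes.io/hostname label is not known, it can be looked up first, for example:

kubectl get nodes --show-labels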

scale to any number of nodes with some label:

kubectl -n <namespace> label nodes <name-of-node> someLabel=true
kubectl -n <namespace> patch daemonset <name-of-daemon-set> -p '{"spec": {"template": {"spec": {"nodeSelector": {"someLabel": "true"}}}}}'
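
To revert this later, one possible way (reusing the placeholders and the someLabel key from the commands above) is to remove the label from the node and drop the nodeSelector again:

kubectl label nodes <name-of-node> someLabel-
kubectl -n <namespace> patch daemonset <name-of-daemon-set> --type json -p='[{"op": "remove", "path": "/spec/template/spec/nodeSelector/someLabel"}]'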
marioneta