
I have set up a Kubernetes cluster (v1.15.1) with an external etcd cluster, 3 masters, and 5 worker nodes. The etcd services run on the master nodes, but as systemd services rather than Docker containers.

Suppose one of the Kubernetes master nodes (master3) becomes corrupted and we run "kubeadm reset" to reset its configuration. This does not remove the node's details from the etcd cluster, since etcd is external.

Now we want to understand whether we need to delete the corresponding information for the reset node from etcd, and if so, how?
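For context, deleting the node's record directly from etcd would mean removing its key under the /registry prefix. A rough sketch with etcdctl v3 follows; the endpoint and certificate paths are placeholders, not taken from this setup:

```shell
# Sketch only: endpoint and cert paths below are placeholder assumptions.
export ETCDCTL_API=3

# Kubernetes stores Node objects under /registry/minions/<node-name>.
# List the node keys first:
etcdctl --endpoints=https://<etcd-host>:2379 \
        --cacert=/etc/etcd/ca.crt --cert=/etc/etcd/client.crt --key=/etc/etcd/client.key \
        get /registry/minions/ --prefix --keys-only

# Then delete the reset node's record:
etcdctl --endpoints=https://<etcd-host>:2379 \
        --cacert=/etc/etcd/ca.crt --cert=/etc/etcd/client.crt --key=/etc/etcd/client.key \
        del /registry/minions/master3
```

Editing etcd directly bypasses API-server validation, so this should be treated as a last resort.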

ALTERNATE OPTION: log in to one of the other masters, get the node information, delete the respective node, and re-initialize the configuration on master3.

Ankit Saxena

1 Answer


I would always suggest doing all operations via the Kubernetes API instead of doing them directly on etcd.

Deleting the node and adding it again should do the trick for you. https://stackoverflow.com/a/54220808/3514300 shows how to remove a node from the cluster.

The gist is:

kubectl get nodes
kubectl drain <node-name>
# if the drain is blocked by DaemonSet pods or pods using local storage:
kubectl drain <node-name> --ignore-daemonsets --delete-local-data
kubectl delete node <node-name>
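The "adding it again" part is not shown in the linked answer. A sketch of re-joining the reset master as a control-plane node with kubeadm (available in v1.15); the token, CA hash, and certificate key below are placeholders you would obtain from the commands run on a healthy master:

```shell
# On a healthy master: print a fresh join command and upload control-plane certs.
kubeadm token create --print-join-command
kubeadm init phase upload-certs --upload-certs   # prints a <certificate-key>

# On the reset master (master3): re-join as a control-plane node.
# <api-server-endpoint>, <token>, <hash>, and <certificate-key> are placeholders.
kubeadm join <api-server-endpoint>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <certificate-key>
```

With an external etcd cluster, make sure the etcd client certificates referenced in your kubeadm config are present on master3 before joining.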
Tummala Dhanvi