I have set up a Kubernetes v1.15.1 cluster with an external etcd cluster, 3 master nodes, and 5 worker nodes. The etcd services run on the master nodes as systemd services, not as Docker containers.
If one of the Kubernetes master nodes (master3) becomes corrupt and we run "kubeadm reset" on it, the reset clears the local configuration but does not remove that node's details from the etcd cluster, because etcd is external.
Now we want to understand whether we need to delete the information about the reset node directly from etcd, and if so, how?
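For reference, this is roughly how the stale etcd member could be inspected and removed with etcdctl (a sketch, not a verified procedure: the endpoint addresses, certificate paths, and the member name "master3" are assumptions based on a typical kubeadm external-etcd setup, so adjust them to your environment):

```shell
# List current members of the external etcd cluster (v3 API).
# Endpoints and TLS cert paths below are placeholders for your setup.
ETCDCTL_API=3 etcdctl \
  --endpoints=https://10.0.0.1:2379,https://10.0.0.2:2379,https://10.0.0.3:2379 \
  --cacert=/etc/etcd/pki/ca.crt \
  --cert=/etc/etcd/pki/client.crt \
  --key=/etc/etcd/pki/client.key \
  member list

# Remove the member that belonged to the corrupted master,
# using the member ID shown in the first column of `member list`.
ETCDCTL_API=3 etcdctl \
  --endpoints=https://10.0.0.1:2379,https://10.0.0.2:2379 \
  --cacert=/etc/etcd/pki/ca.crt \
  --cert=/etc/etcd/pki/client.crt \
  --key=/etc/etcd/pki/client.key \
  member remove <MEMBER_ID_OF_master3>
```

Since etcd runs as a systemd service here, the etcd data directory on master3 (commonly /var/lib/etcd) would also need to be cleared before that member rejoins, since `kubeadm reset` will not touch an external etcd's data.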
ALTERNATE OPTION: We can log in to one of the other masters, get the node information, delete the respective node object, and then re-init/re-join the reset node.
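The alternate option could be sketched like this (assuming master1 is healthy and has admin kubeconfig access; the token, CA hash, and endpoint values are hypothetical placeholders that `kubeadm token create --print-join-command` would supply in practice):

```shell
# On a healthy master (e.g. master1): find and delete the stale Node object.
kubectl get nodes
kubectl delete node master3

# Generate a fresh join command for the control plane.
kubeadm token create --print-join-command

# On master3, after `kubeadm reset`: rejoin as a control-plane node.
# Values below are placeholders printed by the command above.
kubeadm join <LOAD_BALANCER_ENDPOINT>:6443 \
  --token <TOKEN> \
  --discovery-token-ca-cert-hash sha256:<HASH> \
  --control-plane
```

Note that deleting the Node object only removes it from the Kubernetes API; if master3 was also an etcd member, the stale etcd member would still need to be removed separately from the external cluster.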