87

I installed a Kubernetes cluster using kubeadm, following this guide. After some time I decided to reinstall K8s, but ran into trouble removing all the related files, and I could not find any docs on the official site about removing a cluster installed via kubeadm. Has anybody run into the same problem and found the proper way to remove all files and dependencies? Thank you in advance.

For more information: I removed kubeadm, kubectl and kubelet using apt-get purge/remove, but when I started installing the cluster again I got the following errors:

[preflight] Some fatal errors occurred:
    Port 6443 is in use
    Port 10251 is in use
    Port 10252 is in use
    /etc/kubernetes/manifests is not empty
    /var/lib/kubelet is not empty
    Port 2379 is in use
    /var/lib/etcd is not empty
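
Before reinstalling, it helps to see what is still holding those ports and directories. A quick diagnostic sketch (assuming ss from iproute2 is available):

# show which processes still listen on the kubeadm preflight ports
sudo ss -tlnp | grep -E ':(6443|10251|10252|2379)'
# list the leftover state kubeadm complains about
ls -la /etc/kubernetes/manifests /var/lib/kubelet /var/lib/etcd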
wprl
Kirill Liubun

7 Answers

171

In my "Ubuntu 16.04", I use next steps to completely remove and clean Kubernetes (installed with "apt-get"):

kubeadm reset
sudo apt-get purge kubeadm kubectl kubelet kubernetes-cni kube*   
sudo apt-get autoremove  
sudo rm -rf ~/.kube

And restart the computer.
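
After the reboot, a quick sanity check (a sketch based on the preflight errors above):

# nothing should be listening on the control-plane ports anymore
sudo ss -tlnp | grep -E ':(6443|10251|10252|2379)'
# and these state directories should be gone
ls /etc/kubernetes /var/lib/kubelet /var/lib/etcd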

alexander.polomodov
Rib47
  • I followed these steps, but now every time I open the terminal this message appears: `kubectl: command not found Command 'minikube' not found, did you mean: command 'minitube' from deb minitube Try: sudo apt install ` – Michael Pacheco Jan 25 '19 at 00:43
  • @MichaelPacheco You probably have some remains of minikube in `.bashrc` or other configuration. – ferrix Feb 27 '19 at 09:45
  • how to remove docker related images in one go? all starting with k8s.* – Kundan Jan 29 '20 at 18:26
  • The restart is important as it will clear iptables. – Alexred Jul 11 '22 at 12:52
81

Use the kubeadm reset command. This will un-configure the Kubernetes cluster.
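
For a non-interactive run, kubeadm reset also accepts a force flag:

# skip the confirmation prompt
sudo kubeadm reset -f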

sfgroups
  • Thank you, but I am looking for a complete uninstall of kubeadm and all related dependencies to solve my root problem -- https://stackoverflow.com/questions/44717222/cant-see-logs-of-kubernetes-pod. Before reinstallation everything worked fine and I was able to see logs. So I considered removing K8s completely from my machine after the second installation, because I think some wrongly installed dependencies were left behind and made the same issue appear after subsequent installations. – Kirill Liubun Jun 25 '17 at 12:24
  • Then you need to remove the Kubernetes and Docker rpms and re-install them. – sfgroups Jun 26 '17 at 11:59
  • My containers kept restarting. The -f flag forced the reset and stopped the container restarts: kubeadm reset -f – Pav K. Aug 31 '18 at 19:16
21

If you are clearing the cluster so that you can start again, then in addition to what @rib47 said, I also do the following to ensure my systems are in a state ready for kubeadm init again:

kubeadm reset -f
# remove leftover CNI, kubeadm, etcd, and kubelet state
rm -rf /etc/cni /etc/kubernetes /var/lib/dockershim /var/lib/etcd /var/lib/kubelet /var/run/kubernetes ~/.kube/*
# flush and delete all chains in every iptables table
iptables -F && iptables -X
iptables -t nat -F && iptables -t nat -X
iptables -t raw -F && iptables -t raw -X
iptables -t mangle -F && iptables -t mangle -X
systemctl restart docker

You then need to re-install docker.io, kubeadm, kubectl, and kubelet to make sure they are at the latest versions for your distribution before you re-initialize the cluster.
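
A reinstall sketch for Debian/Ubuntu, assuming the Kubernetes apt repository is already configured:

sudo apt-get update
sudo apt-get install -y docker.io kubeadm kubectl kubelet
# hold the packages so unattended upgrades do not skew the versions
sudo apt-mark hold kubeadm kubectl kubelet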

EDIT: I discovered that Calico adds firewall rules to the raw table, so that needs clearing out as well.
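
A one-line check that the raw table is actually empty after flushing:

sudo iptables -t raw -L -n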

AnthonyK
16
kubeadm reset

# on Debian-based systems
sudo apt-get purge kubeadm kubectl kubelet kubernetes-cni kube*
sudo apt-get autoremove

# on CentOS-based systems
sudo yum remove kubeadm kubectl kubelet kubernetes-cni kube*
sudo yum autoremove

# for all
sudo rm -rf ~/.kube
Amit Mishra
  • While this code may solve the question, [including an explanation](https://meta.stackexchange.com/q/114762) of how and why this solves the problem would really help to improve the quality of your post, and probably result in more up-votes. Remember that you are answering the question for readers in the future, not just the person asking now. Please edit your answer to add explanations and give an indication of what limitations and assumptions apply. – David Buck Mar 18 '20 at 11:37
  • Lacks a little explanation, but this answer should be at the top. – Arfat Binkileb Dec 11 '20 at 10:01
12

The guide you linked now has a Tear Down section:

Talking to the master with the appropriate credentials, run:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

Then, on the node being removed, reset all kubeadm installed state:

kubeadm reset
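
If you do not know the node name, you can list the registered nodes first:

kubectl get nodes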
Matthew
  • Installed a new Ubuntu 18.04 and see Kubernetes running; I don't know how it got installed. How do I delete it? I don't have kubeadm or kubectl on the system (that I can find). – Sam-T Dec 13 '19 at 21:33
11

If you want to make this easily repeatable, it makes sense to turn it into a script. This assumes you are using a Debian-based OS:

#!/bin/sh
# Kube Admin Reset
kubeadm reset

# Remove all packages related to Kubernetes
apt remove -y kubeadm kubectl kubelet kubernetes-cni 
apt purge -y kube*

# Remove docker containers/images (optional if using docker)
# -f skips the confirmation prompt so the script does not hang
docker image prune -a -f
systemctl restart docker
apt purge -y docker-engine docker docker.io docker-ce docker-ce-cli containerd containerd.io runc --allow-change-held-packages

# Remove unused dependencies
apt autoremove -y

# Remove all folder associated to kubernetes, etcd, and docker
rm -rf ~/.kube
rm -rf /etc/cni /etc/kubernetes /var/lib/dockershim /var/lib/etcd /var/lib/kubelet /var/lib/etcd2/ /var/run/kubernetes ~/.kube/* 
rm -rf /var/lib/docker /etc/docker /var/run/docker.sock
rm -f /etc/apparmor.d/docker /etc/systemd/system/etcd* 

# Delete docker group (optional)
groupdel docker

# Clear the iptables
iptables -F && iptables -X
iptables -t nat -F && iptables -t nat -X
iptables -t raw -F && iptables -t raw -X
iptables -t mangle -F && iptables -t mangle -X

NOTE:

This will destroy everything related to Kubernetes, etcd, and Docker on the node/server this script is run against!
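
A usage sketch (the filename k8s-teardown.sh is just an example; the script needs root):

chmod +x k8s-teardown.sh
sudo ./k8s-teardown.sh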

Finn
8

I use the following script to completely uninstall an existing Kubernetes cluster and its running Docker containers:

sudo kubeadm reset

sudo apt purge kubectl kubeadm kubelet kubernetes-cni -y
sudo apt autoremove
sudo rm -fr /etc/kubernetes/; sudo rm -fr ~/.kube/; sudo rm -fr /var/lib/etcd; sudo rm -rf /var/lib/cni/

sudo systemctl daemon-reload

sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X

# remove all running docker containers
docker rm -f `docker ps -a | grep "k8s_" | awk '{print $1}'`
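
An equivalent without grep/awk, using Docker's own name filter (the -r flag keeps xargs from running docker rm on an empty list):

docker ps -aq --filter "name=k8s_" | xargs -r docker rm -f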
j3ffyang