I have a K3s cluster with the following pods running:
kube-system pod/calico-node-xxxx
kube-system pod/calico-kube-controllers-xxxxxx
kube-system pod/metrics-server-xxxxx
kube-system pod/local-path-provisioner-xxxxx
kube-system pod/coredns-xxxxx
xyz-system pod/some-app-xxx
xyz-system pod/some-app-db-xxx
I wanted to stop all of the K3s pods and reset the containerd state, so I ran the /usr/local/bin/k3s-killall.sh script. All pods were stopped (at least watch kubectl get all -A showed nothing except the message "The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?").
Can someone tell me how to start the K3s server back up? Right now, when I run kubectl get all -A, I get the message "The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?"
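For reference, this is what I understand about how the server could be brought back up. I am assuming here that K3s was installed with the official install script, which registers a systemd unit named k3s; if the install was done differently, these commands may not apply:

```shell
# Assumption: K3s was installed via the official install script,
# which sets up a systemd service called "k3s".
# k3s-killall.sh stops the pods and containerd but does not
# restart the service, so start it again explicitly:
sudo systemctl start k3s

# Verify the service is active:
sudo systemctl status k3s

# If systemd is not managing K3s, the server can instead be run
# in the foreground (it exits when the terminal session ends):
# sudo k3s server
```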
PS:
- When I run the k3s server command, for a fraction of a second I can see the same pods (with the same pod IDs) that I mentioned above while the command is running. After a few seconds, the command exits and the same message (The connection to the...) is displayed again.
Does this mean that k3s-killall.sh has not deleted my pods, since it is showing the same pods with the same IDs (like pod/some-app-xxx)?
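To check whether the containers themselves are actually gone after running k3s-killall.sh, I believe the bundled crictl subcommand can be used (assuming root access; if containerd was killed by the script, this would fail to connect rather than list containers):

```shell
# K3s bundles crictl as a subcommand. Listing all containers,
# including stopped ones, shows whether anything survived
# k3s-killall.sh. If containerd itself is down, this errors out
# with a connection failure instead.
sudo k3s crictl ps -a
```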