
I restarted my system today. Since then, I can no longer reach the Kubernetes GUI (dashboard) from my web browser.

When I ran the command systemctl status kube-apiserver.service, it gave the output shown below:

kube-apiserver.service
  Loaded: not-found (Reason: No such file or directory)
  Active: inactive (dead)

How can the kube-apiserver be restarted?

David Medinets
Deepak Nayak

4 Answers


Did you download and install the Kubernetes Controller Binaries directly?

1) If so, check whether the kube-apiserver.service systemd unit file exists:

cat /etc/systemd/system/kube-apiserver.service

2) If not, you probably installed K8s with kubeadm (*).
With this setup the kube-apiserver runs as a static pod on the master node:

kubectl get pods -n kube-system
NAME                                       READY   STATUS    
coredns-f9fd979d6-jsn6w                    1/1     Running  ..
coredns-f9fd979d6-tv5j6                    1/1     Running  ..
etcd-master-k8s                            1/1     Running  ..
kube-apiserver-master-k8s                  1/1     Running  .. #<--- Here
kube-controller-manager-master-k8s         1/1     Running  ..
kube-proxy-5kzbc                           1/1     Running  ..
kube-scheduler-master-k8s                  1/1     Running  ..

And not as a systemd service.

And since you can't restart pods in K8s, you'll have to delete it:

kubectl delete pod/kube-apiserver-master-k8s -n kube-system

And a new pod will be created immediately.
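A quick way to verify, once the kubelet has had a few seconds to act (the pod name is taken from the listing above; on your cluster it will end with your control-plane node's hostname):

# Confirm the static pod object is back and Running
kubectl -n kube-system get pod kube-apiserver-master-k8s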


(*) When you run kubeadm init you should see the creation of the manifests for the control plane static Pods:

.
. 
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
.
.

The corresponding YAML files:

ubuntu@master-k8s:/etc/kubernetes/manifests$ ls -la
total 24
drwxr-xr-x 2 root root 4096 Oct 14 00:13 .
drwxr-xr-x 4 root root 4096 Sep 29 02:30 ..
-rw------- 1 root root 2099 Sep 29 02:30 etcd.yaml
-rw------- 1 root root 3863 Oct 14 00:13 kube-apiserver.yaml <----- Here
-rw------- 1 root root 3496 Sep 29 02:30 kube-controller-manager.yaml
-rw------- 1 root root 1384 Sep 29 02:30 kube-scheduler.yaml
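As a sanity check, you can confirm which directory the kubelet scans for static pod manifests. The path below assumes the default kubeadm layout:

# The kubelet's config file names the static pod directory
sudo grep staticPodPath /var/lib/kubelet/config.yaml
# Expected on a kubeadm cluster:
# staticPodPath: /etc/kubernetes/manifests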

And the kube-apiserver spec:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.100.102.5:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=10.100.102.5
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    .
    .
    .
Rot-man
  • That is not correct. You cannot run `kubectl delete pod/kube-apiserver-master-k8s -n kube-system` to restart the kube-apiserver container. This deletes the pod object, but the container keeps running. The pod is recreated immediately, that's correct, and the running container is reattached to it without its process ever being killed! Edit: if you want to restart the kube-apiserver, you have to kill the container itself, via docker or crictl for example. – Nortol Mar 29 '21 at 12:22
  • I'm not sure about what you wrote: "This will delete the pod. The container will remain running." If you delete a pod, the container inside it is deleted. In K8s you can't control and manage containers without pods; that is how K8s works. – Rot-man Mar 29 '21 at 17:35
  • Maybe for Deployments, StatefulSets, etc. Today I tested this against CRI-O: the delete command recreated the apiserver pod, but the container was still running, the same process as before. This gave me a big headache, because I assumed exactly what you wrote. My apiserver container mounts a file not controlled by the manifests folder, and a change to that configuration was not applied. Only restarting the container via CRI-O helped, because that killed the process. The apiserver container came back up immediately, because of the controller, I guess. – Nortol Mar 29 '21 at 18:08
  • I admit I do not understand how the following line makes sense: "The delete command recreated the apiserver pod but the container was still running. The same process, as before." (: – Rot-man Mar 29 '21 at 19:16
  • Can confirm what @Nortol said. It's very weird, but deleting the pod keeps the container running and a new pod is created around it. Not sure if this is something specific to static pods or just a bug, but it also baffled me until I saw @Nortol's comment. I used `crictl stop` to kill the container directly and it worked. – ryanbrainard Nov 28 '21 at 15:01
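Following the comment thread above: if you really need to restart the kube-apiserver process itself, a rough sketch with crictl looks like this (run on the control-plane node as root; the container ID is a placeholder you have to fill in):

# Find the running kube-apiserver container
sudo crictl ps --name kube-apiserver
# Stop it by ID; the kubelet starts a fresh container from the static pod manifest
sudo crictl stop <container-id>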

Move the kube-apiserver manifest file from the /etc/kubernetes/manifests folder to a temporary folder. The advantage of this method is that the kube-apiserver stays down for as long as the file is out of the manifests folder.

vagrant@master01:~$ ll /etc/kubernetes/manifests/
total 16
-rw------- 1 root root 3315 May 12 23:24 kube-controller-manager.yaml
-rw------- 1 root root 1384 May 12 23:24 kube-scheduler.yaml
-rw------- 1 root root 2157 May 12 23:24 etcd.yaml
-rw------- 1 root root 3792 May 20 00:08 kube-apiserver.yaml
vagrant@master01:~$ sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
vagrant@master01:~$ 
vagrant@master01:~$ ll /etc/kubernetes/manifests/
total 12
-rw------- 1 root root 3315 May 12 23:24 kube-controller-manager.yaml
-rw------- 1 root root 1384 May 12 23:24 kube-scheduler.yaml
-rw------- 1 root root 2157 May 12 23:24 etcd.yaml

The API server is down now:

vagrant@master01:~$ k get pods -n kube-system
The connection to the server 10.0.0.2:6443 was refused - did you specify the right host or port?
vagrant@master01:~$ 

vagrant@master01:~$ sudo mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/
vagrant@master01:~$ 
vagrant@master01:~$ ll /etc/kubernetes/manifests/
total 16
-rw------- 1 root root 3315 May 12 23:24 kube-controller-manager.yaml
-rw------- 1 root root 1384 May 12 23:24 kube-scheduler.yaml
-rw------- 1 root root 2157 May 12 23:24 etcd.yaml
-rw------- 1 root root 3792 May 20 00:08 kube-apiserver.yaml

The API server is up now:

vagrant@master01:~$ k get pods -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-558bd4d5db-269lt           1/1     Running   5          8d
coredns-558bd4d5db-967d8           1/1     Running   5          8d
etcd-master01                      1/1     Running   6          8d
kube-apiserver-master01            0/1     Running   2          24h
kube-controller-manager-master01   1/1     Running   8          8d
kube-proxy-q8mkb                   1/1     Running   5          8d
kube-proxy-x6trg                   1/1     Running   6          8d
kube-proxy-xxph9                   1/1     Running   8          8d
kube-scheduler-master01            1/1     Running   8          8d
weave-net-rh2gb                    2/2     Running   18         8d
weave-net-s2cg9                    2/2     Running   14         8d
weave-net-wksk2                    2/2     Running   11         8d
vagrant@master01:~$ 
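The pod reappears because the kubelet periodically re-scans the manifests directory; the interval is the kubelet's fileCheckFrequency, which defaults to 20 seconds. To check it (path assumes the default kubeadm layout; no output, or a zero value, means the default applies):

sudo grep fileCheckFrequency /var/lib/kubelet/config.yaml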
Amit Raj

I had a similar issue but did something simple to get around it. I think the command is just systemctl status kube-apiserver.

If the above works, please try these steps:

On Master:

Restart all services: etcd, kube-apiserver, kube-controller-manager, kube-scheduler, flanneld

On Worker/Node:

Restart all services: kube-proxy, kubelet, flanneld, docker

E.g:

systemctl restart kube-controller-manager
systemctl enable kube-controller-manager
systemctl status kube-controller-manager
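To cover the whole master list above in one go, a rough sketch would be the loop below. The unit names are assumptions that only hold on clusters where the control plane was installed as systemd services (they do not exist on kubeadm clusters):

# Restart, enable and check each control-plane service in turn (run as root)
for svc in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do
  systemctl restart "$svc"
  systemctl enable "$svc"
  systemctl status "$svc" --no-pager
done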

Note: if the node is both master and worker, restart both sets of services on that node.

The above steps worked for me (but we are running 1.7). Hope that helps.

Sudhakar MNSR

You can restart the API server using:

systemctl restart kube-apiserver.service
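Assuming that unit actually exists (i.e. the control plane was installed from binaries rather than with kubeadm), a slightly fuller sketch is:

# Reload unit definitions in case the file changed, then restart and verify
sudo systemctl daemon-reload
sudo systemctl restart kube-apiserver.service
sudo systemctl status kube-apiserver.service --no-pager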

However, if you don't want to SSH into a controller node, run the following command:

kubectl -n kube-system delete pod -l 'component=kube-apiserver'
David Medinets