40

I am configuring a two-node Kubernetes cluster on CoreOS, as described in https://coreos.com/kubernetes/docs/latest/getting-started.html, without flannel. Both servers are on the same network.

But when running the kubelet on the worker I get: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kube-ca").

I configured the TLS certificates on both servers as described in the doc.

The master node is working fine, and kubectl is able to launch containers and pods on the master.

Question 1: How do I fix this problem?

Question 2: Is there any way to configure the cluster without TLS certificates?
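
For reference, here is one way to double-check on the worker whether worker.pem actually chains to the kube-ca (paths as used in the CoreOS guide):

# On the worker: verify the node certificate against the CA copied from the master
openssl verify -CAfile /etc/kubernetes/ssl/ca.pem /etc/kubernetes/ssl/worker.pem
# Inspect issuer/subject to confirm both nodes reference the same CA
openssl x509 -noout -issuer -subject -in /etc/kubernetes/ssl/worker.pem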

CoreOS version:
VERSION=899.15.0
VERSION_ID=899.15.0
BUILD_ID=2016-04-05-1035
PRETTY_NAME="CoreOS 899.15.0"

etcd configuration:

$ etcdctl member list
ce2a822cea30bfca: name=78c2c701d4364a8197d3f6ecd04a1d8f peerURLs=http://localhost:2380,http://localhost:7001 clientURLs=http://172.24.0.67:2379

Master: kubelet.service:

[Service]
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
Environment=KUBELET_VERSION=v1.2.2_coreos.0
ExecStart=/opt/bin/kubelet-wrapper \
  --api-servers=http://127.0.0.1:8080 \
  --register-schedulable=false \
  --allow-privileged=true \
  --config=/etc/kubernetes/manifests \
  --hostname-override=172.24.0.67 \
  --cluster-dns=10.3.0.10 \
  --cluster-domain=cluster.local
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target

Master: kube-controller.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-controller-manager
    image: quay.io/coreos/hyperkube:v1.2.2_coreos.0
    command:
    - /hyperkube
    - controller-manager
    - --master=http://127.0.0.1:8080
    - --leader-elect=true 
    - --service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --root-ca-file=/etc/kubernetes/ssl/ca.pem
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10252
      initialDelaySeconds: 15
      timeoutSeconds: 1
    volumeMounts:
    - mountPath: /etc/kubernetes/ssl
      name: ssl-certs-kubernetes
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/ssl
    name: ssl-certs-kubernetes
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host

Master: kube-proxy.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: quay.io/coreos/hyperkube:v1.2.2_coreos.0
    command:
    - /hyperkube
    - proxy
    - --master=http://127.0.0.1:8080
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host

Master: kube-apiserver.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: quay.io/coreos/hyperkube:v1.2.2_coreos.0
    command:
    - /hyperkube
    - apiserver
    - --bind-address=0.0.0.0
    - --etcd-servers=http://172.24.0.67:2379
    - --allow-privileged=true
    - --service-cluster-ip-range=10.3.0.0/24
    - --secure-port=443
    - --advertise-address=172.24.0.67
    - --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
    - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
    - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --client-ca-file=/etc/kubernetes/ssl/ca.pem
    - --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    ports:
    - containerPort: 443
      hostPort: 443
      name: https
    - containerPort: 8080
      hostPort: 8080
      name: local
    volumeMounts:
    - mountPath: /etc/kubernetes/ssl
      name: ssl-certs-kubernetes
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/ssl
    name: ssl-certs-kubernetes
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host

Master: kube-scheduler.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: quay.io/coreos/hyperkube:v1.2.2_coreos.0
    command:
    - /hyperkube
    - scheduler
    - --master=http://127.0.0.1:8080
    - --leader-elect=true
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10251
      initialDelaySeconds: 15
      timeoutSeconds: 1

Slave: kubelet.service

[Service]
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
Environment=KUBELET_VERSION=v1.2.2_coreos.0
ExecStart=/opt/bin/kubelet-wrapper \
  --api-servers=https://172.24.0.67:443 \
  --register-node=true \
  --allow-privileged=true \
  --config=/etc/kubernetes/manifests \
  --hostname-override=172.24.0.63 \
  --cluster-dns=10.3.0.10 \
  --cluster-domain=cluster.local \
  --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \
  --tls-cert-file=/etc/kubernetes/ssl/worker.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/worker-key.pem
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target

Slave: kube-proxy.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: quay.io/coreos/hyperkube:v1.2.2_coreos.0
    command:
    - /hyperkube
    - proxy
    - --master=https://172.24.0.67:443
    - --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml
    - --proxy-mode=iptables
    securityContext:
      privileged: true
    volumeMounts:
      - mountPath: /etc/ssl/certs
        name: "ssl-certs"
      - mountPath: /etc/kubernetes/worker-kubeconfig.yaml
        name: "kubeconfig"
        readOnly: true
      - mountPath: /etc/kubernetes/ssl
        name: "etc-kube-ssl"
        readOnly: true
  volumes:
    - name: "ssl-certs"
      hostPath:
        path: "/usr/share/ca-certificates"
    - name: "kubeconfig"
      hostPath:
        path: "/etc/kubernetes/worker-kubeconfig.yaml"
    - name: "etc-kube-ssl"
      hostPath:
        path: "/etc/kubernetes/ssl"

9 Answers

60
mkdir -p $HOME/.kube   
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config   
sudo chown $(id -u):$(id -g) $HOME/.kube/config

  • Welcome to SO! Please add some details explaining what your answer does; it will be more helpful for the OP and future readers of the post. – EcologyTom Feb 17 '19 at 08:27
  • I confirm that this solution works. I created the cluster via kubeadm init, then deleted it via kubeadm and recreated it via kubeadm init, but I didn't delete the old config from $HOME. So I got the error described above: I was trying to use the new cluster with the old config file, i.e. with the old k8s cert, which is why it didn't work. After I replaced the config in $HOME with the one from /etc, all is fine now. So in my opinion, if you get the x509 error it means you are trying to use an old config in your $HOME from a previous cluster. – Alex Sep 21 '19 at 18:15
  • I confirm this solution worked for me too, even if some explanation would be very welcome for beginners like me. – Rémi Gaudin Jun 10 '20 at 08:16
  • This should be accepted as the answer; it worked for me too. – mhn_namak Feb 16 '21 at 21:14
  • This is what kubeadm itself says after kubeadm init: "Your Kubernetes control-plane has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube.... cp -i...." – Serve Laurijssen Jun 03 '21 at 05:46
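
As the comments point out, the usual cause is a stale kubeconfig whose embedded CA no longer matches the cluster's CA. A minimal way to compare the two, assuming a kubeadm cluster with the default /etc/kubernetes/pki/ca.crt path:

# Fingerprint of the CA embedded in the (possibly stale) kubeconfig
grep 'certificate-authority-data' $HOME/.kube/config | awk '{print $2}' | base64 -d | openssl x509 -noout -fingerprint
# Fingerprint of the cluster's current CA; if the two differ, the kubeconfig is stale
sudo openssl x509 -noout -fingerprint -in /etc/kubernetes/pki/ca.crt
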
29

From the official Kubernetes documentation:

  1. Verify that the $HOME/.kube/config file contains a valid certificate, and regenerate a certificate if necessary (a sketch for inspecting it follows at the end of this answer)

  2. Unset the KUBECONFIG environment variable using:

    unset KUBECONFIG

    Or set it to the default KUBECONFIG location:

    export KUBECONFIG=/etc/kubernetes/admin.conf

  3. Another workaround is to overwrite the existing kubeconfig for the “admin” user:

    mv  $HOME/.kube $HOME/.kube.bak
    mkdir $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

Reference: official Kubernetes documentation
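
For step 1, a quick way to inspect the client certificate embedded in $HOME/.kube/config (a sketch, assuming the certificate is embedded as base64 data rather than referenced by file path):

# Decode the embedded client certificate and print its issuer and validity window
grep 'client-certificate-data' $HOME/.kube/config | awk '{print $2}' | base64 -d | openssl x509 -noout -issuer -dates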

4

Please see this as a reference; it may help you resolve your issue by re-exporting your cluster credentials with kops:

export KOPS_STATE_STORE=s3://"paste your S3 store"
kops export kubecfg "your cluster-name"

Hope that will help.

1

To answer your first question: I think you have to do a few things to resolve the problem.

First, run the commands given in this link: kubernetes.io/docs/setup/independent/create-cluster-kubeadm/…

Then complete with these commands:

  • mkdir -p $HOME/.kube
  • sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  • sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl needs to know about this admin.conf in order to work properly.
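
Once that is done, a quick sanity check that kubectl can reach the cluster (assuming the control plane is up):

kubectl cluster-info
kubectl get nodes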

1

The above-mentioned regular method did not work for me, so I used the complete sequence of commands below to end up with a working certificate.

$ sudo kubeadm reset
$ sudo swapoff -a 

$ sudo kubeadm init --pod-network-cidr=10.244.10.0/16 --kubernetes-version "1.18.3"
$ sudo rm -rf $HOME/.kube

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

$ sudo systemctl enable docker.service
$ sudo service kubelet restart

$ kubectl get nodes

Note:

If the connection to the port is refused, also run the following command.

$ export KUBECONFIG=$HOME/admin.conf
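
If the connection is still refused, it can help to confirm the API server is actually listening (a sketch; 6443 is kubeadm's default secure port):

$ sudo ss -lntp | grep 6443
$ curl -k https://127.0.0.1:6443/healthz
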
1

The problem persisted for me even after:

mkdir -p $HOME/.kube   
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config   
sudo chown $(id -u):$(id -g) $HOME/.kube/config

In that case, restarting kubelet solved the problem:

systemctl restart kubelet
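
If it still fails after the restart, the kubelet logs usually show which certificate or kubeconfig it is unhappy about:

systemctl status kubelet
journalctl -u kubelet --since "10 minutes ago"
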
0

I found this error in the coredns pods; pod creation failed with x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kube-ca"). The issue for me was that I had already installed a k8s cluster on the same node earlier and had removed it with the kubeadm reset command. That command left behind some files in /etc/cni/ that probably caused the issue. I deleted that folder and reinstalled the cluster with kubeadm init.
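
A minimal sketch of that cleanup, assuming the leftover CNI configuration lives in the default /etc/cni/net.d directory:

# Tear down the old cluster, remove stale CNI config, then reinstall
sudo kubeadm reset
sudo rm -rf /etc/cni/net.d
sudo kubeadm init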

0

For anyone like me who is facing the same error only in the VS Code Kubernetes extension:

I had reinstalled Docker/Kubernetes and didn't update the VS Code Kubernetes extension.

You need to make sure you are using the correct kubeconfig, since reinstalling Kubernetes creates a new certificate.

Either point the extension's setKubeconfig option at $HOME/.kube/config, or copy it to the path the VS Code extension is configured to read the config from, using the following command:

cp $HOME/.kube/config /{{path-for-kubeconfig}}
-1

I followed the steps below and the problem was resolved.

  1. Take a backup of the original file: cp /etc/kubernetes/admin.conf /etc/kubernetes/admin.conf_bkp

  2. Create a symlink to it inside $HOME/.kube/: ln -s /etc/kubernetes/admin.conf $HOME/.kube/config

Now the user's kubeconfig is linked to the main admin.conf file, which resolved the problem.
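
A minimal sketch of the full sequence, assuming $HOME/.kube does not exist yet and no config file is already in place:

cp /etc/kubernetes/admin.conf /etc/kubernetes/admin.conf_bkp   # keep a backup first
mkdir -p $HOME/.kube                                           # make sure the directory exists
ln -s /etc/kubernetes/admin.conf $HOME/.kube/config            # link instead of copying

Note that /etc/kubernetes/admin.conf is normally readable only by root, so kubectl may need to be run with sudo when using the symlink.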
