I've looked into it and it seems the problem lies in the kubelet version. Let me elaborate on that:
SELinux Volumes not relabeled in 1.16 - this link provides more details about the issue.
I tried to reproduce this coredns issue on different versions of Kubernetes.
The issue appears on version 1.16 and newer. It seems to work properly with SELinux enabled on version 1.15.6.
To reproduce this you will need a working CentOS and CRI-O environment.
CRI-O version:
Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.16.2
RuntimeApiVersion: v1alpha1
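The runtime details above come from crictl; a quick way to reproduce them on the node (a sketch, assuming crictl is installed and pointed at the CRI-O socket) is:

```shell
# Print container runtime details via CRI. Falls back gracefully when
# crictl is missing or the runtime socket is not reachable.
if command -v crictl >/dev/null 2>&1; then
  runtime_info=$(crictl version 2>/dev/null) || runtime_info="crictl present but runtime not reachable"
else
  runtime_info="crictl not installed on this machine"
fi
echo "$runtime_info"
```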
To deploy this infrastructure I followed this site for the most part: KubeVirt
Kubernetes v1.15.7
Steps to reproduce:
- Disable SELinux and restart machine:
$ setenforce 0
$ sed -i 's/^SELINUX=.*$/SELINUX=disabled/' /etc/selinux/config
$ reboot
- Check if SELinux is disabled by invoking command:
$ sestatus
- Install packages with
$ yum install INSERT_PACKAGES_BELOW
- kubelet-1.15.7-0.x86_64
- kubeadm-1.15.7-0.x86_64
- kubectl-1.15.7-0.x86_64
- Initialize the Kubernetes cluster with the following command
$ kubeadm init --pod-network-cidr=10.244.0.0/16
- Wait for the cluster to initialize correctly and follow the kubeadm instructions to connect to it
- Apply the Flannel CNI
$ kubectl apply -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
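As a side note on the SELinux steps above: it is worth quoting the sed expression so the shell cannot glob-expand `.*$`. A self-contained check of the substitution (run against a temporary copy, never the real /etc/selinux/config):

```shell
# Verify the SELINUX= substitution on a throwaway copy of the config file.
tmp=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$tmp"
sed -i 's/^SELINUX=.*$/SELINUX=disabled/' "$tmp"
# Only the SELINUX= line is rewritten; SELINUXTYPE= is left untouched.
selinux_line=$(grep '^SELINUX=' "$tmp")
echo "$selinux_line"   # SELINUX=disabled
rm -f "$tmp"
```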
Check if the coredns pods are running correctly with the command:
$ kubectl get pods -A
It should give output similar to this:
NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE
kube-system   coredns-5c98db65d4-2c7lt                      1/1     Running   2          7m59s
kube-system   coredns-5c98db65d4-5dp9s                      1/1     Running   2          7m59s
kube-system   etcd-centos-kube-master                       1/1     Running   2          7m20s
kube-system   kube-apiserver-centos-kube-master             1/1     Running   2          7m4s
kube-system   kube-controller-manager-centos-kube-master    1/1     Running   2          6m55s
kube-system   kube-flannel-ds-amd64-mzh27                   1/1     Running   2          7m14s
kube-system   kube-proxy-bqll8                              1/1     Running   2          7m58s
kube-system   kube-scheduler-centos-kube-master             1/1     Running   2          6m58s
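If you prefer not to poll manually, something like the following can wait for the coredns pods to become Ready (a sketch assuming kubectl is configured against the cluster; `k8s-app=kube-dns` is the label the default coredns Deployment carries):

```shell
# Wait up to 3 minutes for the coredns pods to report Ready; degrade to a
# notice when kubectl is unavailable or no cluster is reachable.
if command -v kubectl >/dev/null 2>&1; then
  kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=180s \
    && coredns_state=ready || coredns_state=not-ready
else
  coredns_state=skipped
fi
echo "coredns: $coredns_state"
```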
With SELinux disabled, the coredns pods in the Kubernetes cluster work properly.
Enable SELinux:
From the root account, invoke the following commands to enable SELinux and restart the machine:
$ setenforce 1
$ sed -i 's/^SELINUX=.*$/SELINUX=enforcing/' /etc/selinux/config
$ reboot
Check if the coredns pods are still running correctly. They should not enter the CrashLoopBackOff state when you run:
$ kubectl get pods -A
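After the reboot you can also confirm the runtime SELinux mode (a small sketch; getenforce ships with the SELinux userspace tools on CentOS):

```shell
# Report the current SELinux mode; degrade to a notice on systems
# without the SELinux userspace tools installed.
if command -v getenforce >/dev/null 2>&1; then
  selinux_mode=$(getenforce)
else
  selinux_mode="unknown (getenforce not installed)"
fi
echo "SELinux mode: $selinux_mode"
```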
Kubernetes v1.16.4
Steps to reproduce:
- If coming from another version, reset the cluster first with
$ kubeadm reset
- Remove old Kubernetes packages with
$ yum remove OLD_PACKAGES
- Disable SELinux and restart machine:
$ setenforce 0
$ sed -i 's/^SELINUX=.*$/SELINUX=disabled/' /etc/selinux/config
$ reboot
- Check if SELinux is disabled by invoking command:
$ sestatus
- Install packages with
$ yum install INSERT_PACKAGES_BELOW
- kubelet-1.16.4-0.x86_64
- kubeadm-1.16.4-0.x86_64
- kubectl-1.16.4-0.x86_64
- Initialize the Kubernetes cluster with the following command
$ kubeadm init --pod-network-cidr=10.244.0.0/16
- Wait for the cluster to initialize correctly and follow the kubeadm instructions to connect to it
- Apply the Flannel CNI
$ kubectl apply -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
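To confirm that the 1.16.4 packages really replaced the old ones before going further, the installed versions can be checked (a sketch; rpm is standard on CentOS):

```shell
# Show which kubelet/kubeadm/kubectl RPMs are installed.
if command -v rpm >/dev/null 2>&1; then
  # rpm -q prints either e.g. "kubelet-1.16.4-0.x86_64"
  # or "package kubelet is not installed".
  for pkg in kubelet kubeadm kubectl; do
    rpm -q "$pkg" || true
  done
else
  echo "rpm not available on this machine"
fi
pkg_check=done
```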
Check if the coredns pods are running correctly with the command:
$ kubectl get pods -A
It should give output similar to this:
NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE
kube-system   coredns-5644d7b6d9-fgbkl                      1/1     Running   1          13m
kube-system   coredns-5644d7b6d9-x6h4l                      1/1     Running   1          13m
kube-system   etcd-centos-kube-master                       1/1     Running   1          12m
kube-system   kube-apiserver-centos-kube-master             1/1     Running   1          12m
kube-system   kube-controller-manager-centos-kube-master    1/1     Running   1          12m
kube-system   kube-proxy-v52ls                              1/1     Running   1          13m
kube-system   kube-scheduler-centos-kube-master             1/1     Running   1          12m
Enable SELinux:
From the root account, invoke the following commands to enable SELinux and restart the machine:
$ setenforce 1
$ sed -i 's/^SELINUX=.*$/SELINUX=enforcing/' /etc/selinux/config
$ reboot
After the reboot, the coredns pods should enter the CrashLoopBackOff state as shown below:
NAMESPACE     NAME                                          READY   STATUS             RESTARTS   AGE
kube-system   coredns-5644d7b6d9-fgbkl                      0/1     CrashLoopBackOff   25         113m
kube-system   coredns-5644d7b6d9-x6h4l                      0/1     CrashLoopBackOff   25         113m
kube-system   etcd-centos-kube-master                       1/1     Running            1          112m
kube-system   kube-apiserver-centos-kube-master             1/1     Running            1          112m
kube-system   kube-controller-manager-centos-kube-master    1/1     Running            1          112m
kube-system   kube-proxy-v52ls                              1/1     Running            1          113m
kube-system   kube-scheduler-centos-kube-master             1/1     Running            1          112m
Logs from the pod coredns-5644d7b6d9-fgbkl show:
plugin/kubernetes: open /var/run/secrets/kubernetes.io/serviceaccount/token: permission denied
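This permission denied matches the relabeling issue linked at the top: the serviceaccount token is mounted from a secret volume that is not relabeled on 1.16. On the node you can inspect the labels directly; a sketch, where `<POD_UID>` is a placeholder you would fill in from `kubectl -n kube-system get pod <coredns-pod> -o jsonpath='{.metadata.uid}'`:

```shell
# Inspect SELinux labels on a pod's secret volumes under the kubelet data dir.
# <POD_UID> is a placeholder; the path below will not exist off the node.
dir="/var/lib/kubelet/pods/<POD_UID>/volumes/kubernetes.io~secret"
if [ -d "$dir" ]; then
  ls -RZ "$dir"
else
  echo "no such directory on this machine: $dir"
fi
```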