
I've googled for a few days and haven't found any solutions. I tried to upgrade Kubernetes from 1.19.0 to 1.19.6 on Ubuntu 20 (the cluster was installed manually: k81 is the master and k82 the worker node).

# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[preflight] Some fatal errors occurred:
    [ERROR CoreDNSUnsupportedPlugins]: couldn't retrieve DNS addon deployments: deployments.apps is forbidden: User "system:node:k81" cannot list resource "deployments" in API group "apps" in the namespace "kube-system"
    [ERROR CoreDNSMigration]: couldn't retrieve DNS addon deployments: deployments.apps is forbidden: User "system:node:k81" cannot list resource "deployments" in API group "apps" in the namespace "kube-system"
    [ERROR kubeDNSTranslation]: configmaps "kube-dns" is forbidden: User "system:node:k81" cannot get resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'k81' and this object
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

When I try to list roles and permissions as the kubernetes-admin user, it shows the same permission errors:

~# kubectl get rolebindings,clusterrolebindings --all-namespaces
Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:node:k81" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope
Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:node:k81" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope

I can list pods and cluster nodes:

# kubectl get nodes 
NAME   STATUS   ROLES    AGE    VERSION
k81    Ready    master   371d   v1.19.6
k82    Ready    <none>   371d   v1.19.6

# kubectl get pods --all-namespaces 
NAMESPACE                   NAME                                            READY   STATUS      RESTARTS   AGE
gitlab-managed-apps         gitlab-runner-gitlab-runner-6bf497d6c9-g7rhc    1/1     Running     47         27d
gitlab-managed-apps         prometheus-kube-state-metrics-c6bbb8465-8kls5   1/1     Running     3          27d
ingress-nginx               ingress-nginx-controller-848bfcb64d-r6k6k       1/1     Running     3          27d
kube-system                 coredns-f9fd979d6-6dd42                         1/1     Running     1          24h
kube-system                 coredns-f9fd979d6-zjsnz                         1/1     Running     1          24h
kube-system                 csi-nfs-controller-5bd5cb55bc-76xdm             3/3     Running     69         27d
kube-system                 csi-nfs-controller-5bd5cb55bc-mkwmv             3/3     Running     61         27d
kube-system                 csi-nfs-node-b4v4g                              3/3     Running     18         49d
kube-system                 etcd-k81                                        1/1     Running     30         371d
kube-system                 kube-apiserver-k81                              1/1     Running     54         371d
kube-system                 kube-controller-manager-k81                     1/1     Running     27         5d22h
kube-system                 kube-flannel-ds-l4xkx                           1/1     Running     13         371d
kube-system                 kube-flannel-ds-rdm4n                           1/1     Running     5          371d
kube-system                 kube-proxy-4976l                                1/1     Running     5          371d
kube-system                 kube-proxy-g2fn4                                1/1     Running     11         371d
kube-system                 kube-scheduler-k81                              1/1     Running     330        371d
kube-system                 tiller-deploy-f5c865db5-zlgk9                   1/1     Running     5          27d

# kubectl  logs coredns-f9fd979d6-zjsnz  -n kube-system
Error from server (Forbidden): pods "coredns-f9fd979d6-zjsnz" is forbidden: User "system:node:k81" cannot get resource "pods/log" in API group "" in the namespace "kube-system"
# kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin


# kubectl get csr
No resources found
  • Did you configure any [custom (Cluster)Roles, (Cluster)RoleBindings or users](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in your cluster? – Mikolaj S. Dec 30 '21 at 15:23
  • I don't remember if I configured custom roles, but I was able to configure them earlier, before upgrading. Gitlab-ce was added to this cluster, and GitLab creates a separate namespace, roles, and users for every project. It worked previously. – Ninja Dec 31 '21 at 11:27
  • authorization-mode=Node,RBAC. As I understand it, User "system:node:k81" doesn't have many of the permissions the master-node user needs. But I don't understand: if I connect as the kubernetes-admin user, why does it run as User "system:node:k81"? – Ninja Dec 31 '21 at 11:30
  • >"This cluster was added Gitlab-ce, and gitlab creates for every project separate namespace, roles and users" - could you share steps how it creates roles and users? Probably some permissions are missing. Could you run `kubectl config get-contexts` and paste the output? – Mikolaj S. Jan 03 '22 at 17:05
  • can you provide "kubectl get csr" output? – Vasili Angapov Jan 03 '22 at 17:12
  • I've edited my question and added the command results at the end of the first message – Ninja Jan 04 '22 at 14:29
  • Could you check [this answer](https://stackoverflow.com/questions/66516548/how-to-fix-error-user-cannot-get-resource-deployments-in-api-group-apps-in/66543384#66543384)? Something is broken in your cluster, do you have commands that you used to setup it? – Mikolaj S. Jan 06 '22 at 13:27
  • >This cluster was added Gitlab-ce, and gitlab creates for every project separate namespace, roles and users" - could you share steps how it creates roles and users? – Mikolaj S. Jan 06 '22 at 13:27
  • I installed the cluster in the standard way from the documentation and it worked for 6 months. Installation: [install-kubeadm](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) and [create-cluster-kubeadm](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) – Ninja Jan 13 '22 at 12:46
  • Thank you for your information. Could you please answer the rest of the questions? – Mikolaj S. Jan 13 '22 at 19:08
  • Could you check [this answer](https://stackoverflow.com/questions/66516548/how-to-fix-error-user-cannot-get-resource-deployments-in-api-group-apps-in/66543384#66543384)? – Mikolaj S. Jan 13 '22 at 19:09
  • >This cluster was added Gitlab-ce, and gitlab creates for every project separate namespace, roles and users" - could you share steps how it creates roles and users? – – Mikolaj S. Jan 13 '22 at 19:09
  • @Mikolaj S. yes, I've checked that answer and tried to copy admin.conf and checked whether I was the kubernetes-admin user, but it didn't help. – Ninja Jan 14 '22 at 08:25
  • About gitlab-ce integration with Kubernetes - I've added the certificate and the Kubernetes master IP in the GitLab settings. It creates a new namespace and deploys the project with [this script](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/-/blob/master/src/bin/auto-deploy) – Ninja Jan 14 '22 at 08:42
  • Are you sure that you are running all the commands on the master node? Are you using the same user on the Linux which was used during initialisation of the cluster? Are you using any environment variables, for example `$KUBECONFIG`? – Mikolaj S. Jan 19 '22 at 12:57
  • Could you run `cat ~/.kube/config `and copy the `client-certificate-data` under the `users: - name: kubernetes-admin` to some file and run `cat my-file | base64 -d | openssl x509 -noout --text` and check `Subject: ` ? Could you please [check which kubeconfig file are you using](https://stackoverflow.com/questions/68172643/finding-the-kubeconfig-file-being-used/68172779#68172779)? – Mikolaj S. Jan 19 '22 at 12:57
  • `Issuer: CN = kubernetes Validity Not Before: Dec 23 14:18:54 2020 GMT Not After : Dec 24 10:45:29 2022 GMT Subject: O = system:nodes, CN = system:node:k81` – Ninja Jan 24 '22 at 12:15
  • I don't use $KUBECONFIG and run commands on master node – Ninja Jan 24 '22 at 12:16
  • `# kubectl get pod -v6 2>&1 |awk '/Config loaded from file:/{print $NF}'` `/etc/kubernetes/admin.conf` – Ninja Jan 24 '22 at 12:17
  • 1
    Could you please run [`sudo kubeadm init phase kubeconfig admin --kubeconfig-dir=.` command](https://v1-19.docs.kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-kubeconfig) and then `sudo kubectl get rolebindings,clusterrolebindings --all-namespaces --kubeconfig ./admin.conf` command in the same directory and check if it is working now? – Mikolaj S. Jan 25 '22 at 12:14
  • @Mikolaj S. - it helped and works ! Thank you very much ! – Ninja Jan 27 '22 at 09:37

1 Answer


The solution is to regenerate the kubeconfig file for the admin:

sudo kubeadm init phase kubeconfig admin --kubeconfig-dir=.

The above command creates the admin.conf file in the current directory (say /home/user/testing/), so when running kubectl commands you need to point at it with the --kubeconfig {directory}/admin.conf flag, for example:

sudo kubectl get rolebindings,clusterrolebindings --all-namespaces --kubeconfig /home/user/testing/admin.conf
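For context on why this works (drawn from the comment thread): the old admin.conf embedded a client certificate whose Subject was `O = system:nodes, CN = system:node:k81`, so the API server authenticated every request as the node user instead of kubernetes-admin. A quick way to check which identity a kubeconfig presents is to decode its client-certificate-data and read the Subject. The sketch below builds a throwaway kubeconfig-style file so it is self-contained; against a real cluster you would run the final pipeline against /etc/kubernetes/admin.conf instead:

```shell
# Make a throwaway cert that mimics the broken state: the Subject is the
# node identity (system:node:k81), not the admin identity.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 1 -subj "/O=system:nodes/CN=system:node:k81" 2>/dev/null

# Embed it in a minimal kubeconfig-style file, the way kubeadm stores certs.
printf 'users:\n- name: kubernetes-admin\n  user:\n    client-certificate-data: %s\n' \
  "$(base64 -w0 /tmp/demo.crt)" > /tmp/demo-kubeconfig

# Decode the embedded cert and print its Subject. Run the same pipeline
# against /etc/kubernetes/admin.conf to check a real cluster.
grep 'client-certificate-data' /tmp/demo-kubeconfig | awk '{print $2}' \
  | base64 -d | openssl x509 -noout -subject
```

If the Subject shows `system:node:<hostname>` rather than the admin identity, regenerating the kubeconfig as above is the fix.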

Since kubectl uses the /etc/kubernetes/admin.conf file by default, you can delete it (back it up first if you want to keep it) and create a new one in the /etc/kubernetes directory:

sudo rm /etc/kubernetes/admin.conf
sudo kubeadm init phase kubeconfig admin --kubeconfig-dir=/etc/kubernetes/
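As a sanity check after regenerating, the new admin.conf should embed a certificate whose Subject names the admin identity (kubeadm at this version issues admin credentials as `O = system:masters, CN = kubernetes-admin`), not `system:node:<hostname>`. The snippet below is only a self-contained illustration using a throwaway cert with that expected subject; on the real node you would decode the client-certificate-data from the regenerated /etc/kubernetes/admin.conf:

```shell
# Throwaway cert carrying the subject kubeadm issues for admin credentials
# (illustration only; inspect the cert from the regenerated admin.conf instead).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/admin-demo.key \
  -out /tmp/admin-demo.crt -days 1 \
  -subj "/O=system:masters/CN=kubernetes-admin" 2>/dev/null

# The Subject should show the admin identity; the system:masters group is
# bound to the cluster-admin ClusterRole by a built-in ClusterRoleBinding,
# which is why this credential can list rolebindings cluster-wide.
openssl x509 -in /tmp/admin-demo.crt -noout -subject
```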
Mikolaj S.