
I'm experiencing strange behavior with newly created Kubernetes service accounts: their tokens appear to grant unrestricted access to our cluster.

If I create a new namespace, a new service account inside that namespace, and then use the service account's token in a new kube config, I am able to perform all actions in the cluster.

# SERVER is the only variable you'll need to change to replicate on your own cluster
SERVER=https://k8s-api.example.com
NAMESPACE=test-namespace
SERVICE_ACCOUNT=test-sa

# Create a new namespace and service account
kubectl create namespace "${NAMESPACE}"
kubectl create serviceaccount -n "${NAMESPACE}" "${SERVICE_ACCOUNT}"

# Pull the CA certificate (left base64-encoded) and the token (decoded) from the
# service account's auto-generated token secret
SECRET_NAME=$(kubectl get serviceaccount "${SERVICE_ACCOUNT}" -n "${NAMESPACE}" -o jsonpath='{.secrets[*].name}')
CA=$(kubectl get secret -n "${NAMESPACE}" "${SECRET_NAME}" -o jsonpath='{.data.ca\.crt}')
TOKEN=$(kubectl get secret -n "${NAMESPACE}" "${SECRET_NAME}" -o jsonpath='{.data.token}' | base64 --decode)

# Create the config file using the certificate authority and token from the newly created
# service account
echo "
apiVersion: v1
kind: Config
clusters:
- name: test-cluster
  cluster:
    certificate-authority-data: ${CA}
    server: ${SERVER}
contexts:
- name: test-context
  context:
    cluster: test-cluster
    namespace: ${NAMESPACE}
    user: ${SERVICE_ACCOUNT}
current-context: test-context
users:
- name: ${SERVICE_ACCOUNT}
  user:
    token: ${TOKEN}
" > config

Running the above as a shell script yields a config file in the current directory. The problem is that, using that file, I'm able to read and edit all resources in the cluster. I'd like the newly created service account to have no permissions unless I explicitly grant them via RBAC.

# All pods are shown, including kube-system pods
KUBECONFIG=./config kubectl get pods --all-namespaces

# And I can edit any of them
KUBECONFIG=./config kubectl edit pods -n kube-system some-pod

I haven't added any role bindings to the newly created service account, so I would expect it to receive access denied responses for all kubectl queries using the newly generated config.
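
To be explicit about what I mean by granting permissions via RBAC: this is the kind of narrow, deliberate grant I would expect to have to create before test-sa could do anything at all (example only, with made-up role and binding names; I have not applied this):

# Example only (not applied): an explicit, namespace-scoped grant for the new service account
kubectl create role pod-reader -n "${NAMESPACE}" --verb=get,list,watch --resource=pods
kubectl create rolebinding test-sa-pod-reader -n "${NAMESPACE}" \
  --role=pod-reader \
  --serviceaccount="${NAMESPACE}:${SERVICE_ACCOUNT}"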

Below is the decoded payload of the test-sa service account's JWT that's embedded in config:

{
  "iss": "kubernetes/serviceaccount",
  "kubernetes.io/serviceaccount/namespace": "test-namespace",
  "kubernetes.io/serviceaccount/secret.name": "test-sa-token-fpfb4",
  "kubernetes.io/serviceaccount/service-account.name": "test-sa",
  "kubernetes.io/serviceaccount/service-account.uid": "7d2ecd36-b709-4299-9ec9-b3a0d754c770",
  "sub": "system:serviceaccount:test-namespace:test-sa"

}

Things to consider...

  • RBAC seems to be enabled in the cluster, as I see rbac.authorization.k8s.io/v1 and rbac.authorization.k8s.io/v1beta1 in the output of kubectl api-versions | grep rbac, as suggested in this post. It is notable that kubectl cluster-info dump | grep authorization-mode, as suggested in another answer to the same question, shows no output. Could this suggest RBAC isn't actually enabled? (See also the legacy ABAC check just after this list.)
  • My user has cluster-admin privileges, but I would not expect those to carry over to service accounts created while using it.
  • We're running our cluster on GKE.
  • As far as I'm aware, we don't have any unorthodox RBAC roles or bindings in the cluster that would cause this. I could be missing something or am generally unaware of K8s RBAC configurations that would cause this.
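
Since legacy ABAC on GKE could also explain overly broad access, one check I still want to run is whether it's enabled on our cluster (the gcloud field name below is my reading of the docs, and the cluster name/zone are placeholders):

# Prints True if legacy ABAC is enabled on the GKE cluster; empty/False on an RBAC-only cluster
gcloud container clusters describe my-cluster --zone us-central1-a \
  --format='value(legacyAbac.enabled)'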

Am I correct in my assumption that newly created service accounts should have extremely limited cluster access, and the above scenario shouldn't be possible without permissive role bindings being attached to the new service account? Any thoughts on what's going on here, or ways I can restrict the access of test-sa?

Brannon
  • Try running this command to make sure kubectl is using your new service account: `kubectl config view --template='{{ range .contexts }}{{ if eq .name "'$(kubectl config current-context)'" }}Current user: {{ .context.user }}{{ end }}{{ end }}'` – Mr.KoopaKiller Mar 09 '20 at 10:57
  • Yep, the `kubectl` is indeed using the correct context: `Current user: test-sa`. – Brannon Mar 09 '20 at 14:44
  • Weird, because I've run exactly the same commands you posted, and for me the test commands showed `Error from server (Forbidden): pods "some-pod" is forbidden: User "system:serviceaccount:test-namespace:test-sa" cannot get resource "pods" in API group "" in the namespace "kube-system"`. I already checked with `kubectl auth can-i` and my result doesn't have the `*.*` resources... I'm trying to figure out why this is happening. What's your Kubernetes version? – Mr.KoopaKiller Mar 09 '20 at 15:06
  • I've just created a brand new GKE cluster (version 1.15.x, same as our other cluster that is experiencing the issue), and it appears that newly created service accounts do not have the same overly permissive access in this stock cluster as they do in the cluster exhibiting the issue. We receive the same forbidden error you describe, and we also don't have the `*.*` rule listed in the `kubectl auth can-i` command. This makes me think @ArghyaSadhu is correct that something has been configured improperly in our other cluster. – Brannon Mar 09 '20 at 15:55

3 Answers


You can check the permissions of the service account by running this command:

kubectl auth can-i --list --as=system:serviceaccount:test-namespace:test-sa

If you see the output below, that is the very limited set of permissions a service account gets by default.

Resources                                       Non-Resource URLs   Resource Names   Verbs
selfsubjectaccessreviews.authorization.k8s.io   []                  []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                  []               [create]
                                                [/api/*]            []               [get]
                                                [/api]              []               [get]
                                                [/apis/*]           []               [get]
                                                [/apis]             []               [get]
                                                [/healthz]          []               [get]
                                                [/healthz]          []               [get]
                                                [/livez]            []               [get]
                                                [/livez]            []               [get]
                                                [/openapi/*]        []               [get]
                                                [/openapi]          []               [get]
                                                [/readyz]           []               [get]
                                                [/readyz]           []               [get]
                                                [/version/]         []               [get]
                                                [/version/]         []               [get]
                                                [/version]          []               [get]
                                                [/version]          []               [get]
Arghya Sadhu
  • This is a very useful command! Here is the output from our cluster: https://pastebin.com/raw/zXbk8fR6. The two lines at the top look particularly suspicious, as they aren't in your example and they appear to be giving wide permissions across cluster resources. – Brannon Mar 07 '20 at 15:09
  • Any idea what would cause brand new service accounts to receive permissions like this? – Brannon Mar 07 '20 at 15:11
  • Do you have an external webhook that might be doing RBAC, i.e. assigning an admin role to every service account? What do you get if you run kubectl get rolebinding? – Arghya Sadhu Mar 07 '20 at 15:14
  • `kubectl get rolebinding`, in both the `default` and `test-namespace` namespaces, doesn't show any unexpected role bindings. Investigating whether we are doing auth with an external webhook... – Brannon Mar 09 '20 at 14:58

I could not reproduce your issue on three different K8s versions in my test lab (including v1.15.3, v1.14.10-gke.17, and v1.11.7-gke.12, with basic auth enabled).

Unfortunately, token-based login activity is not recorded in the audit logs of the Cloud Logging console for GKE clusters :(.

To my knowledge, only data-access operations that go through Google Cloud are recorded (IAM-based, i.e. kubectl using the Google auth provider).

If your "test-sa" service account is somehow being permitted to perform specific operations, I would still try to study the audit logs of your GKE cluster. Maybe your service account is somehow being mapped to a Google service account and authorized that way.

You can always contact the official GCP support channel to troubleshoot your unusual case further.
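
As a starting point, something like this should pull recent Kubernetes API audit entries for a GKE cluster (the filter fields reflect my understanding of how GKE audit logs are labelled, and the cluster name is a placeholder):

# Fetch recent Kubernetes API server audit entries for the cluster from Cloud Logging
gcloud logging read \
  'resource.type="k8s_cluster" AND resource.labels.cluster_name="YOUR_CLUSTER" AND protoPayload.serviceName="k8s.io"' \
  --limit=20 --format=json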

Nepomucen

It turns out a ClusterRoleBinding was granting the overly permissive cluster-admin ClusterRole to the system:serviceaccounts group. This resulted in every service account in our cluster having cluster-admin privileges.

It seems that somewhere early in the cluster's life, the following ClusterRoleBinding was created:

kubectl create clusterrolebinding serviceaccounts-cluster-admin --clusterrole=cluster-admin  --group=system:serviceaccounts

WARNING: Never apply this rule to your cluster ☝️

We have since removed this overly permissive rule and rightsized all service account permissions.
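
If you want to check your own cluster for something similar, this is roughly how we confirmed and cleaned it up (assumes jq is installed; the binding name at the end is just what ours happened to be called):

# List ClusterRoleBindings whose subjects include the system:serviceaccounts group
kubectl get clusterrolebindings -o json \
  | jq -r '.items[] | select(any(.subjects[]?; .name == "system:serviceaccounts")) | .metadata.name'

# Review, then delete, the offending binding
kubectl describe clusterrolebinding serviceaccounts-cluster-admin
kubectl delete clusterrolebinding serviceaccounts-cluster-admin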

Thank you to the folks that provided useful answers and comments to this question. They were helpful in determining this issue. This was a very dangerous RBAC configuration and we are pleased to have it resolved.

Brannon