
I have set up a fresh AWS SSO instance (using the internal IdP as the identity source, so no Active Directory involved).
I can log in to the AWS CLI and the AWS console, but I cannot perform any kubectl operations:

error: You must be logged in to the server (Unauthorized)

I suspect this has something to do with RBAC, since I am able to get an EKS token via aws eks get-token.

➜ cat ~/.aws/config

[profile team-sso-admin]
sso_start_url=https://team.awsapps.com/start
sso_region=us-west-2
sso_account_id=1111111111
sso_role_name=AdministratorAccess
region=us-west-2
credential_process = aws-vault exec team-sso-admin --json


➜ aws-vault exec team-sso-admin --debug -- zsh --login
➜ env | grep AWS
AWS_VAULT_PROMPT=pass
AWS_VAULT_BACKEND=pass
AWS_VAULT=team-sso-admin
AWS_DEFAULT_REGION=us-west-2
AWS_REGION=us-west-2
AWS_ACCESS_KEY_ID=xxx
AWS_SECRET_ACCESS_KEY=xxx
AWS_SESSION_TOKEN=xxx
AWS_SECURITY_TOKEN=yyy
AWS_SESSION_EXPIRATION=2021-01-11T05:55:51Z
AWS_SDK_LOAD_CONFIG=1

➜ aws sts get-caller-identity --output yaml 

Account: '111111111111'
Arn: arn:aws:sts::111111111111:assumed-role/AWSReservedSSO_AdministratorAccess_6c71da2aa3076dfb/TestUser
UserId: XXX:TestUser

➜ aws eks get-token --cluster-name team-shared-eks --role arn:aws:iam::111111111111:role/aws-reserved/sso.amazonaws.com/us-west-2/AWSReservedSSO_AdministratorAccess_67d1da2aa3076dfb

{"kind": "ExecCredential", "apiVersion": "client.authentication.k8s.io/v1alpha1", "spec": {}, "status": {"expirationTimestamp": "2021-01-11T02:49:11Z", "token": "xxx"}}

kubeconfig (`~/.kube/config`):

- name: arn:aws:eks:us-west-2:111111111111:cluster/team-shared-eks
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-west-2
      - eks
      - get-token
      - --cluster-name
      - team-shared-eks
      - --role
      - arn:aws:iam::111111111111:role/aws-reserved/sso.amazonaws.com/us-west-2/AWSReservedSSO_AdministratorAccess_67d1da2aa3076dfb
      command: aws
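For reference, a stanza like the one above is what aws eks update-kubeconfig writes, so regenerating it is usually safer than hand-editing (the role ARN below is the one from the question; --role-arn can be omitted to authenticate as the caller identity instead):

```shell
# Regenerate the kubeconfig entry for this cluster (merges into ~/.kube/config).
# Requires live AWS credentials; shown as a sketch, not run here.
aws eks update-kubeconfig \
  --region us-west-2 \
  --name team-shared-eks \
  --role-arn arn:aws:iam::111111111111:role/aws-reserved/sso.amazonaws.com/us-west-2/AWSReservedSSO_AdministratorAccess_67d1da2aa3076dfb
```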

aws-auth ConfigMap:

mapRoles: |
    - "groups":
      - "system:bootstrappers"
      - "system:nodes"
      "rolearn": "arn:aws:iam::111111111111:role/team-shared-eks20210110051740674200000009"
      "username": "system:node:{{EC2PrivateDNSName}}"
    - "groups":
      - "system:master"
      "rolearn": "arn:aws:iam::111111111111:role/team-saml-devops"
      "username": "team-devops"
    - "groups":
      - "system:master"
      "rolearn": "arn:aws:iam::111111111111:role/aws-reserved/sso.amazonaws.com/us-west-2/AWSReservedSSO_AdministratorAccess_67d1da2aa3076dfb"
      "username": "team-sso-devops"

ClusterRoleBinding for the team-sso-devops user:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: "2021-01-11T01:37:51Z"
  name: team:sso:devops
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: team-sso-devops
  namespace: default
DmitrySemenov
  • what is the namespace of your current context (`kubectl config current-context`)? Is it the same context/namespace you are trying to reach? – yosefrow Jan 15 '21 at 03:11

3 Answers


Option #1 - Try removing `aws-reserved/sso.amazonaws.com/$region/` from the role ARN in the aws-auth mapping
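The trim for Option #1 looks like this (the sed expression is just one way to do it; the ARN is the one from the question):

```shell
# Strip the aws-reserved/sso.amazonaws.com/<region>/ path segment from an SSO
# role ARN so it reads like a plain role ARN: arn:aws:iam::<acct>:role/<name>
ROLE_ARN='arn:aws:iam::111111111111:role/aws-reserved/sso.amazonaws.com/us-west-2/AWSReservedSSO_AdministratorAccess_67d1da2aa3076dfb'
TRIMMED=$(printf '%s' "$ROLE_ARN" | sed 's#role/aws-reserved/sso\.amazonaws\.com/[^/]*/#role/#')
echo "$TRIMMED"
# arn:aws:iam::111111111111:role/AWSReservedSSO_AdministratorAccess_67d1da2aa3076dfb
```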

Option #2 - Use aws-iam-authenticator, the official docs provide a thorough example of how to use SSO and kubectl (kubeconfig)

luk2302
Meir Gabay

The OP accepted another post as the answer but didn't share the methodology used. For others who may run into this, I'm posting what I got to work under the same scenario:

Following tips from this blog:

  1. use the AWS console or a similar method to verify the role and get its ARN
  2. modify the supplied ARN, trimming out the excess path information. In my case I had to remove aws-reserved/sso.amazonaws.com/us-west-2/ from the ARN. The goal is to make the ARN "read" like a traditional role ARN, e.g. arn:aws:iam::123456789012:role/RoleName
  3. finally, update the aws-auth mapRoles to use this new ARN, and modify the username to contain the session name, like so:
- "groups":
  - "system:masters"
  "rolearn": "arn:aws:iam::123456789012:role/AWSReservedSSO_AWSAdministratorAccess_randomdigits"
  "username": "AWSAdministratorAccess:{{SessionName}}"

Reminder: this mapRoles entry is in addition to the existing roles; do not delete the bootstrapper entry.
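Once the mapping is in place, a quick way to confirm it took effect (these are standard kubectl commands, shown as a sketch since they need live cluster access):

```shell
# Inspect the live mapping to verify the edit landed as expected.
kubectl -n kube-system get configmap aws-auth -o yaml

# RBAC probe: with a working system:masters mapping this should print "yes".
kubectl auth can-i '*' '*' --all-namespaces
```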

I hope this helps others!


As @Meir mentioned, there is no need to update the kubeconfig:

  1. update the aws-auth ConfigMap with the following:

     mapRoles: |
       - "groups":
         - "system:masters"
         "rolearn": "arn:aws:iam::{{AWS_ACCOUNT}}:role/{{AWSSSO_ROLE}}"
         "username": "admin:{{SessionName}}"

  2. use a temporary AWS key, secret, and session token in your environment,

and you should be ready to go.