20

Don't know if this is an error from AWS or something. I created an IAM user and gave it full admin policies. I then used this user to create an EKS cluster with the eksctl CLI, but when I logged in to the AWS console as the root user I got the error below while trying to access the cluster nodes.

Your current user or role does not have access to Kubernetes objects on this EKS cluster. This may be due to the current user or role not having Kubernetes RBAC permissions to describe cluster resources or not having an entry in the cluster’s auth config map.

I have these questions:

  1. Doesn't the root user have full access to view every resource in the console?
  2. If so, does that mean that when I create a resource from the CLI I must log in with the same user to view it?
  3. Or is there a way to attach policies to the root user? I didn't see anything like that in the console.

AWS itself recommends against creating access keys for the root user and using them for programmatic access, so I'm quite confused right now. Can someone help?

All the questions I have seen so far, and the doc linked here, talk about a user or role created in AWS IAM, not the root user.

benyusouf
  • 325
  • 2
  • 6
  • 17

7 Answers

39

If you're logged in as the root user and get this error, run the command below to edit the aws-auth ConfigMap:

kubectl edit configmap aws-auth -n kube-system

Then go down to mapUsers and add the following (replace [account_id] with your account ID):

mapUsers: |
  - userarn: arn:aws:iam::[account_id]:root
    groups:
    - system:masters
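
After saving, a quick sanity check that the new entry is in place (the output should now include your root mapping):

kubectl describe configmap aws-auth -n kube-system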
Idrizi.A
  • 9,819
  • 11
  • 47
  • 88
  • 1
    My EKS cluster is created by a Cloudformation template, not through any CLI and I am getting the exact same issue. Where do I run the command `kubectl edit configmap aws-auth -n kube-system` ? – Sagar Oct 28 '22 at 17:43
  • 1
    Worth mentioning that you probably won't have the "mapUsers" entry, so you'll need to add it at the same indentation level as "mapRoles". – Alexandre T. Jan 21 '23 at 13:15
9

From what I've understood, EKS manages user and role permissions through a ConfigMap called aws-auth that resides in the kube-system namespace. So even if you're logged in with an AWS user that has full administrator access to all services, EKS will still limit your access in the console, since it can't find that user or role in its authentication configuration.
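
You can inspect what's currently in that ConfigMap (assuming kubectl is already configured against the cluster):

kubectl get configmap aws-auth -n kube-system -o yaml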

When I had this issue, I solved it by editing the aws-auth ConfigMap, adding the following under mapRoles:

- "groups":
  - "system:masters"
  "rolearn": "arn:aws:iam::<aws-account-id>:role/<aws-role-name>"
  "username": "<aws-username>"

Where aws-role-name is the role name shown in the top right corner when you're logged in to the AWS console.
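
If you're not sure which identity your CLI is using, STS can tell you; for an assumed role the ARN comes back in assumed-role form, which contains the role name:

aws sts get-caller-identity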

I guess this could also be done with the eksctl utility, as documented here: https://eksctl.io/usage/iam-identity-mappings/

So, maybe something like:

eksctl create iamidentitymapping --cluster <clusterName> --region=<region> --arn arn:aws:iam::<aws-account-id>:role/<aws-role-name> --group system:masters --username <aws-username>

Using eksctl is probably a better way of doing it, though I haven't tried it myself.
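
Either way, you can confirm the mapping landed (same placeholders as above):

eksctl get iamidentitymapping --cluster <clusterName> --region <region>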

Hans Melby
  • 91
  • 1
  • I used the second option, I could see the configmap but it didn't solve the root user permission error – benyusouf Jan 21 '22 at 13:44
  • 1
    My EKS cluster is created by a Cloudformation template, not through any CLI and I am getting the exact same issue. Where do I run the command to edit aws-auth ? – Sagar Oct 28 '22 at 17:44
3

I had this issue today, and solved it by combining answers here. The aws-auth config after it worked looks like this:

apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::671177010163:role/eksctl-manu-eks-new2-nodegroup-ng-NodeInstanceRole-1NYUHVMYFP2TK
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: "- groups: \n  - system:masters\n  userarn: arn:aws:iam::671177010163:root\n"
kind: ConfigMap
metadata:
  creationTimestamp: "2022-02-13T11:03:30Z"
  name: aws-auth
  namespace: kube-system
  resourceVersion: "11362"
  uid: ac36a1d9-76bc-40dc-95f0-b1e7934357
2

It looks like the IAM user you're signed in to the AWS Management Console with (or the role that you switched to after signing in) doesn't have the necessary permissions.

Here is the pure AWS way to solve this (no manual config editing), recommended by AWS:

https://docs.aws.amazon.com/eks/latest/userguide/view-kubernetes-resources.html#view-kubernetes-resources-permissions

  1. Create an IAM policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "eks:ListFargateProfiles",
                "eks:DescribeNodegroup",
                "eks:ListNodegroups",
                "eks:ListUpdates",
                "eks:AccessKubernetesApi",
                "eks:ListAddons",
                "eks:DescribeCluster",
                "eks:DescribeAddonVersions",
                "eks:ListClusters",
                "eks:ListIdentityProviderConfigs",
                "iam:ListRoles"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "ssm:GetParameter",
            "Resource": "arn:aws:ssm:*:111122223333:parameter/*"
        }
    ]
}  
  2. Create an IAM role and attach the policy to it, and/or attach the policy directly to your IAM user (see the CLI sketch after the note below).
  3. Create a Kubernetes rolebinding or clusterrolebinding. To view all Kubernetes resources in EKS, run:
kubectl apply -f https://s3.us-west-2.amazonaws.com/amazon-eks/docs/eks-console-full-access.yaml

Note that this config uses the group name eks-console-dashboard-full-access-group (you'll use it later).
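
For step 2, a minimal AWS CLI sketch (the policy name and file name here are just illustrative):

aws iam create-policy \
    --policy-name eks-console-view-policy \
    --policy-document file://eks-console-view-policy.json
aws iam attach-role-policy \
    --role-name my-console-viewer-role \
    --policy-arn arn:aws:iam::111122223333:policy/eks-console-view-policy
aws iam attach-user-policy \
    --user-name my-user \
    --policy-arn arn:aws:iam::111122223333:policy/eks-console-view-policy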

  4. Add a mapping for the role (from step 2):
eksctl create iamidentitymapping \
    --cluster my-cluster \
    --region=region-code \
    --arn arn:aws:iam::111122223333:role/my-console-viewer-role \
    --group eks-console-dashboard-full-access-group \
    --no-duplicate-arns

Replace the cluster name, region, and role ARN here.

  5. Add a mapping for the user (also from step 2):
eksctl create iamidentitymapping \
    --cluster my-cluster \
    --region=region-code \
    --arn arn:aws:iam::111122223333:user/my-user \
    --group eks-console-dashboard-full-access-group \
    --no-duplicate-arns

That's all. You can now view the mappings in the ConfigMap:

eksctl get iamidentitymapping --cluster my-cluster --region=region-code

You should see both the role and the user in the mappings.

After that you'll be able to see Kubernetes resources in the AWS Management Console.

Ihor Konovalenko
  • 1,298
  • 2
  • 16
  • 21
2

Grant RBAC permissions to the IAM principal (allow it to view EKS).

Step One

Method 1: AWS CLI

  • Create a policy that includes the necessary permissions for a principal to view Kubernetes resources for all clusters in your account. Replace 111122223333 below with your AWS account ID:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "eks:ListFargateProfiles",
                "eks:DescribeNodegroup",
                "eks:ListNodegroups",
                "eks:ListUpdates",
                "eks:AccessKubernetesApi",
                "eks:ListAddons",
                "eks:DescribeCluster",
                "eks:DescribeAddonVersions",
                "eks:ListClusters",
                "eks:ListIdentityProviderConfigs",
                "iam:ListRoles"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "ssm:GetParameter",
            "Resource": "arn:aws:ssm:*:111122223333:parameter/*"
        }
    ]
}
  • Create the EKS Connector IAM role with its policy. Trust policy for AmazonEKSConnectorAgentRole:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ssm.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

AmazonEKSConnectorAgentPolicy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SsmControlChannel",
            "Effect": "Allow",
            "Action": [
                "ssmmessages:CreateControlChannel"
            ],
            "Resource": "arn:aws:eks:*:*:cluster/*"
        },
        {
            "Sid": "ssmDataplaneOperations",
            "Effect": "Allow",
            "Action": [
                "ssmmessages:CreateDataChannel",
                "ssmmessages:OpenDataChannel",
                "ssmmessages:OpenControlChannel"
            ],
            "Resource": "*"
        }
    ]
}
  • Create the Amazon EKS Connector agent role using the trust policy and policy you created in the previous list items.
aws iam create-role \
     --role-name AmazonEKSConnectorAgentRole \
     --assume-role-policy-document file://eks-connector-agent-trust-policy.json
  • Attach the policy to your Amazon EKS Connector agent role.
aws iam put-role-policy \
     --role-name AmazonEKSConnectorAgentRole \
     --policy-name AmazonEKSConnectorAgentPolicy \
     --policy-document file://eks-connector-agent-policy.json
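
You can verify the role and its inline policy afterwards with standard IAM read calls:

aws iam get-role --role-name AmazonEKSConnectorAgentRole
aws iam list-role-policies --role-name AmazonEKSConnectorAgentRole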

Method 2: Terraform

//https://docs.aws.amazon.com/eks/latest/userguide/view-kubernetes-resources.html#view-kubernetes-resources-permissions
//create EKSViewResourcesPolicy
resource "aws_iam_policy" "eks_view_resources_policy" {
  name        = "EKSViewResourcesPolicy"
  description = "Policy to allow a principal to view Kubernetes resources for all clusters in the account"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "eks:ListFargateProfiles",
          "eks:DescribeNodegroup",
          "eks:ListNodegroups",
          "eks:ListUpdates",
          "eks:AccessKubernetesApi",
          "eks:ListAddons",
          "eks:DescribeCluster",
          "eks:DescribeAddonVersions",
          "eks:ListClusters",
          "eks:ListIdentityProviderConfigs",
          "iam:ListRoles"
        ]
        Resource = "*"
      },
      {
        Effect   = "Allow"
        Action   = "ssm:GetParameter"
        Resource = "arn:aws:ssm:*:${var.aws_account_id}:parameter/*"
      }
    ]
  })
}


//https://docs.aws.amazon.com/eks/latest/userguide/connector_IAM_role.html
// create AmazonEKSConnectorAgentRole and AmazonEKSConnectorAgentPolicy
resource "aws_iam_role" "eks_connector_agent_role" {
  name = "AmazonEKSConnectorAgentRole"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Principal = {
          Service = "ssm.amazonaws.com"
        }
        Action = "sts:AssumeRole"
      }
    ]
  })
}

resource "aws_iam_policy" "eks_connector_agent_policy" {
  name = "AmazonEKSConnectorAgentPolicy"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "SsmControlChannel"
        Effect = "Allow"
        Action = [
          "ssmmessages:CreateControlChannel"
        ]
        Resource = "arn:aws:eks:*:*:cluster/*"
      },
      {
        Sid    = "ssmDataplaneOperations"
        Effect = "Allow"
        Action = [
          "ssmmessages:CreateDataChannel",
          "ssmmessages:OpenDataChannel",
          "ssmmessages:OpenControlChannel"
        ]
        Resource = "*"
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "eks_cluster_policy_attachment" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.eks_connector_agent_role.name
}

resource "aws_iam_role_policy_attachment" "eks_connector_agent_custom_policy_attachment" {
  policy_arn = aws_iam_policy.eks_connector_agent_policy.arn
  role       = aws_iam_role.eks_connector_agent_role.name
}
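
Then roll it out with the usual Terraform workflow (assuming var.aws_account_id is defined in your variables):

terraform init
terraform plan
terraform apply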

Step Two

  • Once you have created the EKS cluster, update the kubeconfig:
aws eks update-kubeconfig --region <your region name> --name <your eks cluster name>

  • Create a Kubernetes rolebinding or clusterrolebinding that is bound to a Kubernetes role or clusterrole with the necessary permissions to view the Kubernetes resources. To view Kubernetes resources in all namespaces:
kubectl apply -f https://s3.us-west-2.amazonaws.com/amazon-eks/docs/eks-console-full-access.yaml

To view Kubernetes resources in a specific namespace:

kubectl apply -f https://s3.us-west-2.amazonaws.com/amazon-eks/docs/eks-console-restricted-access.yaml

Or use a customized version: download the file, edit it as needed (saved here as rbac.yaml), and apply it:

curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/docs/eks-console-full-access.yaml

kubectl apply -f rbac.yaml

rbac.yaml:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: reader
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: reader
subjects:
  - kind: Group
    name: reader
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: reader
  apiGroup: rbac.authorization.k8s.io
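
After applying, a quick check that kubectl reaches the cluster and that the customized binding exists:

kubectl get nodes
kubectl get clusterrole reader
kubectl get clusterrolebinding reader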

Step Three

  • Map the IAM principal to the Kubernetes user or group in the aws-auth ConfigMap

kubectl edit -n kube-system configmap/aws-auth

add:

mapUsers: |
  - groups:
    - reader
    userarn: arn:aws:iam::1111111111:user/admin
    username: admin
mapRoles: |
  - groups:
    - reader
    rolearn: arn:aws:iam::11111111:role/AmazonEKSConnectorAgentRole
    username: AmazonEKSConnectorAgentRole
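
This mapping handles the IAM-to-Kubernetes translation; the RBAC half can be sanity-checked via impersonation (the admin username and reader group come from the snippets above). The first command should print yes and the second no, given the read-only reader role:

kubectl auth can-i list pods --as admin --as-group reader
kubectl auth can-i create pods --as admin --as-group reader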
  • This is a great detailed answer and helped me thanks. The issue I had in particular was because I had a user from one AWS account that had switched roles (assuming a role) in another AWS account. I added the rolearn of the assumed role with the groups `system:master` ` - rolearn: arn:aws:iam::1111111111:role/DevelopAccountAccessRole groups: - system:masters ` – mrjamesmyers Jul 28 '23 at 23:51
1

The IAM user you use to log into the EKS console has to be given explicit permissions, via Kubernetes Role-based access control (RBAC), to access EKS resources even if the IAM user has "root" privileges.

To fix this issue, make sure you have kubectl installed on your system.

kubectl version

If you don't have kubectl installed on your system, you can install it by using brew on macOS:

brew install kubectl

Otherwise, follow the instructions here for your operating system, e.g. Linux.

The next step is to use the AWS CLI to authenticate kubectl:

aws eks update-kubeconfig --name MyEksCluster --region us-west-2 --role-arn arn:aws:iam::[accountNumber]:role/[EksMastersRole]

If you used the standard CDK constructs to deploy your EKS cluster, this command will be displayed in the CDK output for you to copy. Otherwise, you can find the parts you need (i.e. name, region, role-arn) to construct it by logging into the EKS console.

The final step is to edit your aws-auth ConfigMap. To do so:

kubectl edit configmap aws-auth -n kube-system

And update the mapUsers section as follows.

  mapUsers: |
    - userarn: arn:aws:iam::[accountId]:user/[username]
      groups:
      - system:masters

You can find your userarn by logging in to the AWS console and going to Users > [username]. You will find it under the Summary section.
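
Alternatively, the CLI can print it, assuming your credentials are allowed to read IAM:

aws iam get-user --user-name [username]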

Now you can navigate to EKS and it should work as expected.

jmurzy
  • 3,816
  • 1
  • 17
  • 11
0

For people using Terraform, you can use the terraform-aws-modules/eks/aws module to create your cluster: https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest?tab=inputs

Then use the manage_aws_auth_configmap and aws_auth_users arguments, like:

module "eks_cluster" {
source = "terraform-aws-modules/eks/aws"
...
manage_aws_auth_configmap = true
aws_auth_users            = ["arn:aws:iam::1111111111:root"]
...
}
Geoffrey
  • 69
  • 1
  • 2
  • 5