I created an EKS cluster from an EC2 instance with my-cluster-role added to instance profile using aws cli:
aws eks create-cluster --name my-cluster --role-arn arn:aws:iam::012345678910:role/my-cluster-role --resources-vpc-config subnetIds=subnet-abcd123,subnet-wxyz345,securityGroupIds=sg-123456,endpointPublicAccess=false,endpointPrivateAccess=true
Kubeconfig file:
aws eks --region us-east-1 update-kubeconfig --name my-cluster
But while trying to access Kubernetes resources, I get below error:
[root@k8s-mgr ~]# kubectl get deployments --all-namespaces
Error from server (Forbidden): deployments.apps is forbidden: User "system:node:i-xxxxxxxx" cannot list resource "deployments" in API group "apps" at the cluster scope
Except for pods and services, no other resource is accessible.
Note that the cluster was created using the role my-cluster-role, as per the documentation, this role should have permissions to access the resources.
[root@k8s-mgr ~]# aws sts get-caller-identity
{
"Account": "012345678910",
"UserId": "ABCDEFGHIJKKLMNO12PQR:i-xxxxxxxx",
"Arn": "arn:aws:sts::012345678910:assumed-role/my-cluster-role/i-xxxxxxxx"
}
Edit: Tried creating ClusterRole and ClusterRoleBinding as suggested here: https://stackoverflow.com/a/70125670/7654693
Error:
[root@k8s-mgr]# kubectl apply -f access.yaml
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=clusterroles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRole"
Name: "eks-console-dashboard-full-access-clusterrole", Namespace: ""
from server for: "access.yaml": clusterroles.rbac.authorization.k8s.io "eks-console-dashboard-full-access-clusterrole" is forbidden: User "system:node:i-xxxxxxxx" cannot get resource "clusterroles" in API group "rbac.authorization.k8s.io" at the cluster scope
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
Name: "eks-console-dashboard-full-access-binding", Namespace: ""
Below is my Kubeconfig:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: CERT
server: SERVER ENDPOINT
name: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
contexts:
- context:
cluster: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
user: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
name: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
current-context: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
args:
- --region
- us-east-1
- eks
- get-token
- --cluster-name
- my-cluster
command: aws