
I tested a Kubernetes deployment with EBS volume mounting on an AWS cluster provisioned by kops. This is the deployment YAML file:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld-deployment-volume
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: k8s-demo
        image: wardviaene/k8s-demo
        ports:
        - name: nodejs-port
          containerPort: 3000
        volumeMounts:
        - mountPath: /myvol
          name: myvolume
      volumes:
      - name: myvolume
        awsElasticBlockStore:
          volumeID: <volume_id>

After kubectl create -f <path_to_this_yml>, I got the following message in the pod description:

Attach failed for volume "myvolume" : Error attaching EBS volume "XXX" to instance "YYY": "UnauthorizedOperation: You are not authorized to perform this operation. status code: 403
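(For reference, the events can be read with something like the following; the pod name suffix is a placeholder:)

# List the pods created by the deployment to get the generated pod name
kubectl get pods

# Show the pod's events, including the volume attach failures
kubectl describe pod helloworld-deployment-volume-<pod_suffix>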

Looks like this is just a permissions issue. OK, I checked the policy for the node role (IAM -> Roles -> nodes.<my_domain>) and found that there were no actions allowing volume manipulation; by default there was only the ec2:DescribeInstances action. So I added the AttachVolume and DetachVolume actions:

    {
        "Sid": "kopsK8sEC2NodePerms",
        "Effect": "Allow",
        "Action": [
            "ec2:DescribeInstances",
            "ec2:AttachVolume",
            "ec2:DetachVolume"
        ],
        "Resource": [
            "*"
        ]
    },
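(For completeness, the same policy update could presumably also be applied from the AWS CLI; this is just a sketch, and the inline policy name is a placeholder:)

# List the node role's inline policies to find the right policy name
aws iam list-role-policies --role-name nodes.<my_domain>

# Download the current policy document for editing
aws iam get-role-policy --role-name nodes.<my_domain> --policy-name <policy_name>

# Push back the edited document containing the ec2:AttachVolume / ec2:DetachVolume actions
aws iam put-role-policy --role-name nodes.<my_domain> --policy-name <policy_name> --policy-document file://node-policy.json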

That didn't help; I'm still getting the same error:

Attach failed for volume "myvolume" : Error attaching EBS volume "XXX" to instance "YYY": "UnauthorizedOperation: You are not authorized to perform this operation.

Am I missing something?


1 Answer


I found a solution. It's described here.

In kops 1.8.0-beta.1, the master node requires the AWS volume to be tagged with:

KubernetesCluster: <clustername-here>

So it's necessary to create the EBS volume with that tag using the AWS CLI:

aws ec2 create-volume --size 10 --region eu-central-1 --availability-zone eu-central-1a --volume-type gp2 --tag-specifications 'ResourceType=volume,Tags=[{Key=KubernetesCluster,Value=<clustername-here>}]'

or you can tag it manually in EC2 -> Volumes -> Your volume -> Tags.
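If the volume already exists, it should also be possible to add the tag from the CLI, something like this (the volume ID is a placeholder):

# Tag an existing EBS volume with the cluster name so the master is allowed to attach it
aws ec2 create-tags --resources <volume_id> --tags Key=KubernetesCluster,Value=<clustername-here>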

That's it.

EDIT:

The right cluster name can be found in the tags of the EC2 instances that are part of the cluster. The key is the same: KubernetesCluster.
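For example, something like this should print the tag from one of the cluster's instances (the instance ID is a placeholder):

# Look up the KubernetesCluster tag on an EC2 instance that belongs to the cluster
aws ec2 describe-tags --filters "Name=resource-id,Values=<instance_id>" "Name=key,Values=KubernetesCluster"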

Ivan Aracki
Artsiom Praneuski
  • Thanks for the solution, but I'm getting a new error: Unable to mount volumes for pod "XXX": timeout expired waiting for volumes to attach/mount for pod "XXX"/"XXX". list of unattached/unmounted volumes=[my-volume]. Did you experience the same error? – aprisniak Mar 10 '18 at 19:59
  • Interesting. No, I didn't encounter such an issue; I have no idea what it's related to. – Artsiom Praneuski Mar 11 '18 at 20:56
  • I found the solution. The issue was due to the XFS filesystem. This bug is related to the Kubernetes issue. – aprisniak Mar 12 '18 at 18:32