Warning FailedMount 23m (x55 over 3h) kubelet, ip-172-31-3-191.us-east-2.compute.internal Unable to attach or mount volumes: unmounted volumes=[mysql-persistent-storage], unattached volumes=[mysql-persistent-storage default-token-6vkr2]: timed out waiting for the condition
Warning FailedMount 4m (x105 over 3h7m) kubelet, ip-172-31-3-191.us-east-2.compute.internal (combined from similar events): MountVolume.SetUp failed for volume "pv" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/408ffa6b-1f64-4f1a-adfd-01b77ad7b886/volumes/kubernetes.io~aws-ebs/pv --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-2a/vol-011d7bb42da888b82 /var/lib/kubelet/pods/408ffa6b-1f64-4f1a-adfd-01b77ad7b886/volumes/kubernetes.io~aws-ebs/pv
Output: Running scope as unit run-9240.scope.
mount: /var/lib/kubelet/pods/408ffa6b-1f64-4f1a-adfd-01b77ad7b886/volumes/kubernetes.io~aws-ebs/pv: special device /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-2a/vol-011d7bb42da888b82 does not exist.
Warning FailedAttachVolume 2m38s (x92 over 3h7m) attachdetach-controller AttachVolume.NewAttacher failed for volume "pv" : Failed to get AWS Cloud Provider. GetCloudProvider returned <nil> instead

I am running a Kubernetes cluster on AWS EC2 instances.

When I try to attach an EBS volume to a pod, I get the error above.
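
For context, the events above come from describing the failing pod; the volume from the error can also be checked on the AWS side (the pod name below is a placeholder, the volume ID and region are taken from the log):

kubectl describe pod <pod-name>                                                 # <pod-name> is a placeholder
aws ec2 describe-volumes --region us-east-2 --volume-ids vol-011d7bb42da888b82  # volume ID from the mount error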

    You need to provide more information. Which kubernetes version did you use? What exactly did you do when you got this error? Please attach your yaml files. – Mikołaj Głodziak Sep 10 '21 at 07:42
  • Please clarify your specific problem or provide additional details to highlight exactly what you need. As it's currently written, it's hard to tell exactly what you're asking. – Community Sep 15 '21 at 06:39

1 Answer

There are at least three possible solutions:

  • check your k8s version; maybe you'll want to update it
  • install NFS on your infrastructure
  • fix the inbound CIDR rule

k8s version

There is a known issue: MountVolume.SetUp failed for volume pvc mount failed: exit status 32 · Issue #77916 · kubernetes/kubernetes

It was fixed in #77663.

So, check your k8s version:

kubectl version
kubeadm version
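
If your version predates the fix, upgrading with kubeadm is one option. A minimal sketch, assuming Debian/Ubuntu nodes; 1.x.y is a placeholder for a release that includes the fix:

sudo kubeadm upgrade plan                                        # shows the versions you can upgrade to
sudo apt-get update && sudo apt-get install -y kubeadm=1.x.y-00  # placeholder package version
sudo kubeadm upgrade apply v1.x.y                                # placeholder release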

Install NFS on your infrastructure

There are three answers where installing NFS helped in cases like yours:

  • A: 49571145
  • A: 52457325
  • A: 55922445
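
In cases like this, those answers typically boil down to installing the NFS client packages on every node ("mount failed: exit status 32" is often just a missing NFS client). A minimal sketch, depending on your distribution:

sudo apt-get update && sudo apt-get install -y nfs-common   # Debian/Ubuntu
sudo yum install -y nfs-utils                               # RHEL/CentOS/Amazon Linux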

Fix inbound CIDR rule

One of the solutions is to add the cluster's node CIDR range to the inbound rules of the Security Group.

See A: 61868673:

I was facing this issue because the Kubernetes cluster node CIDR range was not present in the inbound rules of the Security Group of my AWS EC2 instance (where my NFS server was running).

Solution: added my Kubernetes cluster node CIDR range to the inbound rules of the Security Group.
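
With the AWS CLI, that rule can be added along these lines; the security group ID and CIDR below are placeholders for your own values, and 2049 is the standard NFS port:

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 2049 \
    --cidr 172.31.0.0/16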

– Yasen