
I have tried connecting an unencrypted EFS and it works fine, but with an encrypted EFS the pod throws the error below:

  Normal   Scheduled    10m                    default-scheduler                                     Successfully assigned default/jenkins-efs-test-8ffb4dc86-xnjdj to ip-10-100-4-249.ap-south-1.compute.internal
  Warning  FailedMount  6m33s (x2 over 8m49s)  kubelet, ip-10-100-4-249.ap-south-1.compute.internal  Unable to attach or mount volumes: unmounted volumes=[jenkins-home], unattached volumes=[sc-config-volume tmp jenkins-home jenkins-config secrets-dir plugins plugin-dir jenkins-efs-test-token-7nmkz]: timed out waiting for the condition
  Warning  FailedMount  4m19s                  kubelet, ip-10-100-4-249.ap-south-1.compute.internal  Unable to attach or mount volumes: unmounted volumes=[jenkins-home], unattached volumes=[plugins plugin-dir jenkins-efs-test-token-7nmkz sc-config-volume tmp jenkins-home jenkins-config secrets-dir]: timed out waiting for the condition
  Warning  FailedMount  2m2s                   kubelet, ip-10-100-4-249.ap-south-1.compute.internal  Unable to attach or mount volumes: unmounted volumes=[jenkins-home], unattached volumes=[tmp jenkins-home jenkins-config secrets-dir plugins plugin-dir jenkins-efs-test-token-7nmkz sc-config-volume]: timed out waiting for the condition
  Warning  FailedMount  35s (x13 over 10m)     kubelet, ip-10-100-4-249.ap-south-1.compute.internal  MountVolume.SetUp failed for volume "efs-pv" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = Internal desc = Could not mount "" at "/var/lib/kubelet/pods/354800a1-dcf5-4812-aa91-0e84ca6fba59/volumes/kubernetes.io~csi/efs-pv/mount": mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t efs /var/lib/kubelet/pods/354800a1-dcf5-4812-aa91-0e84ca6fba59/volumes/kubernetes.io~csi/efs-pv/mount
Output: mount: /var/lib/kubelet/pods/354800a1-dcf5-4812-aa91-0e84ca6fba59/volumes/kubernetes.io~csi/efs-pv/mount: can't find in /etc/fstab.

What am I missing here?

Sreejith

2 Answers


You didn't specify the K8s manifests or any other configuration. There shouldn't be any difference between encrypted and unencrypted volumes when it comes to mounting from the client side: AWS manages the encryption keys for you using KMS, and encryption is handled transparently on the server side.

The error you are seeing occurs because the mount command is invoked without a source filesystem: only the target path is passed, so mount falls back to looking the target up in /etc/fstab and fails. That suggests the volume's filesystem ID ends up empty, so there must be some other default configuration on the K8s side that differs from your unencrypted setup. Also, is the EFS mount helper (amazon-efs-utils) available on the Kubernetes node where you are trying to mount the EFS volume?
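Note the empty source in the event log ("Could not mount \"\" at ..."), which with the EFS CSI driver usually points at an empty or missing volumeHandle in the PersistentVolume. A minimal static-provisioning sketch for comparison; the filesystem ID fs-12345678 and the names here are placeholders, not values from your cluster:

```yaml
# Hypothetical static PV for the EFS CSI driver -- substitute your own IDs.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi                  # required by the API, but not enforced by EFS
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-12345678     # if this is empty, mount is called with "" as the source
```

If volumeHandle is unset or empty, the driver passes an empty string as the device to mount, which matches the error above.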

✌️

Rico
  • I didn't change any mount configurations in the YAML files for the encrypted volume, and I have tried with the efs-provisioner and it works fine. – Sreejith Aug 19 '20 at 05:57

If the EFS filesystem mount does not work as expected, check the logs of the cloud-init agent (/var/log/cloud-init.log and /var/log/cloud-init-output.log) and inspect the /etc/fstab file.

Try updating the efs-csi-node DaemonSet from the amazon/aws-efs-csi-driver:v0.3.0 image to amazon/aws-efs-csi-driver:latest.

Here is an example script for mounting EFS. Compare it with yours, and note the following:

Dependencies for this script:

  • Default ECS cluster configuration (Amazon Linux ECS AMI).
  • The ECS instance must have an IAM role that gives it at least read access to EFS (in order to locate the EFS filesystem ID).
  • The ECS instance must be in a security group that allows port tcp/2049 (NFS) inbound/outbound.
  • The security group that the ECS instance belongs to must be associated with the target EFS filesystem.

Notes on this script:

  • The EFS mount path is calculated on a per-instance basis as the EFS endpoint varies depending upon the region and availability zone where the instance is launched.
  • The EFS mount is added to /etc/fstab so that if the ECS instance is rebooted, the mount point will be re-created.
  • Docker is restarted to ensure it correctly detects the EFS filesystem mount.
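The steps above can be sketched roughly as follows. This is a hypothetical outline, not the referenced script itself; the filesystem ID, region, and mount path are placeholder assumptions, and the privileged commands are left commented out:

```shell
#!/bin/sh
# Sketch of an instance user-data script that mounts EFS over NFS.
# EFS_ID and REGION are placeholders -- on a real instance they would be
# discovered via the AWS API and instance metadata.
EFS_ID="fs-12345678"
REGION="ap-south-1"

# The EFS endpoint varies by region, so the mount source is built per instance.
EFS_DNS="${EFS_ID}.efs.${REGION}.amazonaws.com"

# fstab entry so the mount is re-created if the instance reboots.
FSTAB_LINE="${EFS_DNS}:/ /mnt/efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0"
echo "$FSTAB_LINE"

# The following require root, so they are commented out in this sketch:
# echo "$FSTAB_LINE" >> /etc/fstab
# mkdir -p /mnt/efs && mount /mnt/efs
# service docker restart    # restart Docker so it detects the new mount
```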

After mounting EFS, restart Docker with the command: service docker restart. As a last step, try rebooting the EKS worker node.

Take a look: mounting-efs-in-eks-cluster-example-deployment-fails, efs-provisioner, dynamic-ip-in-etc-fstab.

Malgorzata