
Problem statement:

Azure disk dynamic Persistent Volume Claim: the mount fails with "timeout expired", and the pod is stuck in "ContainerCreating" status forever.

kubectl describe pod myPod gives the following information:

Warning  FailedMount  1m (x5 over 12m)   kubelet, k8-node-2  Unable to mount volumes for pod "mongodb-76bd56459f-hxjdc_kubeapps(8189f2e4-0017-11e8-82ac-000d3aa33484)": timeout expired waiting for volumes to attach/mount for pod "kubeapps"/"mongodb-76bd56459f-hxjdc". list of unattached/unmounted volumes=[data]
Warning  FailedMount  21s (x8 over 12m)  kubelet, k8-node-2  (combined from similar events): MountVolume.SetUp failed for volume "pvc-516aeece-ff9d-11e7-82ac-000d3aa33484" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/8189f2e4-0017-11e8-82ac-000d3aa33484/volumes/kubernetes.io~azure-disk/pvc-516aeece-ff9d-11e7-82ac-000d3aa33484 --scope -- mount -t ext4 -o bind /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/m3083936425 /var/lib/kubelet/pods/8189f2e4-0017-11e8-82ac-000d3aa33484/volumes/kubernetes.io~azure-disk/pvc-516aeece-ff9d-11e7-82ac-000d3aa33484
Output: Running scope as unit run-rf9126bab6fba44d9a499370260ed5fe8.scope.
mount: special device /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/m3083936425 does not exist
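
The last line is the key clue: kubelet is trying to bind-mount the disk's global mount directory into the pod's volume directory, but that directory does not exist on the node, i.e. the Azure disk was never successfully mounted at the global path. A quick sanity check on the affected node (assuming SSH access to k8-node-2; paths taken from the event above):

~$ # Does the global mount directory for the azure-disk plugin exist at all?
~$ ls -l /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/
~$ # Is the attached data disk visible to the OS?
~$ lsblk
~$ # Any disk-related kernel messages?
~$ dmesg | tail -n 50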

Kubernetes Cluster info:

It is a self-managed ("bare-metal") installation with one master and two minions. All three machines are Ubuntu 16.04 LTS VMs on Azure, and the cluster was created with kubeadm.

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T11:52:23Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T09:42:01Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}

Further useful information from my own investigation:

1. The PVC and PV are created and bound. See below:

~$ kubectl -n kubeapps get pvc
NAME           STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mongodb-data   Bound     pvc-516aeece-ff9d-11e7-82ac-000d3aa33484   8Gi        RWO            k8storage      14h

~$ kubectl -n kubeapps get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                   STORAGECLASS   REASON    AGE
pvc-516aeece-ff9d-11e7-82ac-000d3aa33484   8Gi        RWO            Retain           Bound     kubeapps/mongodb-data   k8storage                14h
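
PVs are cluster-scoped, so the Azure managed disk backing this PV can be inspected directly; the "Source" section of the output shows the disk name and DiskURI:

~$ # Show the Azure managed disk backing the PV (see the "Source" section)
~$ kubectl describe pv pvc-516aeece-ff9d-11e7-82ac-000d3aa33484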

2. Azure managed disks are automatically created and attached to the appropriate node, as confirmed in the Azure portal (screenshots omitted here); the same can be verified from the CLI, as shown below.
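
Since the portal screenshots are not reproduced here, a sketch of the equivalent Azure CLI check; <resource-group> is a placeholder for the actual resource group:

~$ # List the data disks currently attached to the node's VM
~$ az vm show --resource-group <resource-group> --name k8-node-2 --query "storageProfile.dataDisks[].{name:name, lun:lun}" -o table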

Thanks in advance!!!

Arindam

2 Answers


Based on the information provided, the next step is to look at the issue from the kubelet's point of view ("kubectl get events", "journalctl -u kubelet"), as well as at possible operating-system issues, including the interaction with Azure ("journalctl -p 3"). The "-p 3" flag filters for messages of priority "err" (3) or more severe; priorities range from 0 (emerg) to 7 (debug).
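
A minimal diagnostic sequence along these lines, run on the master for the kubectl part and on the affected node (k8-node-2) for the journalctl part:

~$ # On the master: recent events in the affected namespace
~$ kubectl -n kubeapps get events --sort-by=.metadata.creationTimestamp
~$ # On k8-node-2: kubelet logs around the failed mount
~$ journalctl -u kubelet --since "1 hour ago" | grep -i azure
~$ # On k8-node-2: OS-level errors (priority "err" and above)
~$ journalctl -p 3 --since "1 hour ago"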

Jack B

You have to allow this account to provision storage, i.e. make sure the storage resource provider is registered for your subscription.

You can check and fix this on the subscription's Resource Provider status page in the Azure portal (Subscriptions → Resource providers).
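
A command-line equivalent, assuming the Azure CLI is installed and Microsoft.Storage is the provider in question:

~$ # Check the registration state of the storage resource provider
~$ az provider show --namespace Microsoft.Storage --query registrationState -o tsv
~$ # Register it if it reports "NotRegistered"
~$ az provider register --namespace Microsoft.Storage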

Simon Knott