70

I created a PersistentVolume sourced from a Google Compute Engine persistent disk that I had already formatted and provisioned with data. Kubernetes says the PersistentVolume is Available.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: models-1-0-0
  labels:
    name: models-1-0-0
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: models-1-0-0
    fsType: ext4
    readOnly: true

I then created a PersistentVolumeClaim so that I could attach this volume to multiple pods across multiple nodes. However, Kubernetes says the claim is stuck in a Pending state indefinitely.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: models-1-0-0-claim
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 200Gi
  selector:
    matchLabels:
      name: models-1-0-0

Any insights? I feel there may be something wrong with the selector...

Is it even possible to preconfigure a persistent disk with data and have pods across multiple nodes all be able to read from it?

Akash Krishnan

12 Answers

84

I quickly realized that a PersistentVolumeClaim defaults its storageClassName field to standard when it isn't specified. However, when creating a PersistentVolume, storageClassName has no default, so even though the label selector matches, the storage classes don't, and the claim never binds.
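
You can confirm the mismatch from the claim's events and the volume's class (names taken from the manifests above):

kubectl describe pvc models-1-0-0-claim
kubectl get pv models-1-0-0 -o jsonpath='{.spec.storageClassName}'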

The following worked for me:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: models-1-0-0
  labels:
    name: models-1-0-0
spec:
  capacity:
    storage: 200Gi
  storageClassName: standard
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: models-1-0-0
    fsType: ext4
    readOnly: true
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: models-1-0-0-claim
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 200Gi
  selector:
    matchLabels:
      name: models-1-0-0
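
To apply and verify (assuming both manifests above are saved together as models-storage.yaml):

kubectl apply -f models-storage.yaml
# both should report STATUS Bound once they match
kubectl get pv,pvc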
Akash Krishnan
  • run `kubectl describe pvc` to confirm whether this is the bug; you'll get `"Cannot bind to requested volume "YOUR_PV_NAME": storageClasseName does not match"` – s12chung Oct 04 '18 at 02:27
  • Had the same issue. It's strange that the k8s dashboard just stays pending and doesn't report the error! – nir Dec 11 '18 at 00:51
  • 2
    +1 Also had this issue on an AWS EC2 cluster set up with kops. To get the PV/PVC connected properly, I had to add `storageClassName: gp2` to both. There are some related docs on [setting a storage class](https://docs.aws.amazon.com/eks/latest/userguide/storage-classes.html) for your AWS cluster and [types of EBS volumes available](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html). For some reason, I wasn't getting the error message noted by @s12chung – wronk Apr 05 '19 at 18:25
17

With dynamic provisioning, you shouldn't have to create PVs and PVCs separately. In Kubernetes 1.6+, there are default provisioners for GKE and some other cloud environments, which should let you just create a PVC and have it automatically provision a PV and an underlying Persistent Disk for you.

For more on dynamic provisioning, see:

https://kubernetes.io/blog/2017/03/dynamic-provisioning-and-storage-classes-kubernetes/
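
For example, on GKE a claim like this minimal sketch (the name is assumed) is enough; the PV and the backing Persistent Disk are provisioned automatically:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dynamic-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi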

Viktor Hedefalk
Anirudh Ramanathan
17

I had the same issue, but for a different reason, so I'm sharing it here to help the community.

If you delete a PersistentVolumeClaim and then re-create it with the same definition, it will stay Pending forever. Why?

persistentVolumeReclaimPolicy defaults to Retain in a manually created PersistentVolume. If you delete the PersistentVolumeClaim, the PersistentVolume still exists and the volume is considered released. But it is not yet available for another claim, because the previous claimant's data remains on the volume, so you need to manually reclaim the volume with the following steps:

  1. Delete the PersistentVolume (the associated underlying storage asset, such as an EBS volume, GCE PD, or Azure Disk, will NOT be deleted; it still exists)

  2. (Optional) Manually clean up the data on the associated storage asset accordingly

  3. (Optional) Manually delete the associated storage asset (EBS, GCE PD, Azure Disk, etc.)

If you still need the same data, you may skip cleaning up and deleting the associated storage asset (steps 2 and 3 above); simply re-create a new PersistentVolume with the same storage asset definition, and you should then be able to create the PersistentVolumeClaim again.
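
A minimal sketch of that flow (the PV name and manifest filenames are hypothetical):

# delete the released PV object; with Retain, the backing disk survives
kubectl delete pv my-pv
# re-create the PV pointing at the same storage asset, then the claim
kubectl apply -f pv.yaml
kubectl apply -f pvc.yaml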

One last thing to mention: Retain is not the only option for persistentVolumeReclaimPolicy. Below are the other options you may need, depending on your use case:

Recycle: performs a basic scrub on the volume (e.g., rm -rf /thevolume/*) and makes it available again for a new claim. Only NFS and HostPath support recycling.

Delete: the associated storage asset, such as an AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume, is deleted along with the PersistentVolume.
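
You can also change the policy on an existing PV with a one-line patch (the PV name is hypothetical):

kubectl patch pv my-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'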

For more information, please check the Kubernetes documentation.

If you still need more clarification or have any questions, please don't hesitate to leave a comment and I will be more than happy to clarify and assist.

Muhammad Soliman
  • Will just deleting the data from the PV resolve this issue? I cannot delete the PV and create a new one due to some restrictions. But since I have deleted the PVC, I am not able to create a new one, and it shows a pending status. – iRunner Aug 02 '20 at 19:26
  • @iRunner you don't have to delete the data itself, but you still have to delete the PV (the logical volume, not the actual data). Don't worry: the associated underlying storage will NOT be deleted if you decide to delete the PV. – Muhammad Soliman Aug 03 '20 at 14:50
  • @MuhammadSoliman if I don't have access to the PV, because you need cluster-admin rights to delete it, what should be done then? The behaviour I am looking for is: if I delete the PVC, then I want to make the PV available again for another claim. Or do you suggest just deleting the pod and leaving the PVC intact so it can be reused in the pod again? – zaf187 Dec 17 '20 at 15:21
  • What if I deleted the namespace in which the sc/pv/pvc were already defined, and ran the scripts again for the secret and volume YAML files? My situation is a KinD cluster with an Azure file share. The same scripts work for an AKS cluster. – soMuchToLearnAndShare Dec 31 '22 at 14:18
13

If you're using Microk8s, you have to enable storage before a PersistentVolumeClaim can be bound successfully.

Just do:

microk8s.enable storage

You'll need to delete your deployment and start again.

You may also need to manually delete the "pending" PersistentVolumeClaims because I found that uninstalling the Helm chart which created them didn't clear the PVCs out.

You can do this by first finding a list of names:

kubectl get pvc --all-namespaces

then deleting each name with (adding -n <namespace> for claims outside the current namespace):

kubectl delete pvc name1 name2 etc...

Once storage is enabled, reapplying your deployment should get things going.
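
Putting it together (the claim, namespace, and manifest names are hypothetical):

microk8s.enable storage
kubectl delete pvc my-claim -n my-namespace
kubectl apply -f deployment.yaml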

LondonRob
8

I was facing the same problem, and realised that k8s actually does just-in-time provisioning, i.e.:

  • When a PVC is created, it stays in the Pending state, and no corresponding PV is created.
  • The PVC & PV (EBS volume) are created only after you have created a deployment which uses the PVC.

I am using EKS with Kubernetes version 1.16, and the behaviour is controlled by the StorageClass volumeBindingMode.
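
For reference, that behaviour is configured on the StorageClass itself; a minimal sketch for EBS (the class name is assumed):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2-delayed
provisioner: kubernetes.io/aws-ebs
volumeBindingMode: WaitForFirstConsumer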

Eric Xin Zhang
3

I had the same problem. My PersistentVolumeClaim YAML was originally as follows:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  volumeName: pv
  resources:
    requests:
      storage: 1Gi

and my PVC stayed in the Pending status (screenshot omitted).

After removing volumeName:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

the PVC bound successfully (screenshot omitted). Specifying volumeName pins the claim to a PV with that exact name; if no such PV exists or matches, the claim stays Pending.
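
If you do want to keep volumeName: pv, a PV with that exact name must exist and match the claim; a minimal sketch, assuming a hostPath-backed volume (the path is hypothetical):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  storageClassName: ""
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"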

M.Namjo
2

When you want to manually bind a PVC to a PV backed by an existing disk, storageClassName should not be specified... but... the cloud provider sets the "standard" StorageClass as the default, so it gets injected into the PVC no matter what you try when patching the PVC/PV.

You can check whether your provider has set it as the default by running kubectl get storageclass (it will be marked "(default)").

To fix this, the best approach is to get your existing StorageClass YAML and add this annotation:

  annotations:
    storageclass.kubernetes.io/is-default-class: "false"

Apply it and you're good :)
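
Alternatively, you can flip the annotation without editing the full YAML; a one-liner sketch, assuming the default class is named standard:

kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'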

Thomas Ramé
1

I've seen this behaviour in microk8s 1.14.1 when two PersistentVolumes have the same value for spec/hostPath/path, e.g.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-name
  labels:
    type: local
    app: app
spec:
  storageClassName: standard
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/k8s-app-data"

It seems that microk8s is event-based (which isn't necessary on a one-node cluster) and throws away information about failing operations, resulting in needlessly poor feedback for almost all failures.

Kalle Richter
1

I had this problem with the stable Apache Airflow Helm chart; setting storageClass to azurefile helped. What should you do in such cases with a cloud provider? Just search for the storage classes that support the needed access mode. ReadWriteMany means that many processes will SIMULTANEOUSLY read and write to the storage. In this case (Azure) it is azurefile.

path: /opt/airflow/logs

  ## configs for the logs PVC
  ##
  persistence:
    ## if a persistent volume is mounted at `logs.path`
    ##
    enabled: true

    ## the name of an existing PVC to use
    ##
    existingClaim: ""

    ## sub-path under `logs.persistence.existingClaim` to use
    ##
    subPath: ""

    ## the name of the StorageClass used by the PVC
    ##
    ## NOTE:
    ## - if set to "", then `PersistentVolumeClaim/spec.storageClassName` is omitted
    ## - if set to "-", then `PersistentVolumeClaim/spec.storageClassName` is set to ""
    ##
    storageClass: "azurefile"

    ## the access mode of the PVC
    ##
    ## WARNING:
    ## - must be: `ReadWriteMany`
    ##
    ## NOTE:
    ## - different StorageClass support different access modes:
    ##   https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
    ##
    accessMode: ReadWriteMany

    ## the size of PVC to request
    ##
    size: 1Gi
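
With the values above saved to values.yaml, redeploying is a single command (the release and chart names are assumed):

helm upgrade --install airflow stable/airflow -f values.yaml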
Edik Mkoyan
1

I am using microk8s.

I fixed the problem by running the commands below:

systemctl start open-iscsi.service

(I had installed open-iscsi earlier using apt install open-iscsi but had not started it)

Then I enabled storage as follows:

microk8s.enable storage

Then I deleted the StatefulSets and the pending PersistentVolumeClaims from Lens so I could start over.

Everything worked well after that.

wwmwabini
0

I faced the same issue, in which the PersistentVolumeClaim stayed in the Pending phase indefinitely. I tried providing the storageClassName as 'default' in the PersistentVolume, just as I did for the PersistentVolumeClaim, but it did not fix the issue.

I made one change in my persistentvolume.yml: I moved the PersistentVolumeClaim config to the top of the file, with the PersistentVolume as the second config. That fixed the issue.

We need to make sure that the PersistentVolumeClaim is created first and the PersistentVolume afterwards to resolve this Pending phase issue.

I am posting this answer after testing it a few times, hoping that it might help someone struggling with it.

Adnan Raza
  • I found that the overly long Pending phase is not caused by the creation order of the two components. I tried it as you suggested and it bound about 2 seconds faster. The actual fix in my case was to replace ReadWriteMany with ReadWriteOnce when using storageClassName: storage – Dave Jan 01 '20 at 20:25
-4

Make sure your VM also has enough disk space.
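
A quick way to check on the node:

df -h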