26

I set up a new Kubernetes cluster on a single node, which is tainted. But the PersistentVolume cannot be created successfully when I try to create a simple PostgreSQL.

There is some detailed information below.


The StorageClass is copied from the official page: https://kubernetes.io/docs/concepts/storage/storage-classes/#local

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

The StatefulSet is:

kind: StatefulSet
apiVersion: apps/v1beta1
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  ...
  volumeClaimTemplates:
    - metadata:
        name: postgres-data
      spec:
        storageClassName: local-storage
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi

About the running StorageClass:

$ kubectl describe storageclasses.storage.k8s.io
Name:            local-storage
IsDefaultClass:  No
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"local-storage"},"provisioner":"kubernetes.io/no-provisioner","volumeBindingMode":"WaitForFirstConsumer"}

Provisioner:           kubernetes.io/no-provisioner
Parameters:            <none>
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     WaitForFirstConsumer
Events:                <none>

About the running PersistentVolumeClaim:

$ kubectl describe pvc
Name:          postgres-data-postgres-0
Namespace:     default
StorageClass:  local-storage
Status:        Pending
Volume:
Labels:        app=postgres
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Events:
  Type       Reason                Age                            From                         Message
  ----       ------                ----                           ----                         -------
  Normal     WaitForFirstConsumer  <invalid> (x2 over <invalid>)  persistentvolume-controller  waiting for first consumer to be created before binding

K8s versions:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:37:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Yan QiDong

9 Answers

19

The app is waiting for the Pod, while the Pod is waiting for a PersistentVolume via its PersistentVolumeClaim. However, with this no-provisioner StorageClass, the PersistentVolume has to be prepared by the user beforehand.

My previous YAMLs were missing a PersistentVolume like this:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-data
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  local:
    path: /data/postgres
  persistentVolumeReclaimPolicy: Retain
  accessModes:
    - ReadWriteOnce
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
          - key: app
            operator: In
            values:
              - postgres

The local path /data/postgres has to be prepared before use; Kubernetes will not create it automatically.
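
A minimal sketch of that preparation, assuming the single node should carry the app=postgres label referenced by the nodeAffinity above (the node name is a placeholder):

# Label the node so it satisfies the PV's nodeAffinity (node name is an assumption)
kubectl label node <your-node-name> app=postgres

# On the node itself, create the backing directory for the local volume
sudo mkdir -p /data/postgres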

Yan QiDong
  • Do you need nodeAffinity? – Justin Thomas Aug 14 '19 at 16:52
  • For a `local-storage`, I think `nodeAffinity` is necessary. I don't want the PersistentVolume to be scheduled just anywhere. – Yan QiDong Aug 15 '19 at 00:53
  • @YanQiDong I can't resolve it. I get `0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had taints that the pod didn't tolerate` when I describe the postgres pod. Can you help me? – Akashii Nov 04 '19 at 09:00
  • A container can only be scheduled onto the node where the local-storage PV is. – Yan QiDong Nov 14 '19 at 02:38
  • but what precisely solves the problem? – Pim van der Heijden Jan 12 '22 at 12:21
  • A PV - `PersistentVolume`. When you require a cluster resource that does not exist, the Pod will never be ready. – Yan QiDong Jan 14 '22 at 06:39
  • For `local-storage` it is important to keep in mind the docs: `Local volumes do not currently support dynamic provisioning` https://kubernetes.io/docs/concepts/storage/storage-classes/#local. It means you need to manually provision the PV. – georgeos Apr 14 '23 at 02:01
14

I just ran into this myself and was completely thrown for a loop until I realized that the StorageClass's volumeBindingMode was set to WaitForFirstConsumer instead of my intended value of Immediate. This value is immutable, so you will have to:

  1. Get the storage class yaml:

    kubectl get storageclasses.storage.k8s.io gp2 -o yaml > gp2.yaml
    

    or just copy the example from the docs here (make sure the metadata names match). Here is what I have configured:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: gp2
    provisioner: kubernetes.io/aws-ebs
    parameters:
      type: gp2
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    mountOptions:
      - debug
    volumeBindingMode: Immediate
    
  2. Delete the old StorageClass and recreate it with the new volumeBindingMode set to Immediate, for example:
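
    A sketch of that step, assuming the class is named gp2 and the edited manifest is saved as gp2.yaml:

    # volumeBindingMode is immutable, so the class has to be recreated
    kubectl delete storageclass gp2
    kubectl apply -f gp2.yaml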

Note: The EKS cluster may need permissions to create cloud resources like EBS or EFS. Assuming EBS, you should be good with arn:aws:iam::aws:policy/AmazonEKSClusterPolicy.

After doing this you should have no problem creating and using dynamically provisioned PVs.

Robert J
4

For me the problem was mismatched accessModes fields in the PV and PVC: the PVC was requesting RWX/ReadWriteMany while the PV was offering RWO/ReadWriteOnce.
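
For illustration only (names, capacity, and the hostPath are made up), the two fields have to agree, e.g. both sides using ReadWriteOnce:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce        # must match what the PVC requests
  hostPath:
    path: /data/example
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  storageClassName: ""     # bind to a pre-created PV rather than a default class (illustrative)
  accessModes:
    - ReadWriteOnce        # same mode the PV offers
  resources:
    requests:
      storage: 1Gi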

vladimirror
3

In my case, I had a claimRef without a namespace specified.
The correct syntax is:

  claimRef:
    namespace: default
    name: my-claim

A StatefulSet also prevented initialization; I had to replace it with a Deployment.
This was a real headache.
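
To check which claimRef an existing PV currently carries (the PV name here is hypothetical):

kubectl get pv my-pv -o jsonpath='{.spec.claimRef}'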

Kiruahxh
  • After trying all the solutions provided here in order, this one finally worked for me. One would think leaving namespace out would result in default namespace but apparently not in this case. – Ali Zaidi May 27 '22 at 16:15
  • Same here. My issue was that I didn't set the namespace, and also, after setting the namespace, I had accessModes set to ReadWriteMany while my StorageClass only accepted ReadWriteOnce. After fixing these two, everything went through. – mona-mk Nov 12 '22 at 09:18
1

The accepted answer didn't work for me. I think it's because the app key won't be set before the StatefulSet's Pods are deployed, preventing the PersistentVolumeClaim from matching the nodeSelector (and preventing the Pods from starting, with the error didn't find available persistent volumes to bind). To fix this deadlock, I defined one PersistentVolume per node (this may not be ideal, but it worked):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-data-node1
  labels:
    type: local
spec:
[…]
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - node1
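
To find the exact hostname values to put into each PersistentVolume's nodeAffinity, list the nodes together with that label:

# Prints every node with its kubernetes.io/hostname label
kubectl get nodes -L kubernetes.io/hostname
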
pedroapero
0

I'm stuck in this vicious loop myself.

I'm trying to create a kubegres cluster (which relies on dynamic provisioning as per my understanding).

I'm using RKE on a local-server-like setup, and I have the same scheduling issue as the one initially mentioned.

Note that the access mode of the PVC (created by kubegres) is empty, as per the output below.

[rke@rke-1 manifests]$ kubectl get pv,PVC
 NAME                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
 persistentvolume/local-vol    20Gi       RWO            Delete           Available           local-storage            40s

 NAME                                               STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
 persistentvolumeclaim/local-vol-mypostgres-1-0     Pending                                      local-storage   6m42s
 persistentvolumeclaim/postgres-db-mypostgres-1-0   Pending                                      local-storage   6m42s

As an update, the issue in my case was that the PVC was not finding a proper PV, which was supposed to be dynamically provisioned. Local storage classes do not support dynamic provisioning yet, so I had to use a third-party solution, which solved my issue: https://github.com/rancher/local-path-provisioner
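
With that provisioner installed, a claim only needs to reference its StorageClass; a minimal sketch, assuming the provisioner's default class name local-path and a made-up claim name:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  storageClassName: local-path   # default class created by local-path-provisioner (assumption)
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi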

nour
0

This issue mainly happens with WaitForFirstConsumer when you define nodeName in the Deployment/Pod specification. Please make sure you don't define nodeName and hard-bind the pod through it. The issue should be resolved once you remove nodeName.
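
If the workload really has to land on one specific node, a nodeSelector (or node affinity) keeps the scheduler involved, so WaitForFirstConsumer binding can still happen; a sketch with hypothetical pod, image, and node names:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  # Prefer this over spec.nodeName: the scheduler still runs,
  # so volume binding is not bypassed.
  nodeSelector:
    kubernetes.io/hostname: node1
  containers:
    - name: app
      image: nginx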

ouflak
0

I believe this can be a valid message, meaning that no container has started yet whose volumes are bound to the persistent volume claim.

I experienced this issue on Rancher Desktop. It turned out the problem was caused by Rancher not running properly after a macOS upgrade. The containers were not starting and stayed in a pending state.

After resetting Rancher Desktop (using the UI), the containers were able to start and the message disappeared.

xilef
-1

waitforfirstconsumer-persistentvolumeclaim means the Pod which requires this PVC has not been scheduled. Describing the pods may give some more clues. In my case, the node was not able to schedule the Pod since the node's pod limit was 110 and the deployment was exceeding it. Hope this helps to identify the issue faster. Increasing the pod limit and restarting the kubelet on the node solved it.
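
A sketch of how to check and raise that limit (the node name is hypothetical, and the kubelet config path may differ per distribution):

# How many pods the node currently allows
kubectl get node node1 -o jsonpath='{.status.capacity.pods}'

# In the kubelet configuration (often /var/lib/kubelet/config.yaml), raise maxPods:
#   apiVersion: kubelet.config.k8s.io/v1beta1
#   kind: KubeletConfiguration
#   maxPods: 150
# then restart the kubelet on the node:
sudo systemctl restart kubelet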

TheFixer