
I upgraded my GKE node pool to larger machines. All applications look fine, except that one pod is still in Pending state. After cordoning and draining the old nodes, my Prometheus pod is still Pending.

Running describe on the pod shows: (screenshot of the pod's events)

I have already recreated both the Deployment and the PVC, but the result is still the same. Describing the PVC shows:

(screenshots of the PVC status and events)

The describe output says the PVC is used by the Prometheus Deployment, but in fact the Deployment is still in Pending state. How can I resolve this? Any suggestion would be appreciated.

1 Answer


A Pending status on the PVC could mean you have no corresponding PV. If you use a PersistentVolumeClaim, you typically need a volume provisioner for dynamic volume provisioning.

Unless you configure your cluster for dynamic volume provisioning, you will have to create the PV manually each time.

You have to define a PersistentVolume providing disk space to be consumed by the PersistentVolumeClaim. PersistentVolumeClaims will remain unbound indefinitely if a matching PersistentVolume does not exist.
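For example, a minimal PersistentVolume large enough to satisfy a 100Mi claim might look like the following sketch; the name, capacity, and hostPath are illustrative, not taken from your cluster:

    # A minimal sketch: a PersistentVolume large enough to satisfy a 100Mi claim.
    # The name, capacity, and hostPath are illustrative, not taken from the question.
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: prometheus-pv
    spec:
      capacity:
        storage: 1Gi              # must be >= the capacity requested by the PVC
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      hostPath:
        path: /data/prometheus    # hostPath is only suitable for single-node testing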

When a storageClass is used, Kubernetes enables dynamic volume provisioning, which does not work with the local file system.

Dynamic volume provisioning allows storage volumes to be created on-demand. Without dynamic provisioning, cluster administrators have to manually make calls to their cloud or storage provider to create new storage volumes, and then create PersistentVolume objects to represent them in Kubernetes.
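On GKE, dynamic provisioning is typically driven by a StorageClass backed by the Compute Engine persistent disk CSI driver. A sketch of such a StorageClass (the name and disk type below are illustrative):

    # Sketch of a StorageClass for dynamic provisioning on GKE.
    # Assumes the Compute Engine persistent disk CSI driver, which is enabled
    # by default on current GKE clusters; name and disk type are illustrative.
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: pd-example
    provisioner: pd.csi.storage.gke.io
    parameters:
      type: pd-balanced           # pd-standard and pd-ssd are also valid
    volumeBindingMode: WaitForFirstConsumer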

To solve your issue:

  • Provide a PersistentVolume fulfilling the constraints of the claim (a size >= 100Mi), as in the sketch above.
  • Remove the storageClass from the PersistentVolumeClaim, or set it to an empty value ("") so the claim binds to a manually created PV (see the sketch after this list).
  • Remove the StorageClass from your cluster.
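Here is a sketch of a PersistentVolumeClaim that opts out of dynamic provisioning and binds to a manually created PV; the claim name is illustrative:

    # Sketch of a claim that opts out of dynamic provisioning.
    # An empty storageClassName makes the claim bind to a pre-created PV
    # instead of triggering a provisioner; the claim name is illustrative.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: prometheus-pvc
    spec:
      storageClassName: ""
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi          # the PV's capacity must be >= this value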

Also make sure that the PV capacity is greater than or equal to the PVC's requested capacity; only then can the PVC bind to the PV. If it is not, you will get the unbound immediate PersistentVolumeClaims error at the pod level and no volume plugin matched name when describing the PVC.

Refer to Configure a Pod to Use a PersistentVolume for Storage, which describes how to create a PersistentVolume with a hostPath, and see this Stack Overflow post for more information on the pod has unbound PersistentVolumeClaims error.

– Jyothi Kiranmayi
  • Hi, thank you for your reply! It works fine now. In my case, the other PVCs work well: the Grafana and AlertManager PVCs ran normally after migrating. I'm curious what happened with Prometheus, since only this PVC would not become active even after creating a new PVC; the describe logs said it was bound to the pod, but the pod still did not recognize the PVC. I still haven't found the culprit, and then it suddenly started working as usual. Any idea why this happened? I'm using GKE for Kubernetes. – Dhody Rahmad Hidayat Apr 27 '22 at 03:28
  • As mentioned in the answer, PersistentVolumeClaims will remain unbound indefinitely if a matching PersistentVolume does not exist. So, you will need to create a PersistentVolume fulfilling the constraints to avoid this type of error. – Jyothi Kiranmayi Apr 27 '22 at 03:45
  • I used a GCP StorageClass for dynamic volume provisioning, and I use it for other PVCs as well. I'm not sure I should remove the StorageClass, since this is the company's cloud and that might cause problems in the future. Is it okay if I just create a new PV and then a PVC if the problem appears again? – Dhody Rahmad Hidayat Apr 27 '22 at 04:33
  • Yes, if the problem appears again, try creating a new PV and PVC. Also, if the answer was useful, please upvote or mark it as accepted for greater visibility to community members. – Jyothi Kiranmayi Apr 27 '22 at 04:39