44

The StatefulSet es-data was failing in our test environment and I was asked to delete the corresponding PV.

So I deleted the following for es-data: 1) the PVC, 2) the PV. They showed as Terminating and were left over the weekend. Upon arriving this morning they still showed as Terminating, so I deleted both the PVC and the PV forcefully. No joy. To fix the whole thing I had to delete the StatefulSet.

Is this the correct way to delete a PV?

mac

8 Answers

67

You can delete the PV using the following two commands:

kubectl delete pv <pv_name> --grace-period=0 --force

and then delete the finalizer using:

kubectl patch pv <pv_name> -p '{"metadata": {"finalizers": null}}'
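Before nulling the finalizers, it can help to confirm which ones are actually holding the PV; a sketch, where `<pv_name>` is a placeholder for your PV's name:

```shell
# Show the finalizers that keep the PV stuck in Terminating.
# A stuck PV typically lists kubernetes.io/pv-protection here.
kubectl get pv <pv_name> -o jsonpath='{.metadata.finalizers}'
```

Once the finalizer list is nulled, the API server can finish the pending delete.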
Prafull Ladha
  • If the PVC still exists, this will set the state of the PVC to `Lost`. I was forced to re-create the PVC to get it to create a new PV. – sshow Sep 16 '20 at 14:31
  • 3
    that's absolutely not how it's done! – user3192295 Oct 07 '21 at 05:41
  • This just removes the PV from `etcd` (or whatever key-value store you use) without verifying that it is actually deleted, very potentially leaving the relevant objects dangling. A significantly more correct answer is provided by @ns15. – Alwyn Jun 19 '23 at 09:40
14

It worked for me when I first deleted the PVC, then the PV:

kubectl delete pvc data-p-0
kubectl delete pv  <pv-name>  --grace-period=0 --force

This assumes you want to delete the PVC as well; otherwise the PV deletion seems to hang.

Mz A
12

First run kubectl patch pv {PV_NAME} -p '{"metadata":{"finalizers":null}}'

then run kubectl delete pv {PV_NAME}

Sunil Pandey
  • Perhaps these two commands need to be run in the opposite order, as @PrafullLadha said in his [answer](https://stackoverflow.com/a/54630036/3814775). I actually tried it myself: in order to delete the PV successfully, I had to run the delete command first (which got stuck; I had to use Ctrl-C to get back), then use the patch command to delete the `finalizers`. – Bruce Oct 26 '21 at 22:03
8

Most answers on this thread simply mention the commands without explaining the root cause.

Here is a diagram to help you understand it better. Refer to my other answer for the commands and additional info: https://stackoverflow.com/a/73534207/6563567

This diagram shows how to cleanly delete a volume: [diagram: clean volume deletion order]

In your case, the PVC and PV are stuck in terminating state because of finalizers. Finalizers are guard rails in k8s to avoid accidental deletion of resources.

Your observations are correct, and this is how Kubernetes works. But the order in which you deleted the resources was incorrect.

This is what happened:

  • The PV was stuck terminating because the PVC still existed.
  • The PVC was stuck terminating because the StatefulSet's pods were still using the volumes (volumes are attached to the nodes and mounted into the pods).
  • As soon as you deleted the pods/StatefulSet, the volumes were no longer in use, so the PVC and PV were successfully removed.
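The clean deletion order can be sketched as a command sequence; this is a sketch, assuming the StatefulSet is named es-data as in the question, and the PVC name data-es-data-0 is a hypothetical placeholder for whatever your volumeClaimTemplate produced:

```shell
# 1. Remove the consumers of the volume first (or scale the StatefulSet to 0).
kubectl delete statefulset es-data

# 2. The PVC is no longer in use, so it can terminate cleanly.
kubectl delete pvc data-es-data-0

# 3. With reclaim policy Delete, the PV is removed automatically once the PVC is gone;
#    with Retain, delete it explicitly:
kubectl delete pv <pv-name>
```

In this order no finalizer patching is needed, because nothing is left holding each resource when it is deleted.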

ns15
6

First, make sure that your reclaim policy is set to Delete. Then, after the PVC is deleted, the PV should be deleted as well.

https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming

If it doesn't help, please check this [closed] Kubernetes PV issue: https://github.com/kubernetes/kubernetes/issues/69697

and try to delete the PV finalizers.
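If the reclaim policy is currently Retain, it can be checked and switched to Delete with a patch; a sketch, where `<pv-name>` is a placeholder:

```shell
# Check the current reclaim policy of the PV
kubectl get pv <pv-name> -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'

# Switch it to Delete so the PV (and its backing storage) is removed
# automatically once its bound PVC is deleted
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
```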

Tim Abell
4

HINT: PVs may have names like pvc-<volume-id>, which may be confusing!

  • PV: Persistent Volume
  • PVC: Persistent Volume Claim
  • Pod -> PVC -> PV -> Host Machine

  1. First, find the PVs: kubectl get pv -n {namespace}

  2. Then delete the PV, which sets its status to Terminating:

kubectl delete pv {PV_NAME}

  3. Then patch it, which sets the status of the PVC to Lost: kubectl patch pv {PV_NAME} -p '{"metadata":{"finalizers":null}}'

  4. Then get the PVC volumes: kubectl get pvc -n {namespace}

  5. Then you can delete the PVC: kubectl delete pvc {PVC_NAME} -n {namespace}


Theoretical example:

Let's say we have Kafka installed in the storage namespace:

$ kubectl get pv -n storage

$ kubectl delete pv pvc-ccdfe297-44c9-4ca7-b44c-415720f428d1

$ kubectl get pv -n storage (the delete hangs, but the PV status turns to Terminating)

$ kubectl patch pv pvc-ccdfe297-44c9-4ca7-b44c-415720f428d1 -p '{"metadata":{"finalizers":null}}'

$ kubectl get pvc -n storage

$ kubectl delete pvc data-kafka-0 -n storage

SiHa
long-blade
1

I followed this method and it worked fine for me.

kubectl delete pv {your-pv-name} --grace-period=0 --force

After that, edit the PVC configuration:

kubectl edit pvc {your-pvc-name}

and remove the finalizer from the PVC configuration:

finalizers:
  - kubernetes.io/pvc-protection

You can read more about finalizers in the official Kubernetes guide:

https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#finalizers
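If you prefer not to open an interactive editor, the same finalizer removal can be done non-interactively with a patch; a sketch, where `<your-pvc-name>` is a placeholder:

```shell
# Equivalent to deleting the finalizers block inside `kubectl edit pvc`
kubectl patch pvc <your-pvc-name> -p '{"metadata":{"finalizers":null}}'
```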

iamattiq1991
0

kubectl delete pv [pv-name]

Also, you have to check the reclaim policy of the PV: it should not be Retain, otherwise the PV will not be removed.

Harsh Manvar