44

How do I force delete Namespaces stuck in Terminating?

Steps to recreate:

  1. Apply this YAML
apiVersion: v1
kind: Namespace
metadata:
  name: delete-me
spec:
  finalizers:
    - foregroundDeletion
  2. kubectl delete ns delete-me

  3. It is not possible to delete delete-me.

The only workaround I've found is to destroy and recreate the entire cluster.

Things I've tried:

None of these work or modify the Namespace. After any of these the problematic finalizer still exists.

Edit the YAML and kubectl apply

Apply:

apiVersion: v1
kind: Namespace
metadata:
  name: delete-me
spec:
  finalizers:
$ kubectl apply -f tmp.yaml 

namespace/delete-me configured

The command finishes with no error, but the Namespace is not updated.

The below YAML has the same result:

apiVersion: v1
kind: Namespace
metadata:
  name: delete-me
spec:

kubectl edit

kubectl edit ns delete-me, and remove the finalizer. Ditto removing the list entirely. Ditto removing spec. Ditto replacing finalizers with an empty list.

$ kubectl edit ns delete-me 

namespace/delete-me edited

This shows no error message but does not update the Namespace. kubectl editing the object again shows the finalizer still there.

kubectl proxy &

  • kubectl proxy &
  • curl -k -H "Content-Type: application/yaml" -X PUT --data-binary @tmp.yaml http://127.0.0.1:8001/api/v1/namespaces/delete-me/finalize

As above, this exits successfully but does nothing.

Force Delete

kubectl delete ns delete-me --force --grace-period=0

This actually results in an error:

warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (Conflict): Operation cannot be fulfilled on namespaces "delete-me": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system.

However, it doesn't actually do anything.

Wait a long time

In the test cluster I set up to debug this issue, I've been waiting over a week. Even if the Namespace might eventually decide to be deleted, I need it to be deleted faster than a week.

Make sure the Namespace is empty

The Namespace is empty.

$ kubectl get -n delete-me all

No resources found.
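Worth noting: kubectl get all only covers a small built-in subset of resource types, so custom resources (or some namespaced built-ins) can linger invisibly. A more thorough emptiness check, as a sketch using only standard kubectl, is to query every listable namespaced type:

```shell
# List every namespaced resource type that supports "list", then query
# each one in the namespace. Any output here means it is not empty.
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n delete-me
```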

etcdctl

$ etcdctl --endpoint=http://127.0.0.1:8001 rm /namespaces/delete-me

Error:  0:  () [0]

I'm pretty sure that's an error, but I have no idea how to interpret that. It also doesn't work. Also tried with --dir and -r.

ctron/kill-kube-ns

There is a script for force deleting Namespaces. This also does not work.

$ ./kill-kube-ns delete-me

Killed namespace: delete-me

$ kubectl get ns delete-me 

NAME        STATUS        AGE
delete-me   Terminating   1h

POSTing the edited resource to /finalize

Returns a 405 (Method Not Allowed). I'm not sure if this is the canonical way to POST to /finalize, though.

Links

This appears to be a recurring problem and none of these resources helped.

Kubernetes bug

Will Beason
    You're doing it correct, but there's something that takes a long time to delete inside. There's no way to force it more. Just check what's left `kubectl -n get all -o yaml` maybe it gives you some more info. – Max Lobur Apr 25 '19 at 16:09
    Have you tried removing it from etcd? Like etcdctl rm /namespaces/delete-me ? – Vasili Angapov Apr 25 '19 at 16:10
    @MaxLobur The Namespaces are empty, and in some cases I've been waiting longer than a week for deletion. Updated post. – Will Beason Apr 25 '19 at 16:30
  • I have got this issue for any namespace I create: `microk8s v1.26.0 on Ubuntu 22.04`. The answers work, but it would be nice to know (+ fix) the actual root cause. – hey Jan 12 '23 at 20:05

7 Answers

37

The kubectl proxy attempt above was almost correct, but not quite. Using JSON instead of YAML may be what does the trick, but I'm not certain.

The JSON with an empty finalizers list:

~$ cat ns.json

{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "delete-me"
  },
  "spec": {
    "finalizers": []
  }
}

Use curl to PUT the object without the problematic finalizer.

~$ curl -k -H "Content-Type: application/json" -X PUT --data-binary @ns.json http://127.0.0.1:8007/api/v1/namespaces/delete-me/finalize

{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "delete-me",
    "selfLink": "/api/v1/namespaces/delete-me/finalize",
    "uid": "0df02f91-6782-11e9-8beb-42010a800137",
    "resourceVersion": "39047",
    "creationTimestamp": "2019-04-25T17:46:28Z",
    "deletionTimestamp": "2019-04-25T17:46:31Z",
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"delete-me\"},\"spec\":{\"finalizers\":[\"foregroundDeletion\"]}}\n"
    }
  },
  "spec": {

  },
  "status": {
    "phase": "Terminating"
  }
}

The Namespace is deleted!

~$ kubectl get ns delete-me

Error from server (NotFound): namespaces "delete-me" not found
Will Beason
35

I loved this answer extracted from here

In one terminal:

kubectl proxy

In another terminal:

kubectl get ns delete-me -o json | \
  jq '.spec.finalizers=[]' | \
  curl -X PUT http://localhost:8001/api/v1/namespaces/delete-me/finalize -H "Content-Type: application/json" --data @-
dbustosp
17

Running this command, after replacing the two occurrences of <NAME_OF_NAMESPACE> with the actual name of the namespace stuck in Terminating, can solve the issue:

kubectl get ns <NAME_OF_NAMESPACE> -o json | jq '.spec.finalizers = []' | kubectl replace --raw "/api/v1/namespaces/<NAME_OF_NAMESPACE>/finalize" -f -

Explanation:

Most replies seem to do the same thing here: remove the finalizers from the namespace. In this case, this is done in three steps:

  1. kubectl get ns <NAME_OF_NAMESPACE> -o json returns the namespace configuration in json format. This is piped into the next command:
  2. jq '.spec.finalizers = []' removes all finalizers from the json configuration. The resulting json (without finalizers) is then piped into the next command:
  3. kubectl replace --raw "/api/v1/namespaces/<NAME_OF_NAMESPACE>/finalize" -f -, injects the updated json namespace configuration (without finalizers) into k8s.
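The jq step is the key part: it rewrites the namespace object so that spec.finalizers is empty before it is sent back through the /finalize subresource. As a local illustration (no cluster required; the object is trimmed to the fields from the question), the same transformation in Python:

```python
import json

# A minimal namespace object, roughly what `kubectl get ns -o json` returns
# (most fields trimmed; "delete-me" is the example name from the question).
ns = {
    "kind": "Namespace",
    "apiVersion": "v1",
    "metadata": {"name": "delete-me"},
    "spec": {"finalizers": ["foregroundDeletion"]},
}

# Equivalent of jq '.spec.finalizers = []': empty the finalizers list.
ns["spec"]["finalizers"] = []

# This JSON is what gets piped into `kubectl replace --raw .../finalize -f -`.
print(json.dumps(ns))
```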
hey
Ushakov Roman
    Hello, welcome to SO. You should always give a little explanation to your code. – Andy A. Aug 30 '21 at 05:32
  • While this all works perfectly well with this workaround, I wonder why it happens in the first instance. For me, this workaround is required whenever I `kubectl create ns ` and then try to delete it a few minutes after, without modifying it in any way, or without creating elements. – hey Jan 08 '23 at 23:17
12

Here is a modification of the command provided by the user Ushakov Roman (see that answer for a detailed explanation). Compared to Ushakov Roman's solution, defining the namespace variable at the beginning reduces the number of places where the namespace name actually has to be typed:

namespace=<NAME_OF_NAMESPACE> && kubectl get ns $namespace -o json | jq '.spec.finalizers = []' | kubectl replace --raw "/api/v1/namespaces/$namespace/finalize" -f -
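For repeated use, the same pipeline can be wrapped in a small shell function (a sketch; the function name force_finalize_ns is mine, not from the answer; requires kubectl and jq):

```shell
# Empties spec.finalizers on the given namespace and sends the result
# back through the /finalize subresource via `kubectl replace --raw`.
force_finalize_ns() {
  local ns="$1"
  kubectl get ns "$ns" -o json \
    | jq '.spec.finalizers = []' \
    | kubectl replace --raw "/api/v1/namespaces/$ns/finalize" -f -
}

# Usage: force_finalize_ns delete-me
```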
hey
Jetendra Laha
2

This article brought me here for a second time, but this time we are using the Rancher (RKE2) Kubernetes distribution.

The trick here is to call the Rancher API directly in order to pass the deletion request (the trick with kubectl proxy does not work). Hope this helps someone.

Don't forget to change the Bearer token.

export NS='delete-me';
export YOURFQDN='rancher2.dev.prod.local';
export YOURCLUSTER='c-xxxx1';

kubectl get ns ${NS} -o json | jq '.spec.finalizers=[]' | \
curl -X PUT https://${YOURFQDN}/k8s/clusters/${YOURCLUSTER}/api/v1/namespaces/${NS}/finalize \
-H "Accept: application/json" \
-H "Authorization: Bearer token-xxxx:xxxxYOURxxxxTOKENxxxx" \
-H "Content-Type: application/json" --data @-
mati kepa
    Just a footnote to this: I tried to delete a stuck namespace on my RKE2 worker cluster by "k replace --raw ..." and got this error. `Error from server (Conflict): Operation cannot be fulfilled on namespaces "MY-TERMINATING-NS": StorageError: invalid object, Code: 4, Key: /registry/namespaces/MY-TERMINATING-NS, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 97ff9db3-4e74-45de-b6da-7f8305b8f9f7, UID in object meta:` Using the above worked – Trellan5E Jun 03 '22 at 19:41
1

I had the same issue in an rke2/Harvester cluster. I tried everything (curling the k8s API, forced delete, etc.).

There were no resources left in the namespace.

Lastly, I removed the key from etcd:

ETCDCTL_API=3 etcdctl \
  --cert=/var/lib/rancher/rke2/server/tls/etcd/client.crt \
  --key=/var/lib/rancher/rke2/server/tls/etcd/client.key \
  --cacert=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt \
  del /registry/namespaces/<NAMESPACE_TO_DELETE>

Certainly not the best method to solve the issue. But may be the last resort.

isi
0

In case you are not able to run the command and are getting a parsing error:

curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://127.0.0.1:8001/api/v1/namespaces/[your-namespace]/finalize

Then make sure that the JSON you have contains the correct spec, like below. Note that finalizers is an empty list ([]):

"spec": {
  "finalizers": []
}
Nilesh Kumar