
The situation

I have a Kubernetes pod stuck in the "Terminating" state that resists deletion:

NAME                             READY STATUS       RESTARTS   AGE
...
funny-turtle-myservice-xxx-yyy   1/1   Terminating  1          11d
...

Here funny-turtle is the name of the helm release, which has since been deleted.

What I have tried

Tried to delete the pod.

Output: pod "funny-turtle-myservice-xxx-yyy" deleted
Outcome: it still shows up in the same state.
Also tried with --force --grace-period=0; same outcome, with an extra warning:

warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.

Tried to read the logs (kubectl logs ...).

Outcome: Error from server (NotFound): nodes "ip-xxx.yyy.compute.internal" not found

Tried to delete the Kubernetes deployment, but it does not exist.

So I assume this pod somehow got "disconnected" from the AWS API, judging from the error message that kubectl logs printed.

I'll take any suggestions or guidance to explain what happened here and how I can get rid of it.

EDIT 1

Tried to see if the "ghost" node was still there (kubectl delete node ip-xxx.yyy.compute.internal), but it does not exist.
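For anyone debugging something similar, one way to cross-check the pod's recorded node against the nodes the cluster actually knows about (the pod name below is the placeholder from the question):

```shell
# Which node does the API server think the pod is scheduled on?
kubectl get pod funny-turtle-myservice-xxx-yyy -o jsonpath='{.spec.nodeName}'

# Does that node still exist? A node missing from this list would explain
# the "nodes ... not found" error that kubectl logs printed.
kubectl get nodes -o name
```

If the node name from the first command is absent from the second command's output, the pod is referencing a node object that is gone, which matches the behavior described above.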

– Yann Pellegrini

4 Answers


Try removing the finalizers from the pod:

kubectl patch pod funny-turtle-myservice-xxx-yyy -p '{"metadata":{"finalizers":null}}'
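Before and after patching, it can help to confirm that a finalizer is actually what is blocking deletion (the pod name is the placeholder from the question):

```shell
# Show any finalizers still attached to the pod; a non-empty list here is
# what keeps the API server from removing the object after deletion.
kubectl get pod funny-turtle-myservice-xxx-yyy -o jsonpath='{.metadata.finalizers}'
```

Once the patch above has cleared the list, the pod should disappear on its own.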
– jaxxstorm

In my case, the solution proposed by the accepted answer did not work; the pod stayed stuck in the "Terminating" status. What did the trick for me was:

kubectl delete pods <pod> --grace-period=0 --force
– João Matos
This does not actually delete the pod. It only deletes the reference to the container, and the container may continue to run forever – Isen Ng Nov 14 '19 at 09:36
1

The above solutions did not work in my case (though I didn't try restarting all the nodes).

The error state for my pod was as follows (extra lines omitted):

$ kubectl -n myns describe pod/mypod  
Status:                    Terminating (lasts 41h)  
Containers:  
  runner:  
    State:          Waiting  
      Reason:       ContainerCreating  
    Last State:     Terminated  
      Reason:       ContainerStatusUnknown  
      Message:      The container could not be located when the pod was deleted.
                    The container used to be Running  
      Exit Code:    137  

$ kubectl -n myns get pod/mypod -o json  
    "metadata": {
        "deletionGracePeriodSeconds": 0,  
        "deletionTimestamp": "2022-06-07T22:17:20Z",  
        "finalizers": [  
            "actions.summerwind.dev/runner-pod"  
        ],  

I removed the entry under finalizers (leaving finalizers as an empty array), and then the pod was finally gone.

$ kubectl -n myns edit pod/mypod
pod/mypod edited
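If you'd rather not go through the interactive editor, the same finalizer removal can be done non-interactively with a JSON patch (namespace and pod name are the placeholders from the output above):

```shell
# Remove the first (and here, only) entry in the pod's finalizers list.
kubectl -n myns patch pod mypod --type=json \
  -p='[{"op":"remove","path":"/metadata/finalizers/0"}]'
```

This is equivalent to deleting the "actions.summerwind.dev/runner-pod" line in the editor, but is easier to script.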
– Meli

0

In my case nothing worked: no logs, no delete, absolutely nothing. I had to restart all the nodes; then the situation cleared up and there were no more Terminating pods.

– Tudor
    You might want to try this: https://stackoverflow.com/questions/52954174/kubernetes-namespaces-stuck-in-terminating-status/60328565#60328565 – ratr Mar 16 '20 at 05:29