
Kubernetes Pods are stuck with a STATUS of Terminating after the Deployment (and Service) related to the Pods were deleted. They have now been in this state for around 3 hours.

The Deployment and Service were created from files, and then sometime later deleted by referencing the same files. The files were not changed in any way during this time.

kubectl apply -f mydeployment.yaml -f myservice.yaml
...
kubectl delete -f mydeployment.yaml -f myservice.yaml

Attempting to manually delete any of the Pods results in my terminal hanging until I press Ctrl+c.

kubectl delete pod mypod-ba97bc8ef-8rgaa --now

There is a GitHub issue that suggests outputting the logs to see the error, but no logs are available (note that "mycontainer" is the only container in "mypod") -

kubectl logs mypod-ba97bc8ef-8rgaa

Error from server (BadRequest): container "mycontainer" in pod "mypod-ba97bc8ef-8rgaa" is terminated

The aforementioned GitHub issue suggests that volume cleanup may be the issue. There are two volumes attached to "mycontainer", but neither changed in any way between creation and deletion of the Deployment (and neither did the Secret [generic] used to store the Azure Storage Account name and access key).
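
If volume cleanup really is what is blocking deletion, any lingering VolumeAttachment objects might show it (this is only a guess on my part, and depends on the volume plugin in use) -

kubectl get volumeattachments
kubectl get pod mypod-ba97bc8ef-8rgaa -o jsonpath='{.spec.volumes}'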

Although there are no logs available for the Pods, it is possible to describe them. However, there doesn't seem to be much useful information in there. Note that the Started and Finished times below are exactly as they appear in the output of the describe command.

kubectl describe pod mypod-ba97bc8ef-8rgaa


Containers:
  mycontainer:
    ...
    State:          Terminated
      Exit Code:    0
      Started:      Mon, 01 Jan 0001 00:00:00 +0000
      Finished:     Mon, 01 Jan 0001 00:00:00 +0000
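
The full object can also be dumped to check for a deletionTimestamp and finalizers, which describe does not show (I don't know yet whether either is relevant here) -

kubectl get pod mypod-ba97bc8ef-8rgaa -o yaml | grep -E -A 2 'deletionTimestamp|finalizers'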

How can I discover what is causing the Pods to become stuck so that I can finally get rid of them?

David Gard

2 Answers


After searching Google for a while I came up blank, but a suggested Stack Overflow question that appeared when I entered my question title saved the day.

kubectl delete pods mypod-ba97bc8ef-8rgaa --grace-period=0 --force
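
If the force delete on its own doesn't remove them, clearing the Pod's finalizers is another commonly suggested step (I didn't need it myself, so treat it as untested here) -

kubectl patch pod mypod-ba97bc8ef-8rgaa -p '{"metadata":{"finalizers":null}}'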
David Gard
  • Be aware that this command only deletes the pod from the etcd database, but it does not check whether the pod was actually deleted from the node. You'd better check the node for the presence of the container, or simply drain the node and reboot it just to make sure. – Vasili Angapov May 01 '19 at 11:02
  • Good to know. I'm using AKS, and I'm unsure how to check if the pod was actually deleted from the node right now, but I will certainly look into it. Thanks. – David Gard May 01 '19 at 12:45
  • @DavidGard did you find out if the pod was actually deleted from AKS's node? I have a similar problem. – Seeker Aug 22 '22 at 07:32
  • @Seeker, sorry, no idea as this was a few years ago. – David Gard Aug 30 '22 at 10:34
  • @VasiliAngapov, do you know if there is a `kill -9`-ish option that can be used? It appears that unless one can coerce a reboot of the node, chances are that hung container pods can keep busy-looping forever and the k8s api-server cannot instruct the kubelet to do anything, despite the fact that the kubelet could/should `kill -9` the container... – humanityANDpeace Oct 15 '22 at 12:38

Not able to comment, but to add to David Gard's answer: you must also kill the process on the node where the pod is located. Find the container with `docker ps -a | grep $POD_NAME`, then find its processes with `sudo ps aux | grep $CONTAINER_ID`. There will be 2 processes, containerd-shim and runc; kill the containerd-shim process.
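
A rough sketch of that sequence, assuming a Docker-based node you can SSH into (containerd-only nodes would use crictl instead) -

docker ps -a | grep mypod-ba97bc8ef-8rgaa
sudo ps aux | grep <container-id-from-above>
sudo kill <containerd-shim-pid>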

zrks