201

I have a pod test-1495806908-xn5jn with 2 containers. I'd like to restart one of them called container-test. Is it possible to restart a single container within a pod and how? If not, how do I restart the pod?

The pod was created using a deployment.yaml with:

kubectl create -f deployment.yaml
s5s

15 Answers

208

Is it possible to restart a single container

Not through kubectl, although depending on the setup of your cluster you can "cheat" and docker kill the-sha-goes-here, which will cause kubelet to restart the "failed" container (assuming, of course, the restart policy for the Pod says that is what it should do)
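
A rough sketch of that "cheat", assuming the node still uses the Docker runtime and you have shell access to it; the grep pattern and placeholder ID are illustrative, not part of the original answer:

# on the node hosting the pod; dockershim names containers k8s_<container>_<pod>_...
docker ps | grep container-test     # find the ID of the target container
docker kill the-sha-goes-here       # kubelet restarts it per the Pod's restartPolicy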

how do I restart the pod

That depends on how the Pod was created, but based on the Pod name you provided, it appears to be under the oversight of a ReplicaSet, so you can just kubectl delete pod test-1495806908-xn5jn and kubernetes will create a new one in its place (the new Pod will have a different name, so do not expect kubectl get pods to return test-1495806908-xn5jn ever again)
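
For example (pod name taken from the question; -w just watches for the replacement to appear):

kubectl delete pod test-1495806908-xn5jn
kubectl get pods -w   # a new pod with a different random suffix shows up shortly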

mdaniel
  • The default restart policy is always restart – Hem Jan 11 '18 at 22:54
  • If I can do this: `docker kill the-sha-goes-here`, then why not do `docker container restart the-sha-goes-here` instead? Why rely on `kubelet` to restart it? Anyway, the real problem is where do I even run the `docker` command, be it only to kill the container. On `cloud-shell`, `docker` does not show the containers from the k8s clusters! – Nawaz Apr 08 '20 at 16:40
  • A good suggestion is to have the `POD_NAME` as an environment variable within your running containers. That way, when you need to add some logic to restart the pod/container, you can easily reference it by using the env variable defined `$POD_NAME` – Paul Chibulcuteanu Nov 12 '21 at 12:17
  • I feel like this requires an up-to-date answer (I can't provide yet). Docker is not a supported compute engine for k8s going forward. Perhaps there is a `crictl` command or a more complete instruction for adjusting rolebindings and using `kubectl` to delete / reapply a pod or a specific container. – Lon Kaut Jan 14 '22 at 13:07
  • The correct answer is to move the container to its own pod – MikeKulls Mar 08 '22 at 23:50
  • instead of `docker kill`, what about just `kill`ing the container main process directly on the host, or `kill 1` inside the container (if it is not a distroless image, or use an ephemeral container)? – SOFe Mar 24 '23 at 04:05
110

There are cases when you want to restart a specific container instead of deleting the pod and letting Kubernetes recreate it.

Doing a kubectl exec POD_NAME -c CONTAINER_NAME /sbin/killall5 worked for me.

(I changed the command from reboot to /sbin/killall5 based on the below recommendations.)
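
On recent kubectl versions the command arguments need an explicit -- separator, so the equivalent form would be:

kubectl exec POD_NAME -c CONTAINER_NAME -- /sbin/killall5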

Zsolt Katona
  • Not every container has `reboot`; I had more luck with executing `/sbin/killall5` instead; that kills all processes, and the container will exit. – Ingo Karkat Apr 06 '18 at 16:37
  • And not every container has root user ;) – JuliSmz Dec 05 '18 at 20:03
  • -1, because... You're using the side effect of 'reboot' killing all processes and Kubernetes recovery of it. It's making a lot of assumptions: running as root, availability of the binary in the container, a restartPolicy that's enabled, etc. Also, this clutters the logs about a failure of the process, which isn't ideal. – gertvdijk Feb 15 '19 at 13:28
  • So looks like alpine doesn't have the killall, but /sbin/reboot works great. `kubectl exec POD_NAME -c CONTAINER_NAME /sbin/reboot` worked like a charm – Atif Mar 20 '20 at 16:20
  • @Atif can I prevent losing my changes inside the container after executing `reboot`? e.g., I install vim, but after calling reboot I don't find vim (the same for any file change if not persisted). – Mohammed Noureldin Sep 07 '22 at 22:31
  • Does rebooting drop your changes? I would think you would only lose the changes if the pod is killed by k8s and respawned. If you need tools for debugging, I think there is a trick to tack on a debugging container as well. You might want to google how that works. Haven't tried it myself. – Atif Sep 09 '22 at 16:32
94

Both pods and containers are ephemeral. Use the following command to stop the specific container, and the k8s cluster will start a new container in its place.

kubectl exec -it [POD_NAME] -c [CONTAINER_NAME] -- /bin/sh -c "kill 1"

This will send a SIGTERM signal to process 1, which is the main process running in the container. All other processes will be children of process 1, and will be terminated after process 1 exits. See the kill manpage for other signals you can send.
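
If you are unsure which name to pass to -c, you can list the containers in the pod first, for example:

kubectl get pod [POD_NAME] -o jsonpath='{.spec.containers[*].name}'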

orodbhen
ROY
  • I tried other answers and this one was the only one that worked for me; it seems to me that it is the most general one. – Batato May 22 '19 at 17:01
  • how do I get the container name that is running inside a pod?? – AATHITH RAJENDRAN Jul 05 '19 at 03:41
  • My Alpine container went into an unhealthy status of some sort when I tried this. kubectl get po shows Error in the status column for the pod. – Atif Mar 20 '20 at 16:26
  • @AATHITHRAJENDRAN you can use the deploy name and it will select the pod automatically – Kamafeather Nov 24 '20 at 19:33
  • Worked after trying other suggested commands. Thanks. – Jaydeep Soni Nov 12 '21 at 20:20
  • I am frustrated why other people says there is no way to do that, while there is. It worked like a charm. Thank you! – wajdi_jurry Aug 22 '22 at 21:04
  • Does this solution also remove the container/pod? Because all not-persisted changes inside the containers are simply gone. – Mohammed Noureldin Sep 07 '22 at 19:13
  • This causes my "not persisted" changes to disappear. Can I somehow keep my changes in the container and not lose them after restarting it? – Mohammed Noureldin Sep 07 '22 at 22:32
  • This is the best one for me. The only issue I had with this is that I needed to change the container to run from the process [entry point](https://hynek.me/articles/docker-signals/) so the process will get the signal. – lior.i Jan 19 '23 at 09:37
26

I'm using

kubectl rollout restart deployment [deployment_name]

or

kubectl delete pod [pod_name]
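
If you use the rollout variant, you can watch it finish with:

kubectl rollout status deployment [deployment_name]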

jasondayee
25

The whole reason for having Kubernetes is that it manages the containers for you, so you don't have to care so much about the lifecycle of the containers in the pod.

Since you have a Deployment set up, which uses a ReplicaSet, you can delete the pod using kubectl delete pod test-1495806908-xn5jn and Kubernetes will manage the creation of a new pod with the 2 containers without any downtime. Trying to manually restart single containers in pods negates the whole benefit of Kubernetes.

Innocent Anigbo
16

All the above answers mention deleting the pod, but if you have many pods of the same service then it would be tedious to delete each one of them.

Therefore, I propose the following solution to restart them:

  • 1) Set the scale to zero:

     kubectl scale deployment <<name>> --replicas=0 -n service

    The above command will terminate all your pods with the name <<name>>.

  • 2) To start the pods again, set the replicas to more than 0:

    kubectl scale deployment <<name>> --replicas=2 -n service

    The above command will start your pods again with 2 replicas (a sketch after this list shows how to restore the original replica count automatically).
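
If you don't want to hard-code the replica count, a minimal sketch of the same idea (reusing the <<name>> placeholder and the service namespace from above) is to read the current count first and scale back to it:

# remember the current replica count, scale to zero, then scale back up
REPLICAS=$(kubectl get deployment <<name>> -n service -o jsonpath='{.spec.replicas}')
kubectl scale deployment <<name>> -n service --replicas=0
kubectl scale deployment <<name>> -n service --replicas=$REPLICAS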

Nicolás Alarcón Rapela
Ajay Reddy
  • Question was asking about how to restart a single container within a pod. – Chris Beach Oct 01 '19 at 14:14
  • Also, scaling down to 0 pods will not work for highly available applications. Use `kubectl patch deployment -p "{\"spec\": {\"template\": {\"metadata\": { \"labels\": { \"redeploy\": \"$(date +%s)\"}}}}}"` instead. This will update the deployment and therefore initiate recreation of all pods managed by it according to rolling update strategy. – Kostrahb Apr 20 '20 at 12:07
5

We use a pretty convenient command line to force re-deployment of fresh images on our integration pods.
We noticed that our Alpine containers all run their "sustaining" command on PID 5. Therefore, sending that process a SIGTERM takes the container down. With imagePullPolicy set to Always, the kubelet re-pulls the latest image when it brings the container back.

kubectl exec -i [pod name] -c [container-name] -- kill -15 5
Alexis LEGROS
  • what does -15 and 5 represent? – John Balvin Arias Oct 19 '18 at 10:57
  • @JohnBalvinArias it's tucked into the description above, but in `kill -15 5` you're running the kill command to send signal "-15" to the process with the PID 5. This is how you tell a process that you would like it to terminate (SIGTERM) and have it take the time to clean up any opened resources (temp files, rollback db transactions, close connections, whatever). Contrasted with -9 (SIGKILL), which kills the process immediately, not allowing it to clean up any opened resources. – Conrad.Dean Oct 20 '18 at 12:15
4
kubectl exec -it POD_NAME -c CONTAINER_NAME bash, then kill 1

This assumes the container runs as root, which is not recommended.

In my case, when I changed the application config, I had to restart the container, which was used in a sidecar pattern. I would kill the PID of the Spring Boot application, which is owned by the docker user.
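
A minimal sketch of that kill, assuming pgrep exists in the image and the Spring Boot process shows up as java (both of these are assumptions, not part of the original answer):

# terminate the Spring Boot process inside the container so it gets restarted
kubectl exec -it POD_NAME -c CONTAINER_NAME -- sh -c 'kill -TERM $(pgrep -f java)'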

GreenThumb
3

There was an issue in the coredns pod, so I deleted that pod with

kubectl delete pod -n=kube-system coredns-fb8b8dccf-8ggcf

The pod will be recreated automatically.
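
To watch the replacement come up (k8s-app=kube-dns is the label commonly used for CoreDNS pods; check your cluster's labels if it differs):

kubectl get pods -n kube-system -l k8s-app=kube-dns -w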

j3ffyang
2

I realize this question is old and already answered, but I thought I'd chip in with my method.

Whenever I want to do this, I just make a minor change to the pod's container's image field, which causes kubernetes to restart just the container.

If you can't switch between 2 different but equivalent tags (like :latest / :1.2.3, where latest is actually version 1.2.3), then you can always just switch it quickly to an invalid tag (I put an X at the end, like :latestX or something) and then re-edit it and remove the X straight away afterwards. This does cause the container to fail to start with an image pull error for a few seconds, though.

So for example:

kubectl edit po my-pod-name

Find the spec.containers[].name you want to kill, then find its image:

apiVersion: v1
kind: Pod
metadata:
  #...
spec:
  containers:
  - name: main-container
    #...
  - name: container-to-restart
    image: container/image:tag
#...

You would search for your container-to-restart and then update its image to something different, which will force Kubernetes to do a controlled restart for you.
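
A non-interactive sketch of the same trick using kubectl patch; the pod name, the container index (1, i.e. the second container in the example above) and the :tagX value are all taken from this example and are not universal:

# point the second container at a bogus tag, then patch the real tag back
kubectl patch pod my-pod-name --type=json \
  -p='[{"op":"replace","path":"/spec/containers/1/image","value":"container/image:tagX"}]'
kubectl patch pod my-pod-name --type=json \
  -p='[{"op":"replace","path":"/spec/containers/1/image","value":"container/image:tag"}]'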

Ken Allan
1

Killing the process specified in the Dockerfile's CMD / ENTRYPOINT works for me. (The container restarts automatically)

Rebooting was not allowed in my container, so I had to use this workaround.

Kevin
1

The correct, but likely less popular, answer is that if you need to restart one container in a pod, then it shouldn't be in the same pod. You can't restart single containers in a pod by design. Just move the container out into its own pod. From the docs:

Pods that run a single container. The "one-container-per-Pod" model is the most common Kubernetes use case; in this case, you can think of a Pod as a wrapper around a single container; Kubernetes manages Pods rather than managing the containers directly.

Note: Grouping multiple co-located and co-managed containers in a single Pod is a relatively advanced use case. You should use this pattern only in specific instances in which your containers are tightly coupled.

https://kubernetes.io/docs/concepts/workloads/pods/

MikeKulls
0

I was playing around with ways to restart a container. What worked for me was this solution:

Dockerfile:

...
ENTRYPOINT [ "/app/bootstrap.sh" ]

/app/bootstrap.sh:

#!/bin/bash
/app/startWhatEverYouActuallyWantToStart.sh &
tail -f /dev/null

Whenever I want to restart the container, I kill the tail -f /dev/null process, which I find with

kill -TERM `ps --ppid 1 | grep tail | grep -v -e grep | awk '{print $1}'`

Following that command, the tail process is killed, bootstrap.sh (PID 1) exits, the container is restarted, and the entrypoint, in my case bootstrap.sh, is executed again.

That's the "restart" part, which is not really a restart but does what you want in the end. To limit the restart to the container named container-test, you could pass the container name into the container in question (as the container name would otherwise not be available inside the container) and then decide whether to do the above kill. That would look something like this in your deployment.yaml:

    env:
    - name: YOUR_CONTAINER_NAME
      value: container-test

/app/startWhatEverYouActuallyWantToStart.sh:

#!/bin/bash
...
CONDITION_TO_RESTART=0
...
if [ "$YOUR_CONTAINER_NAME" == "container-test" -a $CONDITION_TO_RESTART -eq 1 ]; then
    kill -TERM `ps --ppid 1 | grep tail | grep -v -e grep | awk '{print $1}'`
fi
Michael
-1

Sometimes no one knows which OS the pod has, and the pod might not have sudo or reboot at all.

A safer option is to take a snapshot of the pod spec and recreate the pod:

kubectl get pod <pod-name> -o yaml > pod-to-be-restarted.yaml;
kubectl delete po <pod-name>; 
kubectl create -f pod-to-be-restarted.yaml
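
A one-step sketch of the same save-and-recreate, relying on kubectl replace with --force (which deletes the object and creates it again from the saved file):

kubectl get pod <pod-name> -o yaml > pod-to-be-restarted.yaml
kubectl replace --force -f pod-to-be-restarted.yaml
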
Akshay
-1
kubectl delete pods POD_NAME

This command will delete the pod, and a replacement will be created automatically (as long as the pod is managed by a controller such as a Deployment).