I have a MySQL pod running in my cluster.
I need to temporarily pause the pod without deleting it, similar to Docker, where the docker stop container-id command stops the container without deleting it.
Are there any commands available in kubernetes to pause/stop a pod?

4 Answers
So, as others have pointed out, Kubernetes doesn't support stopping/pausing a pod in its current state and resuming it when needed. However, you can still achieve the same effect by scaling the deployment down to zero replicas:
kubectl scale --replicas=0 deployment/<your-deployment>
see the help:
kubectl scale --help
# Set a new size for a Deployment, ReplicaSet, Replication Controller, or StatefulSet.
Scale also allows users to specify one or more preconditions for the scale action. If --current-replicas or --resource-version is specified, it is validated before the scale is attempted, and it is guaranteed that the precondition holds true when the scale is sent to the server.
Examples:
# Scale a replicaset named 'foo' to 3.
kubectl scale --replicas=3 rs/foo
# Scale a resource identified by type and name specified in "foo.yaml" to 3.
kubectl scale --replicas=3 -f foo.yaml
# If the deployment named mysql's current size is 2, scale mysql to 3.
kubectl scale --current-replicas=2 --replicas=3 deployment/mysql
# Scale multiple replication controllers.
kubectl scale --replicas=5 rc/foo rc/bar rc/baz
# Scale statefulset named 'web' to 3.
kubectl scale --replicas=3 statefulset/web
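To "pause" and later "resume", scale to zero and then back to the original replica count. A minimal sketch, assuming a deployment named mysql (the name and namespace are placeholders for your own):

```shell
# Record the current replica count so it can be restored later.
REPLICAS=$(kubectl get deployment mysql -o jsonpath='{.spec.replicas}')

# "Pause": scale the deployment to zero, which deletes all its pods.
kubectl scale --replicas=0 deployment/mysql

# ...later, "resume": restore the original replica count.
kubectl scale --replicas="$REPLICAS" deployment/mysql
```

Keep in mind the pods are actually deleted and recreated, so anything not stored on a PersistentVolume is lost while scaled down.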

- 3,608
- 3
- 13
- 25
- Thanks for the tip, I think ... should be ... – Philippe Simo May 20 '20 at 08:49
- As I understand it, if there's an HPA, this is not possible. – nroose Jul 29 '20 at 17:57
- That's correct. That's why I started with a statement of what everyone has suggested; the answer is mostly in the context of the question. – sulabh chaturvedi Aug 04 '20 at 13:39
- Actually, this is the right way to stop Deployments. This option also works for StatefulSets but not for DaemonSets; DaemonSets need to be deleted and created again. – robotic_chaos Feb 01 '22 at 14:57
No, it is not possible to stop a pod and resume it later when required. However, you can consider the approach below.
In Kubernetes, pods are exposed through a Service. One way to isolate the pod(s) is to change the pod selector in the Service definition so that it no longer matches; that way you can control the traffic to the pod(s) through the Service. Whenever you want to restore traffic, change the pod selector back to its original value.
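The selector change described above can be sketched as follows; the Service name, labels, and port are hypothetical placeholders for your own:

```yaml
# A Service routing traffic to pods labeled app: mysql.
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql   # change to a non-matching value (e.g. app: mysql-paused)
                 # to stop routing traffic; restore app: mysql to resume
  ports:
    - port: 3306
      targetPort: 3306
```

The same change can be applied in place with, for example, kubectl patch service mysql -p '{"spec":{"selector":{"app":"mysql-paused"}}}'. Note that the pods keep running and consuming resources; only the traffic through the Service is cut off.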

- Too bad it doesn't help pods whose source is Kafka/Event Hubs (I wish to pause them, launch new pods, and, if that fails, resume the old ones). – Martin Kosicky Jun 28 '19 at 11:58
- Hi @MartinKosicky, that's exactly my use case: I have a container consuming from Event Hubs through the Kafka protocol. Did you find a solution? The only workarounds that come to my mind are ugly (change credentials, multiple processes inside a container instead of multiple containers in a pod, etc.). – karlos9o Jan 13 '20 at 11:19
- @karlos9o We actually just deleted the old pods; since it's Event Hubs/Kafka sourced, zero downtime is not so important here. But if you really want it, you can kubectl apply a configuration change (pause processing), which should trigger a redeploy of the pods. – Martin Kosicky Jan 14 '20 at 18:54
- Services are not meant to be abstractions of pods. Services are just a way to route different types of network traffic and do port mapping. Scaling the number of replicas of a Deployment, and specifically its ReplicaSet, to 0 (if there is no HPA present) would actually delete all instances of the pod. – iamnicoj Dec 05 '21 at 16:31
With Kubernetes, it's not possible to stop/pause a pod. However, you can delete a pod, provided you have the manifest to bring it back again.
If you want to delete a pod, knowing that it will immediately be launched again by the cluster, run the following kubectl command:
kubectl delete -n default pod <your-pod-name>

- I think this is a valid response. There's nothing like "stopping" a pod. – SanjoS30 Jan 05 '22 at 17:08
- The pod will immediately come back again because the ReplicaSet will re-create it; that's its job. So this doesn't actually answer the question. – Jon Watte Feb 07 '22 at 18:11
- Sometimes you want it to come right back. If a pod gets corrupted (running in an infinite loop or some nonsense), killing it and launching another is sometimes needed. – Garr Godfrey Apr 12 '23 at 17:55
For me, it worked when I scaled the pods down to 0 in the Helm DeploymentConfig details in the OpenShift Console.
