
I have a MySQL pod running in my cluster.
I need to temporarily pause the pod without deleting it, similar to Docker, where the docker stop <container-id> command stops the container without deleting it.
Are there any commands available in Kubernetes to pause/stop a pod?

AATHITH RAJENDRAN

4 Answers


So, as others have pointed out, Kubernetes doesn't support stopping/pausing the current state of a pod and resuming it when needed. However, you can still achieve much the same effect by leaving the deployment in place with no running pods, i.e. setting the number of replicas to 0:

kubectl scale --replicas=0 deployment/<your-deployment>
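A round-trip "pause/resume" using this can be sketched as follows (assuming a Deployment named mysql in the current namespace; the name is illustrative):

```shell
# Record the current replica count so it can be restored later.
REPLICAS=$(kubectl get deployment mysql -o jsonpath='{.spec.replicas}')

# "Pause": scale to zero. The pod is terminated, but the Deployment,
# its pod template, and any PersistentVolumeClaims remain in place.
kubectl scale --replicas=0 deployment/mysql

# "Resume": scale back to the recorded count.
kubectl scale --replicas="$REPLICAS" deployment/mysql
```

Note that for MySQL this only preserves data if the pod stores it on a PersistentVolume; anything on the container filesystem is lost when the pod terminates.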

See the help:

kubectl scale --help
# Set a new size for a Deployment, ReplicaSet, Replication Controller, or StatefulSet.

Scale also allows users to specify one or more preconditions for the scale action.

If --current-replicas or --resource-version is specified, it is validated before the scale is attempted, and it is guaranteed that the precondition holds true when the scale is sent to the server.

Examples:

  # Scale a replicaset named 'foo' to 3.
  kubectl scale --replicas=3 rs/foo

  # Scale a resource identified by type and name specified in "foo.yaml" to 3.
  kubectl scale --replicas=3 -f foo.yaml

  # If the deployment named mysql's current size is 2, scale mysql to 3.
  kubectl scale --current-replicas=2 --replicas=3 deployment/mysql

  # Scale multiple replication controllers.
  kubectl scale --replicas=5 rc/foo rc/bar rc/baz

  # Scale statefulset named 'web' to 3.
  kubectl scale --replicas=3 statefulset/web

sulabh chaturvedi

No. It is not possible to stop a pod and resume it later when required. However, you can consider the approach below.

In Kubernetes, pods are exposed through a Service. One way to isolate the pod(s) is to update the pod selector in the Service definition; that way you control whether traffic reaches the pod(s) through the Service. Whenever you want to restore traffic, change the pod selector back to its original value in the Service definition.
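The selector swap can be sketched with a minimal Service manifest (names and labels here are hypothetical):

```yaml
# Service fronting the MySQL pod. Changing the selector to a label that
# no pod carries (e.g. app: mysql-paused) empties the Service's endpoints,
# cutting traffic without touching the pod itself.
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql        # change to e.g. "mysql-paused" to cut traffic
  ports:
    - port: 3306
      targetPort: 3306
```

To toggle this without editing a file, something like kubectl patch service mysql -p '{"spec":{"selector":{"app":"mysql-paused"}}}' should do the same (again, hypothetical names). Note the pod keeps running either way; only Service traffic is cut.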

P Ekambaram
  • Too bad it doesn't help pods whose source is Kafka/Event Hubs (I wish to pause them, launch new pods, and if that fails, resume the old ones). – Martin Kosicky Jun 28 '19 at 11:58
  • Hi @MartinKosicky, that's exactly my use case: I have a container consuming from Event Hubs through the Kafka protocol. Did you find a solution? The only workarounds that come to my mind are ugly (change credentials, multiple processes inside a container instead of multiple containers in a pod, etc.). – karlos9o Jan 13 '20 at 11:19
  • @karlos9o We actually just deleted the old pods; since it's Event Hubs/Kafka-sourced, zero downtime is not so important here. But if you really want it, you can run kubectl apply with some configuration change (pause processing). That should trigger a redeploy of the pods. – Martin Kosicky Jan 14 '20 at 18:54
  • Services are not meant to be abstractions of pods. Services are just a way to route different types of network traffic and do port mapping. Scaling the number of replicas of a Deployment, and specifically its ReplicaSet, to 0 (if there is no HPA present) would actually delete all instances of the pod. – iamnicoj Dec 05 '21 at 16:31
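The apply-a-config-change workaround mentioned in the comments can be sketched as follows (deployment name and manifest path are hypothetical; kubectl rollout restart requires kubectl 1.15+):

```shell
# Option A: change an env var or ConfigMap value in the manifest and
# re-apply it; any change to the pod template triggers a rolling
# replacement of the pods.
kubectl apply -f deployment.yaml

# Option B: force a rolling restart without editing the manifest,
# then wait for the new pods to become ready.
kubectl rollout restart deployment/<your-deployment>
kubectl rollout status deployment/<your-deployment>
```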

With Kubernetes, it's not possible to stop/pause a Pod. However, you can delete a Pod, provided you have the manifest to bring it back again.

If you want to delete a Pod knowing that it will immediately be launched again by the cluster (because it is managed by a controller such as a Deployment), run the following kubectl command:

kubectl delete -n default pod <your-pod-name>
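For a standalone ("naked") Pod that no controller will recreate, a sketch of saving the manifest first so the Pod can be brought back later:

```shell
# Dump the live Pod manifest before deleting it (the dump includes
# status and other server-populated fields, which are ignored when
# the Pod is re-created).
kubectl get pod <your-pod-name> -n default -o yaml > pod.yaml
kubectl delete -n default pod <your-pod-name>

# Later, bring it back from the saved manifest:
kubectl apply -f pod.yaml
```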

Anjana Silva

For me it worked when I scaled the pods down to 0 in the DeploymentConfig details in the OpenShift Console (the app was deployed with Helm).
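The same thing can be done from the OpenShift CLI; a sketch assuming a DeploymentConfig named mysql:

```shell
# Scale the DeploymentConfig to zero to "pause"...
oc scale dc/mysql --replicas=0

# ...and back up to resume.
oc scale dc/mysql --replicas=1
```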

Riho