4

I usually restart my applications by:

kubectl scale deployment my-app --replicas=0

Followed by:

kubectl scale deployment my-app --replicas=1

which works fine. I also have another running application, but when I look at its replicaset I see:

$ kubectl get rs
NAME                                        DESIRED   CURRENT   READY     AGE
another-app                                 2         2         2         2d

So to restart that correctly I would of course need to:

kubectl scale deployment another-app --replicas=0
kubectl scale deployment another-app --replicas=2

But is there a better way to do this so I don't have to manually look at the replicasets before scaling/restarting my application (which might have replicas > 1)?
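
In other words, I would rather not have to script something like the following myself just to preserve the replica count (a rough sketch only, using the another-app deployment from above):

REPLICAS=$(kubectl get deployment another-app -o jsonpath='{.spec.replicas}')
kubectl scale deployment another-app --replicas=0
kubectl scale deployment another-app --replicas=$REPLICAS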

u123
  • Hmmm I would say it depends on the **why**. Why are you restarting? Is it in order to update it? To apply new configuration/secret? Or just because it's not responding? – Marc ABOUCHACRA May 13 '20 at 08:32
  • It's to apply new configuration, e.g. changes to the deployment config, secrets, configmaps, etc. – u123 May 13 '20 at 08:46
  • Then I'm afraid there is no *correct* solution for now and what you're doing is ok. You can check this thread for more info https://stackoverflow.com/questions/37317003/restart-pods-when-configmap-updates-in-kubernetes – Marc ABOUCHACRA May 13 '20 at 09:20

2 Answers

6

You can restart pods by deleting them using a label selector:

kubectl delete pods -l name=myLabel
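
If you are not sure which pods the selector matches, you can list them with the same label first (name=myLabel here is just the example label from the command above):

kubectl get pods -l name=myLabel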

You can also do a rolling restart of all pods for a deployment, so that you don't take the service down:

kubectl patch deployment your_deployment_name -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"$(date +%s)\"}}}}}"

And from Kubernetes version 1.15 onwards you can simply run:

kubectl rollout restart deployment your_deployment_name
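
If you want to wait until the restart has finished, you can follow it with:

kubectl rollout status deployment your_deployment_name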
hoque
  • But deleting a pod will not pick up any changes that might have been applied to the deployment config. – u123 May 13 '20 at 08:46
  • When the pod is recreated I think it will use the latest template from the deployment – hoque May 13 '20 at 08:53
  • No. If I change e.g. readiness checks in my deployment, they are not updated when a pod is restarted after it's deleted. – u123 May 13 '20 at 09:24
  • Note that the `kubectl rollout restart` command doesn't need any server-side support; so long as your local Kubernetes CLI is new enough, you can use that command with an arbitrarily old (compatible, supported) cluster. – David Maze May 13 '20 at 11:52
  • If you're changing things like readiness checks in the Deployment object it should recreate the Pods on its own. – David Maze May 13 '20 at 11:52
4

To make changes in your current deployment you can use kubectl rollout pause deployment/YOUR_DEPLOYMENT. This way the deployment will be marked as paused and won't be reconciled by the controller. After it's paused you can make the necessary changes to your configuration and then resume it with kubectl rollout resume deployment/YOUR_DEPLOYMENT. This will create a new ReplicaSet with the updated configuration.
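
A minimal sketch of that flow, with kubectl set image standing in for whatever change you need to make (the container and image names are placeholders):

kubectl rollout pause deployment/YOUR_DEPLOYMENT
kubectl set image deployment/YOUR_DEPLOYMENT your-container=your-image:new-tag
kubectl rollout resume deployment/YOUR_DEPLOYMENT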

Pods with the new configuration will be started, and once they are in Running status, the pods with the old configuration will be terminated.

Using this method you will also be able to roll the deployment back to a previous version. Use:

kubectl rollout history deployment/YOUR_DEPLOYMENT

to check the history of the rollouts, and then execute the following command to roll back:

kubectl rollout undo deployment/YOUR_DEPLOYMENT --to-revision=REVISION_NO
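
If you want to inspect a particular revision before undoing, kubectl rollout history also accepts a --revision flag, e.g.:

kubectl rollout history deployment/YOUR_DEPLOYMENT --revision=2

(the revision number here is only an example)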
kool