11

I have 6 replicas of a pod running which I would like to restart/recreate every 5 minutes.

This needs to be a rolling update, so that the pods are not all terminated at once and there is no downtime. How do I achieve this?

I tried using a CronJob, but it doesn't seem to be working:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: scheduled-pods-recreate
spec:
  schedule: "*/5 * * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: ja-engine
            image: app-image
            imagePullPolicy: IfNotPresent
          restartPolicy: OnFailure

Although the CronJob was created successfully and scheduled as per the description below, it seems to have never run:

Name:                       scheduled-pods-recreate
Namespace:                  jk-test
Labels:                     <none>
Annotations:                <none>
Schedule:                   */5 * * * *
Concurrency Policy:         Forbid
Suspend:                    False
Starting Deadline Seconds:  <unset>
Selector:                   <unset>
Parallelism:                <unset>
Completions:                <unset>
Pod Template:
  Labels:  <none>
  Containers:
   ja-engine:
    Image:           image_url
    Port:            <none>
    Host Port:       <none>
    Environment:     <none>
    Mounts:          <none>
  Volumes:           <none>
Last Schedule Time:  Tue, 19 Feb 2019 10:10:00 +0100
Active Jobs:         scheduled-pods-recreate-1550567400
Events:
  Type    Reason            Age   From                Message
  ----    ------            ----  ----                -------
  Normal  SuccessfulCreate  23m   cronjob-controller  Created job scheduled-pods-recreate-1550567400

So, first of all: how do I ensure that it is actually running so that the pods are recreated?

Also, how can I ensure that there is no downtime?

The updated version of the CronJob:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test
          restartPolicy: OnFailure

The pods are not starting, with the message Back-off restarting failed container and the error given below:

State:          Terminated
  Reason:       Error
  Exit Code:    127
– Chillax

2 Answers

16

Starting with Kubernetes 1.15, you can use the following command to perform a rolling restart:

kubectl rollout restart deployment <deployment name>
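
To run this every 5 minutes as the question asks, one option is a CronJob that invokes kubectl inside the cluster. Below is a minimal sketch, assuming an image that ships kubectl (such as bitnami/kubectl, as suggested in the comments below) and a ServiceAccount named deployment-restarter bound to a Role that allows patching the target Deployment (RBAC objects not shown); the names are illustrative, and the kubectl rollout restart subcommand requires kubectl 1.15 or newer:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: scheduled-rolling-restart
spec:
  schedule: "*/5 * * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          # assumed ServiceAccount with RBAC permission to patch deployments
          serviceAccountName: deployment-restarter
          containers:
          - name: kubectl
            image: bitnami/kubectl
            command:
            - kubectl
            - rollout
            - restart
            - deployment/runners
            - -n
            - jp-test
          restartPolicy: OnFailure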
3

There is no rolling-restart functionality in Kubernetes at the moment, but you can use the following command as a workaround to restart all pods in a specific deployment (replace the deployment name and container name with the real ones):

kubectl patch deployment mydeployment -p '{"spec":{"template":{"spec":{"containers":[{"name":"my-pod-name","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}'
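
Since the patch changes the pod template (the START_TIME environment variable), the Deployment controller replaces the pods gradually according to its update strategy rather than all at once, assuming the RollingUpdate strategy described below. To watch the rollout progress you can run, for example:

kubectl rollout status deployment mydeployment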

To schedule it, you can create a cron task on the master node to run this command periodically.
The user owning the task should have a correct kubectl configuration (~/.kube/config) with permissions to change the mentioned Deployment object.

The default cluster admin configuration (usually created by kubeadm init) can be copied from /etc/kubernetes/admin.conf:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
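
As a sketch, the scheduled task could then be a regular crontab entry for that user, reusing the patch command above (the deployment and container names are the same placeholders; note that % must be escaped, because cron treats an unescaped % as a newline):

# run every 5 minutes; \% because % is special in crontab entries
*/5 * * * * kubectl patch deployment mydeployment -p '{"spec":{"template":{"spec":{"containers":[{"name":"my-pod-name","env":[{"name":"START_TIME","value":"'$(date +\%s)'"}]}]}}}}'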

Two types of Deployment update strategy can be specified: Recreate (.spec.strategy.type==Recreate) and RollingUpdate (.spec.strategy.type==RollingUpdate).

Only the RollingUpdate strategy avoids service downtime. You can specify the maxUnavailable and maxSurge parameters in the Deployment YAML to control the rolling update process.
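
For illustration, a Deployment spec fragment with an explicit rolling update policy could look like the following (the values are examples; with 6 replicas, maxUnavailable: 1 keeps at least 5 pods serving while each pod is replaced in turn):

spec:
  replicas: 6
  strategy:
    type: RollingUpdate
    rollingUpdate:
      # at most 1 pod below the desired count during the update
      maxUnavailable: 1
      # at most 1 extra pod above the desired count during the update
      maxSurge: 1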

– VAS
  • I would prefer to have this as part of the Helm chart itself, and thus as a CronJob YAML file – Chillax Feb 19 '19 at 13:29
  • This works though, so if I have the strategy set to RollingUpdate and do a patch deployment, does it do a rolling update ? – Chillax Feb 19 '19 at 14:32
  • Yes, when the container specification changes, the Deployment starts restarting the pods. – VAS Feb 19 '19 at 14:35
  • You can also run kubectl patch command from a CronJob, you just need to access cluster api-server from the CronJob Pod with kubectl or curl inside. For kubectl you need correct config, for curl you need authentication token. You can find a good example in the answer to this question: https://stackoverflow.com/questions/42642170/kubernetes-how-to-run-kubectl-commands-inside-a-container – VAS Feb 19 '19 at 15:30
  • So I have decided to use the patch deployment, but can't really get it to run as a cronjob - updated my question too - am I missing something ? – Chillax Feb 26 '19 at 21:25
  • I doubt you have the kubectl binary inside the default busybox image, but I'm pretty sure you can find curl inside it. If you want to try kubectl, you can use bitnami/kubectl or lachlanevenson/k8s-kubectl – VAS Feb 27 '19 at 15:23