
I have a stateful set with persistent volume. If I do kubectl rollout restart statefulset pods are recreated and persistent volumes are reclaimed, as expected.

However, I need to create a cron job that periodically performs a rolling restart of the pods and also cleans (or recreates) their persistent volumes. What I want to achieve is a job that stops one pod at a time, cleans/deletes its persistent volume, then creates a new pod (or reuses the same one) with an empty PVC, and only after that pod is up proceeds to the next one.
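Something like this sketch is what I have in mind (the StatefulSet name and the `data-` PVC name pattern are placeholders; it assumes a volumeClaimTemplate called `data`):

```shell
#!/bin/sh
# Sketch of the intended behavior, not a tested solution.
STS=my-statefulset            # placeholder StatefulSet name
REPLICAS=$(kubectl get sts "$STS" -o jsonpath='{.spec.replicas}')

for i in $(seq $((REPLICAS - 1)) -1 0); do
  # Deleting the PVC is blocked by the pvc-protection finalizer
  # until the pod that mounts it is gone, so don't wait here.
  kubectl delete pvc "data-$STS-$i" --wait=false
  kubectl delete pod "$STS-$i"
  # The StatefulSet controller recreates the pod and, since the old
  # PVC is gone, provisions a fresh one from the volumeClaimTemplate.
  kubectl wait --for=condition=Ready "pod/$STS-$i" --timeout=300s
done
```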

Deleting the StatefulSet and creating it again is not an option because I need zero downtime (assuming there is more than one pod in the set).

One of the options I considered was to configure the cron job to first patch the existing configuration so that the persistent volume is removed during a rollout restart, perform the actual rollout restart, and then patch it back to revert the change. However, I'm not sure which property I should change for that, and I also wanted to make sure there isn't an easier way to achieve similar behavior.

Ruslan Akhundov
  • Did you manage to find solution ? – Malgorzata Sep 09 '20 at 08:49
  • @Malgorzata not yet. – Ruslan Akhundov Sep 09 '20 at 08:50
  • What about adding an InitContaner to the StatefulSet template with the script that check some external condition (for ex: existence of the file podname.txt on NFS volume) and if it's `true` then script clears the Pod's PV and make the condition `false` (by deleting podname.txt on NFS volume). If condition is `false` script just exit with 0 code. Then you can create a CronJob that set the condition to `true` for all StatefulSet Pods and run `kubectl rollout restart` – VAS Nov 26 '20 at 15:42
  • @VAS yes, that's something that resembles the approach I ended up using eventually. – Ruslan Akhundov Nov 26 '20 at 15:43
  • If it works well, could you please share your solution in another answer with more details? I would be glad to upvote it. – VAS Nov 26 '20 at 15:57
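A rough sketch of the initContainer idea from the comments (the marker-file path, image, and volume names are made up for illustration):

```yaml
# Illustrative only: wipes the pod's data volume when a marker file
# for this pod exists on a shared NFS volume, then removes the marker.
initContainers:
  - name: maybe-clean-data
    image: busybox
    command:
      - sh
      - -c
      - |
        MARKER="/signals/${POD_NAME}.txt"
        if [ -f "$MARKER" ]; then
          rm -rf /data/*      # clear the persistent volume's contents
          rm -f "$MARKER"     # reset the condition to "false"
        fi
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
    volumeMounts:
      - name: data            # the PV to clean
        mountPath: /data
      - name: signals         # shared NFS volume holding marker files
        mountPath: /signals
```

The CronJob would then only need to create the marker files and run `kubectl rollout restart`.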

1 Answer


If you use dynamic provisioning, add

reclaimPolicy: Delete

to your StorageClass definition; dynamically provisioned PersistentVolumes inherit the reclaim policy set there. Delete is the default reclaimPolicy for dynamic provisioning, but make sure no other value has been set. For existing PersistentVolumes you can change the spec.persistentVolumeReclaimPolicy field with the kubectl patch command.
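For example (`<pv-name>` is a placeholder for one of your PersistentVolumes):

```shell
# Check the current reclaim policies
kubectl get storageclass
kubectl get pv

# Change the reclaim policy of an existing PersistentVolume
kubectl patch pv <pv-name> \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
```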

StatefulSets have four update strategies:

  • OnDelete
  • RollingUpdate
  • Partitions
  • Forced Rollback

Definition of Partition strategy:

If a partition is specified, all Pods with an ordinal that is greater than or equal to the partition will be updated when the StatefulSet’s .spec.template is updated. All Pods with an ordinal that is less than the partition will not be updated, and, even if they are deleted, they will be recreated at the previous version. If a StatefulSet’s .spec.updateStrategy.rollingUpdate.partition is greater than its .spec.replicas, updates to its .spec.template will not be propagated to its Pods. In most cases you will not need to use a partition, but they are useful if you want to stage an update, roll out a canary, or perform a phased roll out.

For example, if you have set updateStrategy.rollingUpdate.partition: 2 in the StatefulSet spec, a rollout will update only the Pods with ordinal 2 or higher; Pods 0 and 1 will stay at the previous revision.
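In manifest form (fragment only, other fields omitted; the name is a placeholder):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2   # only Pods with ordinal >= 2 are updated
```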

Take a look: sts-rolling-update-strategy, sts-restart-rolling-update.

Malgorzata