
I have a Kubernetes cluster where the same application runs several times, each instance in its own namespace. Imagine:

ns=app1 name=app1
ns=app2 name=app2
ns=app3 name=app3
[...]
ns=app99 name=app99

Now I need to run a job every 10 minutes in all of those pods. The script path is the same every time.

Is there a 'best way' to achieve this?

I was thinking of a kubectl image running as a 'CronJob' kind, with something like this:

kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace} {.metadata.name} {.spec.containers[0].image}{"\n"}{end}' \
  | awk '$3=="registry.local/app-v1"{print $1,$2}' \
  | xargs -n2 sh -c 'kubectl exec -n "$0" "$1" -- /usr/bin/scrub.sh'

but I am pretty sure this is not the right way to go about this.
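For reference, here is a minimal sketch of that "kubectl image as a CronJob" idea as a manifest. This is untested; the image, the service-account name `scrub-runner`, and the script path are assumptions, and the service account needs RBAC permission to list pods cluster-wide and to create `pods/exec`:

```yaml
# Hedged sketch, not a tested manifest. Image, service-account name,
# target image name, and script path are assumptions from the question.
apiVersion: batch/v1            # use batch/v1beta1 on clusters older than 1.21
kind: CronJob
metadata:
  name: scrub-all-apps
spec:
  schedule: "*/10 * * * *"      # every 10 minutes
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: scrub-runner   # assumed SA with list-pods and pods/exec RBAC
          restartPolicy: OnFailure
          containers:
            - name: kubectl
              image: bitnami/kubectl:latest  # any image that ships kubectl
              command: ["/bin/sh", "-c"]
              args:
                - |
                  kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace} {.metadata.name} {.spec.containers[0].image}{"\n"}{end}' \
                    | awk '$3=="registry.local/app-v1"{print $1,$2}' \
                    | while read -r ns pod; do
                        kubectl exec -n "$ns" "$pod" -- /usr/bin/scrub.sh
                      done
```

The advantage of this layout over one CronJob per namespace is that only a single extra pod is created per run, instead of one per application namespace.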

LucidEx
  • Why not create a CronJob in each of the namespaces? – Arghya Sadhu Jul 22 '20 at 08:20
  • Because if I understand correctly, the CronJob will spawn a pod every time it runs, which would add a lot of stress to the storage, especially if you spawn 100 pods every 10 minutes. I'd really like to avoid that. – LucidEx Jul 22 '20 at 08:31
  • I would say the same as @ArghyaSadhu: even if you spawn 100 pods every 10 minutes, they would do their work and then be removed. Take a look at this older [Stack Overflow question](https://stackoverflow.com/questions/41192053/cron-jobs-in-kubernetes-connect-to-existing-pod-execute-script); a similar question was asked there. As far as I checked, there is no 'best way', but one of the answers is very much like your thinking, so I think this should work. – Jakub Jul 23 '20 at 10:44
  • Thanks @jt97 for the pointer to that question. I'll probably roll with a bash loop/routine instead of that Python code snippet, but I'll go with that approach. Concerning the storage: it would be fine if it were some cloud storage I don't have to care about. But since it's a shared Ceph storage with all its overhead (especially RAM and CPU) when you claim a volume, and the volumes need to be zeroed on delete, creating/deleting 100 storage claims every 10 minutes just isn't viable in my environment. – LucidEx Jul 23 '20 at 12:06
  • @LucidEx Sure, I will add this as a community wiki answer so that anyone looking for a similar question can find something useful here. If you have something to add, please edit my answer. – Jakub Jul 23 '20 at 13:55

1 Answer


As mentioned by me and @ArghyaSadhu, one option would be to create CronJobs in all the namespaces, but that would spawn 100 pods every 10 minutes. As @LucidEx mentioned, that would be fine with storage in the cloud, but not in his environment:

> Concerning the storage: it would be fine if it were some cloud storage I don't have to care about. But since it's a shared Ceph storage with all its overhead (especially RAM and CPU) when you claim a volume, and the volumes need to be zeroed on delete, creating/deleting 100 storage claims every 10 minutes just isn't viable in my environment. – LucidEx


More options can be found in this older [Stack Overflow question](https://stackoverflow.com/questions/41192053/cron-jobs-in-kubernetes-connect-to-existing-pod-execute-script), where a similar question was asked.

As @LucidEx mentioned:

> I'll probably roll with a bash loop/routine instead of that python code snippet but will go with that approach.

That Python code snippet is in one of the answers to the linked question.
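The bash loop/routine LucidEx mentions could look something like the sketch below. This is a hedged sketch, not his actual script: the `app1`..`app99` namespace pattern comes from the question, and picking the first pod in each namespace is an assumption (narrow it with a label selector if one pod per namespace is not guaranteed).

```shell
#!/bin/sh
# Sketch of the "bash loop" approach: run the scrub script in one pod of
# each appNN namespace. Namespace pattern and pod selection are assumptions.
for i in $(seq 1 99); do
  ns="app$i"
  # take the first pod in the namespace; add -l <label> to narrow it down
  pod=$(kubectl get pods -n "$ns" -o jsonpath='{.items[0].metadata.name}')
  [ -n "$pod" ] && kubectl exec -n "$ns" "$pod" -- /usr/bin/scrub.sh &
done
wait  # the execs run in parallel; wait for all of them to finish
```

Backgrounding each `kubectl exec` keeps the total run time bounded by the slowest pod rather than the sum of all 99, which matters with a 10-minute schedule.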

Jakub