
Having recently updated to Kubernetes 1.22.1, we are now experiencing an issue where the existing cron jobs no longer delete their Pods once they complete. I have tried adding the following:

successfulJobsHistoryLimit: 3
failedJobsHistoryLimit: 5

to the YAML, but it has had no effect.
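For reference, these two fields belong at the top level of the CronJob spec, alongside schedule, not inside the job template; a minimal sketch of the placement:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  successfulJobsHistoryLimit: 3   # retain only the 3 most recent successful Jobs
  failedJobsHistoryLimit: 5       # retain only the 5 most recent failed Jobs
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["/bin/sh", "-c", "date"]
          restartPolicy: OnFailure

Deleting a Job normally cascades to its Pods via the garbage collector, so trimming the Job history should remove the Pods as well.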

I have also used a simple cron job example:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure

After 5 minutes, all the Pods are still there:

kubectl get pods

NAME                      READY   STATUS      RESTARTS   AGE
hello-27304804--1-q545h   0/1     Completed   0          5m21s
hello-27304805--1-b6ksd   0/1     Completed   0          4m21s
hello-27304806--1-dsvb7   0/1     Completed   0          3m21s
hello-27304807--1-bqnjg   0/1     Completed   0          2m21s
hello-27304808--1-dsv6p   0/1     Completed   0          81s
hello-27304809--1-99cx4   0/1     Completed   0          21s

kubectl get jobs

NAME             COMPLETIONS   DURATION   AGE
hello-27304828   1/1           1s         2m59s
hello-27304829   1/1           2s         119s
hello-27304830   1/1           2s         59s
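
Note that the Jobs themselves are being trimmed to the history limit (only the three most recent remain); it is only the completed Pods that linger. For reference, ttlSecondsAfterFinished, which I also tried per the suggestions in the comments, goes on the Job spec inside jobTemplate and asks the TTL-after-finished controller to delete a finished Job together with its Pods; a minimal sketch (the 120-second value is just an example):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      ttlSecondsAfterFinished: 120   # TTL controller deletes the Job (and its Pods) ~2 min after it finishes
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["/bin/sh", "-c", "date"]
          restartPolicy: OnFailure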
  • Can you post the output of `kubectl get jobs` (and format the output of `kubectl get pods`)? – weibeld Nov 30 '21 at 16:18
  • I've formatted the output; hopefully it's now more readable. – cjw-k8 Nov 30 '21 at 16:32
  • check this: https://stackoverflow.com/questions/70156787/how-to-have-only-one-pod-created-for-by-cronjob/70156910 – Vüsal Nov 30 '21 at 17:13
  • Does this answer your question? [how to have only one Pod created for/by Cronjob](https://stackoverflow.com/questions/70156787/how-to-have-only-one-pod-created-for-by-cronjob) – Vüsal Nov 30 '21 at 17:14
  • Yes, I have tried all the suggestions in the mentioned post, adding activeDeadlineSeconds, ttlSecondsAfterFinished and successfulJobsHistoryLimit - nothing seems to work – cjw-k8 Dec 01 '21 at 11:29
  • same issue for me in 1.22 – Alex Weitz Oct 26 '22 at 03:03
  • Did anyone find a solution for this? We're on 1.27 and the jobs are correctly removed; just the pods remain – davidgiga1993 Jul 11 '23 at 12:18
  • Looks like this is a bug: https://github.com/kubernetes/kubernetes/issues/74741 I've built a utility to clean up the old pods: https://github.com/davidgiga1993/cronjob-pod-cleaner – davidgiga1993 Jul 11 '23 at 13:39
  • Found the root cause for us: Calico v3.26.0 causes the k8s gc to stop working: https://github.com/kubernetes/kubernetes/issues/118753 – davidgiga1993 Jul 12 '23 at 12:32
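
As an interim manual cleanup, in the spirit of the cleaner utility linked above, completed Pods can be deleted with a field selector; a sketch, with the namespace as a placeholder:

kubectl delete pods --field-selector=status.phase=Succeeded -n <namespace>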

1 Answer


A workaround is to set a limit on the number of Pods in a namespace via a ResourceQuota and run the cron jobs in that namespace. Hope this helps! src: Unable to run .netcore app as kubernetes cronjob
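
A minimal sketch of such a quota (the name, namespace, and pod count are placeholders):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: cronjob-pod-quota    # placeholder name
  namespace: cron-jobs       # placeholder: the namespace holding the CronJobs
spec:
  hard:
    pods: "10"               # hard cap on Pods in a non-terminal state in this namespace

Note that the pods quota only caps creation of new Pods, and per the Kubernetes docs it counts Pods in a non-terminal state; it does not delete Pods that are already Completed.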