Having recently updated to version 1.22.1, we are now experiencing an issue where existing cron jobs no longer delete their pods once they complete. I have tried adding the following to the YAML, but it has had no effect:

successfulJobsHistoryLimit: 3
failedJobsHistoryLimit: 5
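For reference, here is where I placed them; as far as I can tell from the docs, both fields belong at the top level of the CronJob spec, next to schedule (the jobTemplate itself is unchanged, so I have elided it here):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  successfulJobsHistoryLimit: 3   # keep only the 3 most recent successful Jobs
  failedJobsHistoryLimit: 5       # keep only the 5 most recent failed Jobs
  jobTemplate:
    # ... same job template as in the full example below ...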
I have also used a simple cron job example:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
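I apply it in the usual way (the file name is just what I use locally):

kubectl apply -f hello-cronjob.yaml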
After 5 minutes, all of the pods are still there:
kubectl get pods
NAME                      READY   STATUS      RESTARTS   AGE
hello-27304804--1-q545h   0/1     Completed   0          5m21s
hello-27304805--1-b6ksd   0/1     Completed   0          4m21s
hello-27304806--1-dsvb7   0/1     Completed   0          3m21s
hello-27304807--1-bqnjg   0/1     Completed   0          2m21s
hello-27304808--1-dsv6p   0/1     Completed   0          81s
hello-27304809--1-99cx4   0/1     Completed   0          21s
kubectl get jobs
NAME             COMPLETIONS   DURATION   AGE
hello-27304828   1/1           1s         2m59s
hello-27304829   1/1           2s         119s
hello-27304830   1/1           2s         59s
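Interestingly, kubectl get jobs only shows the three most recent jobs, so the history limit appears to be honored at the Job level; it is the pods that are left behind. To double-check that the limits were actually persisted on the live object, I read them back with jsonpath (hello here stands in for my real cron job's name):

kubectl get cronjob hello -o jsonpath='{.spec.successfulJobsHistoryLimit}{"\t"}{.spec.failedJobsHistoryLimit}{"\n"}'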