I have a CronJob that keeps restarting, despite its restartPolicy being set to Never:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cron-zombie-pod-killer
spec:
  schedule: "*/9 * * * *"
  successfulJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        metadata:
          name: cron-zombie-pod-killer
        spec:
          containers:
            - name: cron-zombie-pod-killer
              image: bitnami/kubectl
              command:
                - "/bin/sh"
              args:
                - "-c"
                - "kubectl get pods --all-namespaces --field-selector=status.phase=Failed | awk '{print $2 \" --namespace=\" $1}' | xargs kubectl delete pod > /dev/null"
          serviceAccountName: pod-read-and-delete
          restartPolicy: Never
I would expect it to run every 9th minute, but that's not the case. What happens is that when there are pods to clean up (i.e. when there's something for the pod to do), it runs normally. Once everything is cleaned up, it keeps restarting, failing, and starting again in a loop, roughly every second.
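For what it's worth, this is roughly how I'm watching the loop happen (the <hash> suffix below is a placeholder for the generated pod name in my cluster, and I'm running these in the namespace the CronJob lives in):

# watch the pods the CronJob's Jobs create: one appears, goes to Error, and is replaced
$ kubectl get pods --watch | grep cron-zombie-pod-killer

# then inspect one of the failed pods
$ kubectl describe pod cron-zombie-pod-killer-<hash>
$ kubectl logs cron-zombie-pod-killer-<hash>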
Is there something I need to do to tell Kubernetes that the job succeeded, even when there's nothing to do (no pods to clean up)? And what is causing the job to loop through restarts and failures?