I am running this CronJob every day at 2 AM:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: postgres-backup
spec:
  # Back up the database every day at 2 AM
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: postgres-backup
              image: postgres:10.4
              command:
                - "/bin/sh"
                - -c
                - |
                  pg_dump -Fc -d postgresql://$DBUSER:$DBPASS@$DBHOST:$DBPORT/$DBNAME > /var/backups/backup_$(date +"%d-%m-%Y_%H-%M").bak
              env:
                - name: DBHOST
                  valueFrom:
                    configMapKeyRef:
                      name: dev-db-config
                      key: db_host
                - name: DBPORT
                  valueFrom:
                    configMapKeyRef:
                      name: dev-db-config
                      key: db_port
                - name: DBNAME
                  valueFrom:
                    configMapKeyRef:
                      name: dev-db-config
                      key: db_name
                - name: DBUSER
                  valueFrom:
                    secretKeyRef:
                      name: dev-db-secret
                      key: db_username
                - name: DBPASS
                  valueFrom:
                    secretKeyRef:
                      name: dev-db-secret
                      key: db_password
              volumeMounts:
                - mountPath: /var/backups
                  name: postgres-backup-storage
            - name: postgres-restore
              image: postgres:10.4
              volumeMounts:
                - mountPath: /var/backups
                  name: postgres-backup-storage
          restartPolicy: OnFailure
          volumes:
            - name: postgres-backup-storage
              hostPath:
                # DirectoryOrCreate makes sure the backup directory exists on the node.
                path: /var/volumes/postgres-backups
                type: DirectoryOrCreate
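For completeness, the dev-db-config ConfigMap and dev-db-secret Secret referenced above were created roughly like this (the values here are placeholders, not my real ones):

# ConfigMap with the non-sensitive connection settings
kubectl create configmap dev-db-config \
  --from-literal=db_host=dev-db.example.internal \
  --from-literal=db_port=5432 \
  --from-literal=db_name=mydb

# Secret with the credentials
kubectl create secret generic dev-db-secret \
  --from-literal=db_username=backup_user \
  --from-literal=db_password='<redacted>'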
The Jobs are executing successfully, but what I don't like is that every Job execution creates a new Pod. Is there a way to clean up the previously created (old) Pods? Or is there maybe a way to rerun one and the same Pod/Job every time?
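From skimming the CronJob spec I suspect the history-limit fields are what I want, but I am not sure whether trimming old Jobs also removes their Pods. A minimal sketch of what I would try (successfulJobsHistoryLimit and failedJobsHistoryLimit are the spec fields as I understand them; the defaults are reportedly 3 and 1):

spec:
  schedule: "0 2 * * *"
  # My assumption: these cap how many finished Jobs (and, through garbage
  # collection, their Pods) are kept around after each run.
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1

Alternatively, I could presumably delete finished Pods by hand with kubectl delete pods --field-selector=status.phase==Succeeded, but I would prefer the CronJob to clean up after itself.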