I am new to both Docker and Kubernetes although I understand the basic concepts. I've been submitting a lot of Jobs to Kubernetes and have been trying to find a way to automatically delete the history (there are features to do that for CronJobs but not regular Jobs yet). I found a good answer here but I've been having trouble getting it to work.
Here is a basic Pod that I'm submitting, which is similar to the CronJob I will use once I am done testing. For now it only prints the names of the jobs to delete; once I'm done testing I will append `| xargs kubectl delete job` to the end of the command to perform the deletion. It uses the `wernight/kubectl` image, which provides kubectl.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cleanup-manual
spec:
  containers:
  - name: cleanup-manual-pod
    image: wernight/kubectl
    command: ["get jobs | awk '$4 ~ /^[2-9]d/ || $2 ~ /^1/ {print $1}'"]
```
When I run it, the pod exits with RunContainerError.
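(For what it's worth, I think the failure can be reproduced outside Kubernetes: `command` in a pod spec replaces the image entrypoint and is exec'd directly, with each array element as one argv word, so no shell ever parses the string. `env(1)` also exec's its argument, so this local sketch, with a made-up error message, should hit the same kind of failure:)

```shell
# Kubernetes `command` replaces the image entrypoint and is exec'd directly;
# the whole string is treated as the name of one executable, pipes and all.
# env(1) exec's its argument too, so this fails the same way locally:
env "get jobs | awk '{print \$1}'" 2>&1 || echo "exec failed as expected"
```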
So I have a few questions:
- Is there anything I can check to see why the container failed? `kubectl logs [pod name]` doesn't seem to give me anything.
- In the original answer that I am working off of, the command was `["sh", "-c", "kubectl get jobs | awk '$4 ~ /[2-9]d$/ || $3 ~ 1' | awk '{print $1}' | xargs kubectl delete job"]`. I removed the final `xargs` because I'm just testing right now, and fixed the awk command. I think those two changes of mine are good, but I'm confused why the other command begins with `sh -c kubectl`. If the entrypoint for the image is `kubectl`, then isn't that superfluous? Basically I'd like to know whether my command or the other command is better.
- Anything else that you could provide to help me track down this error would be appreciated!