Strangely, one pod in my Kubernetes cluster keeps crashing while the other doesn't!
codingjediweb-6d77f46b56-5mffg 0/1 CrashLoopBackOff 3 81s
codingjediweb-6d77f46b56-vcr8q 1/1 Running 0 81s
They should both be running the same image and both should work. What could be the reason?
I suspect that the crashing pod is running an old image, but I don't know why. I say this because I fixed an issue and expected the new code to work, and it does work on one of the pods.
Is it possible that different pods of the same deployment are running different images? Is there a way to check which pod is running which image? Is there a way to "flush" an old image or force K8S to download the image even if it is already cached?
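For reference, here is how I think the image per pod can be checked (standard kubectl queries; the pod name below is just one of mine):

kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'
kubectl get pod codingjediweb-6d77f46b56-5mffg -o jsonpath='{.status.containerStatuses[*].imageID}'

The second command prints the resolved image digest, which should differ between the two pods even if both report the same tag.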
UPDATE
After Famen's suggestion, I looked at the image. I can see that the crashing container seems to be using an existing (possibly old) image. How can I make K8S always pull an image? (I have sketched what I think is the relevant setting at the end of this post.)
manuchadha25@cloudshell:~ (copper-frame-262317)$ kubectl get pods
NAME READY STATUS RESTARTS AGE
busybox 1/1 Running 1 2d1h
codingjediweb-6d77f46b56-5mffg 0/1 CrashLoopBackOff 10 29m
codingjediweb-6d77f46b56-vcr8q 1/1 Running 0 29m
manuchadha25@cloudshell:~ (copper-frame-262317)$ kubectl describe pod codingjediweb-6d77f46b56-vcr8q | grep image
Normal Pulling 29m kubelet, gke-codingjediweb-cluste-default-pool-69be8339-wtjt Pulling image "docker.io/manuchadha25/codingjediweb:08072020v3"
Normal Pulled 29m kubelet, gke-codingjediweb-cluste-default-pool-69be8339-wtjt Successfully pulled image "docker.io/manuchadha25/codingjediweb:08072020v3"
manuchadha25@cloudshell:~ (copper-frame-262317)$ kubectl describe pod codingjediweb-6d77f46b56-5mffg | grep image
Normal Pulled 28m (x5 over 30m) kubelet, gke-codingjediweb-cluste-default-pool-69be8339-p5hx Container image "docker.io/manuchadha25/codingjediweb:08072020v3" already present on machine
manuchadha25@cloudshell:~ (copper-frame-262317)$
Also, the working pod has two entries for the image (Pulling and Pulled). Why are there two?
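Regarding my question above about always pulling: my understanding (an assumption based on the Kubernetes documentation, not something I have verified on this cluster yet) is that this is controlled by imagePullPolicy on the container spec, roughly like the fragment below (the deployment name comes from my output; the container name is a guess):

spec:
  template:
    spec:
      containers:
      - name: codingjediweb
        image: docker.io/manuchadha25/codingjediweb:08072020v3
        imagePullPolicy: Always   # pull on every pod start instead of reusing the node's cached image

If that is right, then after changing it, something like kubectl rollout restart deployment codingjediweb should recreate the pods so that the image gets pulled fresh.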