I am new to Kubernetes and still learning, and I am stuck on an error I cannot find an explanation for. I am running Pods and Deployments in my cluster, and they start fine according to the CLI, but after a while they crash and the Pods restart repeatedly.
I did some research before posting here, and the way I understood it, I should create a Deployment so that its ReplicaSet manages the Pod lifecycle, rather than deploying Pods independently. But as you can see below, the Pods managed by the Deployment are crashing as well.
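For reference, the standalone Pod (operator-pod) was created with something like this (the exact flags may differ slightly from what I originally ran):

kubectl run operator-pod --image=operator-api_1:java --port=80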
kubectl get pods

NAME                        READY   STATUS             RESTARTS   AGE
operator-5bf8c8484c-fcmnp   0/1     CrashLoopBackOff   9          34m
operator-5bf8c8484c-phptp   0/1     CrashLoopBackOff   9          34m
operator-5bf8c8484c-wh7hm   0/1     CrashLoopBackOff   9          34m
operator-pod                0/1     CrashLoopBackOff   12         49m
kubectl describe pods operator

Events:
  Type     Reason     Age                   From                  Message
  ----     ------     ----                  ----                  -------
  Normal   Scheduled  <unknown>             default-scheduler     Successfully assigned default/operator-pod to workernode
  Normal   Created    30m (x5 over 34m)     kubelet, workernode   Created container operator-pod
  Normal   Started    30m (x5 over 34m)     kubelet, workernode   Started container operator-pod
  Normal   Pulled     29m (x6 over 34m)     kubelet, workernode   Container image "operator-api_1:java" already present on machine
  Warning  BackOff    4m5s (x101 over 33m)  kubelet, workernode   Back-off restarting failed container
Here is my Deployment YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: operator
  labels:
    app: java
spec:
  replicas: 3
  selector:
    matchLabels:
      app: call
  template:
    metadata:
      labels:
        app: call
    spec:
      containers:
      - name: operatorapi
        image: operator-api_1:java
        ports:
        - containerPort: 80
Can someone help me out? How can I debug this further?
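Is checking the container logs the right next step? This is what I was planning to run next (using the Pod name from the output above), though I am not sure how to interpret the results:

kubectl logs operator-pod
kubectl logs operator-pod --previous
kubectl get events --sort-by=.metadata.creationTimestamp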