I have a simple Docker image that works fine locally. It is basically the same as the example on Apache's httpd page.
FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/
As per the page example, I can build and run my image as follows:
$ docker build -t gcr.io/${PROJECT_ID}/hello-app:v1 .
$ docker run -dit --name my-running-app -p 8080:80 <img_id>
I then head over to http://localhost:8080, and everything works as it should.
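By "working" I mean, for example, that a quick sanity check like the following returns a 200 from Apache (curl -I just sends a HEAD request):
$ curl -I http://localhost:8080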
However, when I try to create a deployment on my Google Cloud Kubernetes cluster, the pod fails and ends up in the CrashLoopBackOff state. (This is after I have pushed the image to Google Container Registry, so that the deployment can pull the image from there.)
I think this CrashLoopBackOff problem is happening because my container has no ENTRYPOINT; i.e., the pod spawns, no command is issued, and the container then completes and crashes.
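If that assessment is correct, I assume the fix would look something like the Dockerfile below. (The CMD value is my assumption of what the httpd:2.4 base image runs by default; I have not verified it against the official Dockerfile.)
FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/
# assumed explicit default command; httpd-foreground runs Apache in the foreground
CMD ["httpd-foreground"]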
I have two questions, then:
- What command should I add to my Dockerfile to get the http server up and running on the pod (assuming my assessment of the problem is indeed correct)?
- How is this running locally? Locally I simply run
$ docker run -dit --name my-running-app -p 8080:80 <img_id>
without ever specifying that the container should run httpd, yet it does. How is this happening? (See the sketch just after this list.)
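On the second question, my working assumption is that the base image defines a default command which my image inherits. If docker inspect's --format flag behaves the way I think it does, this should show that default:
$ docker inspect --format '{{.Config.Cmd}}' httpd:2.4
# I would expect this to print the base image's default command, e.g. [httpd-foreground]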
Edit - additional information:
I deployed onto K8s by doing the following:
$ kubectl create deployment hello-app --image=gcr.io/${PROJECT_ID}/hello-app:v1
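I then kept an eye on it like this (standard commands; the app=hello-app label appears in the pod's labels in the describe output below):
$ kubectl rollout status deployment/hello-app
$ kubectl get pods -l app=hello-app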
Kubectl logs:
$ kubectl logs <pod_name>
standard_init_linux.go:211: exec user process caused "exec format error"
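From what I understand, "exec format error" usually means the image's binary format (for example its CPU architecture) does not match the node it is scheduled on. If that is what is going on here, I believe the image's platform can be checked, and the image rebuilt for a linux/amd64 node pool with buildx, roughly like this:
$ docker image inspect gcr.io/${PROJECT_ID}/hello-app:v1 --format '{{.Os}}/{{.Architecture}}'
$ docker buildx build --platform linux/amd64 -t gcr.io/${PROJECT_ID}/hello-app:v1 .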
kubectl describe:
$ kubectl describe pod hello-app-6b89cd98f6-gn65p
Name:           <name>
Namespace:      default
Priority:       0
Node:           <my_node>
Start Time:     Mon, 22 Mar 2021 12:32:51 +0200
Labels:         app=hello-app
                pod-template-hash=6b89cd98f6
Annotations:    <none>
Status:         Running
IP:             10.12.1.13
IPs:
  IP:           10.12.1.13
Controlled By:  <replica_set>
Containers:
  hello-app:
    Container ID:   <cid>
    Image:          <img>
    Image ID:       <img_id>
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 22 Mar 2021 15:12:18 +0200
      Finished:     Mon, 22 Mar 2021 15:12:18 +0200
    Ready:          False
    Restart Count:  36
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-b8p9t (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-b8p9t:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-b8p9t
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason   Age                    From     Message
  ----     ------   ----                   ----     -------
  Warning  BackOff  4m9s (x741 over 164m)  kubelet  Back-off restarting failed container