
I am creating an app in OpenShift Origin 3.1 using my own Docker image.

Whenever I create the app from the image, a new pod gets created, but it restarts again and again and finally ends up with the status "CrashLoopBackOff".

I analysed the pod's logs, but they show no error; all the log data is what I would expect from a successfully running app. Hence I am not able to determine the cause.

I came across the link below today, which says "running an application inside of a container as root still has risks, OpenShift doesn't allow you to do that by default and will instead run as an arbitrary assigned user ID."

What is CrashLoopBackOff status for openshift pods?

My image runs as the root user only. What should I do to make this work, given that the logs show no error but the pod keeps restarting?

Could anyone please help me with this?

priyank
  • Did you use the -p or --previous flag to oc logs to see if the logs from the previous attempt to start the pod show anything? Looking only at the latest logs in this situation may mean you never capture the issue. Does your application even log to stdout so the logs would be captured? – Graham Dumpleton Mar 02 '16 at 23:57

4 Answers


You are seeing this because whatever process your image starts is not a long-running process: it finds no TTY, the container just exits, and it gets restarted repeatedly. That is a "crash loop" as far as OpenShift is concerned.
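As a contrived illustration (not your actual image), a CMD that finishes immediately produces exactly this behaviour even though the logs look clean:

# Exits as soon as it prints, so the container terminates with code 0
# and OpenShift keeps restarting it with increasing back-off.
CMD ["echo", "started"]

# A process that stays in the foreground keeps the pod running.
CMD ["java", "-jar", "app.jar"]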

Your Dockerfile contains:

ENTRYPOINT ["container-entrypoint"]

What is this "container-entrypoint" actually doing? You need to check.

Did you use the -p or --previous flag to oc logs to see if the logs from the previous attempt to start the pod show anything?
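For example (the pod name is a placeholder), either form shows the output of the last terminated container rather than the current one:

# Logs from the previous, crashed container instance
oc logs --previous <pod-name>

# Short form of the same flag
oc logs -p <pod-name>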

  • The container-entrypoint is part of standard OpenShift S2I builders. All it does is set some environment variables and potentially activate SCL packages and then execute whatever is defined by CMD. This should not be the source of any problems. – Graham Dumpleton Mar 26 '16 at 22:45
  • Hi @Jaspreet, thanks for the response. I am actually running a Play Java app. The Dockerfile ends with: ENTRYPOINT ["activator","start"]. I tried viewing the logs with -p, but they show no error. The server starts fine but exits immediately, the whole process starts over again and again, and it finally crashes. The same Dockerfile works fine with a plain "docker run" outside of OpenShift. Any idea what the issue could be in OpenShift? – priyank Mar 28 '16 at 04:51

Red Hat's recommendation is to make files group-owned by GID 0, because the user the container runs as is always a member of the root group. You won't be able to chown the files, but you can selectively choose which files are exposed for writing.
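A sketch of that pattern in a Dockerfile, assuming /opt/app is where the application needs write access (adjust the paths to your image):

# Give the root group (GID 0) the same permissions as the owner on the
# writable paths, since the arbitrary UID OpenShift assigns is always
# a member of group 0.
RUN chgrp -R 0 /opt/app && \
    chmod -R g=u /opt/app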

A second option: in order to allow images that use either named users or the root (0) user to build in OpenShift, you can add the project's builder service account (system:serviceaccount:<project>:builder) to the privileged security context constraint (SCC). Alternatively, you can allow all images to run as any user.
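A sketch of the corresponding commands (the project name is a placeholder, and on Origin 3.1 the client may be oadm rather than oc adm):

# Let the project's builder service account use the privileged SCC
oc adm policy add-scc-to-user privileged system:serviceaccount:<project>:builder

# Or allow images in the project to run as any user, including root
oc adm policy add-scc-to-user anyuid -z default -n <project>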

lvthillo
  • Hi, thanks for the response. After modifying the SCC I am able to build the app from a Dockerfile that runs as the root user, but the problem now is that the container gets restarted again and again without any error in the logs, and finally shows the "CrashLoopBackOff" status. How do I run the container in OpenShift as a daemon? Why is this happening? Any pointers will be very helpful. Thanks a lot again! – priyank Mar 03 '16 at 12:03

Can you see the logs using

kubectl logs <podname> -p 

This should show you the errors explaining why the pod failed.

cloudnoob
  • Hi, thanks. I checked using "openshift kube logs -p", but there are no errors in the logs. Since yesterday my container has restarted 218 times without throwing any error. Can you help? Thanks again! – priyank Mar 04 '16 at 12:58

I was able to resolve this by creating a script "run.sh" that ends with:

while :; do
    sleep 300
done

and in Dockerfile:

ADD run.sh /run.sh
RUN chmod +x /*.sh
CMD ["/run.sh"]

This way it works. Thanks everybody for pointing out the reason, which helped me find the resolution. One doubt I still have: why does the process get exited in OpenShift only in this case? I have tried running a Tomcat server the same way, and it works fine without the sleep loop in the script.

priyank
  • When OpenShift kills your process, it expects your run.sh to die, and that will result in a clean shutdown. Your actual running process may cause corruption in this scenario, as it may not shut down cleanly. That's app-specific, but certainly worth considering. – Josiah Nov 04 '19 at 15:51
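Building on that comment, a minimal sketch of a run.sh that forwards SIGTERM to the real application (the start command is a placeholder) so it can shut down cleanly instead of being left behind when the sleep loop is killed:

#!/bin/sh
# Start the real application in the background (placeholder command)
/opt/app/bin/start-app &
app_pid=$!

# Forward SIGTERM/SIGINT from OpenShift to the application process
trap 'kill -TERM "$app_pid"' TERM INT

# Block until the application exits; the container then stops cleanly
wait "$app_pid"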