
I have a Dockerfile that sets up NGINX and PHP and adds a WordPress repository. I want PHP-FPM and NGINX to start when the container boots, but I am failing to do so. I tried adding the two commands to the CMD array, and I also tried putting them in a shell file and starting that shell file. Nothing worked. Below is my Dockerfile:

FROM ubuntu:16.04

WORKDIR /opt/

#Install nginx
RUN apt-get update
RUN apt-get install -y nginx=1.10.* php7.0 php7.0-fpm php7.0-mysql

#Add the customized NGINX configuration
RUN rm -f /etc/nginx/nginx.conf
RUN rm -f /etc/nginx/sites-enabled/*

COPY nginx/nginx.conf /etc/nginx/
COPY nginx/site.conf /etc/nginx/sites-enabled

#Copy the certificates
RUN mkdir -p /etc/pki/nginx
COPY nginx/certs/* /etc/pki/nginx/
RUN rm -f /etc/pki/nginx/placeholder

#Copy the build to its destination on the server
RUN mkdir -p /mnt/wordpress-blog/
COPY . /mnt/wordpress-blog/

#COPY wp-config.php
COPY nginx/wp-config.php /mnt/wordpress-blog/

#The command to run the container
CMD ["/bin/bash", "-c", "service php7.0-fpm start", "service nginx start"]

I tried to put the commands from the CMD into a shell file and run the shell file from CMD. It still didn't work. What am I missing?

Nicolas El Khoury
  • It could be helpful for you - https://stackoverflow.com/questions/49090469/docker-alpine-linux-running-2-programs – nickgryg Apr 03 '18 at 13:24
  • The TL;DR for the above comment is "Use supervisord". It is a program which is responsible for running other programs. You set up a config file to tell it which programs to run, and then you invoke supervisord with CMD in your Dockerfile. It's a slightly more polished version of using a shell script to run multiple programs. – Charles Wood Aug 28 '23 at 23:47

3 Answers


start.sh

#!/bin/bash

/usr/sbin/service php7.0-fpm start
/usr/sbin/service nginx start
tail -f /dev/null

Dockerfile

COPY ["start.sh", "/root/start.sh"]
WORKDIR /root
CMD ["./start.sh"]

With this, you can put more complex logic in start.sh.

atline
  • This is an anti-pattern; you should not advise people to use it, and it may even cause the whole system to crash, as you suggest running an endless loop! – Yarimadam Apr 03 '18 at 13:52
  • I agree with you, so how about `tail -f /dev/null`? – atline Apr 03 '18 at 14:04
  • It will probably work, but it is still not a best practice, because when one of your dependencies fails (nginx, for example) you will not know; the container won't restart. So this is against the idea behind dockerization. – Yarimadam Apr 03 '18 at 14:19
  • No, we could do more complex logic here. Before `tail -f /dev/null`, you can do any logic: maybe check if nginx is OK, if the return code is OK, if port 80 is open, just as you like. If not OK, you can directly `exit 1`. BTW, what is the `idea behind the dockerization`? – atline Apr 03 '18 at 14:22
  • Yes, you can. But what if the application fails after your checks? Boom - you are doomed. The idea behind dockerization is single responsibility per container: a container that is well suited, optimized, and packed for a single responsibility. If you tightly couple your dependencies you will have more trouble managing, monitoring, and updating the processes individually. And I didn't even mention scaling the application. – Yarimadam Apr 03 '18 at 14:47
  • This worked for me. However, I am concerned about what happens if nginx or PHP fails. – Nicolas El Khoury Apr 03 '18 at 14:58
  • Maybe adding `-e` after the `#!/bin/bash` is an easy way to fix it, so we do not need to check return values? –  Apr 03 '18 at 15:04
  • @Yarimadam Yes, you are right: this solution can only check whether the service fails at the start; if the service crashes later, this solution cannot handle it. With `single responsibility per container` we can run the service in the foreground, so a later crash of the application is easy to handle. But here, OP really has 2 services, and we can only put one of them in the foreground. I really do not know how to make this solution perfect; maybe `supervisord` could help? I do not have a deep understanding of [supervisord](http://supervisord.org/); maybe OP could find time to try it. – atline Apr 03 '18 at 15:18
  • @gray, yes, I believe it can reduce the check code. – atline Apr 03 '18 at 15:20
  • I do follow "single responsibility per container": each one of my microservices is in a single container. However, in order to serve PHP content, I am required to run both PHP and Nginx. Is there a better solution? – Nicolas El Khoury Apr 04 '18 at 07:47
  • @NicolasElKhoury I think you have 2 services: php-fpm & nginx. I am not familiar with php-fpm; I believe it is not just PHP, it is something like a traffic router? How does your php-fpm communicate with your nginx, something like `fastcgi_pass unix:/run/php/php7.0-fpm.sock;` or `fastcgi_pass 127.0.0.1:9000;`? If possible, separate it into 2 containers, then use a volume to share `php7.0-fpm.sock` (adding privileged when starting the container), or just use a host port to transfer network traffic between the containers. – atline Apr 04 '18 at 07:59
  • And if you can bear 2 services in one container, I strongly suggest you try `supervisord`; it can manage many subprocesses and monitor them all for you. – atline Apr 04 '18 at 08:04
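Following up on the supervisord suggestion in these comments, here is a minimal sketch of what that could look like. The package name, binary paths, and config location are assumptions based on the Ubuntu 16.04 image in the question; check them against your image.

```
; supervisord.conf -- minimal sketch, paths assumed for Ubuntu 16.04
[supervisord]
nodaemon=true                 ; keep supervisord itself in the foreground

[program:php-fpm]
command=/usr/sbin/php-fpm7.0 --nodaemonize
autorestart=true              ; restart php-fpm if it crashes later

[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
autorestart=true              ; restart nginx if it crashes later
```

and in the Dockerfile:

```
RUN apt-get install -y supervisor
COPY supervisord.conf /etc/supervisor/supervisord.conf
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]
```

Unlike the `tail -f /dev/null` approach, supervisord notices when either process dies and restarts it.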

You can replace the CMD line with something like ...

CMD ["/bin/bash", "-c", "/usr/sbin/service php7.0-fpm start && nginx -g 'daemon off;'"]

SilvioQ
  • It did not work – Nicolas El Khoury Apr 03 '18 at 13:21
  • I changed the `service nginx` call to the direct nginx command line ... you can check it in the official nginx Dockerfile https://github.com/nginxinc/docker-nginx/blob/4f5bae5928baee89433ecb20a50283546f217dfa/mainline/stretch/Dockerfile ... But I think @atline's response is better – SilvioQ Apr 03 '18 at 13:37

TL;DR: You don't have an entry point.

The main idea in Docker is to have one responsibility per container. So, in order to keep a Docker container running, you have to start a program in the foreground when the container boots.

However, in your Dockerfile there is no entrypoint that starts a program in the foreground, so just after your container boots, it exits.

To prevent your container from exiting, start a program in the foreground.

Nginx, for instance.

Example scenario:

entrypoint.sh content:

#!/bin/bash
service php7.0-fpm start
nginx -g 'daemon off;'

somewhere in the Dockerfile:

COPY [ "./entrypoint.sh", "/root/entrypoint.sh" ]

at the end of the Dockerfile:

ENTRYPOINT /root/entrypoint.sh
Yarimadam
  • I did what you said, and it makes total sense. However, when I exec'ed into the container, NGINX was running while php-fpm was not. I would like php-fpm to run when the container starts. – Nicolas El Khoury Apr 03 '18 at 14:28
  • I didn't test it but maybe there is a problem with this instruction: CMD ["/bin/bash", "-c", "service php7.0-fpm start"] – Yarimadam Apr 03 '18 at 14:38
  • The solution is wrong: if CMD and ENTRYPOINT are both specified, the CMD will act as the parameters for the ENTRYPOINT, so php will never be run. I suggest having a look at the official Docker guide :) –  Apr 03 '18 at 15:03
  • Good catch. I actually never specify entrypoint in a Dockerfile; I use it in the compose file. OP may try with the RUN instruction instead. – Yarimadam Apr 03 '18 at 15:45
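To illustrate what the last two comments describe, here is a minimal sketch of how Docker combines the two instructions (the extra argument is made up, just to show where CMD ends up):

```
ENTRYPOINT ["/root/entrypoint.sh"]
CMD ["extra-arg"]
# Docker runs: /root/entrypoint.sh extra-arg
# CMD becomes the arguments of the ENTRYPOINT, not a second command,
# so a CMD that tries to start php-fpm never runs alongside an ENTRYPOINT.
```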