
I need to combine php-fpm and nginx in one Dockerfile for production deployment.

So is it better to:

(1) Start the Dockerfile from the php:7.1.8-fpm image and then install nginx in a layer on top of it?

(2) Or do you recommend starting from the nginx image and installing php-fpm with apt-get?

PS: I do not have a docker-compose build option for production deployment. In my development environment I already use docker-compose and easily build a multi-container app from the two images, but our organization's DevOps team does not support docker-compose based deployments in production.

Andy
  • What do you mean by "better"? What do you want to achieve? – Nico Haase Mar 11 '21 at 17:21
  • I did not understand why this question was closed in such a hurry. OK. "Better" here means: which approach yields the shortest Dockerfile with the fewest statements, and which reduces the container size. People who understand Docker containers will have no problem understanding what "better" means, IMHO. – Andy Mar 15 '21 at 16:06

2 Answers


Installing Nginx is much simpler than installing PHP, so it is easier to add Nginx to the ready-to-use official PHP image. Here is an example Dockerfile that shows how to reach your goal, including the installation of a few PHP extensions:

FROM php:7.2-fpm

# Install Nginx alongside PHP-FPM
RUN apt-get update -y \
    && apt-get install -y nginx

# PHP_CPPFLAGS are used by the docker-php-ext-* scripts
ENV PHP_CPPFLAGS="$PHP_CPPFLAGS -std=c++11"

# Install PHP extensions; the intl build dependencies are removed afterwards
RUN docker-php-ext-install pdo_mysql \
    && docker-php-ext-install opcache \
    && apt-get install libicu-dev -y \
    && docker-php-ext-configure intl \
    && docker-php-ext-install intl \
    && apt-get remove libicu-dev icu-devtools -y

# Write opcache settings into a config file picked up by PHP
RUN { \
        echo 'opcache.memory_consumption=128'; \
        echo 'opcache.interned_strings_buffer=8'; \
        echo 'opcache.max_accelerated_files=4000'; \
        echo 'opcache.revalidate_freq=2'; \
        echo 'opcache.fast_shutdown=1'; \
        echo 'opcache.enable_cli=1'; \
    } > /usr/local/etc/php/conf.d/php-opcache-cfg.ini

COPY nginx-site.conf /etc/nginx/sites-enabled/default
COPY entrypoint.sh /etc/entrypoint.sh

COPY --chown=www-data:www-data . /var/www/mysite

WORKDIR /var/www/mysite

EXPOSE 80 443

ENTRYPOINT ["/etc/entrypoint.sh"]

The nginx-site.conf file contains your Nginx http host configuration. The example below is for a Symfony app:

server {
    root    /var/www/mysite/web;

    include /etc/nginx/default.d/*.conf;

    index app.php index.php index.html index.htm;

    client_max_body_size 30m;

    location / {
        try_files $uri $uri/ /app.php$is_args$args;
    }

    location ~ [^/]\.php(/|$) {
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        # Mitigate https://httpoxy.org/ vulnerabilities
        fastcgi_param HTTP_PROXY "";
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index app.php;
        include fastcgi.conf;
    }
}

The entrypoint.sh script starts Nginx and php-fpm when the container starts (otherwise only php-fpm would run, since that is the default command of the official PHP image):

#!/usr/bin/env bash
# Start Nginx as a background service, then run php-fpm in the
# foreground so it keeps the container alive
service nginx start
php-fpm
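
With the Dockerfile, nginx-site.conf, and entrypoint.sh in the project root, building and running look roughly like this (the my-php-nginx tag is just an illustrative name):

docker build -t my-php-nginx .
docker run -d -p 80:80 my-php-nginx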

Of course, this is not ideal from a best-practices perspective, but I hope it answers your question.

Update:

If you get a permission denied error for the entrypoint.sh file, check that the file has the executable permission set if you're building on Linux, or add RUN chmod +x /etc/entrypoint.sh to the Dockerfile if you're building on Windows (all files copied from Windows end up in the container without the executable permission).
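
For example, the relevant Dockerfile lines would look like this (same /etc/entrypoint.sh path as above):

COPY entrypoint.sh /etc/entrypoint.sh
# Needed when building on Windows, where the executable bit is lost
RUN chmod +x /etc/entrypoint.sh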

If you're running under Google Cloud Run, keep in mind that Nginx starts before PHP-FPM and initializes much faster. This leads to a problem: when Cloud Run sends the first request, Nginx may already be initialized while PHP-FPM is not yet, and the request fails. To fix that, change your entrypoint to start PHP-FPM before Nginx:

#!/usr/bin/env sh
set -e

# Daemonize php-fpm first so it is ready before Nginx accepts traffic...
php-fpm -D
# ...then run Nginx in the foreground as the container's main process
nginx -g 'daemon off;'

This script has been tested under Alpine Linux only, but I guess it should also work on other images. It runs php-fpm first in the background, then Nginx in the foreground without exiting. This way, Nginx only starts listening on its ports after PHP-FPM is initialized.
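
If the startup order still races, a variant of the entrypoint can wait explicitly until PHP-FPM accepts connections before launching Nginx. This is a sketch on top of the answer, not part of it, and it assumes a netcat with -z support is present in the image (e.g. apk add netcat-openbsd on Alpine):

#!/usr/bin/env sh
set -e

php-fpm -D

# Block until PHP-FPM is listening on 127.0.0.1:9000 (the address
# used by fastcgi_pass in the nginx config above)
while ! nc -z 127.0.0.1 9000; do
    sleep 0.1
done

nginx -g 'daemon off;'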

Alexander Pravdin
  • Using `ENTRYPOINT ["sh", "/etc/entrypoint.sh"]` would avoid getting a **permission denied** error. – AymDev Jul 15 '19 at 15:55
  • And adding "RUN chmod +x /etc/entrypoint.sh" after the COPY will prevent the permissions error. – Hutch Nov 12 '19 at 11:20
  • You said that it's not the best practice to do this. Can you recommend another way of doing this? – GTHell May 30 '20 at 11:03
  • The best practice is to keep one service per container. Mixing multiple services inside a single container may lead to issues supporting such a Dockerfile in the long term. I didn't mean there is a better way to implement the subject question; I meant that combining is not a perfect way. – Alexander Pravdin Jun 15 '20 at 13:55
  • Is there any way to skip this command: COPY --chown=www-data:www-data . /var/www/mysite? My application is over 4GB in size, so my image is getting big. – Hasan Hafiz Pasha Oct 01 '20 at 15:16
  • I used your example and used supervisord to run both services. Cheers! – TudorIftimie Nov 14 '20 at 18:17
  • While this may not be a "best" Docker practice, in certain environments where you can only deploy a single container (like Google Cloud Run) it's the only way to go. – Robert Moskal Mar 10 '21 at 14:09
  • @Robert Moskal, Agreed. On Cloud Run, it's the only way. – Alexander Pravdin Mar 11 '21 at 17:01
  • The permission denied entrypoint error is usually caused by a missing executable permission in your local Linux filesystem, or by building on Windows. – Alexander Pravdin Mar 11 '21 at 17:02

You should deploy two containers, one with php-fpm and the other with nginx, and link them. Even though you can use supervisor to monitor multiple processes within the same container, the Docker philosophy is one process per container.

Something like:

docker run --name php -v "$(pwd)/code":/code php:7-fpm
docker run --name nginx -v "$(pwd)/code":/code -v "$(pwd)/site.conf":/etc/nginx/conf.d/site.conf --link php nginx:latest
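
Note that --link is a legacy Docker feature; on current Docker versions the same wiring can be done with a user-defined network, on which containers resolve each other by name, so fastcgi_pass php:9000; still works. A sketch, with mynet as an illustrative network name and -p 80:80 added so the site is reachable from the host:

docker network create mynet
docker run --name php --network mynet -v "$(pwd)/code":/code php:7-fpm
docker run --name nginx --network mynet -v "$(pwd)/code":/code -v "$(pwd)/site.conf":/etc/nginx/conf.d/site.conf -p 80:80 nginx:latest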

With site.conf containing:

server {
    index index.php index.html;
    server_name php-docker.local;
    error_log  /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /code;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}

(Shamefully inspired by http://geekyplatypus.com/dockerise-your-php-application-with-nginx-and-php7-fpm/)

Blusky
  • The issue is that our DevOps team wants just one single Dockerfile, so they can reuse their existing deployment stack scripts, which only run docker build once and then docker run. The rule is that one single service should have one single Dockerfile. If I have to run multiple containers as you describe above, it's just simpler to use docker-compose (which I have for development ... but the production environment is a different matter). – Andy Sep 21 '17 at 00:00
  • When I run both php-fpm and nginx in one container, the fastcgi_pass will point to either a Unix socket file or 127.0.0.1:9000. – Andy Sep 21 '17 at 00:02
  • If you really need to use only one image, you should start from `debian` or `alpine`, install both `nginx` and `php-fpm`, and run `supervisord` (see the sketch after these comments). Another possibility is to use `apache`, which does not need another process to run PHP. – Blusky Sep 21 '17 at 14:51
  • Although this is indeed the best practice, the OP wanted to combine the two containers into one. Unfortunately, in certain environments that's the only viable route. – Tamas Kalman Mar 05 '23 at 12:00
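
Following up on the supervisord suggestion in the comments, a minimal supervisord.conf sketch for the single-container approach (this exact config is an assumption, not from the thread):

[supervisord]
; Keep supervisord in the foreground as the container's main process
nodaemon=true

[program:php-fpm]
; -F forces php-fpm to stay in the foreground so supervisord can manage it
command=php-fpm -F
autorestart=true

[program:nginx]
; 'daemon off;' keeps nginx in the foreground for the same reason
command=nginx -g 'daemon off;'
autorestart=true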