
I am having issues with setting up a multi-container Docker environment. The idea is pretty standard:

  • One container has php-fpm running
  • Another is an nginx proxy

My php-fpm Dockerfile is as simple as:

FROM php:7.0-fpm

# install the PHP extensions we need
RUN apt-get update && apt-get install -y libpng12-dev libjpeg-dev && rm -rf /var/lib/apt/lists/* \
    && docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
    && docker-php-ext-install gd mysqli opcache

# set recommended PHP.ini settings
# see https://secure.php.net/manual/en/opcache.installation.php
RUN { \
        echo 'opcache.memory_consumption=128'; \
        echo 'opcache.interned_strings_buffer=8'; \
        echo 'opcache.max_accelerated_files=4000'; \
        echo 'opcache.revalidate_freq=2'; \
        echo 'opcache.fast_shutdown=1'; \
        echo 'opcache.enable_cli=1'; \
    } > /usr/local/etc/php/conf.d/opcache-recommended.ini

VOLUME /var/www/html

CMD ["php-fpm"]

and Nginx is even more so:

FROM nginx

COPY conf.d/* /etc/nginx/conf.d/

Inside the conf.d folder there is a single file, default.conf:

server {
    listen 80;
    server_name priz-local.com;
    root /var/www/html;

    index index.php;

    location / {
        proxy_pass  http://website:9000;
        proxy_set_header   Connection "";
        proxy_http_version 1.1;
        proxy_set_header        Host            $host;
        proxy_set_header        X-Real-IP       $remote_addr;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

And docker-compose.yml

website:
  build: ./website/
  ports:
    - "9000:9000"
  container_name: website
  external_links:
    - mysql:mysql
nginx-proxy:
  build: ./proxy/
  ports:
    - "8000:80"
  container_name: proxy
  links:
    - website:website

This exact setup works perfectly on AWS Elastic Beanstalk. However, on my local Docker I am getting errors such as:

2016/11/17 09:55:36 [error] 6#6: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.17.0.1, server: priz-local.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:9000/", host: "priz-local.com:8888"
172.17.0.1 - - [17/Nov/2016:09:55:36 +0000] "GET / HTTP/1.1" 502 575 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36" "-"

UPDATE: If I log into the proxy container and try to curl the other one, I get this:

root@4fb46a4713a8:/# curl http://website
curl: (7) Failed to connect to website port 80: Connection refused
root@4fb46a4713a8:/# curl http://website:9000
curl: (56) Recv failure: Connection reset by peer

Another thing I tried is:

server {
    listen 80;
    server_name priz-local.com;
    root /var/www/html;

    #index index.php;
    #charset UTF-8;

    #gzip on;
    #gzip_http_version 1.1;
    #gzip_vary on;
    #gzip_comp_level 6;
    #gzip_proxied any;
    #gzip_types text/plain text/xml text/css application/x-javascript;

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location /nginx_status {
        stub_status on;
        access_log off;
    }

    location / {
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    location ~ \.php$ {

        set $nocache "";
        if ($http_cookie ~ (comment_author_.*|wordpress_logged_in.*|wp-postpass_.*)) {
           set $nocache "Y";
        }

        fastcgi_pass  website:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_intercept_errors on;
        include fastcgi_params;

        #fastcgi_cache_use_stale error timeout invalid_header http_500;
        #fastcgi_cache_key $host$request_uri;
        #fastcgi_cache example;
        #fastcgi_cache_valid 200 1m;
        #fastcgi_cache_bypass $nocache;
        #fastcgi_no_cache $nocache;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        allow all;
        expires max;
        log_not_found off;

        fastcgi_pass  wordpress:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_intercept_errors on;
        include fastcgi_params;
    }
}

The site started to work, but all the resources (js|css|png|jpg|jpeg|gif|ico) are now returning 403.

What am I missing?

  • let me guess. your local env is mac? – cari Nov 17 '16 at 10:10
  • Yes, not sure if you refer to it as a general problem... – Shurik Agulyansky Nov 17 '16 at 10:22
  • `Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_1)` not much of a guess :p – johnharris85 Nov 17 '16 at 10:28
  • this is the client, not the server env, bro... so it was still a guess. Anyway, I know that Docker on Windows suffers from container DNS not working like on Linux, because it runs in a VM; you have to add the container name with the VM IP to the hosts file of the host system. Don't know, maybe it's the problem on Mac, too? – cari Nov 17 '16 at 10:38
  • I do have an entry in the hosts file. More than that, the URL from the host to nginx resolves properly. It's the communication between the containers that is the problem. – Shurik Agulyansky Nov 17 '16 at 16:57
  • How are you starting/linking your nginx and php containers? – Roman Nov 17 '16 at 20:31
  • @R0MANARMY Yes, I attached docker-compose to the post. – Shurik Agulyansky Nov 17 '16 at 20:36
  • I'm not that familiar with Nginx, but does it matter that php files go to `website:9000` and all the other resources go to `wordpress:9000`? Doesn't look like `wordpress` alias is set anywhere. – Roman Nov 17 '16 at 20:40
  • @R0MANARMY Well, the wordpress was one of my trials, since I know it works on AWS. However, the only place I tried using it is on a second configuration that actually worked, but gave 403 on css & js. I just rechecked again, and I don't have wordpress anywhere any more. – Shurik Agulyansky Nov 17 '16 at 20:45
  • I mean should both of them be `website:9000` or is your static content actually hosted somewhere else? – Roman Nov 17 '16 at 20:47
  • Yes, they are. and the content is hosted on linked container. – Shurik Agulyansky Nov 17 '16 at 20:48
  • Added another update to the post. – Shurik Agulyansky Nov 17 '16 at 20:50
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/128397/discussion-between-r0manarmy-and-shurik-agulyansky). – Roman Nov 17 '16 at 20:52

1 Answer


After a very long chat with R0MANARMY and a lot of his help, I think I finally understood the root of the problem.

The main issue here is that I was not using Docker the way it is intended to be used.

Another cause is that php-fpm is not a web server: it speaks FastCGI, so nginx has to talk to it via fastcgi_pass (or at least a plain proxy_pass does not work in this case).
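
To illustrate the difference, here is a minimal sketch (using the same website:9000 upstream as in my compose file; adapt it to your own setup):

# Does NOT work: php-fpm does not speak HTTP, so the connection is
# refused or reset instead of returning a page.
location / {
    proxy_pass http://website:9000;
}

# Works: nginx speaks FastCGI to php-fpm and tells it which script to run.
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass website:9000;
}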

So, the correct way of setting it up is:

  1. Mount the code volume into both containers.
  2. Configure nginx to pass PHP requests to the php-fpm container over FastCGI.
  3. Configure the virtual host so that static assets are served directly by nginx.

Here are a couple of examples of how to do it:

http://geekyplatypus.com/dockerise-your-php-application-with-nginx-and-php7-fpm/

https://ejosh.co/de/2015/08/wordpress-and-docker-the-correct-way/

UPDATE: Adding the actual solution that worked for me:

For faster turnaround, I decided to use docker-compose, and my docker-compose.yml looks like this:

website:
  build: ./website/
  container_name: website
  external_links:
    - mysql:mysql
  volumes:
    - ~/Dev/priz/website:/var/www/html
  environment:
    WORDPRESS_DB_USER: **
    WORDPRESS_DB_PASSWORD: ***
    WORDPRESS_DB_NAME: ***
    WORDPRESS_DB_HOST: ***
proxy:
  image: nginx
  container_name: proxy
  links:
    - website:website
  ports:
    - "9080:80"
  volumes:
    - ~/Dev/priz/website:/var/www/html
    - ./deployment/proxy/conf.d/default.conf:/etc/nginx/conf.d/default.conf

Now, the most important piece of information here is that I am mounting exactly the same code into both containers. The reason is that FastCGI cannot serve static files (at least as far as I understand), so the idea is to serve them directly through nginx.

My default.conf file looks like this:

server {
    listen 80;
    server_name localhost;
    root /var/www/html;

    index index.php;

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location /nginx_status {
        stub_status on;
        access_log off;
    }

    location / {
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass website:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        #fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_intercept_errors on;
        include fastcgi_params;
    }
}

So, this config proxies PHP requests through to the fpm container, while everything else is served from the locally mounted volume.
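
If you want nginx to be explicit about static assets (and add caching headers), a location block like the following could also be added. This is only a sketch based on my setup, not something I have battle-tested:

location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
    # Served straight from the mounted /var/www/html volume by nginx;
    # nothing is passed to php-fpm here.
    expires max;
    log_not_found off;
    access_log off;
}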

That's it. I hope it will help someone.

The only remaining issues with it:

  1. Only occasionally, http://localhost:9080 downloads the index.php file instead of executing it.
  2. cURL'ing from a PHP script to the outside world takes a really long time; not sure how to even debug this at this point.