I got this error message:

FastCGI sent in stderr: "Unable to open primary script: /home/messi/web/wordpress/index.php (No such file or directory)" while reading response header from upstream, client: xxx.xxx.xxx.xxx, server: www.domain.com, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "www.domain.com"

here are my configuration files:

/etc/php5/fpm/php.ini

cgi.fix_pathinfo=0
doc_root =
user_dir =
....

/etc/php5/fpm/php-fpm.conf

[global]
pid = /var/run/php5-fpm.pid
error_log = /var/log/php5-fpm.log
include=/etc/php5/fpm/pool.d/*.conf

/etc/php5/fpm/pool.d/www.conf

[www]
user = www-data
group = www-data
listen = /var/run/php5-fpm.sock
listen.owner = www-data
listen.group = www-data
listen.mode = 0666
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
chdir = /
security.limit_extensions = .php .php3 .php4 .php5
php_flag[display_errors] = on
php_admin_value[error_log] = /var/log/fpm-php.www.log
php_admin_flag[log_errors] = on

/etc/nginx/nginx.conf

user  nginx;
worker_processes  1;
error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;
events {
    worker_connections  1024;
}
http {
    include       /etc/nginx/mime.types;
    server_tokens off;
    default_type  application/octet-stream;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;
    sendfile        on;
    #tcp_nopush     on;
    keepalive_timeout  65;
    #gzip  on;
    include /etc/nginx/sites-enabled/*;
}

/etc/nginx/sites-enabled/wordpress

server {
    listen   80;
    server_name www.domain.com;
    root /home/messi/web/wordpress;
    error_log /var/log/nginx/err.wordpress.log;
    index index.php;
    location / {
        try_files $uri $uri/ /index.php?$args;
    }
    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }
    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }
    location ~ /\. {
        deny all;
    }
    location ~* /(?:uploads|files)/.*\.php$ {
        deny all;
    }
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include /etc/nginx/fastcgi_params;
    }
}

Setup user permission:

#adduser www-data messi
#chown -R www-data:www-data /home/messi/web
#chmod -R 664 /home/messi/web/wordpress

How can I resolve this? Thanks

user3145965

4 Answers

SELinux will cause this error on CentOS/RHEL 7+ by default :(

To test if SELinux is the source of your woes, do

setenforce 0

... and see if everything works. If that fixed it, you can leave SELinux off (weak, you're better than that), or you can turn it back on with

setenforce 1

... and then properly fix the issue.

If you do

tail -f /var/log/audit/audit.log

... you'll see the SELinux issue. In my case, it was denying PHP-FPM access to web files. You can run the following directives to fix it:

setsebool -P httpd_can_network_connect_db 1
setsebool -P httpd_can_network_connect 1

This actually didn't fix it for me at first, but restoring the SELinux context afterwards did the trick:

restorecon -R -v /var/www
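One addition, hedged: if your document root lives outside the default /var/www (like the question's /home/messi/web/wordpress), restorecon will only restore the default home-directory context, which httpd_t cannot read. In that case you can first teach the policy an httpd-readable context for the custom path (the path below is the one from the question; substitute your own):

```shell
# Record a permanent httpd-readable context for the custom document root
# (requires root and the policycoreutils/semanage tools):
semanage fcontext -a -t httpd_sys_content_t "/home/messi/web/wordpress(/.*)?"
# Apply the recorded context to the files that already exist:
restorecon -R -v /home/messi/web/wordpress
```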

Hope that helps.

siliconrockstar
    This was what helped me after an hour of struggling! thanks! – Ali Hashemi Jan 23 '16 at 14:21
    Worked perfectly on CentOS 7 – almyz125 Mar 04 '16 at 02:28
    The same also happens on CentOS 6, and the restorecon command worked. – Dale C. Anderson Jul 13 '16 at 00:27
    I can also verify that restorecon also worked for me, without any of the setsebool commands. PS Thanks, I spent hours trying to figure out what the source of my 403 errors were. – jyoung Sep 26 '16 at 02:57
    Leads to several articles for why you would, or wouldn't want SELinux http://serverfault.com/questions/97898/why-we-need-selinux – PJ Brunet Nov 23 '16 at 10:19
  • Well of course it does, you just turned SELinux off and considerably weakened the security of your system. Now if someone hijacks the nginx process they have greater power in other parts of the system. – siliconrockstar Feb 16 '17 at 04:58
This is likely a permissions problem.

  1. Make sure that every parent directory has +x permissions for the user (the nginx user and/or php-fpm user).

    You can check these permissions with: namei -om /path/to/file.

  2. If you have symlinks, make sure they point to a valid path.

  3. Make sure chroots have access to the right paths.

  4. Make sure SELinux (e.g. Fedora / Centos) or AppArmor (e.g. Ubuntu) or any other MAC security systems are not interfering with the file access.

    For SELinux: check /var/log/audit/audit.log or /var/log/messages

    For AppArmor: I'm not an Ubuntu user, and as far as I understand, AppArmor logging isn't always easy to figure out. You might check here for info: http://ubuntuforums.org/showthread.php?t=1733231
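A quick sketch of checking step 1 in practice (assuming the question's paths and the www-data worker user):

```shell
# List owner and mode for every component of the path; any parent
# directory missing the execute (traverse) bit blocks access:
namei -om /home/messi/web/wordpress/index.php
# Or test readability directly as the worker user:
sudo -u www-data test -r /home/messi/web/wordpress/index.php \
    && echo readable || echo "not readable"
```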

ethanpil
  • For me the problem occurred when switching from Ubuntu 12.04 to 14.04. It looks like in 12.04 user folders inside `/home` by default had `755` (`drwxr-xr-x`) permissions, whereas in 14.04 they have `700` (`drwx------`). And because the web server runs under the `www-data` user, keeping wordpress at `/home/user1404/wordpress/index.php` makes it inaccessible to `www-data`, because `user1404` has `700` permissions. – Dimitry K Dec 13 '14 at 18:09
  • If you want to disable it for CentOS 7, edit `/etc/selinux/config` set `"SELINUX=disabled"` then reboot. – PJ Brunet Nov 23 '16 at 10:00
It was SELinux in my case as well. I read some documentation found here:

https://wiki.centos.org/HowTos/SELinux
https://linux.die.net/man/1/chcon

and ended up with the command:

chcon -R -v --type=httpd_sys_content_t html/

....this changed the context of the files to the httpd type, which is what my web server (nginx) was running as.

You can find what context your web server runs as using:

ps axZ | grep nginx

....which in my case gave me:

system_u:system_r:httpd_t:s0      6246 ?        Ss     0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
system_u:system_r:httpd_t:s0      6249 ?        S      0:00 nginx: worker process

Seeing that the context of the running service was httpd_t, I changed the context of my web site's root folder to match (recursively).

The point of SELinux is to only allow services and processes to access files of the same type as them. Since the web server ran as httpd_t, it made sense to set the context of the files/folders in the site to the same type.

I'm new at this, by the way.... but this seemed the best approach to me: it kept SELinux enabled, didn't lessen the security it provides, and matched up the context of the files with the process/service.
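Two commands that may help verify this kind of fix (note that chcon labels files directly, so a full filesystem relabel can revert it; the paths below are the ones used in this answer):

```shell
# Show the label the SELinux policy expects for a path:
matchpathcon /var/www/html
# Show the labels currently applied to the directory itself:
ls -Zd html/
```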

Zack A
Change

fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

to

fastcgi_param SCRIPT_FILENAME /home/messi/web/wordpress$fastcgi_script_name;
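Some context on this code-only answer (my note, not the answerer's): SCRIPT_FILENAME is the filesystem path nginx passes to PHP-FPM, normally built from $document_root plus $fastcgi_script_name, so hard-coding the root only helps when $document_root resolves to the wrong directory (for example, if the root directive sits in the wrong block). A small shell illustration of how the two pieces concatenate, using the question's values:

```shell
# Hypothetical values mirroring the question's configuration:
document_root=/home/messi/web/wordpress
fastcgi_script_name=/index.php
# nginx concatenates the two to produce SCRIPT_FILENAME:
echo "${document_root}${fastcgi_script_name}"
```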
Gerard de Visser
    This answer is in the Low Quality Posts review queue because it's just code with no explanation. Please improve your answer by explaining what your code does and how it answers the question. Please read [this advice on answering programming questions helpfully](http://msmvps.com/blogs/jon_skeet/archive/2009/02/17/answering-technical-questions-helpfully.aspx). – Adi Inbar Oct 03 '14 at 15:51