228

I have nginx installed with PHP-FPM on a CentOS 5 box, but I'm struggling to get it to serve any of my files, whether PHP or not.

Nginx is running as www-data:www-data, and the default "Welcome to nginx on EPEL" site (owned by root:root with 644 permissions) loads fine.

The nginx configuration file has an include directive for /etc/nginx/sites-enabled/*.conf, and I have a configuration file example.com.conf, thus:

server {
  listen 80;

  # Virtual Host Name
  server_name www.example.com example.com;

  location / {
    root /home/demo/sites/example.com/public_html;
    index index.php index.htm index.html;
  }

  location ~ \.php$ {
    fastcgi_pass   127.0.0.1:9000;
    fastcgi_index  index.php;
    fastcgi_param  PATH_INFO $fastcgi_script_name;
    fastcgi_param  SCRIPT_FILENAME  /home/demo/sites/example.com/public_html$fastcgi_script_name;
    include        fastcgi_params;
  }
}

Despite public_html being owned by www-data:www-data with 2777 permissions, this site fails to serve any content:

 [error] 4167#0: *4 open() "/home/demo/sites/example.com/public_html/index.html" failed (13: Permission denied), client: XX.XXX.XXX.XX, server: www.example.com, request: "GET /index.html HTTP/1.1", host: "www.example.com"

I've found numerous other posts with users getting 403s from nginx, but most of those I've seen either involve more complex setups with Ruby/Passenger (which, in the past, I've actually succeeded with) or only hit errors when the upstream PHP-FPM is involved, so they seem to be of little help.

Have I done something silly here?

Angus Ireland
  • check this answer https://stackoverflow.com/questions/16808813/nginx-serve-static-file-and-got-403-forbidden/46083622#46083622 – sandes Jun 17 '18 at 04:46

11 Answers

376

One permission requirement that is often overlooked is that a user needs x (execute) permission on every parent directory of a file to access that file. Check the permissions on /, /home, /home/demo, etc. for www-data x access. My guess is that /home is probably 770 and www-data can't chdir through it to get to any subdirectory. If it is, try chmod o+x /home (or whatever directory is denying the request).

EDIT: To easily display all the permissions on a path, you can use namei -om /path/to/check
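For example, run against the path from the question, the check and the fix might look something like this (only a sketch; the directory that is actually denying access may be a different one):

namei -om /home/demo/sites/example.com/public_html/index.html
chmod o+x /home

The namei output lists the owner, group and mode of every component of the path, so whichever directory is missing the x bit for www-data stands out; apply the chmod to that directory (here it's assumed to be /home, as guessed above).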

kolbyjack
  • Same here. On my install of CentOS 6, /home/user dirs are set to 700 by default. – jjt Apr 13 '12 at 18:49
  • This guy talks about it too: (`chmod -4 +x /mypath` worked for me) http://nginxlibrary.com/403-forbidden-error/ – Peter Ehrlich Dec 29 '12 at 02:48
  • Can someone explain why this behavior is different than apache which does NOT require every parent directory to have "x" permissions?!? – JoshuaDavid May 23 '14 at 06:03
  • It isn't any different. The only reason apache wouldn't also require x permission on parent directories is if it's running as root. – kolbyjack May 23 '14 at 12:46
  • I ended up adding the www-data user to my personal user group and doing a chmod 710 to my root user folder. Worked like a charm. (On a debian based distro) – basicdays Jul 10 '14 at 20:49
  • both the permission requirement and the namei tip helped me solve my issue. – Manatax Aug 04 '15 at 05:31
  • I had to add +x to a parent folder of www/ even though all the sub-folders of www/ had the x-bit set. Wow, this took a while to figure out! – kashiraja Feb 21 '19 at 07:19
  • I would like to stress this point: the user needs permissions in EVERY PARENT DIRECTORY! – tonysepia Dec 04 '19 at 19:26
  • `autoindex on` might help – Yossarian42 Mar 16 '20 at 21:46
  • I have the right permissions for folder /var/www/html but I get a permission error. my question is after installing nginx we have a user and group nginx. we should use this user for folder owner? – ali Falahati Jul 03 '20 at 12:06
332

If you still see permission denied after verifying the permissions of the parent folders, it may be SELinux restricting access.

To check if SELinux is running:

# getenforce

To disable SELinux until next reboot:

# setenforce Permissive

Restart Nginx and see if the problem persists. To allow nginx to serve your www directory while keeping SELinux enabled (make sure you turn SELinux back on before testing this, i.e. setenforce Enforcing):

# chcon -Rt httpd_sys_content_t /path/to/www

See my answer here for more details
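If you would rather make the label permanent (so it survives a full relabel), the approach mentioned in the comments below should work; roughly:

semanage fcontext -a -t httpd_sys_content_t "/path/to/www(/.*)?"
restorecon -Rv /path/to/www

The first command records the default context for the whole tree, the second applies it (and future relabels will keep it); use httpd_sys_rw_content_t instead if nginx/PHP also needs to write there.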

Kurt
  • I couldn't figure out why whenever I started nginx it said `open() "/usr/share/nginx/logs/xxxxxx.com-error_log" failed (13: Permission denied)` after I checked the permissions and made sure it was being started as root. I came across this and found out SELinux was enabled. I disabled it and now it works no problem. Thanks! – SameOldNick Oct 29 '14 at 05:51
  • Thanks! I still had an issue with permission denied for users owning their own FPM sockets so I was able to fix that one by changing the `user` from **nginx** to **root** in `/var/nginx/nginx.conf` - perhaps that will help someone else who comes across this issue. S/O to [DataPsyche](http://datapsyche.wordpress.com/2014/07/30/nginx-404-page-not-found-error-due-to-failed-13-permission-denied/) for the second part. – Winter Nov 03 '14 at 19:54
  • I think CentOS 6.6 has a bug. SELinux breaks nginx. – ILker Özcan Nov 14 '14 at 15:27
  • This is default behavior on CentOS 7 as well. – timss Feb 14 '15 at 23:19
  • Thank you, thank you. 95% of the info on this is about permissions, but when they were all set to 775, I really was pulling my hair out (all 6 of them). – eddy147 Sep 25 '15 at 20:19
  • I'm with everybody else that commented. I was ready to throw my computer out the window. Nginx was configured properly, permissions were properly set, I even went as far as to make everything 777 and still got a permission denied error. – DOfficial Nov 23 '15 at 03:25
  • The better SELinux command for this is: *semanage fcontext -a -t httpd_sys_rw_content_t "/path/to/www(/.*)?"* and *restorecon -v /path/to/www* - this will automatically give all the files in this path the correct SELinux rights, also when new files are added. Use httpd_sys_content_t if you only need reading rights. – Kapitein Witbaard Apr 14 '16 at 13:16
  • On CentOS 7 (SELinux enabled), the simplest fix for me was `setsebool httpd_read_user_content on` (for static files hosted from a home directory, chmod'ed to world-readable) - though I guess @KapiteinWitbaard's method above is more secure. – TimStaley Jun 21 '16 at 15:44
  • Thank you, i was looking all night for your solution, even set chown to nginx and 777 to all files in the directory but no success! You are great! – Florin Andrei Jan 08 '17 at 07:23
  • Seems that there's no way this works in RHEL 6.x (and probably centos 6). There needs some additional steps in configuring selinux and perhaps even modifying things about Red Hat itself. Give up on any attempt to try to get it working on RHEL 6, I've been looking all over. – Dexter Apr 12 '17 at 22:53
  • Dude this answer was super useful for Fedora. Thank you! – wilk3ns Dec 19 '22 at 19:40
  • Thanks! Work for me on Centos 7 – Muhaimin Aiman Mar 29 '23 at 03:40
83

I solved this problem by setting the user directive in nginx.conf:

worker_processes 4;
user username;

Replace 'username' with your Linux user name.
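A quick way to verify the change took effect might look like this (a sketch; assumes the stock init script or service wrapper is available):

nginx -t                      # check that the configuration still parses
service nginx reload          # or: /etc/init.d/nginx reload
ps -o user,comm -C nginx      # workers should now run as the configured user (the master stays root)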

Anderson
  • I believe this answer is better security wise than the accepted answer. You don't have to go messing around with the permissions on your home folder (which could contain sensitive information) and if you're doing development with nginx, it saves you from having to upload weird file permissions to SCM. – CamelBlues Jan 08 '15 at 20:40
  • The added permissions on the home directory are execute, not read, thus no sensitive information is (in theory) revealed (except, in this case, perhaps to a malicious PHP script which recurses upwards and knows the location of the sensitive files within another directory accessible to www-data). You'll also notice that in the original question, my nginx was running as "www-data" - the configuration values here were already set as desired. – Angus Ireland Jan 16 '15 at 00:15
  • Had to add the usergroup as well: user usergroup. – Gabriel Apr 19 '15 at 21:05
  • Worked for me as well (just as chmodding the dir to nginx:nginx). I prefer this solution though so I can have my document root owned by another user than nginx. Thanks Anderson for pointing this out. – kvdv Jul 08 '15 at 19:12
  • saved my day. by the way what if the machine has multiple users and each user has his own website, how do i deal with it? – psychok7 Sep 10 '16 at 14:20
  • I believe this is the best solution – smilingky Jan 14 '21 at 16:32
53

I've got this error and I finally solved it with the command below.

restorecon -r /var/www/html

The issue arises when you mv something from one place to another: mv preserves the SELinux context of the original location, so if you untar something in /home or /tmp it gets an SELinux context that matches that location. When you then mv it to /var/www/html, it takes the context saying it belongs in /tmp or /home with it, and httpd is not allowed by policy to access those files.

If you cp the files instead of mv them, the selinux context gets assigned according to the location you're copying to, not where it's coming from. Running restorecon puts the context back to its default and fixes it too.
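To confirm that this is what happened, you can look at the labels before fixing them; roughly (output varies by system):

ls -Z /var/www/html
restorecon -Rv /var/www/html

Files that were moved in from a home directory or /tmp will typically show a type like user_home_t or tmp_t instead of httpd_sys_content_t; restorecon -Rv relabels them recursively and prints each change it makes.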

jsina
24

I've tried different cases, and only when the owner was set to nginx (chown -R nginx:nginx "/var/www/myfolder") did it start to work as expected.

Andron
  • Worked for me as well. I suspect this happens because even though nginx is started as root, it spawns processes under the user that is specified in the nginx.conf file, which is "user nginx;" by default. Changing the user to the user who owns your document root should also work as Anderson suggested. – kvdv Jul 08 '15 at 19:08
  • Mr. Anderson? No! Andron ;) – Andron Jul 10 '15 at 10:05
  • Apologies Mr. Andron ;) I can't seem to edit the previous comment anymore though... – kvdv Jul 14 '15 at 12:40
  • Sure, not a problem. Now I was as Anderson :) and need to write some fairy tales... – Andron Jul 15 '15 at 12:14
  • Isn't this a security issue? – gontard Dec 02 '16 at 10:52
11

If you're using SELinux, just type:

sudo chcon -v -R --type=httpd_sys_content_t /path/to/www/

This will fix the permission issue.

David Ding
1

Old question, but I had the same issue. I tried every answer above and nothing worked. What fixed it for me, though, was removing the domain and adding it again. I'm using Plesk, and I installed Nginx AFTER the domain was already there.

I did a local backup to /var/www/backups first, though, so I could easily copy the files back.

Strange problem....

David
1

We had the same issue, using Plesk Onyx 17. Instead of messing around with rights etc., the solution was to add the nginx user to the psacln group, which all the other domain owners (users) are in:

usermod -aG psacln nginx

Now nginx has rights to access .htaccess or any other file necessary to properly show the content.

On the other hand, also make sure that Apache is in the psaserv group, so it can serve static content:

usermod -aG psaserv apache

And don't forget to restart both Apache and Nginx in Plesk afterwards! (And reload pages with Ctrl+F5.)
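A quick way to double-check the group memberships and apply the change (a sketch; the Apache service is usually called httpd on CentOS/RHEL-based Plesk servers):

groups nginx                  # should now include psacln
groups apache                 # should now include psaserv
service nginx restart
service httpd restart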

1

I was facing the same issue, but the solutions above did not help.

So, after a lot of struggle, I found out that SELinux was set to enforcing (shown by sestatus), which was blocking access, and setting it to permissive resolved all the issues.

sudo setenforce 0

Hope this helps someone like me.

Harshal Yeole
  • While that might have fixed your problem - congrats! - that's a bit sad :-( See [stopdisablingselinux.com](https://stopdisablingselinux.com) - could you find a different workaround? – Angus Ireland Mar 06 '19 at 17:07
0

I dug myself into a slight variant on this problem by mistakenly running the setfacl command. I ran:

sudo setfacl -m user:nginx:r /home/foo/bar

I abandoned this route in favor of adding nginx to the foo group, but that custom ACL was foiling nginx's attempts to access the file. I cleared it by running:

sudo setfacl -b /home/foo/bar

And then nginx was able to access the files.
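If you suspect a stray ACL like this, getfacl shows any extra entries before you remove them (path as in the example above):

ls -l /home/foo/bar           # a trailing "+" in the mode string hints that an ACL is set
getfacl /home/foo/bar         # lists the individual ACL entries, e.g. user:nginx:r--
sudo setfacl -b /home/foo/bar # strips all ACL entries, leaving the normal mode bits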

danvk
0

If you are using PHP, make sure the index directive in the NGINX server block contains index.php:

index index.php index.html;

For more info, check out the index directive in the official documentation.
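For example, a minimal sketch of a server block with index.php listed first (paths follow the layout from the question and are only illustrative):

server {
  listen 80;
  server_name example.com;
  root /home/demo/sites/example.com/public_html;

  # index.php first, so a PHP front controller wins over any static index page
  index index.php index.html;
}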