
I'm having a problem using s3fs. I'm using:

ubuntu@ip-x-x-x-x:~$ /usr/bin/s3fs --version
Amazon Simple Storage Service File System 1.71

And I have the password file installed at /usr/share/myapp/s3fs-password with 600 permissions.

I have successfully mounted the S3 bucket:

sudo /usr/bin/s3fs -o allow_other -o passwd_file=/usr/share/myapp/s3fs-password -o use_cache=/tmp mybucket.example.com /bucket

And I have user_allow_other enabled in /etc/fuse.conf.
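A quick way to check (and enable, if missing) that setting is a sketch like the following; it assumes root access and that /etc/fuse.conf already exists:

```shell
# Enable user_allow_other in /etc/fuse.conf if it is not already
# present as an uncommented line (requires root).
if ! grep -q '^user_allow_other' /etc/fuse.conf; then
    echo 'user_allow_other' | sudo tee -a /etc/fuse.conf > /dev/null
fi
```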

When I tried creating a file in the bucket as root, it worked:

ubuntu@ip-x-x-x-x:~$ sudo su
root@ip-x-x-x-x:/home/ubuntu# cd /bucket
root@ip-x-x-x-x:/bucket# echo 'Hello World!' > test-`date +%s`.txt
root@ip-x-x-x-x:/bucket# ls
test-1373359118.txt

I checked the contents of the bucket mybucket.example.com and the file was successfully created.

But I had difficulties writing into the directory /bucket as a different user:

root@ip-x-x-x-x:/bucket# exit
ubuntu@ip-x-x-x-x:~$ cd /bucket
ubuntu@ip-x-x-x-x:/bucket$ echo 'Hello World!' > test-`date +%s`.txt
-bash: test-1373359543.txt: Permission denied

I desperately tried chmod-ing test-1373359118.txt to 777, and then I could write into the file:

ubuntu@ip-x-x-x-x:/bucket$ sudo chmod 777 test-1373359118.txt
ubuntu@ip-x-x-x-x:/bucket$ echo 'Test' > test-1373359118.txt
ubuntu@ip-x-x-x-x:/bucket$ cat test-1373359118.txt
Test

Funnily enough, I could create a directory inside the bucket with mode 1777 and write a file into it:

ubuntu@ip-x-x-x-x:/bucket$ sudo mkdir -m 1777 test
ubuntu@ip-x-x-x-x:/bucket$ ls
test  test-1373359118.txt
ubuntu@ip-x-x-x-x:/bucket$ cd test
ubuntu@ip-x-x-x-x:/bucket/test$ echo 'Hello World!' > test-`date +%s`.txt
ubuntu@ip-x-x-x-x:/bucket/test$ ls
test-1373360059.txt
ubuntu@ip-x-x-x-x:/bucket/test$ cat test-1373360059.txt
Hello World!

But then I tried

ubuntu@ip-x-x-x-x:~$ sudo chmod 777 /bucket
chmod: changing permissions of '/bucket': Input/output error

It didn't work.

Initially I was thinking of using this /bucket directory to store large and rarely accessed files for my LAMP stacks located on several EC2 machines. (I think it's suitable enough for this without writing a special handling library using the AWS PHP SDK, but that's not the point.)

For that reason, I can settle for using a directory inside /bucket to store the files. But I'm just curious whether there is a way to make the entire /bucket writable by other users?

Petra Barus

7 Answers


Permissions were an issue with older versions of s3fs. Upgrade to the latest version to get it working.

As already stated in the question itself and in other answers, when mounting you will have to pass the -o allow_other option.

Example:

s3fs mybucket:/ mymountlocation/ -o allow_other 

Also, before doing this ensure the following is enabled in /etc/fuse.conf:

user_allow_other

It is disabled by default ;)

codersofthedark
    This doesn't seem to work recursively, meaning I cannot access the subdirectories in the S3 bucket even though the `allow_other` option is set (as well as `user_allow_other`) – mj3c Mar 02 '20 at 08:57
  • @mj3c it seems the answer is umask. see https://stackoverflow.com/a/62693432/1421036 – LogicDaemon May 25 '22 at 18:55

This works for me:

s3fs ec2downloads:/ /mnt/s3 -o use_rrs -o allow_other -o use_cache=/tmp

This must have been fixed in a recent version; I'm using the latest clone (1.78) from the GitHub project.

Chris

This is the only thing that worked for me:

You can pass the uid and gid options to make sure the mounted files are owned by your user:

    -o umask=0007,uid=1001,gid=1001 # replace 1001 with your ids

from: https://github.com/s3fs-fuse/s3fs-fuse/issues/673
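As a sanity check on what umask=0007 does: the mask clears the "other" permission bits, so directories in the mount appear as 0770 and regular files as 0660. The arithmetic can be verified directly in the shell:

```shell
# A umask removes bits from the maximum mode:
# directories start from 0777, regular files from 0666.
printf '%o\n' $(( 0777 & ~0007 ))   # 770 (rwxrwx---)
printf '%o\n' $(( 0666 & ~0007 ))   # 660 (rw-rw----)
```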

To find your uid and gid, look at the first two numbers here (the third and fourth colon-separated fields of your /etc/passwd entry):

grep "^$USER:" /etc/passwd
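Alternatively, the `id` utility prints these numbers directly, which avoids grepping /etc/passwd:

```shell
# Print the current user's numeric uid and gid, then the
# matching s3fs option string.
uid=$(id -u)
gid=$(id -g)
echo "-o umask=0007,uid=${uid},gid=${gid}"
```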
user48956

I would like to recommend taking a look at a newer project, RioFS (a userspace S3 filesystem): https://github.com/skoobe/riofs.

This project is an s3fs alternative; its main advantages over s3fs are simplicity, speed of operations, and bug-free code. The project is currently in a "testing" state, but it has been running on several high-load file servers for quite some time.

We are seeking more people to join the project and help with testing. From our side, we offer quick bug fixes and will listen to your requests to add new features.

Regarding your issue: to run RioFS as root and give other users read/write access to the mounted directory:

  1. make sure /etc/fuse.conf contains the user_allow_other option
  2. launch RioFS with the -o "allow_other" parameter.

The full command line to launch RioFS will look like this:

sudo riofs -c /path/to/riofs.conf.xml http://s3.amazonaws.com mybucket.example.com /bucket

(Make sure you have exported both the AWSACCESSKEYID and AWSSECRETACCESSKEY variables, or set them in the riofs.conf.xml configuration file.)
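If you go the environment-variable route, the export step might look like the sketch below. The key values are placeholders to replace with your own; `sudo -E` preserves the exported variables for the root process:

```shell
# Placeholder credentials -- substitute your own AWS keys.
export AWSACCESSKEYID="YOUR_ACCESS_KEY_ID"
export AWSSECRETACCESSKEY="YOUR_SECRET_ACCESS_KEY"

# -E keeps the exported variables visible to the root process.
sudo -E riofs -c /path/to/riofs.conf.xml http://s3.amazonaws.com mybucket.example.com /bucket
```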

Hope it helps, and we look forward to seeing you join our community!

Paul

There could be several reasons; I'm listing a possible one because I encountered the same issue. If you look at your file permissions, the file may have inherited '---------' (no permissions/ACL).

If that's the case, you can add the "x-amz-meta-mode" header to the file's metadata. Check out my post on how to do it, including how to do it dynamically.
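For illustration, one way to set that header is an in-place AWS CLI copy that replaces the object's metadata. The mode value is the decimal form of the desired st_mode (here 33188 = octal 0100644, a regular file with rw-r--r--); the bucket and key names are placeholders:

```shell
# Rewrite the object's metadata in place so s3fs sees a mode.
# 33188 is decimal for octal 0100644 (regular file, rw-r--r--).
aws s3 cp s3://mybucket/path/file.txt s3://mybucket/path/file.txt \
    --metadata mode=33188 \
    --metadata-directive REPLACE
```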

George

If you are using CentOS, you need to enable the httpd_use_fusefs SELinux boolean; otherwise, no matter what options you give s3fs, httpd will never have permission to access the mount:

setsebool -P httpd_use_fusefs on
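You can confirm the boolean took effect with getsebool (assuming an SELinux-enabled system):

```shell
# Should print: httpd_use_fusefs --> on
getsebool httpd_use_fusefs
```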

To let all users access the mounted bucket, use umask=0002 in the /etc/fstab entry and remount the S3 bucket.

Example mount options for the fstab entry: fuse.s3fs _netdev,allow_other,umask=0002,passwd_file=/etc/passwdfile.txt
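A complete fstab entry built from those options might look like the sketch below; the bucket name and mount point are placeholders to replace with your own:

```shell
# Hypothetical /etc/fstab line -- replace mybucket and /bucket:
#   mybucket /bucket fuse.s3fs _netdev,allow_other,umask=0002,passwd_file=/etc/passwdfile.txt 0 0

# After editing fstab, remount (umount is a no-op if not mounted):
sudo umount /bucket 2>/dev/null
sudo mount -a
```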