14

I'm getting an Input/Output error when I try to create a directory or file in a Google Cloud Storage bucket mounted on a Linux (Ubuntu 15.10) directory.

Steps I have taken:

  • Created a user named transfer
  • Created a /mnt/backups directory and ran chown -R transfer /mnt/backups
  • As the user transfer, ran gcsfuse --implicit-dirs backup01-bucket /mnt/backups. The file system mounts successfully.
  • Ran mkdir test and got the error mkdir: cannot create directory test: Input/output error

Is there something I missed? What I'm trying to do is FTP files to the server and store them in the Google Storage bucket rather than in local storage.

Update: I modified the command to get some debug information:

gcsfuse --implicit-dirs --foreground --debug_gcs --debug_fuse backup01-bucket /mnt/backups

Then ran mkdir /mnt/backups/test as the transfer user.

The following debug information came out:

fuse_debug: Op 0x00000060        connection.go:395] <- GetInodeAttributes (inode 1)
fuse_debug: Op 0x00000060        connection.go:474] -> OK
fuse_debug: Op 0x00000061        connection.go:395] <- LookUpInode (parent 1, name "test")
gcs: Req             0x3a: <- StatObject("test/")
gcs: Req             0x3b: <- ListObjects()
gcs: Req             0x3c: <- StatObject("test")
gcs: Req             0x3c: -> StatObject("test") (53.375107ms): gcs.NotFoundError: googleapi: Error 404: Not Found, notFound
gcs: Req             0x3b: -> ListObjects() (59.061271ms): OK
gcs: Req             0x3a: -> StatObject("test/") (71.666112ms): gcs.NotFoundError: googleapi: Error 404: Not Found, notFound
fuse_debug: Op 0x00000061        connection.go:476] -> Error: "no such file or directory"
fuse_debug: Op 0x00000062        connection.go:395] <- MkDir
gcs: Req             0x3d: <- CreateObject("test/")
gcs: Req             0x3d: -> CreateObject("test/") (22.090155ms): googleapi: Error 403: Insufficient Permission, insufficientPermissions
fuse_debug: Op 0x00000062        connection.go:476] -> Error: "CreateChildDir: googleapi: Error 403: Insufficient Permission, insufficientPermissions"
fuse: 2016/04/04 06:51:02.922866 *fuseops.MkDirOp error: CreateChildDir: googleapi: Error 403: Insufficient Permission, insufficientPermissions
2016/04/04 06:51:08.378100 Starting a garbage collection run.
gcs: Req             0x3e: <- ListObjects()
gcs: Req             0x3e: -> ListObjects() (54.901164ms): OK
2016/04/04 06:51:08.433405 Garbage collection succeeded after deleted 0 objects in 55.248203ms.

Note: If I create a directory in the web console I can see the directory fine.

user1476207
  • Could you run gcsfuse with `--foreground` and amend your question with the logging output? If there's nothing useful, also try `--debug_gcs` and/or `--debug_fuse`. – jacobsa Apr 03 '16 at 23:11
  • I have updated the question with debug information. Thanks. – user1476207 Apr 04 '16 at 07:02

8 Answers

16

It appears from the Insufficient Permission errors in your debug output that gcsfuse doesn't have sufficient permissions on your bucket. Probably it has read-only access.

Be sure to read the credentials documentation for gcsfuse. In particular, if you're using a service account on a GCE VM, make sure to set up the VM with the storage-full access scope.
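
For example, when creating the VM (a sketch; the instance name and zone are placeholders, and storage-full is the documented scope alias):

gcloud compute instances create my-instance --zone us-central1-a --scopes storage-full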

jacobsa
  • Thanks! Unfortunately I had to recreate the VM to enable the API. Could not find a way to change the storage to read-write on an existing VM. – user1476207 Apr 08 '16 at 19:36
  • Yeah, as far as I'm aware it's not possible to change. – jacobsa Apr 09 '16 at 10:10
  • 5
    Good news, as of December 17, 2016 it's possible to change the permissions on a *stopped* VM. See https://googlecloudplatform.uservoice.com/forums/302595-compute-engine/suggestions/13101552-ability-to-change-cloud-api-access-scopes-on-launc. You can also change permissions per API, for example continue denying access to BigQuery but full for Storage. – Spotlight Jul 15 '17 at 07:20
  • 2
    I just successfully changed the storage API access on a stopped instance to fix this problem. Please note that read/write was NOT sufficient. I had to grant "Full" storage access to write over gcs fuse. – jorfus Dec 18 '17 at 21:35
  • 1
  • Thank you, I found this working: GOOGLE_APPLICATION_CREDENTIALS=/root/mykey.json gcsfuse... – James Tan May 11 '18 at 15:11
15

Your problem does stem from insufficient permissions, but you do not need to destroy and re-create the VM with a different scope to solve it. Here is another approach that is more suitable for production systems (a command sketch follows the list):

  1. Create a service account
  2. Create a key for the service account, and download the JSON file
  3. Grant an appropriate role to the service account
  4. Grant the appropriate permissions to the service account on the bucket
  5. Upload the JSON credentials for the service account to the VM
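
A sketch of steps 1, 2 and 4 using gcloud and gsutil (the service account name, project, and bucket are placeholders; objectAdmin is one reasonable role for read/write object access):

gcloud iam service-accounts create transfer-sa
gcloud iam service-accounts keys create /root/credentials/service_credential_file.json --iam-account transfer-sa@my-project.iam.gserviceaccount.com
gsutil iam ch serviceAccount:transfer-sa@my-project.iam.gserviceaccount.com:objectAdmin gs://backup01-bucket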

Finally, define an environment variable that contains the path to the service account credentials when calling gcsfuse from the command line:

GOOGLE_APPLICATION_CREDENTIALS=/root/credentials/service_credential_file.json gcsfuse bucket_name /my/mount/point

Use the key_file option to accomplish the same thing in fstab. Both of these options are documented in the gcsfuse credentials documentation. (EDIT: this option is documented, but won't work for me.)

Interestingly, you need to use the environment variable or key_file option even if you have configured the service account on the VM using:

gcloud auth activate-service-account --key-file /root/credentials/service_credential_file.json

For some reason, gcsfuse ignores the active credentialed account.

Using the storage-full scope when creating a VM has security and stability implications, because it allows that VM to have full access to every bucket that belongs to the same project. Should your file storage server really be able to over-write the logs in a logging bucket, or read the database backups in another bucket?

Craig Finch
  • Using the --key-file argument was not working for me. The file system was getting mounted, but operations were stuck. The first approach, GOOGLE_APPLICATION_CREDENTIALS=.., worked like a charm. – Prasad Jan 25 '20 at 12:40
1

This problem is due to a missing credentials file.

Go to https://cloud.google.com/docs/authentication/production and create a service account.

  • You will get a JSON key file after creating the account.
  • Upload the JSON file to your VM instance.
  • Enter the following in /etc/fstab:

    {{gcp bucket name}} {{mount path}} gcsfuse rw,noauto,user,key_file={{/path/to/key.json}}

    If you have already mounted the bucket, unmount it first.

  • $ mount -a

Follow this link

https://github.com/GoogleCloudPlatform/gcsfuse/blob/master/docs/mounting.md#credentials

0

This problem can also occur if you have set retention policies/rules on the bucket. In my case, I was getting the same input/output error when trying to update any file within the mounted folder; the root cause was a retention policy I had added that prevented deleting any file less than 1 month old.
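
If that is the cause, you can inspect the policy, and remove it if it is not locked, with gsutil (a sketch reusing the question's bucket name):

gsutil retention get gs://backup01-bucket
gsutil retention clear gs://backup01-bucket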

Mukesh Rajput
0

I was facing this issue intermittently, so figured I'd share what I found:

I'm using minikube for development and GCP for production.

I have the following postStart lifecycle hook:

lifecycle:
  postStart:
    exec:
      command: ['gcsfuse', '-o', 'allow_other', 'bucket', 'path']

Locally, I configured the permissions by running these two commands before creating the pod:

$ gcloud auth login
$ minikube addons enable gcp-auth

Remotely, when creating my cluster, I enabled the permissions like so:

gcloud_create_cluster:
    gcloud container clusters create cluster \
    --scopes=...storage-full...

While I was developing, I found myself updating/overwriting files within 1 minute of each other. Since my retention policy was set to 60 seconds, any modifications or deletions were disallowed during that window. The solution was to simply reduce it.
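
The retention period can also be lowered from the command line with gsutil (a sketch, assuming the policy is not locked; the bucket name is a placeholder):

gsutil retention set 10s gs://my-bucket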


This is not an end-all solution but hopefully someone else finds it useful.

Olshansky
0

Please check the Cloud API access scopes setting of the virtual machine; it needs to be configured to "Allow full access to all Cloud APIs".
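
On an existing VM, the scopes can be changed while the instance is stopped, e.g. with gcloud (a sketch; the instance name and zone are placeholders, and cloud-platform is the scope alias for full API access):

gcloud compute instances set-service-account my-instance --zone us-central1-a --scopes cloud-platform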


Suraj Rao
0

It worked for me with the below entry in fstab:

bucketName mountPath gcsfuse rw,allow_other,uid=1003,gid=1003,file_mode=777,dir_mode=777,implicit_dirs

Note: Do not add "gs://" to the bucket name.

ouflak
0

If the storage bucket is accessed by a service account, make sure to grant that service account sufficient permissions. Storage Admin worked for me.
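
For example, that role can be granted at the bucket level with gsutil (a sketch; the service account email and bucket name are placeholders):

gsutil iam ch serviceAccount:my-sa@my-project.iam.gserviceaccount.com:roles/storage.admin gs://my-bucket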