
I have tried to access files in a bucket and I keep getting "access denied" on the files. I can see them in the GCS console, but I can't access them through it, and I cannot access them through gsutil either, running the command below:

gsutil cp gs://my-bucket/folder-a/folder-b/mypdf.pdf files/

But all this returns is AccessDeniedException: 403 Forbidden

I can list all the files and such, but not actually access them. I've tried adding my user to the ACL, but that had no effect. All the files were uploaded from a VM through a FUSE mount; the uploads worked perfectly, but I have since lost all access.
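For anyone diagnosing the same symptom, a quick way to inspect the object's owner and ACL (using the same path as in the command above) is:

# Print the object's ACL, including its owner (this may itself return 403 if you lack OWNER on the object)
gsutil acl get gs://my-bucket/folder-a/folder-b/mypdf.pdf
# Print the object's metadata
gsutil stat gs://my-bucket/folder-a/folder-b/mypdf.pdf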

I've checked these posts, but none seem to have a solution that's helped me:

Can't access resource as OWNER despite the fact I'm the owner

gsutil copy returning "AccessDeniedException: 403 Insufficient Permission" from GCE

gsutil cors set command returns 403 AccessDeniedException

MobliMic
    what do you get with "gsutil ls gs://my-bucket/folder-a/folder-b/mypdf.pdf" and "gsutil ls gs://my-bucket/folder-a/folder-b/"? – Riccardo Dec 31 '14 at 13:33

3 Answers


This is quite an old question, but I had a similar issue recently. After trying many of the options suggested here without success, I carefully re-examined my script and discovered that the error was the result of a mistake in my bucket address, gs://my-bucket. I fixed it and it worked perfectly!

ahajib

This is quite possible. Owning a bucket grants FULL_CONTROL permission to that bucket, which includes the ability to list objects within that bucket. However, bucket permissions do not automatically imply any sort of object permissions, which means that if some other account is uploading objects and sets ACLs to be something like "private," the owner of the bucket won't have access to it (although the bucket owner can delete the object, even if they can't read it, as deleting objects is a bucket permission).
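If the objects were created with a restrictive ACL, one way to see what ACL new uploads to the bucket receive by default (bucket name taken from the question) is the defacl command, sketched here:

# Print the bucket's default object ACL, which newly uploaded objects inherit
gsutil defacl get gs://my-bucket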

I'm not familiar with the default FUSE settings, but if I had to guess, you're using your project's system account to upload the objects, and they're set to private. That's fine. The easiest way to test that would be to run gsutil from a GCE host, where the default credentials will be the system account. If that works, you could use gsutil to switch the ACLs to something more permissive, like "project-private."

The command to do that would be:

gsutil acl set -R project-private gs://myBucketName/
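If the recursive acl set is denied partway through, it may also be worth trying to grant access on a single object first. A sketch using the path from the question (the email address is a placeholder):

# Grant READ on one object to a specific user account
gsutil acl ch -u someuser@example.com:READ gs://my-bucket/folder-a/folder-b/mypdf.pdf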
Brandon Yarbrough
  • This command also gives an `AccessDeniedException: 403 AccessDenied`. Do you have any other suggestions? I'm having the same problem and all my objects are set to private. @BrandonYarbrough – bryan Mar 05 '15 at 20:10
  • Which account are you using to invoke gsutil? Presumably it either does not own the bucket, or it does not own some object in the bucket. – Brandon Yarbrough Mar 05 '15 at 20:16
  • All my objects are using `private`. I have my compute engine listed as an owner in my bucket. And I am using the default gsutil command that came with my compute engine. I can list everything inside the bucket, I just can't copy anything. – bryan Mar 05 '15 at 20:45
  • Is there any way around this, or do I need to re-make these objects if they are private? – bryan Mar 05 '15 at 20:55
  • If your compute engine service account is the owner, you should be fine. The default gsutil setup on compute engine runs as that same service account. Without seeing the full output, I'm guessing there's an object in that bucket that is owned by someone else, perhaps your personal account? Try using the "-m" flag, which will speed things up but will also have it keep going after encountering any bad objects (see the sketch after these comments). – Brandon Yarbrough Mar 05 '15 at 22:04
  • I doubt my compute engine service account is the owner. I am assuming my app engine is the owner (which is what I used to create the files). – bryan Mar 05 '15 at 22:09
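A sketch of the "-m" variant suggested in the comments, with the bucket name taken from the question; per the comment above, -m parallelizes the operation and keeps going after hitting objects it cannot modify:

# Recursively set ACLs in parallel, continuing past objects that fail
gsutil -m acl set -R project-private gs://my-bucket/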

tl;dr The Owner (basic) role has only a subset of the GCS permissions present in the Storage Admin (predefined) role—notably, Owners cannot access bucket metadata, list/read objects, etc. You would need to grant the Storage Admin (or another, less privileged) role to provide the needed permissions.


NOTE: This explanation applies to GCS buckets using uniform bucket-level access.

In my case, I had enabled uniform bucket-level access on an existing bucket, and found I could no longer list objects, despite being an Owner of its GCP project.

This seemed to contradict how GCP IAM permissions are inherited (organization → folder → project → resource / GCS bucket), since I expected to have Owner access at the bucket level as well.

But as it turns out, the Owner permissions were being inherited as expected; they were simply insufficient for listing GCS objects.
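A quick way to confirm whether uniform bucket-level access is enabled on a bucket (the bucket name is a placeholder; older gsutil releases expose this under the bucketpolicyonly command instead) is:

# Print whether uniform bucket-level access is enabled, and when it was turned on
gsutil ubla get gs://my-bucket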

The Storage Admin role has the following permissions which are not present in the Owner role: [1]

  • storage.buckets.get
  • storage.buckets.getIamPolicy
  • storage.buckets.setIamPolicy
  • storage.buckets.update
  • storage.multipartUploads.abort
  • storage.multipartUploads.create
  • storage.multipartUploads.list
  • storage.multipartUploads.listParts
  • storage.objects.create
  • storage.objects.delete
  • storage.objects.get
  • storage.objects.getIamPolicy
  • storage.objects.list
  • storage.objects.setIamPolicy
  • storage.objects.update

This explained the seemingly strange behavior. And indeed, after granting the Storage Admin role (whereby my user was both Owner and Storage Admin), I was able to access the GCS bucket.
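For example, a minimal sketch of granting Storage Admin on a single bucket rather than project-wide (the user email and bucket name are placeholders):

# Bind roles/storage.admin for one user at the bucket level
gsutil iam ch user:jane@example.com:roles/storage.admin gs://my-bucket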

Footnotes

  1. Though the documentation page Understanding roles omits the list of permissions for Owner (and other basic roles), it's possible to see this information in the GCP console (or via the CLI, as sketched after this list):

    • Go to "IAM & Admin"
    • Go to "Roles"
    • Filter for "Owner"
    • Go to "Owner"
    • (See list of permissions)
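For reference, a gcloud equivalent of the console steps above; roles/owner and roles/storage.admin are the predefined role IDs:

# Print a role's metadata, including its full list of permissions
gcloud iam roles describe roles/owner
gcloud iam roles describe roles/storage.admin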
mxxk