
I recently moved over a thousand Paperclip attachments from local storage to Amazon S3 using the s3cmd tool. You can find details on how I accomplished this here, but to summarize, I used the following command to migrate all the old attachments:

s3cmd sync my-app/public/system/ s3://mybucket 

I updated my codebase to make use of the new S3 bucket, tested the connection, and everything works fine. In fact, I can upload new attachments to the remote S3 bucket through my application and view/download them with no problem. However, somewhere along the line Paperclip and S3 fell out of sync: all the attachments I moved over to my S3 bucket (blurred out in the image below) return 403s when I try to access them through my application, yet new attachments uploaded to the same bucket load just fine.

[Image: 403 permission denied errors for the old attachments migrated to S3]

I have an IAM group setup with the following configuration:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::mybucket/*"
    }
  ]
}

Judging from the fact that I can upload new attachments to S3, I'd say the connection is established just fine. I believe I set up s3cmd to connect to my bucket via a different IAM user account than the one my application uses to access the bucket. Could it be a permission issue? If so, can I change the permissions above in any way to grant access to those files?

I'm using the aws-sdk gem to integrate Paperclip with S3.
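
For reference, the attachment is wired up roughly like this (the model, attachment name, and path here are stand-ins for the real ones):

class Image < ActiveRecord::Base
  has_attached_file :file,
    storage: :s3,
    bucket: 'mybucket',
    s3_credentials: {
      access_key_id: ENV['AWS_ACCESS_KEY_ID'],
      secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
    },
    # this path has to line up with the key layout s3cmd sync produced,
    # otherwise Paperclip generates URLs for keys that don't exist
    path: ':class/:attachment/:id/:style/:filename'
end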

Edit: I thought it might have been an ownership issue, since I uploaded the files using admin keys rather than the ones I use in my application. I purged the bucket and re-ran the s3cmd sync after configuring it to use the same keys the application uses. I'm met with the same result, however.

Edit 2: I can verify my permissions and connection further by going into the production console for my application and interacting with my bucket manually. Everything works perfectly fine, i.e. I can retrieve files that my browser returns 403s for.

> s3 = AWS::S3.new
=> <AWS::S3>
> bucket = s3.buckets['mybucket']
=> #<AWS::S3::Bucket:mybucket>  
> bucket.exists?
=> true
> image = bucket.objects["SomeFolder/SomeImage.jpg"]
=> <AWS::S3::S3Object:SomeFolder/SomeImage.jpg>
> puts image.read
=> ǃ���^�D��턣�����=m������f ... binary image data
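
One more check worth running from the same console is the ACL on one of those migrated objects; if the only grant belongs to the admin account I used with s3cmd, that would explain the 403s. A rough sketch against aws-sdk v1 (the exact AccessControlList method names may differ by gem version):

> acl = image.acl
> acl.grants.each { |g| puts "#{g.grantee.display_name}: #{g.permission.name}" }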
  • If I go into the S3 Management Console and manually navigate to the images in my bucket and grant download permissions to `Everyone`, I no longer get the 403s. Looks like it's a permission issue after all; anybody know a quick fix? I was under the impression that my policy above would have covered this, but I guess not. – Noz Jan 06 '14 at 23:34
  • The policy looks good. Is the user in question a member of the group that you attached this policy to? Is the policy attached to the group? What you essentially did was make your bucket public, which may not necessarily be a good thing. – Rico Jan 06 '14 at 23:57
  • Yeah, the user is a part of that group. What do you mean by "made my bucket public"? – Noz Jan 06 '14 at 23:59
  • When you grant permissions to `Everyone`, anybody can download/view the content of your bucket regardless of whether they use an aws_access_key/aws_secret_access_key pair. – Rico Jan 07 '14 at 00:01
  • I don't mind if the application has complete access to the bucket, as it will have its own authentication layer separate from S3 to control different actions. But I don't want Joe Schmoe sending some random PUT request to a URL to change data outside the application. I've only added the permission for `Everyone` for test purposes, as the S3 console doesn't actually list the IAM user I set up when I manually try to add permissions to a bucket. Also, adding `Everyone` permission to the bucket itself doesn't alleviate the 403s; I have to add it to specific files within the bucket. – Noz Jan 07 '14 at 00:04
  • Yeah, the permissions on the buckets are not associated with IAM. To me they are kind of bogus, and the only purpose they serve is when you use `Everyone`. By any chance, did you try `"Action": ["s3:*"]` in your policy instead of `"Action": "s3:*"`? Although the JSON should be correct, I've seen that all the examples use brackets. – Rico Jan 07 '14 at 00:25
  • @Rico Sadly the brackets didn't fix it. I believe the permissions work fine, as I'm able to upload new content and view/download it. For whatever reason, those permissions just don't apply to the files I moved to S3 manually via s3cmd sync. – Noz Jan 07 '14 at 17:45
  • Another thing: did you check that the right IAM keys are in your .s3cfg for s3cmd? You could be uploading the files as another user and somehow changing permissions. – Rico Jan 07 '14 at 17:51
  • @Rico Yeah, I made note of that at the bottom of my question. I uploaded the files with an admin account different from the keys being used in my application. In hindsight that was probably a mistake, but now that it's done, is there any way I can transfer ownership? There are quite a few files and I would like to avoid re-uploading them if possible. – Noz Jan 07 '14 at 18:12
  • I would try this: `s3cmd setacl --acl-public --recursive s3://bucket/object` with the original user that you used to create the objects, and `s3cmd setacl --acl-private --recursive s3://bucket/object` with the user that you want the bucket's permissions to come from. – Rico Jan 07 '14 at 23:49
  • See my edit. I tried adding a bucket policy; setting the `Principal` element to my IAM user doesn't work, but changing it to a value of `*` (granting access to everyone) resolves my problem. I'm not sure why it doesn't work when I point it at an authenticated user. I know the permissions are set correctly, as I'm able to log in via the IAM user sign-in link and I have access to all the operations listed in my policy via the S3 console. Reading through the documentation, it doesn't look like I've missed any steps. It might be buggy. – Noz Jan 08 '14 at 00:03

1 Answer


It looks like your S3 bucket policy is not properly allowing public read access for anonymous users. Try something like:

{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::my-brand-new-bucket/*"
      ]
    }
  ]
}

The fact that you are able to access these files as a public user once you manually apply public-read permissions confirms that your bucket policy is not granting read access correctly.

When you use a public S3 URL to access the files, there is no authenticated user.
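
Alternatively, if you'd rather not make these objects publicly readable at all, Paperclip's S3 storage can keep uploads private and hand out signed, time-limited URLs instead. A minimal sketch (the attachment name is a placeholder):

has_attached_file :file,
  storage: :s3,
  s3_permissions: :private,  # objects keep a private ACL; no public read
  s3_credentials: { ... }    # same credentials as before

# then, in a view or controller, generate a signed URL valid for one hour:
image.file.expiring_url(3600)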

Winfield
  • Aha! Now that I look closely at those URLs there's no `AccessKey` or `Signature`... doh! [This](https://github.com/thoughtbot/paperclip/wiki/Restricting-Access-to-Objects-Stored-on-Amazon-S3) GitHub wiki page on the Paperclip repo goes over in more detail how to make an authenticated request. – Noz Jan 13 '14 at 23:20
  • Is this supposed to be "in addition to" or "instead of" the OP's policy? Also, I got an error message to the effect of "Principals are not allowed in a policy"... or something like that. – Jeff Jan 02 '15 at 22:16
  • The provided policy should suffice. See https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html#example-bucket-policies-use-case-2 – jwadsack Apr 09 '15 at 21:42