73

I've recently inherited a Rails app that uses S3 for asset storage. I have transferred all assets to my S3 bucket with no issues. However, when I alter the app to point to the new bucket, I get a 403 Forbidden status.

My S3 bucket is set up with the following settings:

Permissions

Everyone can list

Bucket Policy

{
 "Version": "2012-10-17",
 "Statement": [
    {
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::bucketname/*"
    }
 ]
}

CORS Configuration

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
    </CORSRule>
    <CORSRule>
        <AllowedOrigin>https://www.appdomain.com</AllowedOrigin>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

Static Web Hosting

Enabled.

What else can I do to allow the public to reach these assets?

Hartley Brody
  • 8,669
  • 14
  • 37
  • 48
thatgibbyguy
  • 4,033
  • 3
  • 23
  • 39
  • In my scenario, the error is caused by the S3 bucket's Public Access being disabled, as it's linked to CloudFront. No solution found so far. I may need to set up a pre-signed URL mechanism, but there's little content on that too; the docs read like a college student's exam paper, padded with unrelated content to score more marks through length. – Lalit Fauzdar Sep 12 '22 at 07:37

15 Answers

51

I know this is an old thread, but I just encountered the same problem. I had everything working for months, and it suddenly stopped working, giving me a 403 Forbidden error. It turns out the system clock was the real culprit. I think S3 uses some sort of time-based token that has a very short lifespan. In my case I just ran:

ntpdate pool.ntp.org

And the problem went away. I'm running CentOS 6 if it's of any relevance. This was the sample output:

19 Aug 20:57:15 ntpdate[63275]: step time server ip_address offset 438.080758 sec

Hope it helps!
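
If you want to verify the skew before touching the clock, here is a minimal sketch, assuming Python 3 and the requests library are available (not part of the original answer):

from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

import requests

# Any S3 endpoint returns a Date header we can compare against.
resp = requests.head("https://s3.amazonaws.com")
server_time = parsedate_to_datetime(resp.headers["Date"])
skew = (datetime.now(timezone.utc) - server_time).total_seconds()
print(f"Clock skew vs S3: {skew:+.1f} seconds")
# Signed requests are rejected once the skew grows past roughly 15 minutes.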

Sthe
  • 2,575
  • 2
  • 31
  • 48
  • 1
    Thanks for posting it. I had the same problem just now with Windows. Correcting the time of the system solved it. – Marc Guillot Jan 19 '17 at 11:23
  • 20
    God bless you, good man. And please never hesitate to answer old threads. – Yaroslav Jul 19 '17 at 15:20
  • 1
    This was my issue. Somehow the clock in my docker container had drifted, which isn't an obvious thing to diagnose! – Joe Aug 23 '18 at 11:16
  • 2
    I was using VM snapshots and tearing my hair out. I want to upvote this answer ten times. – batwad Sep 11 '18 at 19:52
  • Correcting the time did the trick. Even with a 0.1 sec deviation, S3 returns a 403 Forbidden error. – KTYP May 31 '19 at 13:15
  • What an unexpected issue! Without your answer, there's no way I would ever have considered this. It fixed my problem right away - time was off by 1100 seconds! – nicbou Aug 09 '19 at 17:32
  • If you're encountering this issue trying to interface with the S3 API from inside a Docker container, note that ensuring that your host clock is synced may be insufficient. See the bottom of the page here: [Docker for Mac known issues](https://docs.docker.com/docker-for-mac/troubleshoot/#known-issues) – Nathan Jul 23 '20 at 19:24
  • What server must I update, my OS or AWS server? Also what command do I need for it if it is not the above command? – Franco Aug 13 '23 at 19:03
43

It could also be that a proper policy needs to be set according to the AWS docs.

Give the bucket in question this policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
    }
  ]
}
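
If you prefer to apply it from code, here is a minimal sketch using boto3 (assuming YOUR-BUCKET-NAME is replaced with the real bucket name and your credentials allow s3:PutBucketPolicy):

import json

import boto3

# The same public-read policy as above, applied programmatically.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*",
        }
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket="YOUR-BUCKET-NAME", Policy=json.dumps(policy))
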
Robin Métral
  • 3,099
  • 3
  • 17
  • 32
Da Rod
  • 701
  • 5
  • 11
  • 1
    I had to add a statement with "/*" removed from the Resource. – ram4nd Mar 02 '17 at 08:39
  • If I remove the `/*` from the end of the resource string, the policy editor comes back with `Action does not apply to any resources` – timbo Jun 13 '19 at 00:32
  • 2
    In my case, I forgot to add `BUCKET_NAME/*` and only gave myself access to the root of the bucket. Thanks! – SovietFrontier Nov 06 '22 at 02:53
30

The transfer was done according to this thread, which by itself is not a problem. The issue came from the previous developer not changing permissions on the files before transferring. This meant I could not manage any of the files, even though they were in my bucket.

The issue was solved by re-downloading the files cleanly from the previous bucket, deleting the old phantom files, re-uploading the fresh files, and setting their permissions to allow public reading.
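
A minimal sketch of the re-upload step with boto3; the local path, bucket, and key below are placeholders, and note that buckets created with ACLs disabled will reject the ACL argument:

import boto3

s3 = boto3.client("s3")

# Re-uploading under your own credentials makes you the object owner;
# the ACL marks the object as publicly readable.
s3.upload_file(
    "local/assets/logo.png",   # placeholder local path
    "bucketname",              # placeholder bucket
    "assets/logo.png",         # placeholder key
    ExtraArgs={"ACL": "public-read"},
)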

Community
  • 1
  • 1
thatgibbyguy
  • 4,033
  • 3
  • 23
  • 39
  • As this is the accepted answer, I'd just like to add that aws s3 sync will not transfer the ACL set up for each object. It's possible the objects were made public individually via their ACLs. This project https://github.com/cobbzilla/s3s3mirror offers a -C option, which I didn't manage to make work. As a last resort you can set a bucket policy for each folder inside your bucket allowing Principal: * to GetObject. – Djonatan Mar 20 '19 at 14:48
14

I had the same problem; adding * at the end of the bucket policy resource solved it:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::example-bucket/*"]
    }
  ]
}
Manoj Rammoorthy
  • 1,384
  • 16
  • 16
  • 5
    Why is this downvoted? Adding /* to the end of my resource fixed the issue. – Alkarin Apr 05 '19 at 21:40
  • I have no clue why people have downvoted; it's the actual answer, though. – Manoj Rammoorthy Apr 05 '19 at 23:08
  • @user3470929 I didn't downvote but it's getting downvotes because the original question already used `/*` - or at least they've edited to reflect that in the question. – emmdee Apr 15 '19 at 21:07
4

Here's the Bucket Policy I used to make the index.html file inside my S3 Bucket accessible from the internet:

[screenshot: bucket policy]

I also needed to go to Permissions -> "Block Public Access" and remove the block public access rules for the bucket, like so:

[screenshot: Block Public Access settings]

Also make sure the access permissions for the individual Objects inside each bucket are open to the public:

[screenshot: object-level permissions]
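
The console steps above can also be done from code; a minimal sketch with boto3, where "bucketname" is a placeholder:

import boto3

s3 = boto3.client("s3")

# Clear all four "Block Public Access" switches so the bucket policy
# and object ACLs can actually take effect.
s3.put_public_access_block(
    Bucket="bucketname",  # placeholder
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)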

Gene
  • 10,819
  • 1
  • 66
  • 58
1

One weird thing that fixed this for me, after already setting up the correct permissions, was removing the extension from the filename. I had many items in the bucket, all with the same permissions; some worked fine and some returned 403. The only difference was that the ones that didn't work had .png at the end of the filename. When I removed that, they worked fine. No idea why.

andrewcockerham
  • 2,676
  • 3
  • 23
  • 19
1

Another "solution" here: I was using Buddy to automate uploading a github repo to an s3 bucket, which requires programmatic write access to the bucket. The access policy for the IAM user first looked like the following: (Only allowing those 6 actions to be performed in the target bucket).

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListAllMyBuckets",
        "s3:ListBucket",
        "s3:DeleteObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::<bucket_name>/*"
    }
  ]
}

My bucket access policy was the following (allowing read/write access for the IAM user):

{
  "Version": "2012-10-17",
  "Id": "1234",
  "Statement": [
    {
      "Sid": "5678",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<IAM_user_arn>"
      },
      "Action": [
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::<bucket_name>/*"
    }
  ]
}

However, this kept giving me the 403 error.

My workaround solution was to give the IAM user access to all s3 resources:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListAllMyBuckets",
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "*"
        }
    ]
}

This got me around the 403 error, although it clearly grants broader access than it should.
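
One likely explanation, for what it's worth: s3:ListAllMyBuckets and s3:ListBucket apply to account and bucket ARNs rather than object ARNs, so they silently match nothing when the only resource is <bucket_name>/*. A sketch of a tighter policy that splits the resources accordingly (same placeholders, untested here):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListAllMyBuckets"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::<bucket_name>"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::<bucket_name>/*"
    }
  ]
}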

unie
  • 143
  • 10
0

None of the other answers worked for me. File permissions, bucket policies, and clock were all fine. The issue was intermittent, and while it may sound trite, the following have both worked for me previously:

  1. Log out and log back in.
  2. If you are trying to upload a single file, try a bulk upload; conversely, if trying a bulk upload, try uploading a single file.
entpnerd
  • 10,049
  • 8
  • 47
  • 68
0

Just found the same issue on my side in my iPhone app. It was working completely fine on Android with the same configuration and S3 setup, but the iPhone app was throwing an error. I reached out to the Amazon support team; after checking the logs on their end, they told me my iPhone had an incorrect date and time. I went to my iPhone's settings, set the correct date and time, tried uploading a new image, and it worked as expected.

If you are having the same issue and have the wrong date or time on your iPhone or simulator, this may help you.

Thanks!

Jignesh Mayani
  • 6,937
  • 1
  • 20
  • 36
0

For me it was the Public access setting under the Access Control tab.

Just ensure the read and write permissions under public access are set to Yes; by default they show "-", which means No.

Happy coding.

FYI: I'm using Flutter for my Android development.

[screenshot: public access settings]

The Billionaire Guy
  • 3,382
  • 28
  • 31
0

Make sure you use the correct AWS profile (dev/prod, etc.)!
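
A quick sketch for checking which identity your code is actually using, assuming a profile named "prod" exists in your AWS config:

import boto3

# Print the ARN of the identity behind the chosen profile.
session = boto3.Session(profile_name="prod")  # placeholder profile name
print(session.client("sts").get_caller_identity()["Arn"])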

Yitzchak
  • 3,303
  • 3
  • 30
  • 50
0

I hit this error when trying to PUT a file to S3 from JavaScript using a URL presigned in Python. Turns out my Python needed the ContentType attribute.

Once I added that, the following worked:

import boto3
import requests

access_key_id = 'AKIA....'
secret_access_key = 'LfNHsQ....'
bucket = 'images-dev'
filename = 'pretty.png'

s3_client = boto3.client(
  's3',
  aws_access_key_id=access_key_id,
  aws_secret_access_key=secret_access_key
)

# sign url
response = s3_client.generate_presigned_url(
  ClientMethod = 'put_object',
  Params = {
    'Bucket': bucket,
    'Key': filename,
    'ContentType': 'image/png',
  }
)

print(" * GOT URL", response)

# NB: to run the PUT command in Python, one must remove the ContentType attr above!
# r = requests.put(response, data=open(filename, 'rb'))
# print(r.status_code)

Then one can PUT that image to S3 using that url from the client:

var xhr = new XMLHttpRequest();
xhr.open('PUT', url);
xhr.onreadystatechange = () => {
  if (xhr.readyState === 4) {
    if (xhr.status !== 200) {
      console.log('Could not upload file.');
    }
  }
};

// The browser sends the file's MIME type as the Content-Type header;
// it must match the ContentType that was signed, or S3 responds with 403.
xhr.send(file);
duhaime
  • 25,611
  • 17
  • 169
  • 224
0

In my case, I was generating a signed URL for upload and was receiving a 403 error.

The API that generates the signed URL was running on an ECS cluster with a task role assigned. The task role did not have access to PutObjectAcl for public read of the file, and hence the requests received a 403 error.

Updating the task role for the cluster fixed the issue.

TL;DR: For public read, check whether the credentials/role/policy have PutObjectAcl permissions.
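
For reference, a sketch of the extra statement the task role needed (bucket name is a placeholder):

{
  "Effect": "Allow",
  "Action": [
    "s3:PutObject",
    "s3:PutObjectAcl"
  ],
  "Resource": "arn:aws:s3:::<bucket_name>/*"
}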

Vaulstein
  • 20,055
  • 8
  • 52
  • 73
0

I'm not sure if this will help anyone, but we started getting "Access Forbidden" last week in code that had been working for months. I upgraded aws-sdk to v3, had to create some new functions, and it started to work again.
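
For context, v3 replaces the monolithic client with modular clients plus command objects; a minimal sketch of the new style (bucket, key, and region are placeholders):

import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" }); // placeholder region

// v3 style: construct a command and pass it to send().
const data = await s3.send(
  new GetObjectCommand({ Bucket: "bucketname", Key: "pretty.png" })
);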

0

Since the end of June 2023, TLS 1.2 or later has been enforced on S3: https://aws.amazon.com/blogs/security/tls-1-2-required-for-aws-endpoints/

If your application connects to S3 over HTTPS, make sure it is configured to use TLS 1.2. Applications that use older TLS versions will get the 403 error (this can happen with .NET 4.5 and lower, for example).

For .NET applications, an easy solution is to target at least .NET 4.6.2 in the app or web config.
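
For example, a minimal Web.config sketch for an ASP.NET app (the exact targetFramework value is an assumption; anything 4.6+ opts the runtime into the OS default TLS versions):

<configuration>
  <system.web>
    <!-- Targeting 4.6+ makes outgoing HTTPS use the OS default TLS (1.2+). -->
    <httpRuntime targetFramework="4.6.2" />
  </system.web>
</configuration>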

Yochai Timmer
  • 48,127
  • 24
  • 147
  • 185