64

I've generated a presigned S3 POST URL. I pass the returned parameters into my code, but I keep getting this error: "Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource."

In Postman, however, I'm able to submit the form-data with one attached file.

In Postman, I entered the parameters manually: [screenshot: Postman form-data parameters]

The same parameters are then entered into my code: [screenshot: request code with parameters]

ngzhongcai
  • The difference between what the browser is doing and Postman is that Postman only does the POST. The browser does a "preflight", meaning it sends an OPTIONS request before the POST and uses the CORS headers from the OPTIONS response to decide whether to continue with the POST. You can check what the OPTIONS response looks like in Postman. – David Fevre Jun 23 '23 at 04:23
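For illustration, the browser-side check this comment describes can be sketched as a small function. This is a simplified model of the preflight decision, not the actual browser algorithm (real browsers also check requested headers, credentials mode, and more):

```python
# Simplified model of the browser's CORS preflight decision: given the
# headers returned by the OPTIONS request, decide whether the real POST
# may proceed.
def preflight_allows(options_response_headers, origin, method):
    allow_origin = options_response_headers.get("Access-Control-Allow-Origin", "")
    allow_methods = options_response_headers.get("Access-Control-Allow-Methods", "")
    origin_ok = allow_origin in ("*", origin)
    method_ok = method in [m.strip() for m in allow_methods.split(",")]
    return origin_ok and method_ok

# A bucket with no CORS configuration returns neither header, so the POST is blocked:
print(preflight_allows({}, "https://myapp.example", "POST"))   # False
print(preflight_allows(
    {"Access-Control-Allow-Origin": "*",
     "Access-Control-Allow-Methods": "POST"},
    "https://myapp.example", "POST"))                          # True
```

This is why Postman succeeds while the browser fails: Postman skips the check entirely, so the missing headers only matter in the browser.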

14 Answers

54

You must edit the bucket's CORS configuration to allow your origin; to make it public, something like:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>POST</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
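If you want to sanity-check a configuration like this before pasting it into the console, it can be parsed with Python's standard library. A quick sketch (note that the S3 document namespace URI must match exactly when querying elements):

```python
import xml.etree.ElementTree as ET

cors_xml = """<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>POST</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>"""

# All elements live in the S3 CORS namespace, so queries need a prefix map.
ns = {"s3": "http://s3.amazonaws.com/doc/2006-03-01/"}
root = ET.fromstring(cors_xml)
rule = root.find("s3:CORSRule", ns)
print(rule.find("s3:AllowedOrigin", ns).text)  # *
print(rule.find("s3:AllowedMethod", ns).text)  # POST
```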


kooskoos
Abdennour TOUMI
  • doesn't make sense, because I was able to upload via Postman. This is a POST request, not a GET – ngzhongcai Dec 07 '17 at 07:15
  • Sorry, I mean CORS Configuration.. I updated the answer. Sorry again! – Abdennour TOUMI Dec 07 '17 at 07:17
  • Thank you for the tip. "Allowed Header: *" helped a little and I am getting some headway. What perplexed me is that the same thing worked in Postman, but not in my code. – ngzhongcai Dec 07 '17 at 07:34
  • @ngzhongcai fwiw, this works in Postman and other test tools because Postman is not a web site. CORS is a browser mechanism that prevents site A from making requests to site B (that's your bucket) unless site B agrees to accept the request from a viewer of site A via a "preflight check." The browser requires a CORS-friendly response when you're using one web site to access another, but Postman would not need/use/trigger this safety mechanism, because it's actually designed to protect both the user and site B from potentially malicious code on site A, and there is no site A under Postman. – Michael - sqlbot Dec 07 '17 at 13:44
  • Thank you Michael! – ngzhongcai Dec 08 '17 at 04:34
  • @AbdennourTOUMI I also faced the same issue; I could fix it by setting AllowedHeader to *. But I did not understand why we need to make it * and not any specific header as mentioned in https://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonRequestHeaders.html – Minions Aug 07 '18 at 04:03
  • This didn't work even with AllowedHeader * ; did anyone fix this? – Jeson Dias Dec 10 '18 at 04:41
  • Where to add it? In the S3 console: your bucket / Permissions / CORS configuration. I set https://yourdomain.cloud; in my case * was not enough. – oFace Jul 31 '19 at 10:08
  • It's funny how often people think Postman proves there's no CORS error. – doug65536 Jul 27 '22 at 16:43
  • @doug65536 To be fair, we habitually think of security errors as something that must come from the server -- the client cannot be trusted. On top of that, the server does in fact set the CORS policy, so _of course_ it's a server-side concern, right? The idea that the server is setting a policy but the client's browser is enforcing it is extremely unintuitive. You have to sit down and think about the actual purpose of CORS (to protect the client against a malicious third party) to understand why it is the way it is. – Dausuul Jul 20 '23 at 16:24
39

I'm unable to comment, so adding this here: it contains Harvey's answer, but as text, to make it easy to copy.

[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET",
            "PUT",
            "POST"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]
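As a quick check that the block above is valid JSON with the fields the S3 console's JSON editor expects, it can be loaded with Python's standard library (a sketch; the console applies its own stricter validation on save):

```python
import json

# The same rules as above, loaded from a string so typos surface immediately.
cors_config = json.loads("""
[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["GET", "PUT", "POST"],
        "AllowedOrigins": ["*"],
        "ExposeHeaders": []
    }
]
""")

rule = cors_config[0]
print("POST" in rule["AllowedMethods"])  # True
print(rule["AllowedOrigins"])            # ['*']
```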
Pranav Joglekar
14

I encountered this issue as well. The CORS configuration on my bucket seemed correct, yet my presigned URLs were hitting CORS problems. It turned out the AWS_REGION for my presigner was not set to the region of the bucket. After setting AWS_REGION to the correct region, it worked fine. I'm annoyed that the CORS error was such a red herring for a simple problem and wasted several hours of my time.

Math is Hard
13

In my case I fixed it by setting AllowedMethods and AllowedOrigins on the bucket in S3. The editor is under the Permissions tab.


[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET",
            "PUT",
            "POST"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]
Harvey Kadyanji
    This would be a better answer if it was text, not an image that others cannot easily copy from. – multithr3at3d May 27 '21 at 02:45
  • Hello, I am wondering why in my case it doesn't work. I know the CORS parameters and allowed all origins like the settings above, but I'm still getting a CORS 403 error as the response to a PUT request. – iamcrypticcoder Jul 21 '22 at 12:15
2

I used boto3 to add the CORS policy, and this is what worked for me, using the same rules as @Pranav Joglekar:

cors_configuration = {
    'CORSRules': [{
        'AllowedHeaders': ['*'],
        'AllowedMethods': ['GET', 'PUT', 'POST'],
        'AllowedOrigins': ['*'],
        'ExposeHeaders': [],
        'MaxAgeSeconds': 3000
    }]
}

# get_s3_client() is the author's own helper that returns a boto3 S3
# client, e.g. boto3.client('s3')
s3_client = get_s3_client()
s3_client.put_bucket_cors(Bucket='my_bucket_name',
                          CORSConfiguration=cors_configuration)
NullPointer
1

In my case I specifically needed to allow the PUT method in the S3 Bucket's CORS Configuration to use the presigned URL, not the GET method as in the accepted answer:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
Conrad S
1

My issue was that, for some reason, getSignedUrl returned a URL like this:

https://my-bucket.s3.us-west-2.amazonaws.com/bucket-folder/file.jpg

I removed the region part (us-west-2) and that fixed it.

So instead it is now

https://my-bucket.s3.amazonaws.com/bucket-folder/file.jpg
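A small sketch of how the region appears in the virtual-hosted host name, and one way to rewrite it as in this answer, using only Python's standard library (host-label positions assumed from the URLs above):

```python
from urllib.parse import urlparse, urlunparse

url = "https://my-bucket.s3.us-west-2.amazonaws.com/bucket-folder/file.jpg"
parts = urlparse(url)

# Host labels in a regional virtual-hosted URL: <bucket>.s3.<region>.amazonaws.com
labels = parts.hostname.split(".")
bucket, region = labels[0], labels[2]
print(bucket, region)  # my-bucket us-west-2

# Rewriting to the legacy global endpoint, as the answer describes:
global_host = f"{bucket}.s3.amazonaws.com"
print(urlunparse(parts._replace(netloc=global_host)))
# https://my-bucket.s3.amazonaws.com/bucket-folder/file.jpg
```

Note that with SigV4, the host is part of what gets signed, so regenerating the URL with the correct region is generally safer than rewriting an already-signed one.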

Gal Bracha
1

My issue was I had a trailing slash (/) at the end of the domain in "AllowedOrigins". Once I removed the slash, requests worked.
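This bites because the Origin header a browser sends is just scheme://host[:port], never a path or trailing slash, and S3 compares it to AllowedOrigins literally. A tiny sketch of normalizing origins before writing them into the config:

```python
def normalize_origin(origin: str) -> str:
    # The browser's Origin header has no path and no trailing slash,
    # so strip any trailing slash before putting the value in AllowedOrigins.
    return origin.rstrip("/")

print(normalize_origin("https://www.example.com/"))  # https://www.example.com
print(normalize_origin("https://www.example.com"))   # https://www.example.com
```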

echelon315
  • Wonderful, all CORS config was right in our setup, yet it was not working, finally removing the trailing / solved our problem – Srisudhir T Nov 28 '22 at 04:43
1

I was getting similar CORS errors even with things properly configured.

Thanks to this answer, I discovered that my Lambda@Edge function that presigns was using the wrong region for this bucket (it was on us-east-1 for some default-stack reason).

So I had to be explicit about the region when generating the presignedPost.

reference: https://stackoverflow.com/a/13703595/11832970

  • It took me 1 whole day before getting this right. The CORS error was only confusing, and I should have focused on the 301 status – Raffael Campos Apr 20 '22 at 14:20
0

For me, it was because my bucket name had a hyphen in it (e.g. my-bucket). The signed URL would replace the hyphen in the bucket name with an underscore and then sign it. So this meant two things:

  1. CORS wouldn't work because the URL technically wasn't correct
  2. I couldn't just change the underscore back to the hyphen because then the signature would be wrong when AWS validated the signed URL

I eventually had to rename my bucket to something without a hyphen (e.g. mybucket) and then it worked fine with the following configuration:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
Josh Weston
0

You should specify only the HTTP method you actually use. We were using the POST method for the presigned URL, so we removed "GET" and "PUT" from "AllowedMethods":

[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "POST"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]
0

Check the URL encoding. I had a URL-encoded version of the presigned URL, and it failed until I decoded it.
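A sketch of the problem: a presigned URL that has been percent-encoded an extra time no longer matches its signature, and decoding it once restores the original. The values below are illustrative, not a real signature:

```python
from urllib.parse import unquote

# A hypothetical presigned URL whose query string was percent-encoded once
# more somewhere along the way: '%3D' should be '=' and '%26' should be '&'.
encoded = "https://bucket.s3.amazonaws.com/key?X-Amz-Signature%3Dabc123%26X-Amz-Expires%3D3600"
decoded = unquote(encoded)
print(decoded)
# https://bucket.s3.amazonaws.com/key?X-Amz-Signature=abc123&X-Amz-Expires=3600
```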

Doug
0

In my case the URL was written as https=/www.xxx-qa.com. I changed it to https://www.xxx-qa.com and the issue was resolved.

tushar
0

I encountered this error with a bucket that had dots (.) in its name, like cdn.dev.company.com (which was used with Cloudflare, not AWS CloudFront, as a CDN for serving media files). Below is the Python snippet the backend used to generate presigned URLs, which the frontend then used to upload video files directly to the S3 bucket. Check the comment next to the "client" variable: in that configuration it worked well. (You also need to add a CORS policy in the bucket details, as already described in this thread.)

import boto3
from botocore.client import Config
from django.conf import settings

session = boto3.Session(
    aws_access_key_id=settings.AWS_ACCESS_KEY_ID,
    aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY,
    region_name=settings.AWS_S3_REGION_NAME,
)

# you need to define endpoint_url and addressing_style as virtual in order 
# to generate URLs that are friendly for web-browser and work well with CORS
client = session.client(
    "s3",
    endpoint_url=f"https://s3.{settings.AWS_S3_REGION_NAME}.amazonaws.com",
    config=Config(s3={"addressing_style": "virtual"}),
)

bucket = settings.AWS_STORAGE_BUCKET_NAME
key = "file.png"
upload_id = "zxcv"
part_number = "abcd"
default_url_expiration = 1200

client.generate_presigned_url(
    ClientMethod="upload_part",
    Params={
        "Bucket": bucket,
        "Key": key,
        "UploadId": upload_id,
        "PartNumber": part_number,
    },
    ExpiresIn=default_url_expiration,
    HttpMethod="PUT",
)

michal-michalak