37

I am trying to upload my file via:

console.log("not broken til here");
scope.inputMemeIsFile = true;
var bucket = new AWS.S3({params: {Bucket: 'townhall.images'}});
file = image.file;
console.log(file);

var params = {Key: file.name, ContentType: file.type, Body: file};
bucket.upload(params, function (err, data) {
    var result = err ? 'ERROR!' : 'UPLOADED.';
    console.log(result);
    console.log(err);
});

However, I am getting the following error:

XMLHttpRequest cannot load https://s3.amazonaws.com/<BUCKETNAME>/favicon.jpg. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://127.0.0.1:5000' is therefore not allowed access.

followed by this error: Network Failure {message: "Network Failure", code: "NetworkingError", time: Tue Feb 17 2015 13:37:06 GMT-0500 (EST), region: "us-east-1", hostname: "s3.amazonaws.com"…}

My CORS config looks like the following, and I have tried a couple of variations with no luck.

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>http://*</AllowedOrigin>
        <AllowedOrigin>https://*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

Does anyone have any idea what's wrong? I've looked at 5-6 similar posts, but no one seems to be able to solve the problem.

John

5 Answers

57

In order to upload files via the browser, you should ensure that you have configured CORS for your Amazon S3 bucket and exposed the `ETag` header via an `ExposeHeader` declaration.

I would suggest you start with an open test configuration and then modify it to your needs:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <ExposeHeader>ETag</ExposeHeader>
  </CORSRule>
</CORSConfiguration>
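If you prefer to apply this configuration programmatically rather than through the console, the same rules can be expressed as parameters for the SDK's `putBucketCors` call. This is a sketch using the AWS SDK for JavaScript v2; `test-bucket-name` is a placeholder.

```javascript
// Same open test configuration, expressed as putBucketCors parameters
// (AWS SDK for JavaScript v2). 'test-bucket-name' is a placeholder.
var corsParams = {
  Bucket: 'test-bucket-name',
  CORSConfiguration: {
    CORSRules: [
      {
        AllowedOrigins: ['*'],
        AllowedMethods: ['HEAD', 'GET', 'PUT', 'POST', 'DELETE'],
        AllowedHeaders: ['*'],
        ExposeHeaders: ['ETag']
      }
    ]
  }
};

// With credentials configured you would then run:
// new AWS.S3().putBucketCors(corsParams, function (err, data) { ... });
```

Applying it this way makes it easy to reset a bucket to the known-good test configuration while you narrow down the problem.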

Then check your bucket permissions and your AWS configuration (`accessKeyId`, `secretAccessKey`, and `region`), since none of these are present in your snippet.

For testing, go to your IAM Management Console and create a new IAM user named prefix-townhall-test, then create a group with this simple policy that grants access to a bucket:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::test-bucket-name"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": ["arn:aws:s3:::test-bucket-name/*"]
    }
  ]
}
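If you script this setup, the same policy document can be generated for whatever bucket you are testing against. This is only a sketch; `buildBucketPolicy` is a hypothetical helper, and the JSON string it returns is what you would paste into the group policy (or pass as `PolicyDocument` to the SDK's `putGroupPolicy`).

```javascript
// Build the test policy above for an arbitrary bucket name.
// 'test-bucket-name' remains a placeholder, as in the JSON above.
function buildBucketPolicy(bucketName) {
  return JSON.stringify({
    Version: '2012-10-17',
    Statement: [
      {
        // Listing requires the bucket ARN itself...
        Effect: 'Allow',
        Action: ['s3:ListBucket'],
        Resource: ['arn:aws:s3:::' + bucketName]
      },
      {
        // ...while object operations require the /* object ARN.
        Effect: 'Allow',
        Action: ['s3:PutObject', 's3:GetObject', 's3:DeleteObject'],
        Resource: ['arn:aws:s3:::' + bucketName + '/*']
      }
    ]
  });
}
```

Note the split between the bucket ARN (for `s3:ListBucket`) and the `/*` object ARN (for object-level actions); putting them on the wrong resource is a common reason this policy silently fails.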

Make sure the user you created is using the new group with this policy.

Now create a simple test script like the one in the Amazon documentation:

HTML

<input id="file-chooser" type="file" />
<button id="upload-button">Upload</button>
<p id="results"></p>

CODE (on DOM ready)

// update credentials
var credentials = {accessKeyId: 'new accessKeyId', secretAccessKey: 'new secretAccessKey'};
AWS.config.update(credentials);
AWS.config.region = 'us-west-1';

// create bucket instance
var bucket = new AWS.S3({params: {Bucket: 'test-bucket-name'}});

var fileChooser = document.getElementById('file-chooser');
var button = document.getElementById('upload-button');
var results = document.getElementById('results');
button.addEventListener('click', function() {
    var file = fileChooser.files[0];
    if (file) {
        results.innerHTML = '';

        var params = {Key: file.name, ContentType: file.type, Body: file};
        bucket.upload(params, function (err, data) {
            results.innerHTML = err ? 'ERROR!' : 'UPLOADED.';
        });
    } else {
        results.innerHTML = 'Nothing to upload.';
    }
}, false);
Zeeshan Hassan Memon
jungy
  • I have tried the ETag variation too, and the one you stated above; same problem. – John Feb 17 '15 at 19:39
  • In terms of the config... . Any idea what I am doing wrong? – John Feb 17 '15 at 19:40
  • In terms of permissions, I have granted everyone access to list, upload/delete, view permissions – John Feb 17 '15 at 19:42
  • What information would be helpful to help resolve the problem? – John Feb 17 '15 at 19:43
  • @user3525295 what are the IAM permissions set for the credentials used in your AWS configuration? Try this simple test: make a new IAM user, grant them full access to S3, and use its `accessKeyId` and `secretAccessKey` for your test. If you're using US-Standard as your region, then leave region out of the AWS config. – jungy Feb 17 '15 at 20:07
  • @user3525295 can you tell me the bucket region you are using? – jungy Feb 17 '15 at 20:21
  • There's a chance that this might be right... How do I grant full access to S3 on a user? I did it on my S3 bucket already, where I granted permission to everyone. Bucket: townhall.images Region: Northern California Creation Date: Mon Feb 16 15:34:50 GMT-500 2015 Owner: Me – John Feb 17 '15 at 20:28
  • I'm pretty sure that if it were an access key problem, it would prompt an invalid access key error as well (which would be an XML output). I've seen it when I tried other methods to do this. – John Feb 17 '15 at 20:42
  • @user3525295 I've updated my answer to give you a small test environment. It works for me with the JavaScript AWS SDK and allows me to upload just fine. If you want, I can even put the example in a fiddle, but I will probably delete the user eventually. – jungy Feb 17 '15 at 20:44
  • Could you put it into a JS fiddle? I just tried and am encountering the same error :( – John Feb 17 '15 at 20:56
  • @user3525295 your error seems to be stating that the bucket isn't in `us-east-1`. Can you make sure the request is going to `us-west-1`, or whichever region Northern California is? Let me make a test bucket in Northern California and test it out. – jungy Feb 17 '15 at 20:59
  • It seemed to be a problem with the bucket; I remade another one in US Standard instead of the US West region and it worked. – John Feb 17 '15 at 21:06
  • @user3525295 I just tested in Northern California and it worked fine with `AWS.config.region = 'us-west-1';` and the bucket using the example CORS configuration. – jungy Feb 17 '15 at 21:12
  • Thanks for this answer @jungy, ended my 2 day annoyance! – Christopher Grigg Jun 27 '16 at 00:35
15

Some browsers, such as Chrome, do not support localhost or 127.0.0.1 for CORS requests.

Try using http://lvh.me:5000/ instead (it resolves to 127.0.0.1).

See https://stackoverflow.com/a/10892392/1464716 for more.

Edu Lomeli
  • Just tried it; same effect. This was a good suggestion, though, and one I had not tried. Any other suggestions? – John Feb 17 '15 at 19:38
  • This actually worked for me. I tried with the suggested CORS header in Safari instead of Chrome and voila. – jimh Oct 22 '19 at 07:10
  • This worked for me. In my search to understand `http://lvh.me`, I discovered `localho.st`, which is more self-documenting in my CORS config. – Jonathan Wilson Jun 04 '20 at 19:43
8

The accepted answer is pretty outdated, so here's an updated take:

First, you need to set up CORS on your AWS S3 bucket:

[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET",
            "PUT",
            "POST",
            "DELETE",
            "HEAD"
        ],
        "AllowedOrigins": [
            "http://*",
            "https://*"
        ],
        "ExposeHeaders": [
            "Access-Control-Allow-Origin",
            "ETag"
        ],
        "MaxAgeSeconds": 3000
    }
]

Just a note on CORS for S3: when you apply a policy, files already in the bucket are NOT updated. So either apply the CORS policy to a new bucket, or re-add the content after applying the policy; old content won't be affected by the new CORS policy.

How does Amazon S3 evaluate CORS?

  • The request's Origin header must match an AllowedOrigin element.
  • The request method (for example, GET or PUT) or the Access-Control-Request-Method header in case of a preflight OPTIONS request must be one of the AllowedMethod elements.
  • Every header listed in the request's Access-Control-Request-Headers header on the preflight request must match an AllowedHeader element.
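The three matching rules above can be sketched as a small function. This is an illustration of the matching logic only, not Amazon's actual implementation; `corsAllows`, `matchesOrigin`, and the rule object shape are hypothetical.

```javascript
// Does an AllowedOrigin pattern (which may contain one '*') match an origin?
function matchesOrigin(allowed, origin) {
  var star = allowed.indexOf('*');
  if (star === -1) return allowed === origin;
  var prefix = allowed.slice(0, star);
  var suffix = allowed.slice(star + 1);
  return origin.length >= prefix.length + suffix.length &&
         origin.startsWith(prefix) && origin.endsWith(suffix);
}

// Check one CORSRule against the request's origin, method, and the
// headers listed in Access-Control-Request-Headers (for a preflight).
function corsAllows(rule, origin, method, requestHeaders) {
  var originOk = rule.AllowedOrigins.some(function (o) {
    return matchesOrigin(o, origin);
  });
  var methodOk = rule.AllowedMethods.indexOf(method) !== -1;
  var headersOk = (requestHeaders || []).every(function (h) {
    return rule.AllowedHeaders.some(function (a) {
      return a === '*' || a.toLowerCase() === h.toLowerCase();
    });
  });
  return originOk && methodOk && headersOk;
}
```

For example, with the rule above, an upload from `http://127.0.0.1:5000` passes because `http://*` matches the origin, `PUT` is an allowed method, and `*` covers every requested header; a request using a method not in `AllowedMethods` fails even though the origin matches.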

For the last step, you also need to clear the browser cache, because the browser might have cached a previous preflight request.

Alphapico
2

Try <AllowedOrigin>*</AllowedOrigin>, without protocol.

If it has no effect, you probably have a problem on the client side.

ermouth
  • I've tried it already. Could you elaborate on how you would solve the client side problem? – John Feb 17 '15 at 19:38
  • Possibly, your client cached the `ZillionSeconds` value in some way, if you had it in the CORSRules section. Even if you delete it after the client cached its value, the browser will not try to re-read CORS with an OPTIONS request until the ZillionSeconds period ends. To force an OPTIONS request you may a) clear the user agent cache, or b) for Chrome, open the console, then settings, and enable "No cache while console is open". – ermouth Feb 17 '15 at 19:45
  • I have my JavaScript console on and the cache is deleted. – John Feb 17 '15 at 19:46
2

Have you tried specifying your origin instead of using a wildcard? I'm pretty sure we had similar problems in the past.

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>http://127.0.0.1:5000</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
Jimmy Bernljung