17

Let me start off by saying that I am normally very reluctant to post questions like this, as I always feel there's an answer to everything SOMEWHERE on the internet. After spending countless hours looking for an answer to this one, however, I've finally given up on that belief.

Assumption

This works:

s3.getSignedUrl('putObject', params);

What am I trying to do?

  1. Upload a file via PUT (from the client-side) to Amazon S3 using the getSignedUrl method
  2. Allow anyone to view the file that was uploaded to S3

Note: If there's an easier way to allow client side (iPhone) uploads to Amazon S3 with pre-signed URLs (and without exposing credentials client-side) I'm all ears.

Main Problems

  1. When viewing the AWS Management Console, the file uploaded has blank Permissions and Metadata set.
  2. When viewing the uploaded file (i.e. by double clicking the file in AWS Management Console) I get an AccessDenied error.

What have I tried?

Try #1: My original code

In NodeJS I generate a pre-signed URL like so:

var params = {Bucket: mybucket, Key: "test.jpg", Expires: 600};
s3.getSignedUrl('putObject', params, function (err, url){
  console.log(url); // this is the pre-signed URL
});

The pre-signed URL looks something like this:

https://mybucket.s3.amazonaws.com/test.jpg?AWSAccessKeyId=AABFBIAWAEAUKAYGAFAA&Expires=1391069292&Signature=u%2BrqUtt3t6BfKHAlbXcZcTJIOWQ%3D

Now I upload the file via PUT

curl -v -T myimage.jpg https://mybucket.s3.amazonaws.com/test.jpg?AWSAccessKeyId=AABFBIAWAEAUKAYGAFAA&Expires=1391069292&Signature=u%2BrqUtt3t6BfKHAlbXcZcTJIOWQ%3D

PROBLEM
I get the Main Problems listed above.

Try #2: Adding Content-Type and ACL on PUT

I've also tried adding the Content-Type and x-amz-acl in my code by replacing the params like so:

var params = {Bucket: mybucket, Key: "test.jpg", Expires: 600, ACL: "public-read-write", ContentType: "image/jpeg"};

Then I try a good ol' PUT:

curl -v -H "image/jpeg" -T myimage.jpg https://mybucket.s3.amazonaws.com/test.jpg?AWSAccessKeyId=AABFBIAWAEAUKAYGAFAA&Content-Type=image%2Fjpeg&Expires=1391068501&Signature=0yF%2BmzDhyU3g2hr%2BfIcVSnE22rY%3D&x-amz-acl=public-read-write

PROBLEM
My terminal outputs some errors:

-bash: Content-Type=image%2Fjpeg: command not found
-bash: x-amz-acl=public-read-write: command not found

And I also get the Main Problems listed above. (I suspect the unquoted & characters in the URL are being treated by the shell as command separators, which would explain the "command not found" output.)

Try #3: Modifying Bucket Permissions to be public

All of the items listed below are ticked in the AWS Management Console:

Grantee: Everyone can [List, Upload/Delete, View Permissions, Edit Permissions]
Grantee: Authenticated Users can [List, Upload/Delete, View Permissions, Edit Permissions]

Bucket Policy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1390381397000",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::mybucket/*"
        }
    ]
}

Try #4: Setting IAM permissions

I set the user policy to be this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}

And the AuthenticatedUsers group policy to be this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1391063032000",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

Try #5: Setting CORS policy

I set the CORS policy to this:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

And... Now I'm here.

Chris Tan
  • Hi Chris, just wondering if you ever got it working with the getSignedUrl method (as the ideal). I know you accepted the answer from Amit but I want to solve the problem the same way you did just using getSignedUrl and when there is only public-read ACL (no public-write) then I just can't seem to get Amazon to accept the PUT. – Reinsbrain Feb 19 '15 at 22:36
  • @Reinsbrain It's been a long time since I tried this code but I was unable to get s3.getSignedUrl('putObject', params); to work the way I expected. (I emailed Amazon but never heard back.) I went with a solution similar to the one posted by Amit (in Node.js that I can send you if you need). I also never got a chance to try Praneeth's solution as this code was from a really long time ago. – Chris Tan Feb 20 '15 at 02:40
  • it is very strange that getSignedUrl doesn't seem to work with acl:public-read... probably it can work but the documentation is sorely lacking. My colleague attempted a solution like Amit's while referring to Amazon's documentation, which one can only describe as a "mind bomb". If you've got something working in Node it would be amazingly helpful to see how you pulled it off. Perhaps you might post it as an alternative answer - I will vote it up ;) Thanks Chris – Reinsbrain Feb 20 '15 at 14:46
  • I managed to get it working with getSignedUrl and the problem has to do with some headers. For the benefit of others having the same issue I'm going to post another answer to this issue – Reinsbrain Feb 24 '15 at 13:57

7 Answers

14

Update

I have bad news. According to the release notes of SDK 2.1.6 at http://aws.amazon.com/releasenotes/1473534964062833:

"The SDK will now throw an error if ContentLength is passed into an 
Amazon S3 presigned URL (AWS.S3.getSignedUrl()). Passing a 
ContentLength is not supported by the SDK, since it is not enforced on 
S3's side given the way the SDK is currently generating these URLs. 
See GitHub issue #457."

I have found that on some occasions ContentLength must be included (specifically if your client passes it, so the signatures will match), while on other occasions getSignedUrl will complain if you include ContentLength, with a parameter error: "contentlength is not supported in presigned urls". I noticed that the behavior would change when I changed the machine which was making the call. Presumably the other machine made a connection to a different Amazon server in the farm.

I can only guess why the behavior exists in some cases but not in others. Perhaps not all of Amazon's servers have been fully upgraded? Either way, to handle this problem I now make an attempt using ContentLength, and if it gives me the parameter error I call getSignedUrl again without it. This is a work-around for this strange behavior in the SDK.

A little example... not very pretty to look at but you get the idea:

MediaBucketManager.getPutSignedUrl = function ( params, next ) {
    var _self = this;
    _self._s3.getSignedUrl('putObject', params, function ( error, data ) {
        if (error) {
            console.log("An error occurred retrieving a signed url for putObject", error);
            // TODO: build contextual error
            if (error.code == "UnexpectedParameter" && error.message.search("ContentLength") > -1) {
                if (params.ContentLength) delete params.ContentLength;
                // retry without ContentLength; note this function's signature is (params, next)
                MediaBucketManager.getPutSignedUrl(params, function ( error, data ) {
                    if (error) {
                        console.log("An error occurred retrieving a signed url for putObject", error);
                    } else {
                        console.log("Retrieved a signed url for putObject:", data);
                        return next(null, data)
                    }
                }); 
            } else {
                return next(error); 
            }
        } else {
            console.log("Retrieved a signed url for putObject:", data);
            return next(null, data);
        }
    });
};
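
Hypothetical usage of the helper above (the bucket, key, and header values here are placeholders of my own, not from the original answer):

MediaBucketManager.getPutSignedUrl({
    Bucket: 'mybucket',
    Key: 'test.jpg',
    Expires: 600,
    ACL: 'public-read',
    ContentType: 'image/jpeg',
    ContentLength: 7469 // may be stripped and retried, per the work-around above
}, function ( error, signedUrl ) {
    if (error) return console.error(error);
    console.log("PUT the file to:", signedUrl);
});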

So, below is not entirely correct (it will be correct in some cases but give you the parameter error in others) but might help you get started.

Old Answer

It seems that (for a signed URL to PUT a file to S3 where there is only a public-read ACL) there are a few headers that will be compared when the PUT request is made to S3. They are compared against what has been passed to getSignedUrl:

CacheControl: 'STRING_VALUE',
ContentDisposition: 'STRING_VALUE',
ContentEncoding: 'STRING_VALUE',
ContentLanguage: 'STRING_VALUE',
ContentLength: 0,
ContentMD5: 'STRING_VALUE',
ContentType: 'STRING_VALUE',
Expires: new Date || 'Wed De...'

see the full list here: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putObject-property

When you're calling getSignedUrl you'll pass a 'params' object (fairly clear in the documentation) that includes the Bucket, Key, and Expires data. Here is a (NodeJS) example:

var params = { Bucket:bucket, Key:key, Expires:expires };
s3.getSignedUrl('putObject', params, function ( error, data ) {
    if (error) {
        // handle error
    } else {
        // handle data
    }
});

Less clear is setting the ACL to 'public-read':

var params = { Bucket:bucket, Key:key, Expires:expires, ACL:'public-read' };

Far more obscure is the notion of passing the headers that you expect the client (using the signed URL) to send along with the PUT operation to S3:

var params = {
    Bucket:bucket,
    Key:key,
    Expires:expires,
    ACL:'public-read',
    ContentType:'image/png',
    ContentLength:7469
};

In my example above, I have included ContentType and ContentLength because those two headers are included when using XMLHttpRequest in JavaScript and, in the case of Content-Length, cannot be changed. I suspect the same will be true for other HTTP clients such as curl, because these are required headers when submitting HTTP requests that include a body of data.

If the client does not include the ContentType and ContentLength of the file when requesting a signed URL, then when it comes time to PUT the file to S3 (with that signed URL) the S3 service will find those headers included with the client's request (because they are required headers), but the signature will not have included them - and so they will not match and the operation will fail.

So, it appears that you will have to know, in advance of making your getSignedUrl call, the content type and content length of the file to be PUT to S3. This wasn't a problem for me because I exposed a REST endpoint to allow our clients to request a signed url just before making the PUT operation to S3. Since the client has access to the file to be submitted (at the moment they are ready to submit), it was a trivial operation for the client to access the file size and type and request a signed url with that data from my endpoint.
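
For illustration, here is a minimal sketch of such an endpoint (the Express route, query parameter names, and bucket name are my own assumptions, not part of the original setup): the client reports the file's type and size, and the server signs a PUT URL using exactly those values.

// Hypothetical Express endpoint: the client sends its file's type and size
// and receives a pre-signed PUT URL whose signature covers those same headers.
var express = require('express');
var AWS = require('aws-sdk');
var app = express();
var s3 = new AWS.S3(); // assumes credentials and region are configured elsewhere

app.get('/signed-upload-url', function (req, res) {
    var params = {
        Bucket: 'mybucket',                                  // assumed bucket name
        Key: req.query.key,                                  // e.g. "uploads/test.jpg"
        Expires: 600,
        ACL: 'public-read',
        ContentType: req.query.contentType,                  // must match the Content-Type the client will PUT with
        ContentLength: parseInt(req.query.contentLength, 10) // may trigger the SDK error described in the update above
    };
    s3.getSignedUrl('putObject', params, function (err, url) {
        if (err) return res.status(500).send(err.message);
        res.json({ url: url });
    });
});

The client would then PUT the file to the returned URL, sending the same Content-Type (and Content-Length) it reported when requesting the signature.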

Reinsbrain
  • Finally! An answer that directly answers the **What am I trying to do?** section properly e.g. "1) Upload a file via PUT (from the client-side) to Amazon S3 using the getSignedUrl method". I can't test this because I no longer have an s3 instance but based on your answer Praneeth's answer might work too. I am selecting yours as the new answer to this question though because it has an in-depth explanation (and ContentLength can be used to restrict the size of client side uploads). – Chris Tan Feb 25 '15 at 02:51
  • Thank you so much. Been stuck on this for days. Your params hash is what finally sorted the problem for me. – Loubot Nov 20 '17 at 14:45
  • I don't know how you managed to find this out, but thank you for sharing it. Unfortunately, getSignedUrl indeed does not accept Content-Length: https://github.com/aws/aws-sdk-js/issues/457#issuecomment-69824874 This is really annoying, given that the browser usually does send it. What appears to work for me for now is not specifying the ACL (public-read) in the getSignedUrl request, but simply setting the Bucket policy to make the entire bucket public. (At least, my PUT requests now no longer get rejected. Still have to successfully transfer a complete file though.) – Vincent Dec 07 '17 at 19:03
7

As per @Reinsbrain's request, this is the Node.js version of implementing client-side uploads to S3 with "public-read" rights.

BACKEND (NODE.JS)

var AWS = require('aws-sdk');
var AWS_ACCESS_KEY_ID = process.env.S3_ACCESS_KEY;
var AWS_SECRET_ACCESS_KEY = process.env.S3_SECRET;
AWS.config.update({accessKeyId: AWS_ACCESS_KEY_ID, secretAccessKey: AWS_SECRET_ACCESS_KEY});
var s3 = new AWS.S3();
var moment = require('moment');
var S3_BUCKET = process.env.S3_BUCKET;
var crypto = require('crypto');
var POLICY_EXPIRATION_TIME = 10;// change to 10 minute expiry time
var S3_DOMAIN = process.env.S3_DOMAIN;

exports.writePolicy = function (filePath, contentType, maxSize, redirect, callback) {
  var readType = "public-read";

  var expiration = moment().add('m', POLICY_EXPIRATION_TIME);//OPTIONAL: only if you don't want a 15 minute expiry

  var s3Policy = {
    "expiration": expiration,
    "conditions": [
      ["starts-with", "$key", filePath],
      {"bucket": S3_BUCKET},
      {"acl": readType},
      ["content-length-range", 2048, maxSize], //min 2kB to maxSize
      {"redirect": redirect},
      ["starts-with", "$Content-Type", contentType]
    ]
  };

  // stringify and encode the policy
  var stringPolicy = JSON.stringify(s3Policy);
  var base64Policy = Buffer(stringPolicy, "utf-8").toString("base64");

  // sign the base64 encoded policy
  var testbuffer = new Buffer(base64Policy, "utf-8");

  var signature = crypto.createHmac("sha1", AWS_SECRET_ACCESS_KEY)
    .update(testbuffer).digest("base64");

  // build the results object to send to calling function
  var credentials = {
    url: S3_DOMAIN,
    key: filePath,
    AWSAccessKeyId: AWS_ACCESS_KEY_ID,
    acl: readType,
    policy: base64Policy,
    signature: signature,
    redirect: redirect,
    content_type: contentType,
    expiration: expiration
  };

  callback(null, credentials);
}
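
For context, here is a minimal sketch of how writePolicy might be wired to an endpoint (the Express route, query parameter names, and redirect URL are my own assumptions, not part of the original code); the browser fetches these credentials and drops them into the form fields used by the FRONTEND code below.

// Hypothetical route that hands the signed policy fields to the browser.
var express = require('express');
var app = express();
var uploads = require('./writePolicy'); // wherever exports.writePolicy lives

app.get('/s3-credentials', function (req, res) {
    uploads.writePolicy(
        'uploads/' + req.query.filename,   // filePath
        req.query.contentType,             // contentType
        10 * 1024 * 1024,                  // maxSize: 10 MB, for example
        'http://example.com/uploaded',     // redirect URL after the S3 POST
        function (err, credentials) {
            if (err) return res.status(500).send(err.message);
            res.json(credentials);
        }
    );
});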

FRONTEND (assuming the values from the server are in input fields, and that you're submitting images via a form submission, i.e. POST, since I couldn't get PUT to work):

function dataURItoBlob(dataURI, contentType) {
  var binary = atob(dataURI.split(',')[1]);
  var array = [];
  for(var i = 0; i < binary.length; i++) {
    array.push(binary.charCodeAt(i));
  }
  return new Blob([new Uint8Array(array)], {type: contentType});
}

function submitS3(callback) {
  var base64Data = $("#file").val();//your file to upload e.g. img.toDataURL("image/jpeg")
  var contentType = $("#contentType").val();
  var xmlhttp = new XMLHttpRequest();
  var blobData = dataURItoBlob(base64Data, contentType);

  var fd = new FormData();
  fd.append('key', $("#key").val());
  fd.append('acl', $("#acl").val());
  fd.append('Content-Type', contentType);
  fd.append('AWSAccessKeyId', $("#accessKeyId").val());
  fd.append('policy', $("#policy").val());
  fd.append('signature', $("#signature").val());
  fd.append("redirect", $("#redirect").val());
  fd.append("file", blobData);

  xmlhttp.onreadystatechange=function(){
    if (xmlhttp.readyState==4) {
      //do whatever you want on completion
      callback();
    }
  }
  var someBucket = "your_bucket_name"
  var S3_DOMAIN = "https://"+someBucket+".s3.amazonaws.com/";
  xmlhttp.open('POST', S3_DOMAIN, true);
  xmlhttp.send(fd);
}

Note: I was uploading more than 1 image per submission so I added multiple iframes (with the FRONTEND code above) to do simultaneous multi-image uploads.

Chris Tan
  • This is a great example, too bad you won't find one like it on Amazon and we have to provide them. – Reinsbrain Feb 24 '15 at 15:10
  • Thanks for the dataURItoBlob piece, helped me out. – Udi May 27 '15 at 21:29
  • @Udi You're most certainly welcome! I spent a lot of time piecing together this solution and I'm extremely glad it's beneficial for others. :) If I remember correctly, the dataURItoBlob function came from here: http://stackoverflow.com/a/11954337/2060767 – Chris Tan May 29 '15 at 03:22
3

step 1: Set s3 policy:

{
    "expiration": "2040-01-01T00:00:00Z",
    "conditions": [
                    {"bucket": "S3_BUCKET_NAME"},
                    ["starts-with","$key",""],
                    {"acl": "public-read"},
                    ["starts-with","$Content-Type",""],
                    ["content-length-range",0,524288000]
                  ]
}

step 2: prepare the AWS key, policy, and signature; in this example, all are stored in an s3_tokens dictionary

The trick here is in the policy & signature. Policy: 1) save the step 1 policy in a file, dumping it as JSON. 2) Base64-encode the JSON string (s3_policy_json):

#python
import base64
policy = base64.b64encode(s3_policy_json)

signature:

#python
import hmac, hashlib
s3_tokens_dict['signature'] = base64.b64encode(hmac.new(AWS_SECRET_ACCESS_KEY, policy, hashlib.sha1).digest())

step 3: from your js

$scope.upload_file = function(file_to_upload,is_video) {
    var file = file_to_upload;
    var key = $scope.get_file_key(file.name,is_video);
    var filepath = null;
    if ($scope.s3_tokens['use_s3'] == 1){
       var fd = new FormData();
       fd.append('key', key);
       fd.append('acl', 'public-read'); 
       fd.append('Content-Type', file.type);      
       fd.append('AWSAccessKeyId', $scope.s3_tokens['aws_key_id']);
       fd.append('policy', $scope.s3_tokens['policy']);
       fd.append('signature',$scope.s3_tokens['signature']);
       fd.append("file",file);
       var xhr = new XMLHttpRequest();
       var target_url = 'http://s3.amazonaws.com/<bucket>/';
       target_url = target_url.replace('<bucket>',$scope.s3_tokens['bucket_name']);
       xhr.open('POST', target_url, false); //MUST BE LAST LINE BEFORE YOU SEND 
       var res = xhr.send(fd);
       filepath = target_url.concat(key);
    }
    return filepath;
};
Amit Talmor
  • This signing then POST method, when converted to NodeJS, works. However, it seems odd to me that I cannot get it working with the getSignedUrl method. Is there no way for me to correctly use the s3 getSignedUrl() method? – Chris Tan Jan 31 '14 at 06:04
1

You can in fact use getSignedUrl as you specified above. Here's an example of how to get a URL to read from S3, and also how to use getSignedUrl for putting to S3. The files get uploaded with the same permissions as the IAM user that was used to generate the URLs. The problems you are noticing may be a function of how you are testing with curl? I uploaded from my iOS app using AFNetworking (AFHTTPSessionManager uploadTaskWithRequest). Here's an example of how to post using the signed URL: http://pulkitgoyal.in/uploading-objects-amazon-s3-pre-signed-urls/

var s3 = new AWS.S3();  // Assumes you have your credentials and region loaded correctly.

This is for reading from S3. URL will work for 60 seconds.

var params = {Bucket: 'mys3bucket', Key: 'file for temp access.jpg', Expires: 60};
var url = s3.getSignedUrl('getObject', params, function (err, url) {
    if (url) console.log("The URL is", url);
});

This is for writing to S3. URL will work for 60 seconds.

        var key = "file to give temp permission to write.jpg";
        var params = {
            Bucket: 'yours3bucket',
            Key: key,
            ContentType: mime.lookup(key),      // This uses the Node mime library
            Body: '',
            ACL: 'private',
            Expires: 60
        };
        var surl = s3.getSignedUrl('putObject', params, function(err, surl) {
            if (!err) {
                console.log("signed url: " + surl);
            } else {
                console.log("Error signing url " + err);
            }
        });
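
A possible usage sketch (my own addition, not from the answer above): whoever consumes the signed PUT URL has to send the same Content-Type that was signed, e.g. from Node 18+ with its built-in fetch:

// Hypothetical client-side use of the signed PUT URL generated above.
var fs = require('fs');

async function uploadWithSignedUrl(signedUrl, filePath) {
    var body = fs.readFileSync(filePath);
    var response = await fetch(signedUrl, {
        method: 'PUT',
        headers: { 'Content-Type': 'image/jpeg' }, // must match the ContentType that was signed
        body: body
    });
    if (!response.ok) throw new Error('Upload failed with status ' + response.status);
}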
Praneeth Wanigasekera
0

It sounds like you don't really need a signed URL, just that you want your uploads to be publicly viewable. If that's the case, you just need to go to the AWS console, choose the bucket you want to configure, and click on permissions. Then click the button that says 'add bucket policy' and input the following rule:

{
    "Version": "2008-10-17",
    "Id": "http referer policy example",
    "Statement": [
        {
            "Sid": "readonly policy",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::BUCKETNAME/*"
        }
    ]
}

where BUCKETNAME should be replaced with your own bucket's name. The contents of that bucket will be readable by anyone now, provided they have a direct link to a specific file.
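
For example (my own illustration, assuming the default virtual-hosted-style addressing), an object uploaded as test.jpg would then be publicly readable at a URL of the form https://BUCKETNAME.s3.amazonaws.com/test.jpg.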

Ari
  • Thanks for the reply. I tried changing my bucket policy as you suggested. The only change I noticed between your policy and mine was `"Principal": "*"` . Unfortunately I am still unable to upload the test.png . I am pretty sure my problem has to do with some kind of setting but I can't figure out which setting. Also, I am only making the bucket publicly viewable for the simplicity of the question (i.e. I will be implementing view restrictions once I figure out how to upload files properly.) – Chris Tan Jan 30 '14 at 10:46
0

Could you just upload using your PUT pre-signed URL without worrying about permissions, but immediately create another pre-signed URL with a GET method and infinite expiration, and provide that to the viewing public?
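
A rough sketch of that idea (my own, with placeholder bucket/key names; note that pre-signed URLs cannot actually be made to never expire, so a long finite Expires value has to stand in for "infinite"):

// Generate a long-lived GET URL for the object that was just uploaded.
var AWS = require('aws-sdk');
var s3 = new AWS.S3(); // assumes credentials and region are configured

var params = {
    Bucket: 'mybucket',
    Key: 'test.jpg',
    Expires: 60 * 60 * 24 * 7 // seven days, in seconds
};
s3.getSignedUrl('getObject', params, function (err, url) {
    if (err) return console.error(err);
    console.log("Shareable URL:", url); // hand this out to whoever needs to view the file
});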

thund
  • When I used the getSignedUrl method, I had no permissions to view or edit the permissions of the file. The only permission I had was to delete it. This means that I cannot sign a GET method without permissions to it (the file). – Chris Tan Feb 13 '14 at 12:52
-2

Are you using the official AWS Node.js SDK? http://aws.amazon.com/sdkfornodejs/

Here's how I'm using it...

var data = {
    Bucket: "bucket-xyz",
    Key: "uploads/" + filename,
    Body: buffer,
    ACL: "public-read",
    ContentType: mime.lookup(filename)
};
s3.putObject(data, callback);

And my uploaded files are publicly readable. Hope it helps.

nabeel
  • Thanks for the reply. Unfortunately, this won't work for me as I want to do a straight upload from the client device to the S3 server. Users will be uploading videos so I am trying to reduce strain on my server. – Chris Tan Jan 30 '14 at 10:41
  • By the way, I'm using `"aws-sdk": "2.0.0-rc7"` @nabeel – Chris Tan Jan 30 '14 at 10:52
  • They also have client side SDKs... http://aws.amazon.com/sdkforbrowser/ http://aws.amazon.com/sdkforios/ – nabeel Jan 30 '14 at 12:21
  • The biggest problem with using the SDK is that clients circumvent my server meaning that they can upload any number of files any number of sizes (i.e. 20 x 1GB files) and I would get charged for those. Additionally, I have no record of them uploading the files if the client fails to notify the server unless I scrape AWS repeatedly. – Chris Tan Jan 30 '14 at 13:27