
I am trying to set up a Django app that allows CORS uploads to an Amazon S3 bucket, with no luck so far. I keep getting 403 Forbidden responses in both Firefox and Chrome, both on localhost and from a web server. I have verified that my signature is being calculated correctly (described below), and my bucket's CORS configuration allows origin *, the PUT / POST / GET methods, and all headers. My access key / secret key pair is correct and lets me access the bucket through software like Cyberduck. What else can I check, or how can I debug this further? I feel like I've explored every possible avenue.
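
For reference, the CORS rule on the bucket is equivalent to the following (the real rule was entered through the S3 console, so treat this as a paraphrase of what it allows rather than an exact copy):

    <CORSConfiguration>
        <CORSRule>
            <AllowedOrigin>*</AllowedOrigin>
            <AllowedMethod>GET</AllowedMethod>
            <AllowedMethod>PUT</AllowedMethod>
            <AllowedMethod>POST</AllowedMethod>
            <AllowedHeader>*</AllowedHeader>
        </CORSRule>
    </CORSConfiguration>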

I am trying to follow this demo, just modifying the encoding to match this SO question, which lets my signature match this Amazon S3 test file. My signature generation code is here:

import binascii
import hmac
import json
import pdb
import time
import urllib
from hashlib import sha1

from django.conf import settings
from django.http import HttpResponse

# ... inside the Django view that the JS hits at signS3put/

s3_bucket_name = settings.S3_BUCKET_NAME
s3_access_key = settings.S3_ACCESS_KEY
s3_secret_key = settings.S3_SECRET_KEY

object_name = request.GET.get('s3_object_name')
mime_type = request.GET.get('s3_object_type')

# Signed URL is valid for 5 minutes
expires = int(time.time() + 300)
amz_headers = "x-amz-acl:public-read"

# String to sign for a query-string-authenticated PUT
put_request = "PUT\n\n%s\n%d\n%s\n/%s/%s" % (mime_type, expires, amz_headers, s3_bucket_name, object_name)

# HMAC-SHA1, base64-encoded, then URL-encoded so +, / and = survive in the query string
hashed = hmac.new(s3_secret_key, put_request, sha1)
signature = binascii.b2a_base64(hashed.digest())[:-1]
signature = urllib.quote_plus(signature.strip())

url = 'https://%s.s3.amazonaws.com/%s' % (s3_bucket_name, object_name)

signed_request = '%s?AWSAccessKeyId=%s&Expires=%d&Signature=%s' % (url, s3_access_key, expires, signature)

pdb.set_trace()  # breakpoint left in while troubleshooting

return HttpResponse(json.dumps({
    'signed_request': signed_request,
    'url': url
}), mimetype='application/json')
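
To make the failing request concrete, here is a minimal Python 2 sketch of the PUT that this signed URL is meant to authorize (using the requests library purely as an illustration; the real upload is done by s3upload.js in the browser). The key point is that the Content-Type and x-amz-acl headers sent with the PUT have to match the values baked into the string-to-sign above:

    import requests  # assumption: requests is available; this is only an illustration

    signed_request = '...the signed_request URL returned by the view above...'
    mime_type = 'video/mp4'  # hypothetical example; must match the signed mime_type

    with open('some_video.mp4', 'rb') as f:  # hypothetical local file
        resp = requests.put(
            signed_request,
            data=f,
            headers={
                'Content-Type': mime_type,   # must equal the Content-Type in the string-to-sign
                'x-amz-acl': 'public-read',  # must equal the amz_headers value in the string-to-sign
            },
        )
    print resp.status_code, resp.reason  # 200 OK on success; 403 if anything mismatches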

My upload function:

function s3_upload(filename) {
    // Strip any fake Windows path the browser prepends (e.g. C:\fakepath\)
    filename = filename.split('\\').pop();
    var s3upload = new S3Upload({
        file_dom_selector: '#video_file',
        s3_sign_put_url: 'signS3put/',   // the Django signing view above
        s3_object_name: filename,
        onProgress: function(percent, message, publicUrl, file) {
            console.log('Upload progress: ', percent, message);
        },
        onFinishS3Put: function(public_url, file) {
            console.log('Upload finished: ', public_url);
        },
        onError: function(status, file) {
            console.log('Upload error: ', status);
        }
    });
}

And my file input button:

video_form.append('<input class="input-block-level" ' +
            'autocomplete="off" id="video_file" name="video_file" ' +
            'type="file" required="required" onchange="' +
                    'var filename = $(this).val(); ' +
                    's3_upload(filename);" />');

This happens both on my local dev server and on a remote server, so it is not the issue described in this SO post about Chrome blocking CORS calls to a local server.

Thanks for your tips!


1 Answer


So I solved this, and hopefully this will help some other people (nobody else seems to have run into this particular problem, so maybe my setup is just unusual). My issue turned out to be with the s3upload.js library. Two things:

  1. I wanted authenticated-read, whereas the example sign_s3 view uses public-read. Just changing it in the view doesn't help: x-amz-acl:public-read is also hardcoded in s3upload.js, in the function uploadToS3 (line 107). You have to change that to match your view, otherwise (as far as I can tell) the string-to-sign never matches the x-amz-acl header actually sent with the CORS request (see the first sketch after this list).
  2. The special characters =, +, and / were killing me. They are mentioned at the bottom of this Amazon S3 article as things to watch out for, so I made sure the signing code in my view URL-encoded them. The problem is that even though the view was correctly replacing those three characters with their escape sequences, s3upload.js was decoding them right back into =, +, and / before making the request. I changed one line in executeOnSignedUrl (line 71), shown here (see also the encoding example after this list):

      return callback(result.signed_request, result.url);
      // was: return callback(decodeURIComponent(result.signed_request), result.url);


And now it magically works! Note that this is very similar to this SO solution, just specific to the s3upload.js library.
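
Here is a rough sketch of the first fix on the Python side, i.e. what the relevant part of my view looks like after the change (the variable names are the ones from the question; the matching change in s3upload.js is just editing the hardcoded x-amz-acl string on line 107):

    # The ACL in the string-to-sign must be exactly the x-amz-acl header
    # that s3upload.js sends with the PUT.
    amz_headers = "x-amz-acl:authenticated-read"   # was "x-amz-acl:public-read"

    put_request = "PUT\n\n%s\n%d\n%s\n/%s/%s" % (
        mime_type, expires, amz_headers, s3_bucket_name, object_name)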
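
And a quick illustration of the second fix, showing why the view's urllib.quote_plus call matters (a minimal Python 2 snippet; the base64-looking string is made up):

    >>> import urllib
    >>> urllib.quote_plus('AB+cd/EF=')   # stand-in for a signature containing +, / and =
    'AB%2Bcd%2FEF%3D'

If s3upload.js then runs decodeURIComponent on the signed URL, those sequences turn back into literal +, / and = characters, and the Signature query parameter S3 parses is no longer the one the view produced (a literal + in a query string, for instance, is read as a space), so S3 responds with 403.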

As a side note: S3 also does not seem to like spaces in key names (I believe I read that somewhere in the Amazon documentation), which was a separate problem I had.
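
The simplest workaround I found for that (a sketch, and an assumption on my part rather than anything s3upload.js handles for you) is to sanitize the key in the view before signing it and building the URL, so the key that gets signed is exactly the key that gets uploaded:

    # Hypothetical tweak to the view from the question: strip spaces from the
    # requested key so the string-to-sign, the signed URL, and the uploaded
    # object key all agree.
    object_name = request.GET.get('s3_object_name').replace(' ', '_')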
