36

I'm trying to generate a pre-signed URL then upload a file to S3 through a browser. My server-side code looks like this, and it generates the URL:

const aws = require('aws-sdk');

let s3 = new aws.S3({
  // for dev purposes
  accessKeyId: 'MY-ACCESS-KEY-ID',
  secretAccessKey: 'MY-SECRET-ACCESS-KEY'
});
let params = {
  Bucket: 'reqlist-user-storage',
  Key: req.body.fileName, 
  Expires: 60,
  ContentType: req.body.fileType,
  ACL: 'public-read'
};
s3.getSignedUrl('putObject', params, (err, url) => {
  if (err) return console.log(err);
  res.json({ url: url });
});

This part seems to work fine. I can see the URL if I log it, and it's being passed to the front end. Then on the front end, I'm trying to upload the file with axios and the signed URL:

// inside the promise chain, after requesting the signed URL from the server
.then(res => {
    var options = { headers: { 'Content-Type': fileType } };
    return axios.put(res.data.url, fileFromFileInput, options);
  }).then(res => {
    console.log(res);
  }).catch(err => {
    console.log(err);
  });

With that, I get the 403 Forbidden error. If I follow the link, there's some XML with more info:

<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>
The request signature we calculated does not match the signature you provided. Check your key and signing method.
</Message>
...etc
Glenn

13 Answers

31

Your request needs to match the signature, exactly. One apparent problem is that you are not actually including the canned ACL in the request, even though you included it in the signature. Change to this:

var options = { headers: { 'Content-Type': fileType, 'x-amz-acl': 'public-read' } };
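
To put that in context, here is a minimal sketch of the browser-side call; the variable names (fileType, fileFromFileInput, res) are taken from the question, so treat them as assumptions:

// Every header-bound parameter passed to getSignedUrl (ContentType and ACL here)
// must also be sent on the PUT, with identical values.
var options = { headers: { 'Content-Type': fileType, 'x-amz-acl': 'public-read' } };
axios.put(res.data.url, fileFromFileInput, options);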
Michael - sqlbot
  • Thanks, this wasn't it, but I think I'm getting a better feel for what the issue is. I'm pretty sure something isn't matching and I'm just not figuring out exactly what. – Glenn Oct 22 '17 at 03:41
  • That should be part of it, though. This is a very deterministic process with zero tolerance for inconsistency. The `CanonicalRequest` in the error message will tell you, indirectly, what parameters you *should have* passed to `getSignedUrl()` based on the request you are actually sending. – Michael - sqlbot Oct 22 '17 at 04:00
  • Thank you, finally nailed down the right headers and made a successful put request. – Glenn Oct 28 '17 at 22:46
  • What headers did you put? I am facing the same issue. – Arunkumar Papena Oct 21 '18 at 17:56
  • @Alan could you please share how you solved the problem? That way others with the same problem could find a solution. Thanks. – Carlos Salazar Apr 29 '19 at 15:14
  • Finally I worked it out, at least in my situation. In the GetPreSignedUrlRequest I had ContentType = "binary/octet-stream", but in the HttpWebRequest I had ContentType = "application/octet-stream". Mismatching got 403; omitting both got 403; matching both got 200. – GeoffM May 27 '22 at 18:17
23

Receiving a 403 Forbidden error for a pre-signed S3 PUT upload can also happen for a couple of reasons that are not immediately obvious:

  1. It can happen if you generate a pre-signed put url using a wildcard content type such as image/*, as wildcards are not supported.

  2. It can happen if you generate a pre-signed put url with no content type specified, but then pass in a content type header when uploading from the browser. If you don't specify a content type when generating the url, you have to omit the content type when uploading. Be conscious that if you are using an upload tool like Uppy, it may attach a content type header automatically even when you don't specify one. In that case, you'd have to manually set the content type header to be empty.
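
As a rough sketch of that second case with Uppy (the /api/signed-url endpoint and variable names here are illustrative, not from the original answer):

import Uppy from '@uppy/core'
import AwsS3 from '@uppy/aws-s3'

// Sketch only: the pre-signed URL was generated WITHOUT a ContentType,
// so the header is explicitly blanked to stop Uppy from attaching one.
const uppy = Uppy().use(AwsS3, {
    async getUploadParameters() {
        const res = await fetch('/api/signed-url') // hypothetical endpoint
        const { url } = await res.json()
        return { method: 'PUT', url, headers: { 'Content-Type': '' } }
    }
})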

In any case, if you want to support uploading any file type, it's probably best to pass the file's content type to your api endpoint, and use that content type when generating your pre-signed url that you return to your client.

For example, generating a pre-signed url from your api:

const AWS = require('aws-sdk')
const uuid = require('uuid/v4')

async function getSignedUrl(contentType) {
    const s3 = new AWS.S3({
        accessKeyId: process.env.AWS_KEY,
        secretAccessKey: process.env.AWS_SECRET_KEY
    })
    const signedUrl = await s3.getSignedUrlPromise('putObject', {
        Bucket: 'mybucket',
        Key: `uploads/${uuid()}`,
        ContentType: contentType
    })

    return signedUrl
}

And then sending an upload request from the browser:

import Uppy from '@uppy/core'
import AwsS3 from '@uppy/aws-s3'

this.uppy = Uppy({
    restrictions: {
        allowedFileTypes: ['image/*'],
        maxFileSize: 5242880, // 5 Megabytes
        maxNumberOfFiles: 5
    }
}).use(AwsS3, {
    getUploadParameters(file) {
        async function _getUploadParameters() {
            let signedUrl = await getSignedUrl(file.type)
            return {
                method: 'PUT',
                url: signedUrl
            }
        }

        return _getUploadParameters()
    }
})

For further reference also see these two stack overflow posts: how-to-generate-aws-s3-pre-signed-url-request-without-knowing-content-type and S3.getSignedUrl to accept multiple content-type

Roman Scher
  • Thank you for this. Both PAW and JS fetch were adding default content-types to my headers - I didn't think to check! – redPanda Jul 01 '21 at 13:43
  • This answer is a life saver! Thank you so much, I wasted too much time on this – ofirski Dec 20 '22 at 09:01
10

If you're trying to use an ACL, make sure that your Lambda's IAM role has the s3:PutObjectAcl permission for the given bucket, and also that your bucket policy allows s3:PutObjectAcl for the uploading principal (the user/role/account that's uploading).
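
For illustration only, a policy statement along these lines (the bucket name is a placeholder) grants both actions:

{
    "Effect": "Allow",
    "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
    ],
    "Resource": "arn:aws:s3:::your-bucket-name/*"
}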

This is what fixed it for me after double checking all my headers and everything else.

Inspired by this answer https://stackoverflow.com/a/53542531/2759427

Cobertos
6

1) You might need to use S3V4 signatures depending on how the data is transferred to AWS (chunk versus stream). Create the client as follows:

var s3 = new AWS.S3({
  signatureVersion: 'v4'
});

2) Do not add new headers or modify existing headers. The request must be exactly as signed.

3) Make sure that the url generated matches what is being sent to AWS.

4) Make a test request removing these two lines before signing (and remove the headers from your PUT); a stripped-down sketch follows below. This will help narrow down your issue:

  ContentType: req.body.fileType,
  ACL: 'public-read'
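
A minimal sketch of that stripped-down test, reusing the question's bucket, request, and variable names as assumptions:

// Sign with nothing but Bucket/Key/Expires...
let params = { Bucket: 'reqlist-user-storage', Key: req.body.fileName, Expires: 60 };
s3.getSignedUrl('putObject', params, (err, url) => {
  if (err) return console.log(err);
  res.json({ url: url });
});

// ...and on the client, PUT with no extra headers at all:
axios.put(res.data.url, fileFromFileInput);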
John Hanley
  • Thanks, I ran through this list, and I believe maybe there is a header mis-match. How do I compare the headers in the signed url vs the headers I'm sending? – Glenn Oct 22 '17 at 03:44
  • You specify HTTP headers in your params when calling getSignedUrl. Also note Michael sqlbot's reply with x-amz-acl header. Michael's params are correct. Make sure that you include the same headers in your PUT. – John Hanley Oct 22 '17 at 04:27
  • One additional item. Add Content-Length with the filesize when both signing and calling put. I don't see a region being specified. Add this when creating the client. – John Hanley Oct 22 '17 at 04:34
6

Had the same issue; here is how you need to solve it:

  1. Extract the filename portion of the signed URL. Print it to verify that you are extracting the filename portion, with its query string parameters, correctly. This is critical.
  2. URI-encode that filename portion, keeping the query string parameters.
  3. Return the URL with the encoded filename, along with the rest of the path, from your Lambda or your Node service.

Now PUT to that URL from axios and it will work (a rough sketch follows).
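
One way to read this advice as code, assuming the object key comes straight from user input (a sketch, not the poster's exact implementation):

// Encode the user-supplied filename before signing, so the key in the
// signed URL and the key the browser PUTs to are byte-for-byte identical.
const safeKey = encodeURIComponent(req.body.fileName);
const params = { Bucket: 'reqlist-user-storage', Key: safeKey, Expires: 60, ContentType: req.body.fileType };
s3.getSignedUrl('putObject', params, (err, url) => {
  if (err) return console.log(err);
  res.json({ url }); // the client uses this URL as-is, without re-encoding it
});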

EDIT 1: Your signature will also be invalid if you pass in the wrong content type.

Please ensure that the content type you use when you create the pre-signed URL is the same as the one you use for the PUT.

Hope it helps.

Kannaiyan
  • Thanks, I gave this a try, but I don't believe this is the issue. The URL already appears to be properly encoded, and if I try to "re-encode" it, it actually becomes a dead link. – Glenn Oct 22 '17 at 03:38
  • Can you please confirm that the content-type you use with axios is the same as the one you used when you created your signed URL? – Kannaiyan Oct 22 '17 at 05:12
3

Did you add the CORS policy to the S3 bucket? This fixed the problem for me.

[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "PUT"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]
Vikas
2

This code was working with credentials and a bucket I created several years ago, but caused a 403 error on recently created credentials/buckets:

const s3 = new AWS.S3({
  region: region,
  accessKeyId: process.env.AWS_ACCESS_KEY,
  secretAccessKey: process.env.AWS_SECRET_KEY,
})

The fix was simply to add signatureVersion: 'v4'.

const s3 = new AWS.S3({
  signatureVersion: 'v4',
  region: region,
  accessKeyId: process.env.AWS_ACCESS_KEY,
  secretAccessKey: process.env.AWS_SECRET_KEY,
})

Why? I don't know for certain, but newer regions accept only Signature Version 4 requests, and older SDK configurations do not always use v4 by default.

pa1nd
2

As others have pointed out, the solution is to add signatureVersion: 'v4'.

const s3 = new AWS.S3(
  {
    apiVersion: '2006-03-01',
    signatureVersion: 'v4'
  }
);

There is a very detailed discussion of this issue; take a look at https://github.com/aws/aws-sdk-js/issues/468

rbansal
1

TL;DR: Check that your bucket exists and is accessible by the AWS key that is generating the signed URL.

All of the answers are very good and most likely are the real solution, but my issue actually stemmed from S3 returning a Signed URL to a bucket that didn't exist.

Because the server didn't throw any errors, I had assumed the upload itself was causing the problems, without realizing that my local server had an old bucket name in its .env file that used to be the correct one but has since been moved.

Side note: This link helped https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-403/

It was while checking the uploading user's IAM policies that I discovered that the user had access to multiple buckets, but only one of them still existed.
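
As a quick sanity check, here is a sketch that verifies the bucket up front (the environment variable name is an assumption):

// Fails fast if the bucket doesn't exist or these credentials can't reach it.
s3.headBucket({ Bucket: process.env.S3_BUCKET }, (err) => {
  if (err) console.error('Bucket missing or inaccessible:', err.code);
});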

jrose
1

I encountered the same error twice with different root causes / solutions:

  1. I was using generate_presigned_url. The solution for me was switching to generate_presigned_post (doc) which returns a host of essential information such as
    "url":"https://xyz.s3.amazonaws.com/",
    "fields":{
      "key":"filename.ext",
      "AWSAccessKeyId":"ASIAEUROPRSWEDWOMM",
      "x-amz-security-token":"some-really-long-string",
      "policy":"another-long-string",
      "signature":"the-signature"
     }

Add these fields to your upload's multipart form data (not to the request headers), and don't forget to keep the file field last! See the sketch after this list.

  2. That time I forgot to give proper permissions to the Lambda. Interestingly, Lambda can create good-looking signed upload URLs which you won't have permission to use. The solution is to enrich the policy with S3 actions:
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::my-own-bucket/*"
            ]
        }
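
For reference, a browser-side sketch of submitting those fields (the presigned variable and file handle are illustrative, not from the original answer):

// presigned = { url, fields } as returned by generate_presigned_post
const form = new FormData();
Object.entries(presigned.fields).forEach(([name, value]) => form.append(name, value));
form.append('file', fileFromFileInput); // the file must be the last field
axios.post(presigned.url, form);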
MonoThreaded
0

I've updated the CORS permissions in the bucket settings to allow all origins with all methods, and it worked for me. Go to S3 > Bucket > Permissions > CORS > Edit and add a JSON configuration like the one at https://docs.aws.amazon.com/AmazonS3/latest/userguide/ManageCorsUsing.html#cors-example-1. Hope it helps.

-1

Using Python boto3, when you upload a file the permissions are private by default. You can make the object public using ACL='public-read':

s3.put_object_acl(
     Bucket='gid-requests', Key='potholes.csv', ACL='public-read')
Golden Lion
-2

I did all that's mentioned here and allowed these permissions for it to work:
[screenshot of the granted permissions omitted]

AG_HIHI