
I am using Node.js to upload files to AWS S3. I want the client to be able to download the files securely, so I am trying to generate signed URLs that expire after one use. My code looks like this:

Uploading

const s3bucket = new AWS.S3({
    accessKeyId: 'my-access-key-id',
    secretAccessKey: 'my-secret-access-key',
    Bucket: 'my-bucket-name',
})
const uploadParams = {
    Body: file.data,
    Bucket: 'my-bucket-name',
    ContentType: file.mimetype,
    Key: `files/${file.name}`,
}
s3bucket.upload(uploadParams, function (err, data) {
    // ...
})

Downloading

const url = s3bucket.getSignedUrl('getObject', {
    Bucket: 'my-bucket-name',
    Key: 'file-key',
    Expires: 300,
})

Issue

When opening the URL I get the following:

This XML file does not appear to have any style information associated with it. The document tree is shown below.
<Error>
    <Code>AccessDenied</Code>
    <Message>
        There were headers present in the request which were not signed
    </Message>
    <HeadersNotSigned>host</HeadersNotSigned>
    <RequestId>D63C8ED4CD8F4E5F</RequestId>
    <HostId>
        9M0r2M3XkRU0JLn7cv5QN3S34G8mYZEy/v16c6JFRZSzDBa2UXaMLkHoyuN7YIt/LCPNnpQLmF4=
    </HostId>
</Error>

I couldn't manage to find the mistake. I would really appreciate any help :)

Florian Ludewig
  • Anyone with valid security credentials can create a pre-signed URL. However, in order to successfully upload an object, the pre-signed URL must be created by someone who has permission to perform the operation that the pre-signed URL is based upon: https://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html. Does your IAM policy have permission to access the S3 bucket? If the file is successfully created in your bucket and, immediately after creating a signed URL, you are not able to access it, check that the filename and bucket you are passing to `getSignedUrl` are valid – Oscar Nevarez Oct 05 '18 at 15:07
  • Is there a way to check whether the IAM policy has permission to access the bucket? – Florian Ludewig Oct 05 '18 at 15:10
  • Indeed, a quick way to check this is just to look at your bucket and confirm the object has been created. Since you're getting an `AccessDenied` response, try checking your bucket permissions and allow the user to read and view (enable read and view permissions). – Oscar Nevarez Oct 05 '18 at 15:18
  • The account has permissions: http://prntscr.com/l2lkwb – Florian Ludewig Oct 05 '18 at 15:23
  • You can grant the role the AmazonS3FullAccess permission. If it works, then you know the problem lies with the access permissions granted to the role. Delete AmazonS3FullAccess, grant GetObject on your bucket, and try it out. If it still does not work, you will have to do some research to find out which permissions you need, and also check that you are using the correct resource (i.e. bucket). – Clive Sargeant Jun 17 '19 at 14:19

10 Answers

46

Your code is correct; double-check the following things:

  1. Your bucket access policy.

  2. Your bucket permission via your API key.

  3. Your API key and secret.

  4. Your bucket name and key.

For bucket policy you can use the following:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucket/*"
        }
    ]
}

Change `bucket` to your bucket name.
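If you prefer to build this troubleshooting policy from code instead of pasting it into the console, a small helper along these lines can fill in the bucket name (`makePublicReadPolicy` is an illustrative name of mine, not an SDK function; you would pass the stringified result to `s3.putBucketPolicy`). Heed the warning from the comments: this makes every object in the bucket public, so use it for troubleshooting only.

```javascript
// Build the public-read bucket policy for a given bucket name.
// WARNING: this makes every object in the bucket world-readable,
// which defeats the purpose of presigned URLs; troubleshooting only.
function makePublicReadPolicy(bucketName) {
  return {
    Version: '2012-10-17',
    Statement: [
      {
        Sid: 'PublicReadGetObject',
        Effect: 'Allow',
        Principal: '*',
        Action: 's3:GetObject',
        // The trailing /* scopes the statement to objects, not the bucket itself
        Resource: `arn:aws:s3:::${bucketName}/*`,
      },
    ],
  };
}

console.log(JSON.stringify(makePublicReadPolicy('my-bucket-name'), null, 2));
```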

For user and access key permissions (#2), follow these steps:

1. Go to AWS Identity and Access Management (IAM), open the Policies page, and click the "Create policy" button.


2. Select the JSON tab.


3. Enter the following statement (make sure to change the bucket name), then click the "Review policy" button.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::YOURBUCKETNAME"
        }
    ]
}


4. Enter a name for your policy and click the "Create policy" button.


5. Open the Users page and find your current username (the one whose access key and secret you already have).


6. Click the "Add permissions" button.


7. Add the policy created in the previous step and save.


Finally, make sure your bucket is not publicly accessible, set the correct content type on your file, and set `signatureVersion: 'v4'`.

The final code should look like this (thanks @Vaisakh PS):

const s3bucket = new AWS.S3({
    signatureVersion: 'v4',
    accessKeyId: 'my-access-key-id',
    secretAccessKey: 'my-secret-access-key',
    Bucket: 'my-bucket-name',
})
const uploadParams = {
    Body: file.data,
    Bucket: 'my-bucket-name',
    ContentType: file.mimetype,
    Key: `files/${file.name}`,
}
s3bucket.upload(uploadParams, function (err, data) {
    // ...
})
const url = s3bucket.getSignedUrl('getObject', {
    Bucket: 'my-bucket-name',
    Key: 'file-key',
    Expires: 300,
})
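A quick sanity check on the output: a SigV4 presigned URL carries `X-Amz-*` query parameters (including `X-Amz-Algorithm=AWS4-HMAC-SHA256`), while a legacy SigV2 URL uses `AWSAccessKeyId`, `Signature`, and `Expires` instead. The helper below is a sketch of mine (`looksLikeSigV4Url` is not an SDK function); it just inspects the query string:

```javascript
// Rough check that a presigned URL was produced with Signature Version 4.
// SigV2 URLs carry `AWSAccessKeyId` and `Signature` query parameters,
// while SigV4 URLs carry `X-Amz-Algorithm=AWS4-HMAC-SHA256` and friends.
function looksLikeSigV4Url(signedUrl) {
  const params = new URL(signedUrl).searchParams;
  return (
    params.get('X-Amz-Algorithm') === 'AWS4-HMAC-SHA256' &&
    params.has('X-Amz-Signature') &&
    params.has('X-Amz-Expires')
  );
}
```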
Reza Mousavi
  • I didn't put anything into the Bucket Policy... But I am also not sure what to put there – Florian Ludewig Oct 08 '18 at 10:42
  • I updated the policy (http://prntscr.com/l3jj8q) but I am still getting the same error. I am pretty sure the 3rd and 4th points are correct. But I never used an API key – Florian Ludewig Oct 08 '18 at 11:54
  • So, make sure about your #2 – Reza Mousavi Oct 08 '18 at 12:32
  • The documentation states nothing about an API key (https://aws.amazon.com/sdk-for-node-js/?nc1=h_ls) or where is this key supposed to be? – Florian Ludewig Oct 08 '18 at 12:37
  • The answer has been updated with point #2. If it does not work, go through all the steps again; maybe you missed one – Reza Mousavi Oct 08 '18 at 13:11
  • Thank you for your detailed explanation! Unfortunately, I am getting the same error again. Can it be caused by the fact, that I had to configure `signatureVersion` to `v4`? (Otherwise, I am getting this error: http://prntscr.com/l3l39v) – Florian Ludewig Oct 08 '18 at 13:40
  • Could you update your question and share the whole code? As I mentioned before, if it does not work, go through all the steps again; maybe you missed one – Reza Mousavi Oct 08 '18 at 13:45
  • I don't know the exact cause of the issue. I refactored everything and created a new bucket etc., but now it's working. Thank you so much! – Florian Ludewig Oct 08 '18 at 18:06
  • Unfortunately, when downloading the files via the signed URL I get a warning from Chrome that the file is dangerous (http://prntscr.com/l3osrc). Do you know the reason for this? P.S.: the normal URL works without a warning – Florian Ludewig Oct 08 '18 at 18:08
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/181490/discussion-between-reza-mousavi-and-florian-ludewig). – Reza Mousavi Oct 08 '18 at 18:18
  • This bucket policy doesn't achieve the stated aim, "I want the client to be able to download the files securely": it makes all files public! Your answer is excellent, but now that troubleshooting is complete it would be best to go back and tighten up that policy, or add a warning that it allows every principal access to GetObject. – JimmyL Oct 10 '18 at 23:53
  • The bucket policy is about everyone who already has access to the bucket, not everyone on the Internet. – Reza Mousavi Oct 11 '18 at 06:23
37

The highest-voted answer here technically works, but it isn't practical since it opens the bucket up to the public.

I had the same problem, and it was due to the role that was used to generate the signed URL. The role I was using had this:

- Effect: Allow
  Action: 
    - "s3:ListObjects"
    - "s3:GetObject"
    - "s3:GetObjectVersion"
    - "s3:PutObject"
  Resource:
    - "arn:aws:s3:::(bucket-name-here)"

But the bucket name alone wasn't enough; I had to add a wildcard on the end to grant access to the whole bucket:

- Effect: Allow
  Action: 
    - "s3:ListObjects"
    - "s3:GetObject"
    - "s3:GetObjectVersion"
    - "s3:PutObject"
  Resource:
    - "arn:aws:s3:::(bucket-name-here)/*"
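When building a policy statement programmatically, it is easy to cover both resource forms at once: object-level actions such as GetObject and PutObject match the `/*` ARN, while bucket-level actions such as listing match the bare bucket ARN. The sketch below assumes this distinction (`makeBucketStatement` is an illustrative name of mine, not an AWS API):

```javascript
// Build an IAM statement that covers both the bucket itself and its objects.
// Object-level actions (GetObject, PutObject) match "arn:...:bucket/*",
// while bucket-level actions (e.g. listing) match the bare bucket ARN,
// so granting both resources avoids this class of AccessDenied error.
function makeBucketStatement(bucketName, actions) {
  const bucketArn = `arn:aws:s3:::${bucketName}`;
  return {
    Effect: 'Allow',
    Action: actions,
    Resource: [bucketArn, `${bucketArn}/*`],
  };
}
```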
Spencer Sutton
  • Thanks a lot, that was exactly the issue I faced – Wawa08 Feb 17 '23 at 06:35
  • In addition to this answer I think it's useful to know (as described in [this answer](https://stackoverflow.com/questions/54825501/aws-s3-presigned-url-contains-x-amz-security-token/54826131#54826131)) that credentials from your environment or command line might be used (and they might not be the ones you think you're using). – Gorgsenegger Mar 08 '23 at 22:26
  • Thanks for the info, which led me to my own solution: when you grant e.g. your Lambda permissions to an S3 bucket, you need to use either grantRead() or grantReadWrite(); grantWrite() does not include the read permissions. – Dukeatcoding May 31 '23 at 14:02
5

I battled with this as well in an application using the Serverless Framework.

My fix was adding S3 permissions to the IAM Role inside of the serverless.yml file.

I'm not exactly sure how S3 builds the presigned URL, but it turns out your IAM role is taken into account.

Adding all S3 actions did the trick. This is what the IAM role statement looks like for S3:

iamRoleStatements:
  - Effect: Allow
    Action:
      - 's3:*'
    Resource:
      - 'arn:aws:s3:::${self:custom.imageBucket}/*'
DylanA
  • Nice hint, but it should be inside `provider`, like this: `provider: name: aws iamRoleStatements: - Effect: Allow Action: - 's3:*' Resource: - 'arn:aws:s3:::${env:S3_BUCKET_NAME}/*'` – Leopoldo Varela Sep 11 '21 at 20:30
  • This did the trick for me. The only permission the Lambda had was `s3:PutObject`; as soon as I changed it to `s3:*`, it worked. It would be nice to narrow down exactly which permissions are needed, as I feel that `s3:*` is too permissive. If I find out I'll post another comment here. Thanks – Jose Quijada Jan 06 '22 at 03:57
  • Update: I was missing `s3:GetObject`. This is full policy for Lambda access to bucket, `new iam.PolicyStatement({ effect: iam.Effect.ALLOW, actions: ['s3:PutObject', 's3:GetObject'], resources: [`${imageBucket.bucketArn}/*`] })` – Jose Quijada Jan 06 '22 at 04:01
  • Another thing I was doing wrong is that the path to the object (the Key) in the params was wrong: `s3Client.getSignedUrlPromise('getObject', { Bucket: bucket, Key: path, Expires: expires })`. Once I fixed it, it worked. So in summary, I had two issues: (1) the Lambda role lacked the GetObject S3 permission, and (2) the wrong S3 object path was given to `S3.getSignedUrlPromise()`. Cheers and have a nice day! – Jose Quijada Jan 06 '22 at 14:29
4

Your code looks good, but I think you are missing the `signatureVersion: 'v4'` parameter when creating the s3bucket object. Please try the updated code below.

const s3bucket = new AWS.S3({
    signatureVersion: 'v4',
    accessKeyId: 'my-access-key-id',
    secretAccessKey: 'my-secret-access-key',
    Bucket: 'my-bucket-name',
})
const uploadParams = {
    Body: file.data,
    Bucket: 'my-bucket-name',
    ContentType: file.mimetype,
    Key: `files/${file.name}`,
}
s3bucket.upload(uploadParams, function (err, data) {
    // ...
})
const url = s3bucket.getSignedUrl('getObject', {
    Bucket: 'my-bucket-name',
    Key: 'file-key',
    Expires: 300,
})

For more about `signatureVersion: 'v4'`, see the links below:

https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html

https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html

You can also try the Node.js library below, which creates presigned URLs:

https://www.npmjs.com/package/aws-signature-v4

Vaisakh PS
4

I kept having a similar problem, but mine was due to region settings. In our back end we had some configuration settings for the app.

One of them was "region": "us-west-2", so the presigned URL was created with this region; but when it was called on the front end, the region was set to "us-west-1".

Changing them to be the same fixed the issue.
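One way to catch this mismatch early is to compare the region baked into the presigned URL's hostname against the region the front end expects. For virtual-hosted-style URLs such as https://bucket.s3.us-west-2.amazonaws.com/key, the region sits in the hostname; the helper below is a sketch of mine (`regionFromS3Url` is not an SDK function) that extracts it:

```javascript
// Extract the region from a virtual-hosted-style S3 URL such as
// "https://my-bucket.s3.us-west-2.amazonaws.com/key?...".
// Handles both the "s3.region" and the older "s3-region" hostname styles;
// returns null for the legacy global endpoint (no region in the hostname).
function regionFromS3Url(signedUrl) {
  const host = new URL(signedUrl).hostname;
  const match = host.match(/\.s3[.-]([a-z0-9-]+)\.amazonaws\.com$/);
  return match ? match[1] : null;
}
```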

Ju66ernaut
2

If your S3 files are encrypted, make sure that your policy also has access to the encryption key and the related actions.

Arun
  • Amazon S3 evaluates and applies bucket policies before applying bucket encryption settings. Even if you enable bucket encryption settings, your PUT requests without encryption information will be rejected if you have bucket policies to reject such PUT requests. Check your bucket policy and modify it if required. – Neoheurist Oct 21 '20 at 19:58
2

After banging my head for many hours on this same issue, I noticed that my account had MFA set up, making generation of the signed URL with only the accessKeyId and secretAccessKey useless.

The solution was installing https://github.com/broamski/aws-mfa.

After running it, it asks you to create a .aws/credentials file, where you must enter your access key ID, secret, and aws_mfa_device. The latter will look something like:

aws_mfa_device = arn:aws:iam::youruserid:mfa/youruser

The data can be found under your user in the AWS console (website).
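Put together, a minimal .aws/credentials file for aws-mfa might look like the sketch below (all values are placeholders; the `-long-term` profile suffix is the convention the aws-mfa README describes for storing your permanent keys, from which it derives the temporary ones):

```ini
[default-long-term]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
aws_mfa_device        = arn:aws:iam::youruserid:mfa/youruser
```

aws-mfa then writes the short-lived keys into the plain [default] profile, which the SDK picks up automatically.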

After that, you will find the credentials file populated with new keys that last one week, IIRC.

Then simply generate a URL again:

AWS.config.update({ region: 'xxx' });
var s3 = new AWS.S3();

var presignedPutUrl = s3.getSignedUrl('putObject', {
    Bucket: 'xxx',
    Key: 'xxx', // filename
    Expires: xxx, // time to expire, in seconds
    ContentType: 'xxx'
});

And this time it will work.

Remember NOT to pass any credentials to AWS.config, since they will be picked up automatically from the .aws/credentials file.

mouchin777
  • How do I not pass any credentials to AWS config? What should I pass in `BasicAWSCredentials awsCredentials = new BasicAWSCredentials(accessKey, secretKey);`? – K P Jul 28 '23 at 13:24
1

I had the same issue: when testing my Lambda function locally it worked, but after deploying it did not. Once I added S3 full access to the Lambda function, it worked.

ashen madusanka
0

I saw this problem recently when moving from a bucket that was created a while ago to one created recently.

It appears that v2 pre-signed links (for now) continue to work against older buckets while new buckets are mandated to use v4.

Revised Plan – Any new buckets created after June 24, 2020 will not support SigV2 signed requests, although existing buckets will continue to support SigV2 while we work with customers to move off this older request signing method.

Even though you can continue to use SigV2 on existing buckets, and in the subset of AWS regions that support SigV2, I encourage you to migrate to SigV4, gaining some important security and efficiency benefits in the process.

https://docs.amazonaws.cn/AmazonS3/latest/API/sigv4-query-string-auth.html#query-string-auth-v4-signing-example

Our solution involved updating the AWS SDK to use this by default; I suspect newer versions probably already default this setting.

https://docs.aws.amazon.com/sdk-for-net/v3/developer-guide/net-dg-config-other.html#config-setting-aws-s3-usesignatureversion4

cwash
0

To allow a signed URL for an S3 PUT to also be downloadable by anyone, add:

    const s3Params = {
      Bucket, Key, ContentType,
      // This ACL makes the uploaded object publicly readable. You must also uncomment
      // the extra permission for the Lambda function in the SAM template.
      ACL: 'public-read'
    }

The ACL: 'public-read' at the end is key to allowing you to download after upload.

But in order to set ACLs on the new file from a signed URL, the caller must have the s3:PutObjectAcl permission, so you'll also need to grant that permission to the URL signer:

    - Statement:
        - Effect: Allow
          Resource: (BUCKET_ARN)/*
          Action:
            - s3:PutObjectAcl

where BUCKET_ARN is your bucket ARN, so something like:

  Resource: "arn:aws:s3:::My-Bucket-Name/*"

See this link for more.

I think it's also possible to just get away with only s3:PutObject if the whole bucket is marked public. This used to be easy to do (a checkbox) but now seems overly complex. However, I think you can just add the policy found in Step 2 at this link.
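As a concrete sketch of the params described above (`makePublicUploadParams` is an illustrative name of mine, not an SDK helper), the presigned-PUT params including the ACL might be built like this:

```javascript
// Sketch: params for s3.getSignedUrl('putObject', ...) that make the
// uploaded object publicly readable. Because the ACL is part of what
// gets signed, the client performing the PUT must send a matching
// "x-amz-acl: public-read" header, and the signer's role needs
// s3:PutObject plus s3:PutObjectAcl.
function makePublicUploadParams(bucket, key, contentType) {
  return {
    Bucket: bucket,
    Key: key,
    ContentType: contentType,
    ACL: 'public-read',
    Expires: 300, // seconds until the presigned URL stops working
  };
}
```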

Appurist - Paul W