
I'm using the aws-sdk to upload files to S3. I'm configuring my credentials with:

aws.config.update({
  accessKeyId: aws.config.credentials.accessKeyId,
  secretAccessKey: aws.config.credentials.secretAccessKey,
  region: 'us-east-1'
});

Then uploading with multer-s3:

const s3 = new aws.S3();

const upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'my-bucket-v1',
    acl: 'public-read',
    contentType: multerS3.AUTO_CONTENT_TYPE,
    key: function (req, file, cb) {
      const today = new Date();
      cb(null, file.originalname)
      console.log("file\n", file);
    }
  })
}).array('upl', 1);

router.post('/api/upload', (req, res, next) => {
  upload(req, res, err => {
    if (err) return console.log("err\n", err);
    res.status(201).send();
  })
});

The error I keep getting is with my Access Key Id: "The AWS Access Key Id you provided does not exist in our records."

I've created multiple new access keys in my AWS account, but nothing works. I'm using root user access keys (I tried an IAM user, and it still didn't work).

I also logged my AWS credentials in my Node server (console.log(s3)), and they match what's in my AWS security credentials.

How do I properly configure my AWS credentials to upload to S3?


4 Answers


Finally figured it out. I had to set the access keys when initializing the s3 instance, before passing it to the multer object.

const s3 = new aws.S3({
  accessKeyId: ACCESS_KEY_ID, // Set the access key here
  secretAccessKey: SECRET_ACCESS_KEY,
});

const upload = multer({
  storage: multerS3({
    s3: s3, // Use s3 instance here
    bucket: 'my-match',
    acl: 'public-read',
    contentType: multerS3.AUTO_CONTENT_TYPE,
    key: function (req, file, cb) {
      const today = new Date();
      cb(null, file.originalname)
      console.log("file\n", file);
    }
  })
}).array('upl', 1);

I still don't understand why the previous implementation doesn't work anymore, but at least this way works.
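
For reference, here's a minimal sketch of the same idea that reads the keys from environment variables instead of hardcoding them. The names AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are the standard variables the v2 SDK's default credential chain also picks up on its own; the region value below is an assumption, adjust it to your bucket's region:

const aws = require('aws-sdk');

// Read the keys from the environment rather than hardcoding them.
// If these variables are set, the SDK's default credential provider chain
// would normally find them even without passing them explicitly.
const s3 = new aws.S3({
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  region: process.env.AWS_REGION || 'us-east-1' // assumption: replace with your bucket's region
});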

Farid

It seems like AWS is not able to find the access key you are providing in the AWS config.

Verify your config.

aws.config.update({
  accessKeyId: aws.config.credentials.accessKeyId,
  secretAccessKey: aws.config.credentials.secretAccessKey,
  region: 'us-east-1'
});

Read more about accessKeyId and secretAccessKey here:

https://aws.amazon.com/blogs/security/wheres-my-secret-access-key/
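
If the values look correct but the error persists, one way to double-check which identity the SDK actually resolves at runtime is STS GetCallerIdentity. A minimal sketch (the region value is an assumption):

const aws = require('aws-sdk');

// Ask AWS which identity the currently configured credentials belong to.
// If this call fails with InvalidClientTokenId / "does not exist in our records",
// the SDK is not picking up the keys you think it is.
const sts = new aws.STS({ region: 'us-east-1' });

sts.getCallerIdentity({}, (err, data) => {
  if (err) return console.error('getCallerIdentity failed:', err);
  console.log('Account:', data.Account);
  console.log('ARN:', data.Arn);
});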

Harshal Yeole
  • Config seems correct. Could IP address have anything to do with the problem? I moved recently, and everything worked fine at my last residence. – Farid May 20 '20 at 11:09
  • The access key ID is not bound to an IP address; can you log the keys and compare them? – Harshal Yeole May 20 '20 at 11:33
  • Yes, the logs confirm they match what's in my aws account. I even hardcoded the keys, and still getting that error. The cli works, which is weird (`aws s3 ls` returns a list of my buckets). AWS tech support wrote "The error you're experiencing is due to the previous command to reach the S3 bucket being authorized via the exposed root key ROOT_KEY. Since this key was deleted, it is now blocking the access to your bucket as it no longer exists." I had to delete my old keys and create new ones. I don't know how to grant the new root key access to that bucket. – Farid May 20 '20 at 12:49
  • One important note: uploading a file using the aws cli does work: `aws s3 cp IMG_0041.jpg s3://my-match-v1/`. I can see this file in my bucket, and my aws config is using the same keys. It's just not working from my node server. – Farid May 20 '20 at 13:20
  • Check whether you are initializing AWS somewhere else. – Harshal Yeole May 20 '20 at 15:19

This solution may apply to AWS Educate accounts only: adding the session token makes it work.

Note: update the credentials and bucket variables in the code below.

import fs from 'fs';
import AWS from 'aws-sdk';


const aws_access_key_id = "ASICDENCJVVNVHV"
const aws_secret_access_key = "xrLcJTh5xHsNRHFRFMFRFTRygO7evgyTZCuJ7KDa7HNgMg"
const AWS_S3_BUCKET_NAME = "your-bucket-name" // set your bucket name here

const s3 = new AWS.S3({
    accessKeyId: aws_access_key_id,
    secretAccessKey: aws_secret_access_key,
    sessionToken:"IQoJb3JpZ2luX2VjEKv//////////wEaCXVzLXdlc3QtMiJHMEUCIDRIZOebUzz0+HfsimpnMGQp27oQWoByJxZhXFR34mgPAiEAwkUHMSf0iac3p/8VaEnjrfruuUkt8mf2m4cWfc23lj3QqrwIIQxABGgwzMzMwODQ3NzM0NDciDMpDh4tfMrUzU9EikyqMAoACnGKcTtOWbVxYsnaRIY0jve4YvyEvFJlk+RUl2/Tp1wnHvQF+oyQlXEQ1NIvQCedEV0ThLm6fePZrr+23qMEg7DHyZZ2q+UYubMIFwj4jkL4OF4vqluXPJz/vMnqTe+n+vcgioY2/quQWpBNVMBWG9egIQaevXzd7QStzVznEgIhZ3OMLEABWMZHE3ipY5vhwvWf6thLcwTHeb65l3wAUOmG89102WerI/a4nJ5Ivye4w2EVTA4laeSSRFmq0u20FwIx4xUSrDySjurBqJD0VIhQh7XcY4bsBc8XavubKq0JzmOC993N5Hr2JLb2N+KiVIW+r6qDmkPCNso4ndJhtC/YTX5v07zMbdgcwuYeJhQY6nQEqeEXY9FmVM2HXFqLmIwYilafVQSFO5gP9VmS8BhylIUmyzN/Uplg8V/f06b6Xhnm1MY28UGe27ALE/dO8U6muwjIxGBB33Albtr1BtBDk7yWplpayE9NBlAJigIxAQy2OqNUfrbJtzD4aFBUlC0r/enzLO4PrPq/rOkvfR4di87kpcfJX/zPLkNHTXEdzz/2boCd69/uMIwi611yP"
});

// uploadToS3(ls: MulterFile[]) – the parameter is unused in this example
const uploadToS3 = async (ls) => {
    fs.readFile('/home/hari/learn/shakti/shaktiserver/src/image-upload/ram.png', (err, data) => {
        if (err) throw err;
        const Body = data; // upload the raw file buffer (JSON.stringify would corrupt binary image data)
        s3.upload({
            Bucket: AWS_S3_BUCKET_NAME, // pass your bucket name
            Key: 'ram.png', // file will be saved as <your-bucket>/ram.png
            ACL: 'public-read',
            Body: Body
        }, (e, data) => {
            if (e) throw e
            console.log({ data })
            console.log(`File uploaded successfully at ${data.Location}`)
        })

    })
}
uploadToS3([])
Chetan Jain

Try this:

An error occurred (InvalidAccessKeyId) when calling the CreateBucket operation: The AWS Access Key Id you provided does not exist in our records.

One of the reasons for getting the above error is that the default region set through aws configure is different from the actual account region.

So check the default region in aws configure against the account region shown in the console.
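
For a quick check from Node, here's a minimal sketch that pins the region explicitly and prints what the S3 instance is actually using (the region string is an assumption; replace it with your bucket's region). You can also run `aws configure get region` to see the CLI's default.

const aws = require('aws-sdk');

// Pin the region explicitly so the SDK cannot silently fall back to a
// default that differs from the region your bucket lives in.
aws.config.update({ region: 'us-east-1' }); // assumption: use your bucket's region

const s3 = new aws.S3();
console.log('SDK region:', s3.config.region);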

hkniyi