Answer to an old post, but maybe it helps someone. I looked for a solution on forums and ultimately found it in the aws-sdk docs. Well, a couple of hours of trial and error can save you several minutes of reading the docs or READMEs. Anyway: first, I use s3.upload instead of s3.putObject. Second, the upload is asynchronous, and if the Lambda function terminates earlier than the upload, there won't be any result nor any log whatsoever. The prettiest solution I came up with is:
const AWS = require('aws-sdk');
const fs = require('fs');

const s3 = new AWS.S3();
const file = fs.readFileSync(someFilePath);
const bucket = "...";
const s3key = "...";

const uploadParams = {
    Bucket: bucket,
    Key: s3key,
    Body: file
};

// executes the upload and waits for it to finish
await s3.upload(uploadParams).promise().then(function (data) {
    console.log(`File uploaded successfully. ${data.Location}`);
}, function (err) {
    console.error("Upload failed", err);
});
//code continues synchronously here
...
return whatEver;
Alternatively, if you have (and want to keep) an async handler, you can return the Promise itself. However, doing so gives you less control over what happens inside the resolve and reject callbacks. E.g. the console.log calls I had placed inside them did not show up in Lambda's console together with the other logs from the handler that ran outside the callbacks (before the upload).
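For illustration, a minimal sketch of that alternative, assuming a standard async Lambda handler and aws-sdk v2; the bucket, key, and body here are placeholders, not values from the original question:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Sketch: the async handler returns the upload promise directly,
// so Lambda waits for it to settle before finishing the invocation.
exports.handler = async (event) => {
    const uploadParams = {
        Bucket: "...",               // placeholder bucket name
        Key: "...",                  // placeholder object key
        Body: Buffer.from("hello")   // placeholder body
    };

    return s3.upload(uploadParams).promise().then(
        function (data) {
            // logs inside these callbacks may not show up alongside the handler's other logs
            console.log(`File uploaded successfully. ${data.Location}`);
            return data.Location;
        },
        function (err) {
            console.error("Upload failed", err);
            throw err;
        }
    );
};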