
I am unsuccessfully trying to write to the file system of an AWS Lambda instance. The docs say that a standard Lambda instance has 512 MB of space available at /tmp/. However, the following code, which runs fine on my local machine, isn't working at all on the Lambda instance:

  var fs = require('fs');
  fs.writeFile("/tmp/test.txt", "testing", function(err) {
      if(err) {
          return console.log(err);
      }
      console.log("The file was saved!");
  });

The code in the anonymous callback function never gets called on the Lambda instance. Has anyone had any success doing this? Thanks so much for your help.

This may be a related question. Is it possible that there is some kind of conflict between the S3 code and what I'm trying to do with the fs callback function? The code below is what's currently being run.

console.log('Loading function');

var aws = require('aws-sdk');
var s3 = new aws.S3({ apiVersion: '2006-03-01' });
var fs = require('fs');

exports.handler = function(event, context) {
    //console.log('Received event:', JSON.stringify(event, null, 2));

    // Get the object from the event and show its content type
    var bucket = event.Records[0].s3.bucket.name;
    var key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
    var params = {
        Bucket: bucket,
        Key: key
    };
    s3.getObject(params, function(err, data) {
        if (err) {
            console.log(err);
            var message = "Error getting object " + key + " from bucket " + bucket +
            ". Make sure they exist and your bucket is in the same region as this function.";
            console.log(message);
            context.fail(message);
        } else {

            //console.log("DATA: " + data.Body.toString());
            fs.writeFile("/tmp/test.csv", "testing", function (err) {

                if(err) {
                    context.fail("writeToTmp Failed " + err);
                } else {
                    context.succeed("writeFile succeeded");
                }
            });
        }
    });
};
Rymnel
  • my understanding of aws-lambda is that the code runs in response to some `event` - is the `event` that piece of code "responds" to even being triggered? – Jaromanda X Jan 26 '16 at 04:14
  • @JaromandaX Yes that is correct. The event is getting fired successfully. This is part of a larger set of code that writes a CSV file obtained from S3, converts it to JSON, and uploads it again to another S3 bucket so that it can be ingested into DynamoDB via another Lambda instance. Everything is working but this piece... – Rymnel Jan 26 '16 at 04:22
  • you're using `console.log` - is console.log output available to you in some way (I don't use aws-lambda which is why I ask) - if so, does a console.log BEFORE the fs.writeFile get executed? – Jaromanda X Jan 26 '16 at 04:25
  • Just out of curiosity, why write to the file-system on a lambda? Feels a bit un-lambda-ish... care to share the use-case? – Assaf Lavie Jan 26 '16 at 04:49
  • @AssafLavie I'm probably handling the whole problem wrong but I'm just trying to solve it as I know how. It's not production code but more a cut-and-paste learning experience. I feel like I'm trying to solve a problem related to one that I've already asked. You can see the code mostly in its entirety here: http://stackoverflow.com/questions/34984995/how-can-i-get-the-npm-csvtojson-to-parse-from-a-string Here is the code that I am testing right now, and even though the AWS event is firing, the code to write the file isn't: https://gist.github.com/ShanePerry/d09e61038520d91a9f37 – Rymnel Jan 26 '16 at 14:59
  • Yea you should definitely not write the CSV to a file just to parse it. You want to do that in memory. I highly recommend you avoid chasing this file-system issue since it doesn't feel like the right approach. Sorry I can't help more. – Assaf Lavie Jan 27 '16 at 07:43
  • I think the question is quite relevant; there are certainly cases where writing to the file-system is required (rather than working in-memory). I think addressing the specific problem posted is useful. – onassar Sep 09 '21 at 14:49
  • @AssafLavie - You don't always have control of your libraries. Some require writing to disk. – lowcrawler Feb 19 '22 at 04:52
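
For reference, here is a minimal sketch of the in-memory approach suggested in the comments above (the destination bucket name and the "transform" step are placeholders, not taken from the original code): read the object from S3, process it in memory, and put the result straight back to S3 without ever touching /tmp.

var aws = require('aws-sdk');
var s3 = new aws.S3({ apiVersion: '2006-03-01' });

exports.handler = function(event, context) {
    var bucket = event.Records[0].s3.bucket.name;
    var key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));

    s3.getObject({ Bucket: bucket, Key: key }, function(err, data) {
        if (err) {
            return context.fail(err);
        }
        // Transform the CSV in memory; the conversion below is just a placeholder.
        var transformed = data.Body.toString();

        // Write the result straight back to S3 instead of /tmp.
        // "my-output-bucket" is a hypothetical destination bucket.
        s3.putObject({
            Bucket: 'my-output-bucket',
            Key: key + '.json',
            Body: transformed
        }, function(putErr) {
            if (putErr) {
                return context.fail(putErr);
            }
            context.succeed('putObject succeeded');
        });
    });
};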

3 Answers


Fitting your code into the Lambda handler template worked for me. I think you need to assign a function to exports.handler and call the appropriate context.succeed() or context.fail() method; otherwise, you just get generic errors.

var fs = require("fs");

exports.handler = function(event, context) {
    fs.writeFile("/tmp/test.txt", "testing", function (err) {
        if (err) {
            context.fail("writeFile failed: " + err);
        } else {
            context.succeed("writeFile succeeded");
        }
    });
};
James
  • Thanks for the suggestion. I've modified my code and included more context, but still my callback function isn't getting called. I suspect there is some kind of conflict between the s3.getObject function and what I'm trying to do. I saw your answer here http://stackoverflow.com/questions/30927172/how-to-upload-an-object-into-s3-in-lambda and wonder if it doesn't relate somehow to my question. – Rymnel Jan 26 '16 at 16:03

So the answer lies in the context.fail() and context.succeed() functions. Being completely new to the world of AWS and Lambda, I was ignorant of the fact that calling either of these methods stops execution of the Lambda instance.

According to the docs:

The context.succeed() method signals successful execution and returns a string.

By removing the premature calls and invoking them only after all the code I wanted had run, everything worked well.
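
For illustration, a minimal sketch of that ordering, based on the getObject/writeFile flow from the question (not the full original project code):

var aws = require('aws-sdk');
var s3 = new aws.S3({ apiVersion: '2006-03-01' });
var fs = require('fs');

exports.handler = function(event, context) {
    var params = {
        Bucket: event.Records[0].s3.bucket.name,
        Key: decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '))
    };
    s3.getObject(params, function(err, data) {
        if (err) {
            // fail() only here, when nothing else is going to run afterwards
            return context.fail('getObject failed: ' + err);
        }
        fs.writeFile('/tmp/test.csv', data.Body, function(writeErr) {
            if (writeErr) {
                return context.fail('writeFile failed: ' + writeErr);
            }
            // succeed() only in the innermost callback, after every step has finished
            context.succeed('writeFile succeeded');
        });
    });
};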

Rymnel
  • Could you please paste your successful code? I'm hitting this same issue and having trouble writing an HTML file in my Lambda which I then want to put in an S3 bucket. – bildungsroman Mar 05 '19 at 00:59
  • Just don't save it. Forward it to the putObject method directly from memory. – Pavelloz Jun 17 '19 at 09:27
  • @Pavelloz - is there a code example somewhere for doing that? – lowcrawler Nov 30 '21 at 19:14
  • Hmm, I don't know. I wrote it myself when I had to unzip big files into an S3 bucket without running out of memory. It turned out streams were a lot more effective for that. Here's a snippet from my codebase (I removed some code from it so it won't work out of the box, but the stream forwarding is complete as far as I can see), hopefully you can make sense of it: https://gist.github.com/pavelloz/91b2d140134635b3c9ad8aaa85549ff7 – Pavelloz Dec 01 '21 at 20:16
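
For anyone after a rough illustration of that streaming idea, here is a sketch along these lines (the bucket and key names are placeholders, and this is not the code from the gist above):

var aws = require('aws-sdk');
var s3 = new aws.S3();

exports.handler = function(event, context) {
    // Stream the source object instead of buffering it in memory or writing it to /tmp.
    var readStream = s3.getObject({
        Bucket: 'source-bucket',   // placeholder
        Key: 'big-file.csv'        // placeholder
    }).createReadStream();

    // s3.upload accepts a stream as Body and handles the multipart upload for you.
    s3.upload({
        Bucket: 'destination-bucket',  // placeholder
        Key: 'big-file.csv',
        Body: readStream
    }, function(err, result) {
        if (err) {
            return context.fail(err);
        }
        context.succeed(result.Location);
    });
};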

I ran into this, and it seems like AWS Lambda may be using an older (or modified) version of fs. I figured this out by logging the return value of fs.writeFile and noticing it wasn't a promise.

To get around this, I wrapped the call in a promise:

var fs = require('fs');

var promise = new Promise(function(resolve, reject) {
    fs.writeFile('/tmp/test.txt', 'testing', function (err) {
        if (err) {
            reject(err);
        } else {
            resolve();
        }
    });
});
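
As a usage sketch (an assumption, not part of the original answer): on a runtime with async handler support (Node.js 8.10 or later), the wrapper can be awaited so the function doesn't return before the write has finished.

var fs = require('fs');

// Hypothetical helper that wraps the callback-based fs.writeFile in a promise.
function writeFilePromise(path, contents) {
    return new Promise(function(resolve, reject) {
        fs.writeFile(path, contents, function (err) {
            if (err) {
                reject(err);
            } else {
                resolve();
            }
        });
    });
}

// Assumes a Node.js 8.10+ runtime, where async handlers are supported.
exports.handler = async function(event) {
    await writeFilePromise('/tmp/test.txt', 'testing');
    return 'writeFile succeeded';
};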

Hopefully this helps someone else 🤗

onassar