
Currently I have a module pulling SQL results like this:

[{ID: 'test', NAME: 'stack'},{ID: 'test2', NAME: 'stack'}]

I want to just literally have that written to a file so I can read it back in as an object later, but I want to write it with a stream, because some of the objects are really, really huge and keeping them in memory isn't working anymore.

I am using mssql https://www.npmjs.org/package/mssql

and I am stuck here:

    request.on('recordset', function(result) {
        console.log(result);
    });
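
For context, mssql only emits data this way when streaming is switched on per request; a minimal sketch of the setup this assumes, with a placeholder config and query:

    var sql = require('mssql');

    sql.connect(config, function(err) { // config is a placeholder
        var request = new sql.Request();
        request.stream = true; // emit 'recordset'/'row'/'done' events instead of buffering
        request.query('SELECT ID, NAME FROM mytable'); // placeholder query

        request.on('recordset', function(result) {
            // fires once per result set
        });
        request.on('row', function(row) {
            // fires once per row
        });
    });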

How do I stream this out to a writable stream? I see options for object mode, but I can't seem to figure out how to set it.

    request.on('recordset', function(result) {
        var readable = fs.createReadStream(result),
            writable = fs.createWriteStream("loadedreports/bot"+x[6]);
        readable.pipe(writable);
    });

This just errors, because the argument to createReadStream must be a file path...
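
As an aside on the object-mode question: on core streams it is a constructor option rather than something set after the fact. A minimal sketch of a Transform stream that accepts objects and emits serialized text you could pipe to a file (the file name is hypothetical):

    var Transform = require('stream').Transform;
    var fs = require('fs');

    // objectMode lets write() accept plain objects instead of strings/buffers
    var stringifier = new Transform({ objectMode: true });
    stringifier._transform = function(obj, encoding, done) {
        this.push(JSON.stringify(obj) + '\n'); // one JSON document per line
        done();
    };

    stringifier.pipe(fs.createWriteStream('loadedreports/bot.json')); // hypothetical path
    stringifier.write({ ID: 'test', NAME: 'stack' });
    stringifier.end();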

Am I on the right track here, or do I need to do something else?

user2457035

1 Answer


You're almost on the right track: you just don't need a readable stream, since your data already arrives in chunks.

You can then just create the writable stream OUTSIDE of the actual 'recordset' event; otherwise you would create a new stream every time you get a new chunk (and that is not what you want).

Try it like this:

    var writable = fs.createWriteStream("loadedreports/bot" + x[6]);
    request.on('recordset', function(result) {
        writable.write(result);
    });

EDIT

If the recordset is already too big, use the row event:

    request.on('row', function(row) {
        // same as above, but one row at a time
        writable.write(JSON.stringify(row));
    });
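
One detail worth adding here (it comes up in the comments): end the write stream once the request reports it is finished, so buffered data gets flushed and the file handle is released. A minimal sketch, assuming mssql's done event:

    request.on('done', function() {
        writable.end(); // flush remaining writes and close the file
    });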
David Losert
  • I am receiving a type-error: Invalid non-string/buffer chunk – user2457035 Aug 04 '14 at 16:10
  • I added JSON.stringify(result) and then used that as my writable.write(stringified), and this seems to work. Obviously it's changing my object, but this will do, I suppose. Now I have to change how I am reading the file. Thanks! – user2457035 Aug 04 '14 at 16:51
  • Well, that's weird... Did you enable streaming by setting the option `stream: true` (I guess so, else it probably wouldn't work)? Are you using the recordset or the row event? Do you listen on the `done` event and then close the write stream? – David Losert Aug 04 '14 at 20:46
  • That I did not do. I didn't know to listen to done and close the write stream. Something like this? request.on('done', function(end) { writable.close(end); }); – user2457035 Aug 05 '14 at 12:44
  • Actually, reading this, I don't need to use fs.close: http://stackoverflow.com/questions/21176733/what-the-scenario-call-fs-close-is-necessary What could the problem be, then? I guess the MSSQL stream is still keeping each row after it's written. I need to drain the MSSQL part. – user2457035 Aug 05 '14 at 12:46
  • Noticed a couple of other things: writing on each row does not result in a true JSON object. It doesn't finish by adding the [] around my objects, nor the commas. I guess I need to write my own parser? Additionally, request.on('done') emits after the SQL query is finished, not after the stream is finished. I don't think I can solve the memory issue at this level. I might have to end up scrapping it or making my data smaller. – user2457035 Aug 05 '14 at 13:30
  • Hm, I'm also out of ideas at that point, sorry. Maybe you can file an issue about this at the MSSQL GitHub repo? – David Losert Aug 05 '14 at 20:43
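
Pulling the comment thread together: one way to get a valid JSON array out of the row event is to write the brackets and commas yourself. This is only a sketch, with placeholder connection details, query, and path, and it does nothing about backpressure, which is the part the thread leaves unsolved:

    var fs = require('fs');
    var sql = require('mssql');

    var config = { user: 'user', password: 'pass', server: 'localhost', database: 'db' }; // placeholders

    sql.connect(config, function(err) {
        if (err) return console.error(err);

        var writable = fs.createWriteStream('loadedreports/bot.json'); // hypothetical path
        var first = true;

        var request = new sql.Request();
        request.stream = true;
        request.query('SELECT ID, NAME FROM mytable'); // placeholder query

        writable.write('[');
        request.on('row', function(row) {
            // comma-separate every row after the first so the file stays valid JSON
            writable.write((first ? '' : ',') + JSON.stringify(row));
            first = false;
        });
        request.on('error', function(err) {
            console.error(err);
        });
        request.on('done', function() {
            writable.end(']'); // close the array, flush, and release the file handle
        });
    });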