
This code

const file = require("fs").createWriteStream("./test.dat");
for(var i = 0; i < 1e7; i++){

    file.write("a");
}

gives this error message after running for about 30 seconds:

<--- Last few GCs --->

[47234:0x103001400]    27539 ms: Mark-sweep 1406.1 (1458.4) -> 1406.1 (1458.4) MB, 2641.4 / 0.0 ms  allocation failure GC in old space requested
[47234:0x103001400]    29526 ms: Mark-sweep 1406.1 (1458.4) -> 1406.1 (1438.9) MB, 1986.8 / 0.0 ms  last resort GC in old space requested
[47234:0x103001400]    32154 ms: Mark-sweep 1406.1 (1438.9) -> 1406.1 (1438.9) MB, 2628.3 / 0.0 ms  last resort GC in old space requested


<--- JS stacktrace --->

==== JS stack trace =========================================

Security context: 0x30f4a8e25ee1 <JSObject>
    1: /* anonymous */ [/Users/matthewschupack/dev/streamTests/1/write.js:~1] [pc=0x270efe213894](this=0x30f4e07ed2f1 <Object map = 0x30f4ede823b9>,exports=0x30f4e07ed2f1 <Object map = 0x30f4ede823b9>,require=0x30f4e07ed2a9 <JSFunction require (sfi = 0x30f493b410f1)>,module=0x30f4e07ed221 <Module map = 0x30f4edec1601>,__filename=0x30f493b47221 <String[49]: /Users/matthewschupack/dev/streamTests/...

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
 1: node::Abort() [/usr/local/bin/node]
 2: node::FatalException(v8::Isolate*, v8::Local<v8::Value>, v8::Local<v8::Message>) [/usr/local/bin/node]
 3: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [/usr/local/bin/node]
 4: v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationSpace) [/usr/local/bin/node]
 5: v8::internal::Runtime_AllocateInTargetSpace(int, v8::internal::Object**, v8::internal::Isolate*) [/usr/local/bin/node]
 6: 0x270efe08463d
 7: 0x270efe213894
 8: 0x270efe174048
[1]    47234 abort      node write.js

Whereas this code

const file = require("fs").createWriteStream("./test.dat");
for(var i = 0; i < 1e6; i++){

    file.write("aaaaaaaaaa");//ten a's
}

runs almost instantly and produces a 10MB file. As I understood it, the point of streams is that both versions should run in about the same amount of time, since the data is identical. Even increasing the number of a's to 100 or 1000 per iteration hardly increases the running time, and it writes a 1GB file without any issues. Writing a single character per iteration at 1e6 iterations also works fine.

What's going on here?

  • Wild guess. It could be that writing a single character at a time causes more memory allocs to resize the stream buffer, but running a 1e7 loop never gives the GC a chance to run or the writes a chance to get processed. – jfriend00 May 15 '18 at 20:43
  • I'm not sure why `1e7` tips it over the edge, but you should be able to avoid the OOM by using the `drain` event to respect backpressure. `file.write(...)` returns `false` if you need to wait before the next write. There is an example in the docs here: https://nodejs.org/api/stream.html#stream_writable_write_chunk_encoding_callback – Robbie May 15 '18 at 21:23
  • Or you could pipe from a readable stream to the write stream and it will handle backpressure for you. – Robbie May 15 '18 at 21:24
  • They're the same except one uses more memory which causes the error. The point of streams is to allow buffered I/O where you read in/out in buffered chunks to avoid memory overflow issues like this. – Alex W May 16 '18 at 01:04

1 Answer


The out-of-memory error happens because you're not waiting for the drain event to be emitted. Without waiting, Node.js buffers every written chunk in memory until the heap is exhausted.

.write will return false once the internal buffer reaches highWaterMark, which defaults to 16384 bytes (16 KiB). In your code you're not handling the return value of .write, so the buffer is never flushed.

This can be tested very easily using: tail -f test.dat

When executing your script, you will see that nothing is written to test.dat until the script finishes.

For 1e7 one-byte writes, the buffer would have to be drained roughly 610 times:

1e7 / 16384 ≈ 610
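
You can see that threshold directly by checking when .write() first returns false. This is a minimal sketch, not from the original answer; it assumes Node >= 9.3.0 for writable.writableHighWaterMark and writes to a throwaway ./check.dat file:

const ws = require("fs").createWriteStream("./check.dat");

console.log(ws.writableHighWaterMark); // 16384 by default

let accepted = true;
let writes = 0;

// Nothing drains during a synchronous loop, so .write() starts returning
// false once roughly 16384 bytes have been buffered.
while (accepted) {
    accepted = ws.write("a");
    writes++;
}

console.log(`.write() returned false after ${writes} one-byte writes`);
ws.end();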

A solution is to check the return value of .write and, when it returns false, wait for the drain event using file.once('drain') wrapped in a promise.

NOTE: writable.writableHighWaterMark was added in Node v9.3.0

const file = require("fs").createWriteStream("./test.dat");

(async() => {

    for(let i = 0; i < 1e7; i++) {
        if(!file.write('a')) {
            // Will pause every 16384 iterations until `drain` is emitted
            await new Promise(resolve => file.once('drain', resolve));
        }
    }
})();

Now if you do tail -f test.dat you will see data being written while the script is still running.
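
One of the comments suggests an alternative: pipe from a readable stream into the write stream and let pipe() handle the backpressure for you. Here is a rough sketch of that approach (the Readable source is illustrative, not part of the original code):

const fs = require("fs");
const { Readable } = require("stream");

let remaining = 1e7;

// Emits one 'a' per read() call to mirror the original loop; pipe() pauses
// this source automatically while the file stream drains.
const source = new Readable({
    read() {
        this.push(remaining-- > 0 ? "a" : null); // null signals end-of-stream
    }
});

source.pipe(fs.createWriteStream("./test.dat"));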


As for why you get memory issues with 1e7 but not 1e6, we have to look at how Node.js does its buffering, which happens in the writeOrBuffer function.

This sample code lets us get a rough estimate of the memory usage:

const count = Number(process.argv[2]) || 1e6;
const state = { bufferedRequestCount: 0 }; // mimics the stream's internal writable state

function nop() {}

const buffer = (data) => {
    const last = state.lastBufferedRequest;
    state.lastBufferedRequest = {
      chunk: Buffer.from(data),
      encoding: 'buffer',
      isBuf: true,
      callback: nop,
      next: null
    };

    if(last)
      last.next = state.lastBufferedRequest;
    else
      state.bufferedRequest = state.lastBufferedRequest;

    state.bufferedRequestCount += 1;
}

const start = process.memoryUsage().heapUsed;
for(let i = 0; i < count; i++) {
    buffer('a');
}
const used = (process.memoryUsage().heapUsed - start) / 1024 / 1024;
console.log(`${Math.round(used * 100) / 100} MB`);

When executed:

// node memory.js <count>
1e4: 1.98 MB
1e5: 16.75 MB
1e6: 160 MB
5e6: 801.74 MB
8e6: 1282.22 MB
9e6: 1442.22 MB - Out of memory
1e7: 1602.97 MB - Out of memory

So each buffered object uses roughly 0.16 KB, and when doing 1e7 writes without waiting for the drain event, you end up with 10 million of those objects in memory (to be fair, it crashes before reaching 10M).

It doesn't matter whether you write a single a or 1000 of them; the memory increase from that is negligible.


You can increase the maximum memory used by Node with the --max_old_space_size={MB} flag (of course this is not the solution; it's just a way to check the memory consumption without crashing the script):

node --max_old_space_size=4096 memory.js 1e7

UPDATE: I made a mistake in the memory snippet which led to a ~30% overestimate of memory usage: I was creating a new callback for every .write, whereas Node reuses a single nop callback.
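
To get a feel for that difference, here is a rough comparison, not from the original answer, that builds the same kind of bookkeeping object with a shared callback versus a fresh closure per chunk (exact numbers will vary by Node version):

const nop = () => {};

const measure = (makeCallback, count = 1e6) => {
    const before = process.memoryUsage().heapUsed;
    const objects = [];
    for (let i = 0; i < count; i++) {
        // Same shape as the buffered request objects above
        objects.push({ chunk: Buffer.from('a'), callback: makeCallback() });
    }
    const used = (process.memoryUsage().heapUsed - before) / 1024 / 1024;
    return `${Math.round(used * 100) / 100} MB for ${objects.length} objects`;
};

console.log('shared nop :', measure(() => nop));
console.log('new closure:', measure(() => () => {})); // allocates a closure per chunk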


UPDATE II

If you're always writing the same value (unlikely in a real scenario), you can greatly reduce the memory usage and execution time by passing the same Buffer every time:

const file = require("fs").createWriteStream("./test.dat");
const buf = Buffer.from('a');

(async () => {
    for(let i = 0; i < 1e7; i++) {
        if(!file.write(buf)) {
            // Will pause every 16384 iterations until `drain` is emitted
            await new Promise(resolve => file.once('drain', resolve));
        }
    }
})();
Marcos Casagrande
  • This is awesome, clears up a lot. Thanks for the details. Now I understand how highWaterMark works, but why *doesn't* writing 1000 bytes of data at 1e6 iterations cause a crash? – schu34 May 16 '18 at 01:30
  • Because you're just writing 10MB of data in total, and that will take up only 10MB of memory. In your case, the memory exhaustion is not caused by the data being written; it is caused by how Node stores your chunks. If you're writing 1 byte of data, that chunk in the buffer will take ~220 bytes. – Marcos Casagrande May 16 '18 at 01:41
  • @schu34 I made a mistake about the callback: it's not creating a new callback for every `.write`, it reuses `nop`, which is declared once. I updated my answer to reflect that. – Marcos Casagrande May 16 '18 at 03:52
  • @schu34 Updated the script so it will be faster; previously every write was inside a promise, making every write execute in a different tick of the event loop. Now it waits only when needed. – Marcos Casagrande Jun 09 '18 at 13:46