
I'm trying to find the maximum old space size my Node process has ever had.

First I tried using the heapTotal from process.memoryUsage() but:

  1. It reports the entire heap, not just the old space size (see here for more on the difference).
  2. I can't run it every 0 ms, so it will miss memory that is allocated and garbage-collected during synchronous operations (such as fs.readFileSync(...))...
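For reference, the polling approach I tried looks roughly like this (a sketch only; `peakHeapTotal` and `sampleHeap` are my own illustrative names, and as noted above it misses growth that happens entirely inside synchronous work):

```javascript
// Naive polling sketch: remember the largest heapTotal ever observed.
// Allocations and GCs that occur entirely between timer ticks
// (e.g. inside fs.readFileSync) are invisible to this approach.
let peakHeapTotal = 0;

function sampleHeap() {
  const { heapTotal } = process.memoryUsage();
  if (heapTotal > peakHeapTotal) peakHeapTotal = heapTotal;
  return peakHeapTotal;
}

const timer = setInterval(sampleHeap, 0);
timer.unref(); // don't keep the process alive just for sampling
```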

So here is my proposed solution, though I don't know if it's right:

I run the node process with the v8 flag --trace_gc_verbose (which prints more details after each garbage collection), which will output something like:

[7515:0x118008000] Memory allocator,       used:   5400 KB, available: 4238056 KB
[7515:0x118008000] Read-only space,        used:    146 KB, available:      0 KB, committed:    148 KB
[7515:0x118008000] New space,              used:    212 KB, available:    810 KB, committed:   2048 KB
[7515:0x118008000] New large object space, used:      0 KB, available:   1022 KB, committed:      0 KB
[7515:0x118008000] Old space,              used:   1914 KB, available:    202 KB, committed:   2204 KB
[7515:0x118008000] Code space,             used:     85 KB, available:      0 KB, committed:    352 KB
[7515:0x118008000] Map space,              used:    275 KB, available:      0 KB, committed:    516 KB
[7515:0x118008000] Large object space,     used:    128 KB, available:      0 KB, committed:    132 KB
[7515:0x118008000] Code large object space,     used:      0 KB, available:      0 KB, committed:      0 KB
[7515:0x118008000] All spaces,             used:   2763 KB, available: 4240091 KB, committed:   5400 KB
[7515:0x118008000] Unmapper buffering 0 chunks of committed:      0 KB
[7515:0x118008000] External memory reported:     21 KB
[7515:0x118008000] Backing store memory:   1013 KB
[7515:0x118008000] External memory global 0 KB
[7515:0x118008000] Total time spent in GC  : 2.1 ms
[7515:0x118008000]       53 ms: Scavenge stack scanning: survived_before= 155KB, survived_after= 850KB delta=81.7%
[7515:0x118008000] Fast promotion mode: false survival rate: 41%
[7515:0x118008000]       54 ms: Scavenge 3.4 (7.3) -> 3.3 (7.5) MB, 0.7 / 0.0 ms  (average mu = 1.000, current mu = 1.000) allocation failure 

Then I extract the following line from the output:

[7515:0x118008000] Old space,              used:   1914 KB, available:    202 KB, committed:   2204 KB

and sum the used (e.g. 1914 KB) and the available (e.g. 202 KB) values.
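The extraction step could be sketched like this (a hypothetical helper, `parseOldSpace` is my own name; the regex assumes the exact line format shown above, which may vary across Node versions):

```javascript
// Parse an "Old space" line from --trace_gc_verbose output and
// sum the "used" and "available" figures, as proposed above.
function parseOldSpace(line) {
  const m = line.match(/Old space,\s*used:\s*(\d+) KB, available:\s*(\d+) KB/);
  if (!m) return null;
  const used = Number(m[1]);
  const available = Number(m[2]);
  return { used, available, sum: used + available };
}
```

For the sample line above this yields used: 1914, available: 202, sum: 2116.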

Will this be the max old space size my node process had?

Raz Luvaton

1 Answer


Will this be the max old space size my node process had?

No. V8 gives unused pages back to the operating system. I don't think there's a way to retroactively get the largest size that the heap (or old space) ever had, so you'll have to keep watching it (e.g. via --trace-gc-verbose) for the lifetime of the process and keep track of the largest value you've seen.

This can be verified with a simple test. For example:

let big = [];
for (let i = 0; i < 10000; i++) {
  big.push(new Array(4*1024));  // About 16 KB.
}
gc();  // Old space now contains ~320 MB.
big = null;
console.log("Old space size should drop now");
gc();

If you run that with node --expose-gc --trace-gc-verbose test.js | grep "Old space", then the last three lines of output will be something like:

[...] Old space,   used: 323146 KB, available:   91 KB, committed: 368760 KB
Old space size should drop now
[...] Old space,   used:   2685 KB, available:  221 KB, committed:   3960 KB

So clearly, the last line gives no indication that a maximum of ~320 MB was reached previously.
(If Node ever turns on pointer compression, or if you run this in Chrome or d8, then peak memory usage will be cut in half to ~160 MB; that doesn't change the key point here.)


Side note: I'm confused by this question, because I have no idea what this value you're after might possibly be useful/relevant for. If you want to monitor memory consumption of your app in production, it would make more sense to look at all spaces, not just old space. If you want to determine how to configure the max old space size to a reasonable value, then I don't think this approach will give a good answer to that, and you might rather be interested in this question.

jmrk
  • Thank you. I need this to track memory consumption in builds, so I track the values for the lifetime of the process; I need to know whether I got the memory usage lower, as it keeps crashing on the CI – Raz Luvaton Mar 26 '22 at 16:59
  • @RazLuvaton in that case I would recommend simply `--trace-gc` (not `...-verbose`), as it doesn't matter _which_ space uses the memory. Do note that results may not be stable over Node versions, because engine changes affect memory consumption (mostly due to the many tough tradeoffs involved: sometimes making one thing use less memory makes another use more; sometimes reducing CPU load increases memory consumption or vice versa; engines' decisions and techniques to deal with these tradeoffs change over time). – jmrk Mar 26 '22 at 19:36
  • The space matters, as it crashes with "FATAL ERROR: MarkCompactCollector: young object promotion failed Allocation failed - JavaScript heap out of memory", and increasing --max-old-space-size fixes the problem – Raz Luvaton Mar 26 '22 at 20:10
  • @RazLuvaton : The space does *not* matter. For example, if you modify my test case to allocate arrays of `128*1024` (instead of `4*1024`) elements each, it'll crash because of OOM when "Old space" is at less than 3 MB but "Large object space" exceeds 4 GB. (With Node default settings, i.e. "max old space size" = 4GB.) Trying to focus on just "old space" is unnecessarily complicated _and_ will give you incorrect/misleading results. – jmrk Mar 26 '22 at 21:45
  • thank you, so what do you suggest? your linked answer is not the right answer for me because I need to **debug** the memory usage in order to lower it _and_ I don't want to set the old space size to relative size based on my machine as my machine has more than that but I need it for other jobs to run... Thank you for your patience – Raz Luvaton Mar 27 '22 at 08:30
  • To debug memory usage, try e.g. https://nodesource.com/blog/memory-leaks-demystified or any of the other results of https://www.google.com/search?q=debug+node.js+memory+usage . – jmrk Mar 27 '22 at 14:48
  • Yeah, I'm familiar with the Chrome memory tab, but for some reason the node process crashed with a segmentation fault when recording the heap allocations; this is why I did it that way... – Raz Luvaton Mar 27 '22 at 16:51