
I have a Node.js application running in a Kubernetes (k8s) pod. The actual memory limit of the pod is 2GB, but in the environment variables we set `--max-old-space-size=4096`, i.e. 4GB. That value is not true in my case: for some tenants we do allocate 4GB, but most pods have 2GB.

Now I tried 2 ways to detect the memory usage and the total memory, and both provide different stats.

  1. I'm fetching the memory usage from the cgroup file `/sys/fs/cgroup/memory/memory.usage_in_bytes` and the total memory from `/sys/fs/cgroup/memory/memory.limit_in_bytes`.

`limit_in_bytes` correctly returns 2GB, but the value of `usage_in_bytes` fluctuates a lot: it is around 1GB one minute and spikes to 2GB the next, even though nothing changed in that minute (no stress on the system).

Stats of a process

Memory Usage POD: 2145124352
shlog - memLimit 214748364
  2. The second option I tried is the built-in Node.js `v8` module to get heap statistics: https://nodejs.org/api/v8.html#v8getheapstatistics. Usage:

     const v8 = require("v8");

     const initialStats = v8.getHeapStatistics();
     console.log("heap_size_limit: ", initialStats.heap_size_limit); // total memory
     console.log("total_heap_size: ", initialStats.total_heap_size); // current usage
    

Here the total memory is returned as 4GB, which is not right in my case, but the current usage seems much closer to the truth.

Stats of the same process

total_heap_size: 126312448,
heap_size_limit: 4320133120,

Complete response of v8 getHeapStatistics method:

HeapStats:  {
    total_heap_size: 126312448,
    total_heap_size_executable: 1097728,
    total_physical_size: 124876920,
    total_available_size: 4198923736,
    used_heap_size: 121633632,
    heap_size_limit: 4320133120,
    malloced_memory: 73784,
    peak_malloced_memory: 9831240,
    does_zap_garbage: 0,
    number_of_native_contexts: 1,
    number_of_detached_contexts: 0
}

My goal is to detect the memory usage relative to the total memory of the pod, and do some throttling when memory consumption reaches 85%. I'm willing to use the first method, but please tell me why there is so much difference in the memory usage, and how to get an accurate memory usage for the pod.

Really looking forward to getting some help on this. Thank you.

ShahtajK
  • Does this help? https://stackoverflow.com/questions/48387040/ – jmrk Nov 13 '21 at 17:48
  • Both numbers are probably right, but measure different things; looking at that `v8` documentation I might expect `total_physical_size` or `malloced_memory` to be closer to the cgroups allocation statistics. Are you specifically trying to measure Node heap memory (as distinct from other memory that might be allocated by Node), or just have an abstract "85% of available memory" measurement? Instead of throttling yourself, could you set up a HorizontalPodAutoscaler to create more pods? – David Maze Nov 13 '21 at 19:03
  • @DavidMaze I have updated my question with `total_physical_size` and `malloced_memory` of the same process, please check. I'm trying to get the current memory usage of the pod (will check this before running some processes). And no, can't create more pods, we only have a single pod and needs to implement the throttling ourself. – ShahtajK Nov 13 '21 at 19:39

1 Answer


To get overall memory consumption of a process, look to (and trust) the operating system's facilities.

Node's v8.getHeapStatistics tells you about the managed (a.k.a. garbage-collected) heap where all the JavaScript objects live. But there can be quite a bit of other, non-garbage-collected memory in the process, for example Node buffers and certain strings, and various general infrastructure that isn't on the managed heap. In the average Chrome renderer process, the JavaScript heap tends to be around a third of total memory consumption, but with significant outliers in both directions; for Node apps it depends a lot on what your app is doing.
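You can see that gap from inside Node itself with `process.memoryUsage()`, which reports the whole process's resident set alongside the V8 heap (a quick sketch; the exact numbers will vary per process):

```javascript
// rss covers the whole process (native buffers, thread stacks, V8's own
// infrastructure); heapUsed/heapTotal cover only the garbage-collected
// JavaScript heap, i.e. the part v8.getHeapStatistics() describes.
const { rss, heapTotal, heapUsed, external } = process.memoryUsage();
console.log({ rss, heapTotal, heapUsed, external });
```

In a typical process, `rss` is noticeably larger than `heapTotal`, which is the difference you're observing between the cgroup numbers and the heap statistics.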

Setting V8's max heap size (which, again, is just the garbage-collected part of the process' overall memory usage) to a value larger than the amount of memory available to you doesn't make much sense: it invites avoidable crashes, because V8 won't spend as much time on garbage collection when it thinks there's lots of available memory left, but the OS/pod as a whole could already be facing an out-of-memory situation at that point. That's why I linked the other answer: you very likely want to set the max heap size to something a little less than available memory, to give the garbage collector the right hints about when to work harder to stay under the limit.

jmrk
  • Right, so setting the `max-old-space-size` to `1536` will limit the OOM issues overall? also, what will be the impacts if we assign the same limit of `1536` for pod with 4GB memory? Also, for implementing the throttling by ourself, using memory usage returned by OS here `/sys/fs/cgroup/memory/memory.usage_in_bytes` is the best option then. Thank you so much for the detailed explanation. – ShahtajK Nov 14 '21 at 09:42
  • If you have 4GB available, then you probably want to let the JavaScript heap use more than 1.5GB of that. – jmrk Nov 14 '21 at 14:39