I'm aware of different system allocators and now I'm trying to play around with them in the context of the JVM. The question is whether it is possible to tell HotSpot which allocator to use (e.g. I want the JVM to do all allocations with tcmalloc, or perhaps simply call mmap/munmap every time).

Maybe there is a JVM option?
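
To make it concrete, here is the kind of thing I have in mind, as a rough sketch (the tcmalloc library path is an assumption and varies by distribution):

```sh
# Preload tcmalloc so that the JVM's native malloc/free calls are
# served by tcmalloc instead of the default glibc allocator.
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc.so java -jar app.jar
```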

Some Name
  • The whole heap is allocated once as a contiguous block of memory. What do you hope to achieve by this? – Peter Lawrey Dec 27 '18 at 16:55
  • @PeterLawrey so it means the whole heap is allocated once at startup? – Some Name Dec 27 '18 at 17:04
  • Correct, your main options are how much of it is used to start with, and whether the pages are preinitialised or not (see the sketches after this thread). – Peter Lawrey Dec 27 '18 at 17:09
  • @PeterLawrey Probably I started with the wrong thing. The main point is to investigate suspiciously high native memory consumption. `jcmd VM.native_memory` reports a size that is much smaller than the `RSS`. I looked at `pmap -X` and there were lots of anonymous pages which I guessed were produced by a system allocator. Maybe you can give some advice or a direction to look toward? – Some Name Dec 27 '18 at 17:15
  • There could be a lot of causes. They could be stacks for all the threads. If you have pairs of mappings that all look the same, these can be stacks, which are typically not resident. I would have a look at `/proc/{pid}/smaps` for more details, esp. which regions are resident. – Peter Lawrey Dec 27 '18 at 17:19
  • @PeterLawrey I am not sure that Java threads map to native threads 1-to-1. The thing is my application creates 4 threads every 30 seconds. But they terminate successfully and new ones get created 30 seconds later, so there is no thread leak. `pthreads` release their resources when joined, but can some sort of caching happen with Java threads? – Some Name Dec 27 '18 at 17:27
  • See [this](https://stackoverflow.com/a/53624438/3448419) answer and [this](https://stackoverflow.com/a/53598622/3448419) one on how to analyze the native memory of a Java process. – apangin Dec 27 '18 at 21:34
  • @apangin I checked the `NMT` output returned by `jcmd` and currently I have `Total: reserved=1646677KB, committed=309929KB`, but the RSS is `457749KB`. Can 150 MB (1/3 of the total RSS) be allocator overhead? Looks crazy... – Some Name Dec 28 '18 at 04:43
  • `malloc` can easily commit gigabytes of memory without releasing it back to the OS even after `free`. – apangin Dec 28 '18 at 11:29
  • @apangin Unfortunately I'm not familiar with the `malloc` implementation, but is it possible to identify `malloc` overhead? By chunk size (most of the chunks have a size of `64 MB` or close to it) or by some meta-info from a binary dump? – Some Name Dec 28 '18 at 15:32
  • Try jemalloc with or without the profiling feature (see the sketch after this thread). – apangin Dec 28 '18 at 15:35
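
For reference, a minimal sketch of the heap options mentioned above, i.e. how much of the heap is committed up front and whether its pages are pre-touched at startup (the 4g figure is just an example value):

```sh
# Reserve and commit the whole 4 GB heap at startup, and touch every
# page so the entire heap is resident from the beginning.
java -Xms4g -Xmx4g -XX:+AlwaysPreTouch -jar app.jar
```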
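
And a sketch of the jemalloc suggestion from the last comment, assuming a jemalloc build with profiling support and a typical install path (both the path and the dump prefix are assumptions):

```sh
# Preload jemalloc with heap profiling enabled; a profile is dumped
# after every 2^30 bytes (~1 GB) of allocation, prefixed /tmp/jeprof.
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so \
MALLOC_CONF=prof:true,lg_prof_interval:30,prof_prefix:/tmp/jeprof \
java -jar app.jar

# Render the dumps as a call graph of native allocations.
jeprof --svg "$(which java)" /tmp/jeprof.*.heap > native-profile.svg
```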

1 Answer

> `jcmd VM.native_memory` reports a size that is much smaller than the RSS

I would look at `/proc/{pid}/smaps` to see which regions are resident. You can have a lot of virtual memory used by thread stacks, which are typically not resident.
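
For example, a quick sketch (substitute the actual pid):

```sh
# Sum the resident size over all mappings; this should roughly match
# the RSS shown in /proc/<pid>/status.
awk '/^Rss:/ { sum += $2 } END { print sum " kB" }' /proc/<pid>/smaps

# Print the header of every mapping with more than ~10 MB resident.
awk '/^[0-9a-f]+-[0-9a-f]+ / { header = $0 }
     /^Rss:/ && $2 >= 10240  { print $2 " kB  " header }' /proc/<pid>/smaps
```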

If the Java program has allocated a lot of off-heap memory, you will have `DirectByteBuffer` objects wrapping each memory region.

Peter Lawrey
  • Interesting. All I/O in the application is performed via `DirectByteBuffer`. But I thought it was counted by `jcmd VM.native_memory`. – Some Name Dec 27 '18 at 17:31
  • @SomeName yes, but this doesn't include the Java heap, how much is that? – Peter Lawrey Dec 27 '18 at 19:27
  • It turned out that the direct memory is included in the `Internal` section. Currently I have `Internal (reserved=49394KB, committed=49394KB) Java Heap (reserved=131072KB, committed=42496KB)`, which is pretty reasonable. Probably I need to analyze a binary dump of these anonymous regions to understand what they are (see the sketch below)... – Some Name Dec 28 '18 at 04:48
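
As a follow-up to the NMT discussion above, a sketch of how to watch which NMT category grows over time, assuming the JVM was started with tracking enabled (`<pid>` is a placeholder):

```sh
# Start the JVM with native memory tracking enabled.
java -XX:NativeMemoryTracking=summary -jar app.jar

# In another shell: take a baseline, wait, then diff against it to see
# which category (Java Heap, Internal, Thread, ...) is growing.
jcmd <pid> VM.native_memory baseline
sleep 300
jcmd <pid> VM.native_memory summary.diff
```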