
The JDK is HotSpot 8u45.

I have been investigating the native memory of my Java process. The native memory consumes even more space than the heap, but there are many native memory blocks that confuse me. For example, here is the output of pmap -x (columns: Address, Kbytes, RSS, Dirty, Mode, Mapping):

00007f8128000000   65508   25204   25204 rw---    [ anon ]
00007f812bff9000      28       0       0 -----    [ anon ]
00007f812c000000   65508   24768   24768 rw---    [ anon ]
00007f812fff9000      28       0       0 -----    [ anon ]
00007f8130000000   65508   25532   25532 rw---    [ anon ]
00007f8133ff9000      28       0       0 -----    [ anon ]
00007f8134000000   65524   22764   22764 rw---    [ anon ]
00007f8137ffd000      12       0       0 -----    [ anon ]
00007f8138000000   65508   26456   26456 rw---    [ anon ]
00007f813bff9000      28       0       0 -----    [ anon ]
00007f813c000000   65508   23572   23572 rw---    [ anon ]
00007f813fff9000      28       0       0 -----    [ anon ]
00007f8140000000   65520   23208   23208 rw---    [ anon ]
00007f8143ffc000      16       0       0 -----    [ anon ]
00007f8144000000   65512   23164   23164 rw---    [ anon ]
00007f8147ffa000      24       0       0 -----    [ anon ]
00007f8148000000   65516   23416   23416 rw---    [ anon ]
00007f814bffb000      20       0       0 -----    [ anon ]
00007f814c000000   65508   23404   23404 rw---    [ anon ]
00007f814fff9000      28       0       0 -----    [ anon ]
00007f8150000000   65512   24620   24620 rw---    [ anon ]
00007f8153ffa000      24       0       0 -----    [ anon ]
00007f8154000000   65536   23976   23976 rw---    [ anon ]
00007f8158000000   65508   23652   23652 rw---    [ anon ]
00007f815bff9000      28       0       0 -----    [ anon ]
00007f815c000000   65508   23164   23164 rw---    [ anon ]
00007f815fff9000      28       0       0 -----    [ anon ]
00007f8160000000   65508   23344   23344 rw---    [ anon ]
00007f8163ff9000      28       0       0 -----    [ anon ]
00007f8164000000   65508   24052   24052 rw---    [ anon ]
00007f8167ff9000      28       0       0 -----    [ anon ]
00007f8168000000  131052   48608   48608 rw---    [ anon ]
00007f816fffb000      20       0       0 -----    [ anon ]
00007f8170000000   65516   23056   23056 rw---    [ anon ]
00007f8173ffb000      20       0       0 -----    [ anon ]
00007f8174000000   65516   26860   26860 rw---    [ anon ]
00007f8177ffb000      20       0       0 -----    [ anon ]
00007f8178000000   65508   23360   23360 rw---    [ anon ]
00007f817bff9000      28       0       0 -----    [ anon ]
00007f817c000000   65536   24856   24856 rw---    [ anon ]
00007f8180000000   65512   23272   23272 rw---    [ anon ]
00007f8183ffa000      24       0       0 -----    [ anon ]
00007f8184000000   65508   23688   23688 rw---    [ anon ]
00007f8187ff9000      28       0       0 -----    [ anon ]
00007f8188000000   65512   24024   24024 rw---    [ anon ]
00007f818bffa000      24       0       0 -----    [ anon ]
00007f818c000000   65508   25020   25020 rw---    [ anon ]
00007f818fff9000      28       0       0 -----    [ anon ]
00007f8190000000   65512   22868   22868 rw---    [ anon ]
00007f8193ffa000      24       0       0 -----    [ anon ]
00007f8194000000   65508   24156   24156 rw---    [ anon ]
00007f8197ff9000      28       0       0 -----    [ anon ]
00007f8198000000   65508   23684   23684 rw---    [ anon ]

There are many blocks that each occupy about 64 MB.

I used jcmd <pid> VM.native_memory detail to track these memory blocks. However, I cannot match these blocks to any of the memory ranges listed in the jcmd output.

Furthermore, I came across an article, "Java 8 and Virtual Memory on Linux", which mentions the arena effect of glibc malloc. However, these blocks seem different from glibc's per-thread malloc arenas, because 1. the mapping mode is rw---, not -----, and 2. the arena effect mainly inflates virtual memory; it cannot explain this much RSS.

I used gdb to dump the contents of some of these blocks:

dump binary memory mem.bin <start_addr> <end_addr>

[screenshots of the hex dumps: mem.bin.1, mem.bin.2, mem.bin.3, mem.bin.4]

There are about 30 blocks like the ones shown in the screenshots.

After some days, I used Google perftools to track native heap allocations and found this: [profiler output screenshot]

It shows that zip inflate consumes nearly 2 GB of memory. I guess it may be related to some compilation issue.
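For illustration, my understanding is that the typical pattern behind such Inflater-related native memory looks roughly like the following (a hypothetical sketch; the class and resource names are made up, not taken from my code):

import java.io.IOException;
import java.io.InputStream;
import java.util.zip.ZipInputStream;

public class InflaterLeakSketch {

    // Leaky version: the ZipInputStream (and its internal java.util.zip.Inflater)
    // is never closed, so the Inflater's native buffers stay allocated until
    // finalization, which may happen much later or effectively never under load.
    static void leaky() throws IOException {
        InputStream raw = InflaterLeakSketch.class.getResourceAsStream("/some-resource.zip"); // hypothetical resource
        ZipInputStream zin = new ZipInputStream(raw);
        zin.getNextEntry();
        // ... read the entry, but forget to call zin.close()
    }

    // Fixed version: try-with-resources closes the stream, which calls
    // Inflater.end() and releases the native buffers promptly.
    static void fixed() throws IOException {
        try (InputStream raw = InflaterLeakSketch.class.getResourceAsStream("/some-resource.zip");
             ZipInputStream zin = new ZipInputStream(raw)) {
            zin.getNextEntry();
            // ... read the entry
        }
    }
}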

I have read this issue: https://bugs.openjdk.java.net/browse/JDK-8164293. Is it related to my problem?

So how can I track down the source of these memory blocks?

chenatu
  • What kind of Java process is this? Are you doing any interop? – Jorn Vernee May 31 '17 at 12:39
  • @JornVernee This process is interop process of thrift and jetty – chenatu May 31 '17 at 12:58
  • I mean native interop. The idea I had is that memory allocated outside of the VM will not show up through `jcmd`, but will show up with `pmap`. – Jorn Vernee May 31 '17 at 13:04
  • @JornVernee Sorry I don't understand what is native interop. Can you please give me some example? – chenatu May 31 '17 at 13:07
  • For instance when you call a function from a native `.so`/`.dll` from Java. I don't know those libraries well enough to say if they do that though. – Jorn Vernee May 31 '17 at 13:11
  • That is a good point. I will track these down and reply to you. @JornVernee – chenatu May 31 '17 at 13:13
  • you could try [NMT](https://stackoverflow.com/q/31173374/1362755) to compare with pmap -x. Additionally look at a heap dump for direct byte buffers. – the8472 May 31 '17 at 22:38
  • NMT is not enough. pmap -x shows many memory blocks outside the ranges listed in NMT by jcmd @the8472 – chenatu Jun 01 '17 at 02:25
  • That is unrelated to compilation for sure. As I've said, ZipInputStream/JarInputStream is quite a common source of such leaks. E.g. an application calls `Class.getResourceAsStream` but does not close the resulting stream. Create a heap dump to see who holds `java.util.zip.Inflater` objects. – apangin Jun 03 '17 at 13:11

1 Answer


Use jemalloc or tcmalloc - they both have a built-in allocation profiler that will help identify the source of native allocations.

A Java process may use too much native memory for many reasons. Popular reasons are:

  • Direct ByteBuffers
  • Memory allocated by Unsafe.allocateMemory
  • Unclosed resources (e.g. ZipInputStream)
  • Other native libraries

Note that NativeMemoryTracking will not show memory consumed by native libraries.
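For illustration, here is a minimal sketch of what the first three sources look like in code (the sizes and names are made up, not taken from any particular application):

import java.lang.reflect.Field;
import java.nio.ByteBuffer;
import java.util.zip.Deflater;

public class NativeMemorySources {
    public static void main(String[] args) throws Exception {
        // 1. Direct ByteBuffer: 64 MB allocated outside the Java heap,
        //    limited by -XX:MaxDirectMemorySize and visible in a heap dump
        //    as java.nio.DirectByteBuffer instances.
        ByteBuffer direct = ByteBuffer.allocateDirect(64 * 1024 * 1024);

        // 2. Unsafe.allocateMemory: a raw native allocation that is never
        //    released unless freeMemory() is called explicitly.
        Field f = sun.misc.Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        sun.misc.Unsafe unsafe = (sun.misc.Unsafe) f.get(null);
        long address = unsafe.allocateMemory(64 * 1024 * 1024);
        // unsafe.freeMemory(address);   // forgetting this leaks native memory

        // 3. Unclosed zip resources: each Deflater/Inflater holds native
        //    buffers until end() is called (closing the owning stream does it).
        Deflater deflater = new Deflater();
        // deflater.end();               // forgetting this keeps native buffers alive
    }
}

Direct buffers and Inflater/Deflater instances can be seen in a heap dump, while Unsafe allocations and allocations made by native libraries typically have no Java-side object to look at - that is where the allocator-level profiling in jemalloc/tcmalloc helps.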

apangin