I have a Java application running on Java 8 inside a Docker container. The process starts a Jetty 9 server and deploys a web application. The following JVM options are passed: -Xms768m -Xmx768m.
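For completeness: native memory tracking is enabled at startup (otherwise the jcmd VM.native_memory output below would not be available). The launch line looks roughly like this; start.jar stands in for our actual Jetty startup:

$ java -server -Xms768m -Xmx768m -XX:NativeMemoryTracking=summary -jar start.jar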

Recently I noticed that the process consumes a lot of memory:

$ ps aux 1
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
app          1  0.1 48.9 5268992 2989492 ?     Ssl  Sep23   4:47 java -server ...

$ pmap -x 1
Address           Kbytes     RSS   Dirty Mode  Mapping
...
total kB         5280504 2994384 2980776

$ jcmd 1 VM.native_memory summary
1:

Native Memory Tracking:

Total: reserved=1378791KB, committed=1049931KB
-                 Java Heap (reserved=786432KB, committed=786432KB)
                            (mmap: reserved=786432KB, committed=786432KB) 

-                     Class (reserved=220113KB, committed=101073KB)
                            (classes #17246)
                            (malloc=7121KB #25927) 
                            (mmap: reserved=212992KB, committed=93952KB) 

-                    Thread (reserved=47684KB, committed=47684KB)
                            (thread #47)
                            (stack: reserved=47288KB, committed=47288KB)
                            (malloc=150KB #236) 
                            (arena=246KB #92)

-                      Code (reserved=257980KB, committed=48160KB)
                            (malloc=8380KB #11150) 
                            (mmap: reserved=249600KB, committed=39780KB) 

-                        GC (reserved=34513KB, committed=34513KB)
                            (malloc=5777KB #280) 
                            (mmap: reserved=28736KB, committed=28736KB) 

-                  Compiler (reserved=276KB, committed=276KB)
                            (malloc=146KB #398) 
                            (arena=131KB #3)

-                  Internal (reserved=8247KB, committed=8247KB)
                            (malloc=8215KB #20172) 
                            (mmap: reserved=32KB, committed=32KB) 

-                    Symbol (reserved=19338KB, committed=19338KB)
                            (malloc=16805KB #184025) 
                            (arena=2533KB #1)

-    Native Memory Tracking (reserved=4019KB, committed=4019KB)
                            (malloc=186KB #2933) 
                            (tracking overhead=3833KB)

-               Arena Chunk (reserved=187KB, committed=187KB)
                            (malloc=187KB) 

As you can see, there is a huge difference between the RSS (2.8 GB) and what is actually shown by the JVM's native memory statistics (1.0 GB committed, 1.3 GB reserved).

Why is there such a huge difference? I understand that RSS also covers the memory mapped for shared libraries, but after analyzing the verbose pmap output I realized that shared libraries are not the issue; the memory is consumed by mappings labeled [ anon ]. Why does the JVM allocate so many anonymous memory blocks?

While searching I found the following topic: Why does a JVM report more committed memory than the linux process resident set size? However, the case described there is the opposite of mine: RSS shows less memory usage than the JVM stats. Here RSS shows more, and I can't figure out the reason.
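For reference, this is how I inspected the anonymous mappings: sorting pmap's extended output by the RSS column (the third field) shows the largest [ anon ] blocks first:

$ pmap -x 1 | grep anon | sort -k3 -n -r | head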

Konrad
  • I have the same problem: a Java process with Xmx 1.5g (Oracle JVM) consumes around 3.1 GB according to top, while the same application run on OpenJDK consumes around 2.3 GB (still higher than the allocated Xmx). I have not found the answer yet. Let me know if you find a solution. Thanks – Benak Raj Dec 09 '16 at 11:23

3 Answers


I was facing a similar issue with one of our Apache Spark jobs, where we submitted our application as a fat JAR. After analyzing thread dumps we figured out that Hibernate was the culprit: we loaded Hibernate classes on application startup, which ended up using java.util.zip.Inflater.inflateBytes to read the Hibernate class files, and this overshot our native resident memory usage by almost 1.5 GB. Here is the bug raised against Hibernate for this issue: https://hibernate.atlassian.net/browse/HHH-10938?attachmentOrder=desc. The patch suggested in the comments worked for us. Hope this helps.
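For context, each java.util.zip.Inflater owns a native zlib buffer that lives outside the Java heap (and therefore outside -Xmx), and that buffer is released promptly only when end() is called. Below is a minimal sketch of the effect; the class name, sizes and loop count are illustrative and unrelated to the actual Hibernate code:

import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class InflaterNativeMemoryDemo {
    public static void main(String[] args) throws DataFormatException {
        byte[] compressed = compress(new byte[64 * 1024]);
        for (int i = 0; i < 100_000; i++) {
            // Each Inflater allocates a native zlib stream outside the heap.
            Inflater inflater = new Inflater();
            inflater.setInput(compressed);
            inflater.inflate(new byte[64 * 1024]);
            // Without this call the native buffer is freed only when the
            // finalizer eventually runs, so RSS grows while the Java heap
            // usage stays flat.
            inflater.end();
        }
    }

    // Helper producing something to inflate: 64 KB of zeros, deflated.
    private static byte[] compress(byte[] data) {
        Deflater deflater = new Deflater();
        deflater.setInput(data);
        deflater.finish();
        byte[] buf = new byte[data.length + 64];
        int n = deflater.deflate(buf);
        deflater.end();
        byte[] out = new byte[n];
        System.arraycopy(buf, 0, out, 0, n);
        return out;
    }
}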

Benak Raj
  • We are using Jetty and deploying a "fat" WAR with all dependencies included. If we deploy a "thin" JAR and copy all the dependencies into the Jetty lib directory, it solves the problem. It looks like most web application servers have a similar issue. – Konrad Dec 12 '16 at 05:56
  • @Konrad Could you please explain what you mean by a thin JAR and copying all dependencies? I am using Jetty and experiencing high non-heap memory usage. – cppcoder Jul 27 '17 at 08:50
  • By a thin JAR I mean one that contains only compiled classes and resources. All other dependencies (all the other JAR files) are copied to the Jetty lib directory. In that case, when Jetty starts the application, it does not have to look for dependencies inside the application JAR/WAR and does not have to extract them to some kind of temp dir; it just goes to its own lib directory, and the dependencies are there. – Konrad Jul 27 '17 at 15:02

After a deep analysis following this article: https://gdstechnology.blog.gov.uk/2015/12/11/using-jemalloc-to-get-to-the-bottom-of-a-memory-leak/ we found out that the problem is related to memory allocation by java.util.zip.Inflater.
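For anyone repeating this, the approach from the article is roughly: preload jemalloc with allocation profiling enabled, run the application as usual, then render the sampled allocation call graph with jeprof. A sketch; the library path and the sampling/interval parameters are illustrative, not our exact values:

$ export LD_PRELOAD=/usr/local/lib/libjemalloc.so
$ export MALLOC_CONF=prof:true,lg_prof_interval:30,lg_prof_sample:17
$ java -server -Xms768m -Xmx768m ...      # start the application as usual
$ jeprof --show_bytes --gif $(which java) jeprof.*.heap > profile.gif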

We still need to find out what calls java.util.zip.Inflater.inflateBytes and look for possible solutions.
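One low-tech way to find the callers is to take a few thread dumps while the memory is growing; in a stack trace the calling frames appear directly below the inflateBytes frame:

$ jstack 1 | grep -A 15 'java.util.zip.Inflater.inflateBytes'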

Konrad

NMT only tracks part of the memory managed by the JVM; it does not track memory used by native third-party libraries or memory-mapped/direct byte buffers.
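To check the mapped/direct buffer pools without attaching a GUI tool, the same counters VisualVM shows are available in-process via JMX. A minimal sketch; the 256 MB allocation is only there to make the numbers non-trivial:

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;
import java.util.List;

public class DirectBufferStats {
    public static void main(String[] args) {
        // Off-heap allocation: counted in RSS, but not part of the Java heap.
        ByteBuffer buffer = ByteBuffer.allocateDirect(256 * 1024 * 1024);

        // The "direct" and "mapped" pools are what VisualVM monitors.
        List<BufferPoolMXBean> pools =
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
        for (BufferPoolMXBean pool : pools) {
            System.out.printf("%s: count=%d used=%d capacity=%d%n",
                    pool.getName(), pool.getCount(),
                    pool.getMemoryUsed(), pool.getTotalCapacity());
        }
    }
}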

the8472
  • Can you recommend any easy way to have a look at that? I took a quick look at the process memory dump but didn't find anything suspicious. – Konrad Sep 25 '16 at 09:57
  • visualvm can monitor DBB usage. https://blogs.oracle.com/alanb/entry/monitoring_direct_buffers – the8472 Sep 25 '16 at 10:28
  • I cannot see any significant usage by NIO mapped/direct byte buffers. Here you can see the direct buffer memory usage: [direct](http://s16.postimg.org/zcdhmk00l/nio_buffers.png). For mapped buffers it shows 0. – Konrad Sep 26 '16 at 10:50
  • Do you have any other ideas what can be checked? – Konrad Sep 28 '16 at 19:23
  • maybe you should post your pmap -x output – the8472 Sep 28 '16 at 19:35