
I am experiencing a JVM "could not allocate memory" issue in Java

OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000734880000, 880279552, 0) failed; error='Cannot allocate memory' (errno=12)

There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 880279552 bytes for committing reserved memory.
An error report file with more information is saved as:
/home/ec2-user/tools/apache/apache-tomcat-9.0.6/bin/hs_err_pid23366.log
java.lang.NullPointerException

Here are my memory stats (from /proc/meminfo):

MemTotal:        8166744 kB
MemFree:         3788780 kB
MemAvailable:    3861816 kB
Buffers:               0 kB
Cached:           286536 kB
SwapCached:            0 kB
Active:          4030520 kB
Inactive:         182596 kB
Active(anon):    3926808 kB
Inactive(anon):    24892 kB
Active(file):     103712 kB
Inactive(file):   157704 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:               192 kB
Writeback:             0 kB
AnonPages:       3926652 kB
Mapped:            72652 kB
Shmem:             25120 kB
Slab:             100300 kB
SReclaimable:      60032 kB
SUnreclaim:        40268 kB
KernelStack:        5616 kB
PageTables:        21632 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     4083372 kB
Committed_AS:    5723980 kB
VmallocTotal:   34359738367 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:      286720 kB
DirectMap2M:     8101888 kB

This may seem like a duplicate of other JVM out-of-heap-memory questions. The closest that comes to this issue is this thread.

However, the difference is that in the linked thread the user had less free memory and the JVM was trying to allocate more than that (an open-and-shut case).

In my case the JVM is trying to allocate 880279552 bytes (~0.8 GB) and my free memory (above) is ~3.7 GB. What could be the reason that the JVM is unable to allocate it, even though it has almost four times that much memory free? A side question: why is it trying to allocate 0.8 GB in one go, and is this normal? Is there a way to dig deeper into such allocations with a tool? Can anyone point me to a resource to understand the memory stats above better?
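For the side question about tooling, one approach I am considering is sketched below. It only uses the stock OpenJDK 8 tools (Native Memory Tracking, jcmd, jstat); the flag is not in my current setenv.sh, and <tomcat-pid> is a placeholder for whatever PID the Tomcat JVM gets.

# Assumption: Native Memory Tracking is not enabled yet; add this flag to JAVA_OPTS in setenv.sh
-XX:NativeMemoryTracking=summary

# While Tomcat is running, ask the JVM for a native-memory breakdown
# by area (heap, metaspace, threads, code cache, internal, ...)
jcmd <tomcat-pid> VM.native_memory summary

# Heap and metaspace utilisation plus GC counters, sampled every 5 seconds
jstat -gcutil <tomcat-pid> 5000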

Here is my JVM config in setenv.sh (it's an 8 GB RAM machine):

export CATALINA_HOME="/home/ec2-user/tools/apache/apache-tomcat-9.0.6/"
export JAVA_OPTS="-Xms2048m -Xmx4096m -DJDBC_CONNECTION_STRING=jdbc:mysql://localhost:3306/databasename?autoReconnect=true -DJDBC_DATABASER=dbname-DJDBC_USER=username-DJDBC_PASSWORD=password-DAPPLICATION_PRO$
export JAVA_HOME="/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.201.b09-0.amzn2.x86_64"

After the crash, here are the top memory-consuming services: Java is using 3.1 GB and MySQL 0.5 GB (again consistent with the memory stats above, which showed roughly 4 GB free/available).

  PID  PPID CMD                         %MEM %CPU
 4890     1 /usr/lib/jvm/java-1.8.0-ope 38.1  1.0
23204     1 /usr/sbin/mysqld --daemoniz  7.1  0.9
26056  3484 node /home/ec2-user/tools/j  1.3  111
 3548  3484 node /home/ec2-user/tools/j  1.1  0.3
 3484     1 PM2 v3.5.0: God Daemon (/ro  0.7  0.6
26067 26056 /root/.nvm/versions/node/v1  0.3  7.5
26074 26067 /root/.nvm/versions/node/v1  0.3  7.5
 3610  3548 /root/.nvm/versions/node/v1  0.3  0.0
 3624  3610 /root/.nvm/versions/node/v1  0.3  0.0

Any help in understanding this is appreciated.

  • Maybe this helps: https://stackoverflow.com/questions/14763079/what-are-the-xms-and-xmx-parameters-when-starting-jvm – vlumi Jul 15 '19 at 13:22
  • No, it doesn't. The thread you shared only explains Xms and Xmx. I earlier got a heap issue when Xms/Xmx were 1024 and 2048 respectively. This time they were 2048 and 4096, double the previous values! There's something else going on here. – veritas Jul 15 '19 at 13:41
  • https://serverfault.com/questions/853561/native-memory-allocation-mmap-failed-to-map-bytes-for-committing-reserved-memo Potentially it is the direct memory that's the issue. Does it help to include '-XX:MaxDirectMemorySize' as an option? Also I find that jmap and jstat are useful tools for monitoring memory usage – user27158 Jul 15 '19 at 14:19
  • @user2181576 With Xmx you are capping the available heap memory for your process at 4 GB, so it won't use more memory even if more physical memory is available. How much memory was your process using at the time of the crash? Anywhere close to the heap limit? – vlumi Jul 16 '19 at 00:07
  • Thanks vlumi. My question is, why didn't Java start its GC to do the needful, i.e. reclaim memory that, per the stats above, is allocated to my heap? If the heap grew, it could also have cleaned the old-gen space. – veritas Jul 16 '19 at 10:22
  • See also https://stackoverflow.com/questions/26382989/openjdk-client-vm-cannot-allocate-memory – Raedwald Aug 10 '19 at 11:41

1 Answer


@vlumi and others tried to guide me in the right direction of "no direct memory available". However, I additionally began to experience other issues: "out of heap memory" and "out of memory".

The problem was the following: after re-deployment of the WAR on the Tomcat server, existing threads were not killed and their references remained, so the GC wasn't able to clean them up. The metaspace (earlier called PermGen) initially used 80 MB and grew to 125 MB as the application ran. After a redeployment, instead of returning to 80 MB, the metaspace shown in the VisualVM profiler was 170 MB, and after the next redeployment 210 MB. This clearly indicated that after another dozen redeployments (this is a test server), the JVM would run out of memory to allocate and throw an out-of-memory error.
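For anyone who wants to confirm this from the command line rather than in VisualVM, a rough sketch using the standard JDK 8 tools (not what we originally used, and <tomcat-pid> is again a placeholder):

# Metaspace capacity/used are the MC and MU columns (in KB); watch them grow
# across redeployments, sampled every 10 seconds
jstat -gc <tomcat-pid> 10000

# Per-classloader statistics; duplicate webapp classloaders that survive a
# redeploy are the leaked references the GC cannot collect
jmap -clstats <tomcat-pid>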

To fix this, we added a tomcat7:shutdown (or a restart via shell script) to the Jenkins job. Stopping and starting Tomcat brings the metaspace back to its original size. Others have instead killed the java process outright so that all threads are killed.
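Roughly, the restart step in the Jenkins job looks like the sketch below. The CATALINA_HOME path matches the question; the WAR name and the sleep duration are placeholders for our setup, not fixed values.

#!/bin/sh
# Restart Tomcat instead of hot-redeploying, so leaked threads/classloaders die
CATALINA_HOME=/home/ec2-user/tools/apache/apache-tomcat-9.0.6

"$CATALINA_HOME"/bin/shutdown.sh
sleep 15                                            # let in-flight requests finish
# If the JVM is still alive, kill it so nothing from the old deployment survives
pkill -f org.apache.catalina.startup.Bootstrap || true

cp myapp.war "$CATALINA_HOME"/webapps/              # myapp.war is a placeholder
"$CATALINA_HOME"/bin/startup.sh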

Thank you, everyone, for contributing. This is the link to the website that helped me the most in understanding what's really going on.
