
I have been experiencing Java VM crashes using the G1 garbage collector. We get hs_err_pid.log files generated with the following signatures:

#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 32744 bytes for ChunkPool::allocate
# Possible reasons:
#   The system is out of physical RAM or swap space
#   In 32 bit mode, the process size limit was hit
# Possible solutions:
#   Reduce memory load on the system
#   Increase physical memory or swap space
#   Check if swap backing store is full
#   Use 64 bit Java on a 64 bit OS
#   Decrease Java heap size (-Xmx/-Xms)
#   Decrease number of Java threads
#   Decrease Java thread stack sizes (-Xss)
#   Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.

We are currently monitoring memory availability and trying to pre-empt out-of-memory errors using Runtime.maxMemory, freeMemory and totalMemory. The idea is that we can pause operations and warn the user that they need to allocate more memory. But we are seeing the above JVM crashes even when Runtime.freeMemory is reporting plenty of free memory.
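
For reference, the check we run is roughly like the sketch below (simplified, with an illustrative threshold; not our exact code):

    // Simplified sketch of our periodic heap check (threshold value illustrative).
    Runtime rt = Runtime.getRuntime();
    long max = rt.maxMemory();                       // the -Xmx ceiling
    long used = rt.totalMemory() - rt.freeMemory();  // heap currently in use
    if ((double) used / max > 0.9) {
        // pause operations and warn the user to allocate more memory
    }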

Is there any way, as a Java desktop application, that we can avoid this happening and insulate ourselves from memory load on the system? For example, is there a combination of start-up options that would help, e.g. would setting -Xms and -Xmx to the same value? At the moment we only set -Xmx.

I am keen to avoid the poor user experience of the JVM silently crashing. Ideally we would like to detect when the JVM is getting close to running out of memory and take appropriate action.
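
For the heap side we could hook the standard MemoryPoolMXBean usage-threshold notifications, roughly as in the sketch below (class name and threshold fraction are illustrative), but as far as I can tell that would not have caught the native malloc failure above:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryNotificationInfo;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryType;
    import javax.management.NotificationEmitter;

    public class HeapPressureWatcher {
        // Ask the JVM to notify us when any heap pool crosses the given fraction of its max.
        public static void install(double fraction) {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if (pool.getType() == MemoryType.HEAP && pool.isUsageThresholdSupported()) {
                    long max = pool.getUsage().getMax();
                    if (max > 0) {
                        pool.setUsageThreshold((long) (max * fraction));
                    }
                }
            }
            MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
            NotificationEmitter emitter = (NotificationEmitter) memoryBean;
            emitter.addNotificationListener((notification, handback) -> {
                if (MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED
                        .equals(notification.getType())) {
                    // Pause operations and warn the user here.
                    System.err.println("Heap usage threshold exceeded: " + notification.getMessage());
                }
            }, null, null);
        }
    }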

Here is a bit more information taken from the hs_err_pid.log for one example crash. This was using -Xmx4g, with 12 GB total physical memory and 1.79 GB free physical memory.

Native memory allocation (malloc) failed to allocate 32744 bytes for ChunkPool::allocate

Heap:
 garbage-first heap   total 4194304K, used 3140260K [0x00000006c0000000, 0x00000006c0108000, 0x00000007c0000000)
  region size 1024K, 1526 young (1562624K), 26 survivors (26624K)
 Metaspace       used 78244K, capacity 95308K, committed 96328K, reserved 1122304K
  class space    used 11319K, capacity 22311K, committed 23112K, reserved 1048576K

Memory: 4k page, physical 12269248k(1790928k free), swap 37973052k(362096k free)
  • Would it be possible to list a few more details (total physical RAM, OS, other major memory-hungry processes running, and your VM arguments to start the JVM)? – dharam Mar 16 '18 at 09:54
  • I would also recommend looking at the other similar questions on Stack Overflow; there are plenty of them: https://stackoverflow.com/search?q=There+is+insufficient+memory+for+the+Java+Runtime+Environment+to+continue. – dharam Mar 16 '18 at 09:56
  • What JVM and what version of it are you using? – Erwin Bolwidt Mar 16 '18 at 10:02
  • This is the JVM running out of memory. As noted, this can be *because* too much heap is reserved. If you have plenty of free heap, try shrinking it. – Peter Lawrey Mar 17 '18 at 12:40
  • The log lists many things to investigate, you should list what you have ruled out and based on which data to avoid redundant work here. – the8472 Mar 17 '18 at 15:14
  • @dharam I'll give you one example - 12 GB total physical RAM, Windows 10. VM args: -XX:+UseG1GC, -XX:+UseStringDeduplication. Java version 1.8.0_161. Free memory was 1.79 GB and it failed trying to allocate 32 KB. – Danny Gonzalez Mar 20 '18 at 11:06

1 Answer


would setting -Xms and -Xmx to the same value help us here

Probably not. Your JVM heap space is not the issue, with the caveat that if you've allowed more heap space than the OS can provide then you're going to hit problems.

The key part of the error message is:

Native memory allocation (malloc) failed to allocate 32744 bytes for ChunkPool::allocate

malloc() will fail when the OS cannot allocate any more native memory to the JVM process. In your example crash the machine was down to about 1.79 GB of free physical RAM and only ~350 MB of free swap, so the whole system was under memory pressure, not just your heap. Things to check:

  1. Monitor and record the overall machine memory usage, including swap, while your app is running (one way to read these figures from inside the JVM is sketched after this list).

  2. Check if there are admin-imposed limits on your user's process size using ulimit -m. Shared servers often have limits imposed to stop one user hogging all the resources.

  3. If running in a container then both the above apply but you'll also need to check resource limits imposed by the container management technology (e.g. Kubernetes).
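
For point 1, if you want the machine-level numbers from inside the JVM itself rather than from OS tooling, the com.sun.management extension of the standard OperatingSystemMXBean exposes physical memory and swap figures on HotSpot-based JDK 8 builds like the 1.8.0_161 you mention. It is a non-standard API and the exact figures are OS-dependent, so treat this as a sketch:

    import com.sun.management.OperatingSystemMXBean;
    import java.lang.management.ManagementFactory;

    public class MachineMemoryProbe {
        public static void main(String[] args) {
            // Non-standard (com.sun.management) but present on HotSpot-based JDK 8.
            OperatingSystemMXBean os = (OperatingSystemMXBean)
                    ManagementFactory.getOperatingSystemMXBean();
            long mb = 1024 * 1024;
            System.out.printf("Physical memory: %d MB free of %d MB%n",
                    os.getFreePhysicalMemorySize() / mb,
                    os.getTotalPhysicalMemorySize() / mb);
            System.out.printf("Swap: %d MB free of %d MB%n",
                    os.getFreeSwapSpaceSize() / mb,
                    os.getTotalSwapSpaceSize() / mb);
            System.out.printf("Committed virtual memory for this process: %d MB%n",
                    os.getCommittedVirtualMemorySize() / mb);
        }
    }

Logging these alongside your existing Runtime figures should show whether it is the machine, rather than the Java heap, that is running out when the crashes happen.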

Andy Brown
  • Thanks Andy. At the moment we monitor memory usage using java.lang.Runtime methods. What do you suggest for monitoring overall machine memory usage and swap? – Danny Gonzalez Mar 20 '18 at 11:01