
We had a production Tomcat crash which generated an hs_err_pid file. This was the information in it:

# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 19327352832 bytes for committing reserved memory.
# Possible reasons:
#   The system is out of physical RAM or swap space
#   In 32 bit mode, the process size limit was hit
# Possible solutions:
#   Reduce memory load on the system
#   Increase physical memory or swap space
#   Check if swap backing store is full
#   Use 64 bit Java on a 64 bit OS
#   Decrease Java heap size (-Xmx/-Xms)
#   Decrease number of Java threads
#   Decrease Java thread stack sizes (-Xss)
#   Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
#  Out of Memory Error (os_linux.cpp:2673), pid=12940, tid=140179071637248

As per my understanding, we were to check whether the system was out of RAM or swap space. But that was not the case. Here is what the top command showed:

 top - 10:14:58 up  4:44,  2 users,  load average: 0.10, 0.14, 0.43
 Tasks: 2737 total,   0 running, 2737 sleeping,   0 stopped,   0 zombie
 Cpu(s):  5.9%us,  1.4%sy,  0.2%ni, 92.2%id,  0.1%wa,  0.0%hi,  0.3%si,  0.0%st
 Mem:  32130824k total, 18671312k used, 13459512k free,    22892k buffers
 Swap:  4194300k total,        0k used,  4194300k free,   180232k cached

Tomcat was consuming 17 GB out of its total allocated 28 GB, and the server had 32 GB of RAM. When I looked for similar issues, most of them occurred because the total -Xms allocated to the JVM was more than what the server had. There were also no other OS processes running that were consuming much memory. Is there any other reason that could justify this hs_err_pid log file?
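For reference, the numbers from the crash log and the top output can be compared directly. A minimal sketch (all figures copied verbatim from the outputs above; the class name is just for illustration):

```java
// Sanity check: can the 18 GiB mmap request from the hs_err_pid log
// be satisfied by the free RAM + free swap reported by top?
public class MemCheck {
    public static void main(String[] args) {
        long requestedBytes = 19_327_352_832L; // from the hs_err_pid file
        long freeRamKb      = 13_459_512L;     // "free" column of top's Mem line
        long freeSwapKb     = 4_194_300L;      // "free" column of top's Swap line

        double gib = 1024.0 * 1024 * 1024;
        double requestedGib = requestedBytes / gib;                  // exactly 18.0 GiB
        double availableGib = (freeRamKb + freeSwapKb) * 1024 / gib; // about 16.8 GiB

        System.out.printf("requested: %.1f GiB, free RAM + swap: %.1f GiB%n",
                requestedGib, availableGib);
        System.out.println(requestedGib > availableGib
                ? "request cannot be satisfied"
                : "request fits");
    }
}
```

So even counting the untouched swap, the machine had roughly 16.8 GiB available when the JVM asked the kernel to commit 18 GiB, which is consistent with the allocation failing despite the process itself being within its 28 GB limit.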

  • Similar to this problem https://stackoverflow.com/questions/37389261/insufficient-memory-for-the-java-runtime-environment-to-continue-though-ram-is-s – Glim Apr 27 '18 at 21:41
  • @glim Though it seems similar, the solution mentioned over there doesn't apply to my problem – badass_programmer Apr 27 '18 at 21:51
  • The error message says JVM has failed to allocate 18 GB (most likely, heap). `top` tells there is only 13 GB free. Seems right. – apangin Apr 28 '18 at 00:11
  • The Tomcat process was consuming approx. 18 GB at that moment; its max is 28 GB. Could you be more specific as to why it seems right? It's showing that it needs to map 19327352832 bytes, but again, it was already consuming that much memory – badass_programmer Apr 30 '18 at 02:07
  • 32- or 64-bit JVM? Run `java -version` to see. Fails on startup or later on? What heap parameters were given to the JVM on startup? – Christopher Schultz May 08 '18 at 18:35
  • java version "1.8.0_51" Java(TM) SE Runtime Environment (build 1.8.0_51-b16) Java HotSpot(TM) 64-Bit Server VM (build 25.51-b03, mixed mode) – badass_programmer May 09 '18 at 09:54
  • ...and the answers to the other 2 questions? – Christopher Schultz May 09 '18 at 16:27
  • Failed later. Xmx=28GB, Xms=24GB, NewSize=6GB, SurvivorRatio=4, and we were using incGC; these were a few of the main parameters – badass_programmer May 10 '18 at 05:35
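The Xms/Xmx gap in the last comment matters here: with -Xms24g -Xmx28g the JVM reserves the full 28 GB of address space at startup but only commits (backs with real memory) part of it, committing more later as the heap grows. One reading of the log, consistent with apangin's comment but not confirmed in the original post, is that the failed 18 GB mmap was such a later commit of already-reserved heap. The standard `MemoryMXBean` API makes the reserved/committed/used distinction visible:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Prints the three heap sizes the JVM tracks:
//   init      - initial commit requested at startup (driven by -Xms)
//   committed - memory currently backed by RAM/swap
//   max       - total reserved address space (driven by -Xmx)
// A commit failure like the one in the hs_err log happens when "committed"
// tries to grow toward "max" and the OS cannot supply the pages.
public class HeapUsage {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.printf("init=%d MiB committed=%d MiB max=%d MiB%n",
                heap.getInit() >> 20,
                heap.getCommitted() >> 20,
                heap.getMax() >> 20);
    }
}
```

This also explains why top showed the process at only ~17-18 GB resident while the heap was sized at 28 GB: resident memory reflects committed and touched pages, not the reservation.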

0 Answers