We had a production Tomcat crash which generated an hs_err_pid file. This is the information in it:
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 19327352832 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (os_linux.cpp:2673), pid=12940, tid=140179071637248
As per my understanding, we were supposed to check whether the system was out of RAM or swap space. But that was not the case. Here is what the top command showed:
top - 10:14:58 up 4:44, 2 users, load average: 0.10, 0.14, 0.43
Tasks: 2737 total, 0 running, 2737 sleeping, 0 stopped, 0 zombie
Cpu(s): 5.9%us, 1.4%sy, 0.2%ni, 92.2%id, 0.1%wa, 0.0%hi, 0.3%si, 0.0%st
Mem: 32130824k total, 18671312k used, 13459512k free, 22892k buffers
Swap: 4194300k total, 0k used, 4194300k free, 180232k cached
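To put the numbers from the crash log and from top into the same units, here is a quick throwaway snippet (the class name is just illustrative; the byte value comes from the hs_err_pid file above, the KiB values from the Mem/Swap lines of top, and this is only a unit conversion, not a diagnosis):

public class MemoryFigures {
    public static void main(String[] args) {
        final double GIB = 1024.0 * 1024 * 1024;
        long failedCommitBytes = 19327352832L; // size of the failed mmap from the hs_err_pid file
        long freeRamKiB = 13459512L;           // "free" from the Mem line of top
        long freeSwapKiB = 4194300L;           // "free" from the Swap line of top
        System.out.printf("failed commit: %.1f GiB%n", failedCommitBytes / GIB);  // 18.0 GiB
        System.out.printf("free RAM     : %.1f GiB%n", freeRamKiB * 1024 / GIB);  // ~12.8 GiB
        System.out.printf("free swap    : %.1f GiB%n", freeSwapKiB * 1024 / GIB); // 4.0 GiB
    }
}

So the single allocation that failed works out to exactly 18 GiB.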
Tomcat was consuming 17 GB out of its total allocated 28 GB, and the server had 32 GB of RAM. When I looked for similar issues, most of them were caused by the JVM's -Xms/-Xmx being set higher than the memory the server actually had. There were also no other OS processes running that were consuming significant memory. Is there any other reason that could explain this hs_err_pid log file?
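In case it is relevant, this is a minimal sketch of how I could log the JVM's own view of the heap limits (the class name is illustrative; the Runtime and java.lang.management calls are standard), to confirm what -Xms/-Xmx actually took effect in the Tomcat process:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapReport {
    public static void main(String[] args) {
        final double GIB = 1024.0 * 1024 * 1024;
        Runtime rt = Runtime.getRuntime();
        // -Xmx as seen by the running JVM
        System.out.printf("max heap : %.1f GiB%n", rt.maxMemory() / GIB);
        // committed heap grows from -Xms toward -Xmx as the application runs
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        System.out.printf("committed: %.1f GiB%n", heap.getCommitted() / GIB);
        System.out.printf("used     : %.1f GiB%n", heap.getUsed() / GIB);
    }
}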