
For an update to this question, see below.

I am experiencing a (reproducible, at least for me) JVM crash, not an OutOfMemoryError; the application that crashes is Eclipse 3.6.2. However, looking at the crash log makes me wonder:

#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 65544 bytes for Chunk::new
# Possible reasons:
#   The system is out of physical RAM or swap space
#   In 32-bit mode, the process size limit was hit
# Possible solutions:
#   Reduce memory load on the system
#   Increase physical memory or swap space
#   Check if swap backing store is full
#   Use 64 bit Java on a 64 bit OS
#   Decrease Java heap size (-Xmx/-Xms)
#   Decrease number of Java threads
#   Decrease Java thread stack sizes (-Xss)
#   Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.

Current thread (0x531d6000):  JavaThread "C2 CompilerThread1" daemon 
[_thread_in_native, id=7812, stack(0x53af0000,0x53bf0000)]

Stack: [0x53af0000,0x53bf0000],  sp=0x53bee860,  free space=1018k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [jvm.dll+0x1484aa]
V  [jvm.dll+0x1434fc]
V  [jvm.dll+0x5e6fc]
V  [jvm.dll+0x5e993]
V  [jvm.dll+0x27a571]
V  [jvm.dll+0x258672]
V  [jvm.dll+0x25ed93]
V  [jvm.dll+0x260072]
V  [jvm.dll+0x24e59a]
V  [jvm.dll+0x47edd]
V  [jvm.dll+0x48a6f]
V  [jvm.dll+0x12dcd4]
V  [jvm.dll+0x155a0c]
C  [MSVCR71.dll+0xb381]
C  [kernel32.dll+0xb729]

I am using Windows XP 32-bit SP3 and have 4 GB of RAM. Before starting the application, the Task Manager reported 2 GB free (plus 1 GB of system cache that could be freed as well), so I definitely have enough free RAM.

From startup until the crash, I logged the JVM memory statistics using VisualVM and JConsole, so I have memory consumption figures up to the last moments before the crash.

The statistics show the following allocated memory sizes:

  • HeapSize: 751 MB (used 248 MB)
  • Non-heap size (PermGen & code cache): 150 MB (used 95 MB)
  • Size of memory management areas (Eden space, old generation, etc.): 350 MB
  • Thread stack sizes: 17 MB (based on Oracle's figures and the fact that 51 threads are running)
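
For reference, a minimal sketch of how comparable heap / non-heap figures could also be polled programmatically via the standard java.lang.management API (this is only an illustration, not part of my actual setup, which used VisualVM and JConsole):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    public class MemoryLogger {
        public static void main(String[] args) throws InterruptedException {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            while (true) {
                MemoryUsage heap = memory.getHeapMemoryUsage();
                MemoryUsage nonHeap = memory.getNonHeapMemoryUsage();
                // committed = reserved from the OS, used = actually occupied
                System.out.printf("heap: used=%dMB committed=%dMB | non-heap: used=%dMB committed=%dMB%n",
                        heap.getUsed() >> 20, heap.getCommitted() >> 20,
                        nonHeap.getUsed() >> 20, nonHeap.getCommitted() >> 20);
                Thread.sleep(5000); // sample every 5 seconds
            }
        }
    }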

I am running the application (JRE 6 Update 25, server VM) with the following parameters:

-XX:PermSize=128m
-XX:MaxPermSize=192m
-XX:ReservedCodeCacheSize=96m
-Xms500m
-Xmx1124m
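
(For reference, if these flags are passed via eclipse.ini, which is one common way to configure Eclipse, the relevant part of the file looks roughly like the sketch below. The -vm path is only illustrative; each argument after -vmargs has to be on its own line.)

    -vm
    C:\Program Files\Java\jre6\bin\server\jvm.dll
    -vmargs
    -XX:PermSize=128m
    -XX:MaxPermSize=192m
    -XX:ReservedCodeCacheSize=96m
    -Xms500m
    -Xmx1124m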

Question:

  • Why does the JVM crash when there is obviously enough memory available to the VM and the OS?
    With the above settings I don't think I can hit the 2 GB 32-bit limit (1124 MB + 192 MB + 96 MB + thread stacks < 2 GB; see the rough tally below). In any other case (too much heap allocation), I would expect an OutOfMemoryError rather than a JVM crash.
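
(Rough tally, using the figures above; the 17 MB for thread stacks is the estimate from the statistics above:)

    1124 MB (max heap) + 192 MB (max PermGen) + 96 MB (code cache) + ~17 MB (thread stacks) ≈ 1429 MB, well below 2048 MB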

Can anyone help me figure out what is going wrong here?

(Note: I recently upgraded from Eclipse 3.4.2 to Eclipse 3.6.2 and from Java 5 to Java 6. I suspect there is a connection between the crashes and these changes, because I had not seen crashes like this before.)

UPDATE

It seems to be a JVM bug that was introduced in Java 6 Update 25 and has something to do with the new JIT compiler. See also this blog entry. According to the blog, a fix for this bug should be part of the next Java 6 update. In the meantime, I obtained a native stack trace during a crash and have updated the crash log above accordingly.

The proposed workaround, the VM argument -XX:-DoEscapeAnalysis, works (at least it notably lowers the probability of a crash).
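
(For anyone else hitting this: the flag is simply appended to the existing VM arguments, e.g. at the end of the list shown above:)

    -Xms500m
    -Xmx1124m
    -XX:-DoEscapeAnalysis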

MRalwasser
  • You may be setting the maximum memory size too high for the 32-bit space to support. Usually the JVM detects this but you could be close to the limit in a way it can't detect. – Peter Lawrey Jun 14 '11 at 13:55
  • If you increase your `PermSize` to `512m` and add `-XX:PermSize=512m`, does the error still occur? – Buhake Sindi Jun 14 '11 at 13:56
  • Which version of Java 6 is this? The description looks similar to [this bug ID](http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7042582), except for the thread dump. – Vineet Reynolds Jun 14 '11 at 13:59
  • Check the bug report then and try the workaround. Or use u23. – Vineet Reynolds Jun 14 '11 at 14:03
  • @Vineet According to the report (it's a duplicate, also have a look at the original bug report) the memory problem is caused when using `-XX:+DoEscapeAnalysis`. However, I do not use that option. – MRalwasser Jun 14 '11 at 14:07
  • What would hurt, if you tried using `-XX:-DoEscapeAnalysis`? – Vineet Reynolds Jun 14 '11 at 14:10
  • I think that option causes the problem according to the original bug report (which contradicts the duplicate). I think the duplicate you linked suffers from a typo in the proposed workaround: "Try -XX:-DoEscapeAnalysis." should rather be "Try to omit -XX:-DoEscapeAnalysis." Nevertheless, I will try that setting. – MRalwasser Jun 14 '11 at 14:13
  • Well, if all else fails, go to an earlier build of the Oracle JVM, or even JRockit. – Vineet Reynolds Jun 14 '11 at 14:21
  • I have the same problem on Windows 2003, even after upgrading the JDK to 6u33-b03 x86 – 爱国者 Jul 18 '12 at 06:22
  • What is your actual question here? – Amir Afghani Jan 21 '20 at 17:48

4 Answers


The 2 GB assumption for a 32-bit JVM on Windows is incorrect: https://blogs.sap.com/2019/10/07/does-32-bit-or-64-bit-jvm-matter-anymore/

Since you are on Windows XP, you are stuck with a 32-bit JVM.

The maximum heap is about 1.5 GB for a 32-bit VM on Windows, and you are at 1412 MB to begin with, before thread stacks. Did you try decreasing the thread stack size (-Xss), and have you tried dropping the initially allocated PermGen space (-XX:PermSize=128m)? It sounds like this is an Eclipse problem, not a memory problem per se.
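
For illustration only (these values are examples, not something tested against your workspace), that would mean settings along these lines, with smaller stacks, a smaller maximum heap and no up-front PermGen reservation; note that too small an -Xss can itself cause StackOverflowErrors:

    -Xss256k
    -XX:MaxPermSize=192m
    -XX:ReservedCodeCacheSize=96m
    -Xms256m
    -Xmx1024m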

Can you move to a newer JVM or a different (64-bit) JVM on a different machine? Even if you are targeting Windows XP, there is no reason to develop on it unless you HAVE to. Eclipse can run, debug and deploy code on remote machines easily.

Eclipse's JVM can be different from the JVM of the things you run in or with Eclipse. Eclipse is a memory pig. You can remove unnecessary Eclipse plug-ins to use less memory; it comes with things out of the box that you probably don't need or want.

Try to null out references (to eliminate circularly un-collectible GC objects), re-use allocated memory, use singletons, and profile your memory usage to eliminate unnecessary objects, references and allocations. Additional tips:

  • Prefer static memory allocation, i.e. allocate once per VM rather than dynamically.
  • Avoid creating temporary objects within functions; consider a reset() method that allows an object to be reused (see the sketch below).
  • Avoid String mutations and mutation of auto-boxed types.
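
A minimal sketch of the reset-and-reuse idea (the class and method names are made up purely to illustrate the pattern):

    import java.util.ArrayList;
    import java.util.List;

    // Reusable scratch buffer: allocated once, cleared between uses instead of
    // creating a fresh temporary object on every call.
    class ScratchBuffer {
        private final List<String> lines = new ArrayList<String>();

        List<String> lines() {
            return lines;
        }

        // Clear state so the same instance (and its backing array) can be reused.
        void reset() {
            lines.clear();
        }
    }

    public class ReuseDemo {
        private static final ScratchBuffer BUFFER = new ScratchBuffer(); // one per VM

        public static void main(String[] args) {
            for (int i = 0; i < 3; i++) {
                BUFFER.reset();                 // instead of: new ScratchBuffer()
                BUFFER.lines().add("pass " + i);
                System.out.println(BUFFER.lines());
            }
        }
    }
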
ggb667

I think that @ggb667 has nailed the reason your JVM is crashing. 32-bit Windows architectural constraints limit the available address space for a Java application to roughly 1.5 GB¹, not the 2 GB you surmised. You have also neglected to include the address space occupied by the code segment of the executable, shared libraries, the native heap, and "other things".

Basically, this is not a JVM bug. You are simply running against the limitations of your hardware and operating system.

There is a possible solution in the form of PAE (Physical Address Extension) support in some versions of Windows. According to the link, Windows XP with PAE makes up to 4 GB of usable address space available to user processes. However, there are caveats about device driver support.

Another possible solution is to reduce the max heap size, and do other things to reduce the application's memory utilization; e.g. in Eclipse reduce the number of "open" projects in your workspace.

See also: Java maximum memory on Windows XP

¹ Different sources say different things about the actual limit, but it is significantly less than 2 GB. To be frank, it doesn't matter what the actual limit is.


In an ideal world this question should no longer be of practical interest to anyone. In 2020:

  • You shouldn't be running Windows XP. It has been end of life since April 2014
  • You shouldn't be running Java 6. It has been end of life since April 2013
  • If you are still running Java 6, you should be at the last public patch release: 1.6.0_45. (Or a later 1.6 non-public release if you have / had a support contract.)

Either way, you should not be running Eclipse on this system. Seriously, you can get a new 64-bit machine for a few hundred dollars, with more memory etc., that will allow you to run an up-to-date operating system and an up-to-date Java release. You should use that to run Eclipse.

If you really need to do Java development on an old 32-bit machine with an old version of Java (because you can't afford a newer machine), you would be advised to use a simple text editor and the Java 6 JDK command-line tools (plus a third-party Java build tool such as Ant, Maven or Gradle).

Finally, if you are still trying to run / maintain Java software that is stuck on Java 6, you should really be trying to get out of that hole. Life is only going to get harder for you:

  • If the Java 6 software was developed in-house or you have source code, port it.
  • If you depend on proprietary software that is stuck on Java 6, look for a new vendor.
  • If management says no, put it to them that they may need to "turn it off".

You / your organization should have dealt with this issue SEVEN years ago.

Stephen C

I stumbled upon a similar problem at work. We had set -Xmx65536M for our application but kept getting exactly the same kind of errors. The funny thing is that the errors always happened at a time when our application was actually doing pretty lightweight calculations, relatively speaking, and was thus nowhere near this limit.

We found a possible solution online: http://www.blogsoncloud.com/jsp/techSols/java-lang-OutOfMemoryError-unable-to-create-new-native-thread.jsp, and it seemed to solve our problem. After lowering -Xmx to 50G, we have had none of these issues.

What actually happens in this case is still somewhat unclear to us.

jesseniem

The JVM has its own limits that will stop it long before it hits the physical or virtual memory limits. What you need to adjust is the heap size, which is set with another one of the -X flags. (I think it's something creative like -XHeapSizeLimit, but I'll check in a second.)

Here we go:

-Xmsn Specify the initial size, in bytes, of the memory allocation pool. This value must be a multiple of 1024 greater than 1MB. Append the letter k or K to indicate kilobytes, or m or M to indicate megabytes. The default value is 2MB. Examples:

   -Xms6291456
   -Xms6144k
   -Xms6m

-Xmxn Specify the maximum size, in bytes, of the memory allocation pool. This value must be a multiple of 1024 greater than 2MB. Append the letter k or K to indicate kilobytes, or m or M to indicate megabytes. The default value is 64MB. Examples:

   -Xmx83886080
   -Xmx81920k
   -Xmx80m
Charlie Martin
  • 1
    You mean -Xmx ? This is what I am setting here. And as stated in the post, I theoretically cannot reach that limit and even if I would, I would rather getting an OutOfMemoryError - so this cannot be the case, IMHO. – MRalwasser Jun 14 '11 at 13:56
  • You don't know it can't be the case, since you're only looking at the last sample *before* you crash. Have a look here http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html and try setting -XX:-HeapDumpOnOutOfMemoryError – Charlie Martin Jun 14 '11 at 14:03
  • You'll want to look at jmap as well http://www.oracle.com/technetwork/articles/javase/monitoring-141801.html#Heap_Dump – Charlie Martin Jun 14 '11 at 14:04
  • 4
    @Charlie I do not get an OutOfMemoryError so the option you suggest cannot fire. I am getting a jvm crash, which is something different. – MRalwasser Jun 14 '11 at 14:09
  • 1
    @MR I'm glad you know what's wrong. But then why are you asking? – Charlie Martin Jun 14 '11 at 19:06
  • I am sorry if you took that the wrong way. I did not say I know what's wrong - but I know that it's not an OutOfMemoryError. And when the JVM crashes (= core dump), regular error handlers like the heap dump mechanism you suggested are not available. And BTW - I am almost already at the maximum heap size for my OS. – MRalwasser Jun 14 '11 at 19:38