Despite the question being 10 years old, I was also thinking about this just today and decided to actually try it out. :-)
- Linux version: 5.10.0-0.bpo.9-amd64
- JVM version: OpenJDK Runtime Environment (build 11.0.14+9-post-Debian-1deb10u1)
Using this small test program:
import java.util.*;

public class OOMTest {
    public static void main(String... args) {
        // keep allocating until something gives
        var list = new ArrayList<String>();
        while (true) {
            list.add(new String("abc"));
        }
    }
}
on a machine having 4G of RAM and 4G swap (this is just my NAS :-) ):
tomi@unyanas:~/workspace$ free -h
              total        used        free      shared  buff/cache   available
Mem:          3.7Gi       980Mi       2.4Gi        27Mi       355Mi       2.4Gi
Swap:         3.7Gi       2.2Gi       1.5Gi
- when running with 1G heap allowed, the process dies with an OOM:
tomi@unyanas:~/workspace$ java -Xmx1G OOMTest
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at OOMTest.main(OOMTest.java:9)
- when running with 10G heap allowed, then:
- the process starts up fine, despite the machine not having that much RAM+swap at all
- but the failure is clearly not an OOM, the process is simply killed by the kernel:
tomi@unyanas:~/workspace$ java -Xmx10G OOMTest
Killed
So to summarize:
- getting an OOM when the host runs out of memory is at least not guaranteed (the kernel kill is what I got 3 times out of 3 tries)
- "overcommiting" the machine capacity with -Xmx is allowed
- getting an OOM really only seems to be guaranteed as long as the effective "cap" on the process is the -Xmx value (whether the default or an explicitly specified one), i.e. as long as there is still free memory (RAM+swap) left for the OS beyond that cap (see the sketch right below)
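To make it easier to see where the heap actually stands when the kernel steps in, a variation of the test can print the heap usage reported by the standard Runtime API. This is just a sketch (the HeapWatchTest class name and the print interval are arbitrary):

import java.util.*;

public class HeapWatchTest {
    public static void main(String... args) {
        var list = new ArrayList<String>();
        var runtime = Runtime.getRuntime();
        while (true) {
            list.add(new String("abc"));
            // every 10 million elements, report heap usage vs. the -Xmx cap
            if (list.size() % 10_000_000 == 0) {
                long usedMb = (runtime.totalMemory() - runtime.freeMemory()) / (1024 * 1024);
                long maxMb = runtime.maxMemory() / (1024 * 1024);
                System.out.println("used " + usedMb + " MB of max " + maxMb + " MB");
            }
        }
    }
}

If the run ends with an OutOfMemoryError, the last line printed should show the used heap close to the max; if it just ends in "Killed", the used heap can still be far below the -Xmx value, which matches the last bullet above.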
One could argue that the situation above was created by the fact that the test code has an unbounded memory footprint, so I also wanted to see whether a process with a high allocation rate but an otherwise finite footprint can be made to fail by setting a too high -Xmx value. In other words: can the GC be fooled into believing that there is a lot more memory available than there really is and end up getting the process killed, or will it be notified by the kernel about failed OS-level memory allocations and restrict the heap size accordingly? The answer is that it can be fooled.
I've altered the above code like this:
import java.util.*;

public class OOMTest {
    public static void main(String... args) {
        var list = new ArrayList<String>();
        while (true) {
            list.add(new String("abc"));
            // beyond 50M elements the newest entry is dropped again, so the
            // allocation rate stays high but the live footprint stays bounded
            if (list.size() > 50000000) {
                list.remove(list.size() - 1);
            }
        }
    }
}
When specifying an -Xmx value that the machine can handle, the program can run indefinitely (well, I really only let it run for as long as I was having dinner, but you get the point :-) ).
So this never exits (with GC logging enabled, a nice repeating pattern can be observed once the 2G heap size is reached):
java -Xmx2G OOMTest
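(GC logging can be enabled via the unified logging switch available since JDK 9, e.g. java -Xmx2G -Xlog:gc OOMTest; the exact log format depends on the JVM version.)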
But when running with -Xmx10G, the process is killed again, without an OOM:
tomi@unyanas:~/workspace$ java -Xmx10G OOMTest
Killed
This suggests that the only "constructive feedback" the JVM gets when it tries to allocate more memory than is currently available on the host as RAM+swap is something like a kill -9. Hence, by using a too high -Xmx value, a process that would otherwise function correctly can be made to fail. This is by no means to say that this would happen on all OSes, JVM implementations or even all GC algorithms (I was using the default G1), but it was definitely the case with the above setup.
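One way to double-check the kill -9 theory (this part is my assumption, not something the output above proves): a JVM shutdown hook runs on a normal exit and on SIGTERM, but never on SIGKILL, so a variant like the following (the KillProbeTest name is made up) should stay completely silent when the kernel steps in:

import java.util.*;

public class KillProbeTest {
    public static void main(String... args) {
        // this hook runs on normal JVM shutdown and on SIGTERM, but not on SIGKILL
        Runtime.getRuntime().addShutdownHook(
                new Thread(() -> System.out.println("shutting down")));
        var list = new ArrayList<String>();
        while (true) {
            list.add(new String("abc"));
        }
    }
}

If "shutting down" never appears before the process vanishes, it was terminated with SIGKILL, which is consistent with the kernel OOM killer (it sends SIGKILL by default).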