22

After thinking for a long time about a generic way to pose this question (and failing to find one), I'm just going to ask it as a concrete example:

Suppose I have a Linux machine which has 1 GB of memory that it can allocate to processes (physical and swap total 1 GB).

I have a standard Oracle HotSpot JVM version 7 installed on the machine. If at a given moment there are enough programs running that only 400 MB of that 1 GB are free, and I start a Java program at that moment with the following JVM flags:

java -Xms256m -Xmx512m -jar myJar.jar

what happens?

A. Does the JVM fail to start right away because it will try to allocate all of the 512 MB of memory and fail (due to the fact that there's not enough available memory at the moment)?

If the JVM starts:

If at some point the running Java process needs more than 400 MB of memory (and there's still only 400 MB of memory free other than what the current Java process has already used), what will happen:

B. Will the Java process fail with an OutOfMemoryError?

C. Will it fail with some other (standard) error?

D. Is it undefined behavior?

Peter Lawrey
Shivan Dragon

6 Answers

11

-Xmx just defines the maximum size of the heap. It makes no guarantee that that much memory is actually available; it only ensures that the heap will never grow bigger than the given value. That said, option B will happen: an OutOfMemoryError will be thrown.
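To make that concrete, here is a minimal sketch (the class name and chunk size are mine, purely for illustration): Runtime.maxMemory() reports the cap set by -Xmx from inside the process, and allocating past it ends in an OutOfMemoryError regardless of how much memory the machine has free.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch - run with e.g. "java -Xmx512m HeapLimitDemo".
// Runtime.maxMemory() reports (roughly) the -Xmx cap; filling the heap
// past it ends in an OutOfMemoryError, independent of free system memory.
public class HeapLimitDemo {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Heap cap (-Xmx): " + maxBytes / (1024 * 1024) + " MB");

        List<byte[]> hog = new ArrayList<>();
        try {
            while (true) {
                hog.add(new byte[1024 * 1024]); // 1 MB chunks, all kept reachable
            }
        } catch (OutOfMemoryError e) {
            hog.clear(); // release our own memory so printing is safe
            System.out.println("Hit the heap cap: " + e);
        }
    }
}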

Polygnome
  • Thanks. But then, can I assume that if I pass a value *for the Xms* which is bigger than the currently free memory size (i.e. in my example, what if I changed the Xms to 500m), then the JVM will fail to start? – Shivan Dragon Nov 07 '12 at 13:49
  • Take a look here: http://stackoverflow.com/questions/3648454/what-happens-if-you-specify-max-heap-size-greater-than-available-ram – Polygnome Nov 07 '12 at 13:54
  • thanks for the link, that question clears up some of my issues, but it only refers to Xmx exceeding the actual total memory on the system. I was wondering what happens when the Xmx value exceeds the currently available (free) memory (physical and swap) but doesn't exceed the total physically installed memory of the system, nor violate the 32-bit memory limit – Shivan Dragon Nov 07 '12 at 13:58
6

Suppose I have a Linux machine which has 1 GB of memory that it can allocate to processes (physical and swap total 1 GB).

My first response would be, unless you are talking about a phone, I would get more memory. You can buy 16 GB (b = bit, B = byte) for less than $100.

does the JVM fail to start right away because it will try to allocate all of the 512 MB of memory and fail (due to the fact that there's not enough available memory at the moment)?

This can happen if your system does not have 512 MB (plus some overhead) available, as the JVM allocates the contiguous virtual memory used for the heap on startup.

Even if you have 550 MB free, the program could fail to start, as it needs to load more than just the heap.
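To see from inside the process how much the JVM has reserved and committed beyond the heap itself, the standard java.lang.management API exposes it. A small sketch (the class name is mine; the numbers will of course differ per machine and JVM):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Sketch: prints the heap reserved at startup (init/max) versus what is
// actually committed, plus the non-heap memory the JVM needs on top of
// the heap (metaspace, code cache; thread stacks are separate again).
public class MemoryFootprint {
    public static void main(String[] args) {
        MemoryMXBean bean = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = bean.getHeapMemoryUsage();
        MemoryUsage nonHeap = bean.getNonHeapMemoryUsage();

        System.out.printf("Heap     init=%dMB committed=%dMB max=%dMB%n",
                mb(heap.getInit()), mb(heap.getCommitted()), mb(heap.getMax()));
        System.out.printf("Non-heap committed=%dMB%n", mb(nonHeap.getCommitted()));
    }

    private static long mb(long bytes) {
        return bytes / (1024 * 1024);
    }
}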

will the Java process fail with an OutOfMemoryError?

This can happen if your program uses 512 MB while running, regardless of the amount of memory your machine has. This error will only occur after your JVM has started. You won't get this error if it cannot start.

will it fail with some other (standard) error?

This is possible if you run out of swap space after the program has started. It is rare and only happens on a severely overloaded machine. What I have seen is the JVM crashing due to a low-level OS failure to allocate memory.

Java 6 Update 25 VM crash: insufficient memory

Peter Lawrey
  • +1 especially for the last remark about the undefined behavior. – Shivan Dragon Nov 07 '12 at 13:50
  • @Peter - I am not entirely sure, but I believe in case the system has less memory than specified with `-Xmx`, the JVM will simply be set to the maximum available, and will not crash immediately. --- I don't know if the same applies to `-Xms` though. Probably not. --- I also think the answer should be more generalized, and less focused around the `512 MB` example, so as to be clearer to other community members that might find this question in the future. --- +1 for the excellent answer. Especially the last section. – CosmicGiant Nov 07 '12 at 13:50
  • The -Xms setting doesn't make any difference to whether it will start. It tells the GC to grow easily to this size. Note: a "Hello World" program will not use the minimum size just because you set it as a minimum. – Peter Lawrey Nov 07 '12 at 13:53
  • @TheLima: I wanted to add the exact comment you've just added. My actual issue here is: how can I know at JVM start whether a certain amount of memory will be available if needed later? I also don't think the Xmx value can be trusted with that, as if there's not enough actual memory it will scale down and allocate whatever is free. But is the same true for the Xms value? Or will the JVM fail immediately if the specified Xms value is not available when it starts? – Shivan Dragon Nov 07 '12 at 13:54
  • The Xmx must be greater than or equal to the Xms setting, and the JVM will fail if it cannot allocate the larger Xmx, so what you set Xms to makes no difference, as it will always be smaller (or the same). – Peter Lawrey Nov 07 '12 at 13:57
  • So if you set `-Xmx` and `-Xms` to `2 GB` and the system has more than that available, a crash won't happen until an `OutOfMemoryError` is triggered during runtime; but if the system has less than specified, `-Xmx` will be set to less than `2 GB`, which will subsequently be smaller than `-Xms`, thus triggering a start-up crash. --- As far as I could understand it. – CosmicGiant Nov 07 '12 at 14:04
  • The `-Xmx` cannot be less than the `-Xms`. Even if you ask for 2 GB and there is a little over 2 GB free, it can fail, as the heap is not the only memory the JVM uses. Also, the heap has to be contiguous, which is why the Windows 32-bit JVM can fail if you ask for 1.2 to 1.5 GB, even if you have much more. – Peter Lawrey Nov 07 '12 at 14:12
  • I have a process whose Xms and Xmx are the same, and it sometimes crashes in my test env, since there isn't much memory left. I assume that low memory kills my process, but I can't find an OOM error or an OOM heap dump file (I turned that option on). Is my assumption right, and if it is, why is there no OOM file? – JaskeyLam Oct 13 '17 at 03:20
2

OutOfMemoryError will be "Thrown when the Java Virtual Machine cannot allocate an object because it is out of memory, and no more memory could be made available by the garbage collector."

So, in essence, "B. The Java process fails with an OutOfMemoryError".

CosmicGiant
  • Thanks for the answer. Can I assume that this is true for ALL Hotspot implementations (i.e. on all operating systems for which the Hotspot JVM is available)? – Shivan Dragon Nov 07 '12 at 13:44
2

If you have so much occupied memory that the free space cannot even sustain an idle JVM, you will get either some error saying the program does not have enough memory, or the JVM will crash.

If you can run the JVM, you can specify the limit on heap space with -Xmx. That doesn't mean all of the heap will be allocated by the JVM on start; it is only an internal limit. If the JVM wants to increase the heap space but there is not enough memory, or if you need more heap than specified by -Xmx, you will get an OutOfMemoryError in the currently running Java programs.

In a very extreme condition, you can run out of free memory while the JVM is running, and at the same time the JVM requires more memory for its internal operation (not the heap space). Then the JVM tells you it needed more memory but could not get any and terminates, or it crashes outright.
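One way to see that the limit is indeed internal and the heap is not taken all at once is to watch Runtime.totalMemory() (the currently committed heap) grow towards Runtime.maxMemory() as the program allocates. A rough sketch with made-up sizes, meant to be run with something like -Xmx512m:

import java.util.ArrayList;
import java.util.List;

// Rough sketch: with the default -Xms, the committed heap (totalMemory)
// typically starts well below the -Xmx cap (maxMemory) and only grows
// as allocations demand it.
public class LazyHeapGrowth {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        List<byte[]> chunks = new ArrayList<>();
        for (int i = 0; i < 20; i++) {
            chunks.add(new byte[16 * 1024 * 1024]); // keep 16 MB more alive each step
            System.out.printf("committed=%dMB max=%dMB%n",
                    rt.totalMemory() / (1024 * 1024),
                    rt.maxMemory() / (1024 * 1024));
            Thread.sleep(200);
        }
    }
}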

Jakub Zaverka
  • I have a process whose Xms and Xmx are the same, and it sometimes crashes in my test env, since there isn't much memory left. I assume that low memory kills my process, but I can't find an OOM error or an OOM heap dump file (I turned that option on). Is my assumption right, and if it is, why is there no OOM file? – JaskeyLam Oct 13 '17 at 03:24
1

The JVM process will run in virtual memory, so how much memory other running processes have allocated is relevant, but not completely determinative.

When the JVM cannot allocate more memory (for whatever reason), the process itself doesn't terminate, but rather starts throwing OutOfMemoryError within the JVM, not external to it. In other words, the JVM continues to run, but the programs running within the JVM will usually fail, because most don't handle low-memory conditions adequately. In this fairly common case, when the program does not do anything to handle the error, the JVM will terminate the program and exit. Ultimately this stems from the memory allocation, but not directly so. It's possible for a piece of code to scale itself back under low-memory conditions and continue to run.
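For what it's worth, here is a hedged sketch of what "scaling itself back" could look like: the program treats an OutOfMemoryError raised by its own allocation as a signal to drop an application-level cache and carry on (the class name, cache and sizes are invented for illustration; catching the error is only reasonable here because the failed allocation is our own and we can genuinely free memory):

import java.util.ArrayList;
import java.util.List;

// Illustrative only: treat an OutOfMemoryError from our own allocation
// as a low-memory signal, shed the cache, and keep running instead of
// letting the error terminate the thread.
public class ShrinkingCache {
    private static final List<byte[]> cache = new ArrayList<>();

    public static void main(String[] args) {
        for (int i = 0; i < 1_000; i++) {
            try {
                cache.add(new byte[4 * 1024 * 1024]); // 4 MB entries
            } catch (OutOfMemoryError e) {
                cache.clear(); // free our own memory and continue
                System.out.println("Low memory at iteration " + i + ", cache dropped");
            }
        }
        System.out.println("Finished without the JVM exiting.");
    }
}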

As others have pointed out, it sometimes happens that the JVM itself doesn't handle low memory well, but this is a pretty extreme condition.

eh9
0

Despite the question being 10 years old, I was also thinking about this just today and decided to actually try it out. :-)

  • Linux version: 5.10.0-0.bpo.9-amd64
  • JVM version: OpenJDK Runtime Environment (build 11.0.14+9-post-Debian-1deb10u1)

Using this small test program:

import java.util.*;

public class OOMTest {
    public static void main(String... args){
        var list = new ArrayList<String>();
        while(true){
            list.add(new String("abc")); // every element stays reachable, so the heap can only grow
        }
    }
}

on a machine having 4G of RAM and 4G swap (this is just my NAS :-) ):

tomi@unyanas:~/workspace$ free -h
              total        used        free      shared  buff/cache   available
Mem:          3.7Gi       980Mi       2.4Gi        27Mi       355Mi       2.4Gi
Swap:         3.7Gi       2.2Gi       1.5Gi
  • when running with 1G heap allowed, the process dies with an OOM:
tomi@unyanas:~/workspace$ java -Xmx1G OOMTest
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
        at OOMTest.main(OOMTest.java:9)
  • when running with 10G heap allowed, then:
    • the process starts as stated above, despite not having that much RAM+swap in the machine at all
    • but the output is clearly not an OOM, the process is just being killed by the kernel:
tomi@unyanas:~/workspace$ java -Xmx10G OOMTest
Killed

So to summarize:

  • getting an OOM when the host runs out of memory is at least not guaranteed (I got this outcome 3 times out of 3 tries)
  • "overcommiting" the machine capacity with -Xmx is allowed
  • it really seems to be the case that getting an OOM is only guaranteed as long as the "cap" on the process is the -Xmx value (either the default or an explicitly specified one) and there is otherwise still free memory (RAM+swap) left for the OS

Given the above, one could say that this situation was created by the fact that the test code has an infinite memory footprint, so I also wanted to test whether a process with a high allocation rate, but an otherwise finite memory footprint, can be made to fail by setting a too-high -Xmx value. In other words: can the GC be fooled into believing that there is a lot more memory available than there really is, and end up being killed, or will it be notified by the kernel about failed OS-level memory allocations and hence restrict the heap size? The answer is that it can be fooled.

I've altered the above code like this:

import java.util.*;

public class OOMTest {
    public static void main(String... args){
        var list = new ArrayList<String>();
        while(true){
            list.add(new String("abc"));
            if (list.size() > 50000000){ // cap the live set so the footprint stays finite
                list.remove(list.size() - 1);
            }
        }
    }
}

When specifying an -Xmx value that the machine can handle, the program could run indefinitely (well, I really only let it run for as long as I was having dinner, but you get the point :-) )

So this never exits (when enabling GC logging, once the 2G heap size is reached, a nice repeating pattern can be observed):

java -Xmx2G OOMTest

But when running with -Xmx10G, the process is killed again, without an OOM:

tomi@unyanas:~/workspace$ java -Xmx10G OOMTest
Killed

This suggests that the only "constructive feedback" the JVM gets when it attempts to allocate more memory than is currently available on the host as RAM+swap is something like a kill -9. Hence, by using too-high -Xmx values, a process that would otherwise function correctly can be made to fail. This is by no means to say that this would happen on all OSes, JVM implementations or even just GC algorithms (I was using the default G1), but it was definitely the case with the above setup.
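One hedged way to double-check that it really is the kernel killing the process (assuming the OOM killer sends SIGKILL, which the JVM cannot intercept) is to add a shutdown hook to the test program: the hook should run when the JVM exits on its own after an uncaught OutOfMemoryError, but should stay silent when the only output is Killed. A sketch of that variant (the class name is made up, and I have not run this exact version):

import java.util.ArrayList;
import java.util.List;

// Variant of OOMTest with a shutdown hook. The hook runs on a normal
// JVM exit (including one caused by an uncaught OutOfMemoryError), but
// not on SIGKILL, so a bare "Killed" with no hook output points at the
// kernel OOM killer rather than the JVM itself.
public class OOMKillProbe {
    public static void main(String... args) {
        Runtime.getRuntime().addShutdownHook(new Thread(
                () -> System.out.println("shutdown hook ran - the JVM exited on its own")));
        List<String> list = new ArrayList<>();
        while (true) {
            list.add(new String("abc"));
        }
    }
}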