948 votes

I get this error message as I execute my JUnit tests:

java.lang.OutOfMemoryError: GC overhead limit exceeded

I know what an OutOfMemoryError is, but what does GC overhead limit mean? How can I solve this?

asked by Mnementh

  • This sounds very interesting. I'd love it if someone could post some code that generates this. – Buhb Sep 08 '09 at 12:21
  • I simply found the problem that led to too much memory usage, close to the limit of the heap. A simple solution would be to give more heap memory to the JVM (-Xmx), but that only helps if the application needs exactly as much memory as the previous heap limit allowed. – Mnementh Oct 23 '09 at 09:10
  • Also check out http://xmlandmore.blogspot.com/2011/05/diagnosing-javalangoutofmemory.html – Vik Oct 13 '12 at 05:02
  • @Mnementh I have given an answer here; check whether it helps: http://stackoverflow.com/questions/11091516/exception-in-thread-main-java-lang-outofmemoryerror-gc-overhead-limit-exceede – lulu May 13 '14 at 10:35
  • @SimonKuang Note that there are multiple `OutOfMemoryError` scenarios for which increasing the heap isn't a valid solution: running out of native threads and running out of perm gen (which is separate from the heap) are two examples. Be careful about making overly broad statements about `OutOfMemoryError`s; there's an unexpectedly diverse set of things that can cause them. – Tim Jan 22 '15 at 18:25
  • How did you solve the issue? – Thorsten Niehues Nov 14 '17 at 10:39
  • This error happened and is still happening for me with JDK 1.8.0_91. – Parasu Nov 15 '17 at 04:40
  • @ThorstenNiehues Well, I solved it basically by using less memory. As the top answer shows, this is a situation with very little remaining memory while creating a lot of temporary objects. There is basically no way around it; you have to reduce memory usage. – Mnementh Nov 15 '17 at 14:17
  • @Buhb I just had this error on Hadoop 3.2.0 running a local (non-cluster) job sorting 28 GB of data. It ran for several hours and exploded with this error just before the end of the job. Normally this dataset with this map-reduce task completes okay. – Eugene Gr. Philippov Feb 02 '19 at 13:19
  • For future visitors: any answer that simply tells you to increase the heap size (search for `javaMaxHeapSize` or `Xmx`) might solve your problem if you are working with that amount of data, but you really need to look at your code to limit data usage. Try sampling a smaller amount of data or limiting the records you process. If you have no other option, run your code on a machine in the cloud that can provide as much memory as you want. – smaug Feb 19 '19 at 15:56
  • @smaug Machines in the cloud are still physical computers somewhere. Each type you can rent has a certain amount of RAM. Another consideration is that larger machines cost far more money than smaller ones. – user904963 Feb 06 '22 at 16:15

22 Answers

859 votes

This message means that for some reason the garbage collector is taking an excessive amount of time (by default 98% of all CPU time of the process) and recovering very little memory with each run (by default 2% of the heap).

This effectively means that your program stops making progress and is busy running only garbage collection all the time.

To prevent your application from soaking up CPU time without getting anything done, the JVM throws this Error so that you have a chance of diagnosing the problem.

The rare cases where I've seen this happen are ones where some code was creating tons of temporary objects and tons of weakly-referenced objects in an already very memory-constrained environment.

Check out the Java GC tuning guide, which is available for various Java versions and contains sections about this specific problem.
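To get a feel for the thresholds described above, you can ask the JVM how much time its collectors have accumulated via the standard `java.lang.management` API. A minimal sketch (the class name is mine) that prints the fraction of uptime spent in GC:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcOverheadProbe {
    public static void main(String[] args) {
        long gcMillis = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime(); // accumulated GC time in ms, -1 if unsupported
            if (t > 0) {
                gcMillis += t;
            }
        }
        long uptimeMillis = ManagementFactory.getRuntimeMXBean().getUptime();
        // The "GC overhead limit exceeded" error fires when this ratio stays around 0.98
        // over several consecutive collections while under 2% of the heap is reclaimed.
        System.out.printf("Spent %d ms of %d ms uptime in GC (%.2f%%)%n",
                gcMillis, uptimeMillis, 100.0 * gcMillis / uptimeMillis);
    }
}
```

In a healthy application this fraction is tiny; watching it climb toward 1.0 under load is the same signal the overhead-limit check reacts to.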

answered by Joachim Sauer

  • Would it be correct to summarise your answer as follows: "It's just like an 'Out of Java Heap space' error. Give it more memory with -Xmx."? – Tim Cooper Jun 19 '10 at 13:28
  • @Tim: No, that wouldn't be correct. While giving it more memory *could* reduce the problem, you should also look at your code and see why it produces that amount of garbage and why your code skims just below the "out of memory" mark. It's often a sign of broken code. – Joachim Sauer Jun 20 '10 at 15:48
  • Thanks, it seems Oracle isn't actually that good at data migration; they broke the link. – Joachim Sauer Nov 29 '10 at 19:30
  • I am investigating the same problem in an application running on WebLogic, on a server shared with other applications also running on WebLogic. Does the error mean it always has to do with my application, so I can rule out problems in the other applications on the same server? Or is there a possibility other applications can interfere with your environment? Just asking because it's hard to find the cause of memory leaks. – A.W. Mar 01 '12 at 11:43
  • @Guus: if multiple applications run in the same JVM, then yes, they can easily influence each other. It'll be hard to tell which one is misbehaving. Separating the applications into distinct JVMs might be the easiest solution. – Joachim Sauer Mar 01 '12 at 11:48
  • @Joachim: the server is located at a client. I checked with them and the applications do run in separate JVMs. I stress tested the app on our server and cannot get it to go out of memory. Could another process (Java or non-Java) on the server somehow cause my app to go out of memory? – A.W. Mar 02 '12 at 14:46
  • @Guus: no, especially not with the error message discussed here. It's more likely to be an artifact of configuration and/or specific loads that trigger the problem. But you really ought to ask this in a separate question (with as much detail as possible); it's getting too much for the comments here. – Joachim Sauer Mar 02 '12 at 16:37
  • @TimCooper - that's honestly a poor answer even for the Out of Java Heap space error, though it's certainly sometimes necessary. To trigger this Error, however, you really have to be beating up the JVM; it's quite good at efficiently collecting garbage. If you're seeing this error, it is *far* more likely you're doing something violently cruel to the JVM than that you're simply overloading the heap. – dimo414 Jul 31 '13 at 00:26
  • Is this specific to Java 6? Does the same issue happen in Java 7? – onionjake Mar 17 '14 at 16:32
  • @TimCooper: Just giving more memory is often quite a blunt tool for resolving issues like this. It's often more useful to look first at whether you create a lot of new objects, but also at whether your memory is properly split. Often the problem is that one of the three areas is at its upper limit while the others have plenty of free space. Then re-partitioning the JVM memory pools would help. – Zds May 07 '14 at 10:17
  • I'd just had this happen to me with Java 7 and a web application containing 2001670 lines of Java code, of which I wrote about 5. "You should also look at your code" is not so easy in such cases. – reinierpost Feb 19 '16 at 13:18
  • Looking for help with my issue, I found this: http://stackoverflow.com/questions/110083/which-loop-has-better-performance-why#110389. Would the GC be affected by whether an object is created inside or outside a loop? – deldev Feb 24 '17 at 21:02
  • Today I studied different GCs for the same code. SerialGC does not have this problem, but ParallelGC does. I don't know the reason yet. – Tiina Apr 08 '20 at 09:58
  • @TimCooper, I had the Out of Java Heap error at first, then I increased its memory; now I'm getting the GC overhead error! :D – Soheil Rahsaz Sep 25 '21 at 10:14
248 votes

Quoting from Oracle's article "Java SE 6 HotSpot[tm] Virtual Machine Garbage Collection Tuning":

Excessive GC Time and OutOfMemoryError

The parallel collector will throw an OutOfMemoryError if too much time is being spent in garbage collection: if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, an OutOfMemoryError will be thrown. This feature is designed to prevent applications from running for an extended period of time while making little or no progress because the heap is too small. If necessary, this feature can be disabled by adding the option -XX:-UseGCOverheadLimit to the command line.

EDIT: looks like someone can type faster than me :)

answered by dave

  • "You can turn this off..." but the OP most likely should not do this. – Stephen C Sep 08 '09 at 12:57
  • Can you tell me the difference between "-XX" and "-Xmx"? I was able to turn it off using the "-Xmx" option too. – Susheel Javadi May 14 '10 at 12:37
  • Replying to a very old comment here, but... @Bart The `-XX:` at the start of several command line options is a flag of sorts indicating that this option is highly VM-specific and unstable (subject to change without notice in future versions). In any case, the `-XX:-UseGCOverheadLimit` flag tells the VM to disable GC overhead limit checking (actually "turns it off"), whereas your `-Xmx` command merely increased the heap. In the latter case the GC overhead checking was still *running*; it just sounds like a bigger heap solved the GC thrashing issues *in your case* (this will not always help). – Andrzej Doyle Feb 18 '11 at 10:51
  • In my application (reading a large Excel file in Talend) this did not work, and from other users' explanations I understand why. This just disables the error, but the problem persists and your application will just spend most of its time handling GC. Our server had plenty of RAM, so I used Vitalii's suggestion to increase the heap size. – RobbZ Mar 09 '16 at 10:59
  • You will eventually get this error if your application is data-intensive; clearing memory and avoiding data leaks is the best way out, but it requires some time. – Pievis Feb 01 '19 at 16:58
  • Before trying any of the above, I would suggest closing Android Studio and killing all Java/JVM-related processes (or restarting your system). One of the reasons for this error is that way too many Java processes are running and the GC is not able to run properly. Then open Android Studio and try building again; if it still doesn't work, you can increase the heap size as mentioned in earlier answers. – Abhishek Oct 07 '19 at 12:52
  • I ended up having to use this option for a Maven build that ate up around 4 GB of memory. I tried increasing the heap size with -Xmx8192M, but this flag is the only thing that worked. – Trenton Telge Oct 22 '21 at 14:09
  • My Jenkins slave is using `java -Xmx50G -jar slave.jar` and I'm still facing the issue. Any help here? – Azee77 Jan 06 '23 at 09:54
108 votes

If you are sure there are no memory leaks in your program, try to:

  1. Increase the heap size, for example -Xmx1g.
  2. Enable the concurrent low pause collector -XX:+UseConcMarkSweepGC.
  3. Reuse existing objects when possible to save some memory.

If necessary, the limit check can be disabled by adding the option -XX:-UseGCOverheadLimit to the command line.
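Putting the options from this answer together on one command line might look like this (a sketch only: the class name `MyApp` is a placeholder, and `-XX:+UseConcMarkSweepGC` applies to the older JVMs this answer targets — it was removed in JDK 14):

```shell
java -Xmx1g -XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit MyApp
```

Note that disabling the overhead limit does not fix anything by itself; it only trades a fast failure for slow GC thrashing.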

answered by Vitalii Fedorenko

  • I disagree with the third piece of advice. Reusing existing objects does not save memory (not leaking old objects saves memory :-). Moreover, "reuse existing objects" was a practice to relieve GC pressure, but it is NOT ALWAYS a good idea: with modern GCs, we should avoid situations where old objects hold new ones, because that can break some locality assumptions... – mcoolive Jul 18 '17 at 08:52
  • @mcoolive: For a somewhat contrived example, see the comments on the answer https://stackoverflow.com/a/5640498/4178262 below; creating the `List` object inside the loop caused GC to be called 39 times instead of 22 times. – Mark Stewart Nov 15 '19 at 19:12
57 votes

It's usually the code. Here's a simple example:

import java.util.*;

public class GarbageCollector {

    public static void main(String... args) {

        System.out.printf("Testing...%n");
        List<Double> list = new ArrayList<Double>();
        for (int outer = 0; outer < 10000; outer++) {

            // list = new ArrayList<Double>(10000); // BAD
            // list = new ArrayList<Double>(); // WORSE
            list.clear(); // BETTER

            for (int inner = 0; inner < 10000; inner++) {
                list.add(Math.random());
            }

            if (outer % 1000 == 0) {
                System.out.printf("Outer loop at %d%n", outer);
            }

        }
        System.out.printf("Done.%n");
    }
}

Using Java 1.6.0_24-b07 on Windows 7 32-bit:

java -Xloggc:gc.log GarbageCollector

Then look at gc.log

  • Triggered 444 times using BAD method
  • Triggered 666 times using WORSE method
  • Triggered 354 times using BETTER method

Now granted, this is not the best test or the best design, but when you're faced with a situation where you have no choice but to implement such a loop, or when you're dealing with existing code that behaves badly, choosing to reuse objects instead of creating new ones can reduce the number of times the garbage collector gets in the way...

answered by Mike

  • Please clarify: when you say "Triggered n times", does that mean that a regular GC happened n times, or that the "GC overhead limit exceeded" error reported by the OP happened n times? – Jon Schneider Apr 05 '12 at 15:15
  • I tested just now using Java 1.8.0_91 and never got an error/exception; the "Triggered n times" was from counting up the number of lines in the `gc.log` file. My tests show far fewer collections overall, but the fewest for BETTER, and now BAD is "badder" than WORSE. My counts: BAD: 26, WORSE: 22, BETTER: 21. – Mark Stewart Nov 13 '19 at 19:41
  • I just added a "WORST_YET" modification where I define the `List list` in the *outer loop* instead of *before* the outer loop, and it triggered 39 garbage collections. – Mark Stewart Nov 13 '19 at 19:50
39 votes

Cause of the error according to the Java [8] Platform, Standard Edition Troubleshooting Guide (emphasis and line breaks added):

[...] "GC overhead limit exceeded" indicates that the garbage collector is running all the time and Java program is making very slow progress.

After a garbage collection, if the Java process is spending more than approximately 98% of its time doing garbage collection and if it is recovering less than 2% of the heap and has been doing so for the last 5 (compile time constant) consecutive garbage collections, then a java.lang.OutOfMemoryError is thrown. [...]

  1. Increase the heap size if current heap is not enough.
  2. If you still get this error after increasing heap memory, use memory profiling tools like MAT (Memory Analyzer Tool), VisualVM, etc. and fix memory leaks.
  3. Upgrade the JDK to the latest version (1.8.x), or at least 1.7.x, and use the G1GC algorithm. The throughput goal of the G1 GC is 90 percent application time and 10 percent garbage collection time.
  4. Apart from setting heap memory with -Xms1g -Xmx2g, try

    -XX:+UseG1GC -XX:G1HeapRegionSize=n -XX:MaxGCPauseMillis=m  
    -XX:ParallelGCThreads=n -XX:ConcGCThreads=n
    

Have a look at some more related questions regarding G1GC
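To confirm which collector actually ended up active after adding flags like `-XX:+UseG1GC`, you can list the registered collector beans. A small sketch (the class name is mine):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class ShowCollectors {
    public static void main(String[] args) {
        // With -XX:+UseG1GC this typically prints "G1 Young Generation"
        // and "G1 Old Generation"; other collectors report other names.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s (collections so far: %d)%n",
                    gc.getName(), gc.getCollectionCount());
        }
    }
}
```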

answered by Ravindra babu
32 votes

Just increase the heap size a little by setting this option under

Run → Run Configurations → Arguments → VM arguments

-Xms1024M -Xmx2048M

-Xms sets the minimum (initial) heap size.

-Xmx sets the maximum heap size.

answered by chopss
16 votes

Try this: open the build.gradle file and add the following:

android {
    dexOptions {
        javaMaxHeapSize = "4g"
    }
}
answered by ali ozkara

  • Works great for the simulator. Any idea how this affects real devices? i.e. is this a good idea or is it just masking the issue? Thanks. – Joshua Pinter Feb 26 '17 at 18:16
13 votes

For me, the following steps worked:

  1. Open the eclipse.ini file
  2. Change

    -Xms40m
    -Xmx512m
    

    to

    -Xms512m
    -Xmx1024m
    
  3. Restart Eclipse

See here

answered by Sunil Kumar Sahoo
13 votes

The following worked for me. Just add the following snippet:

android {
    compileSdkVersion 25
    buildToolsVersion '25.0.1'

    defaultConfig {
        applicationId "yourpackage"
        minSdkVersion 10
        targetSdkVersion 25
        versionCode 1
        versionName "1.0"
        multiDexEnabled true
    }

    dexOptions {
        javaMaxHeapSize "4g"
    }
}
answered by Hoshouns

  • Yes, when using Gradle :) – Alex Mar 17 '17 at 09:52
  • How could you even think this is a solution to his question *in general*? You set your heap size to 4g, which is totally arbitrary, in a Gradle configuration for Android *facepalm*. – Julian L. Nov 16 '18 at 17:02
9 votes

Increase javaMaxHeapSize in your build.gradle (Module: app) file from

dexOptions {
    javaMaxHeapSize "1g"
}

to:

dexOptions {
    javaMaxHeapSize "4g"
}
answered by saigopi.me
8 votes

Solved: just add

org.gradle.jvmargs=-Xmx1024m

to gradle.properties; if the file does not exist, create it.

answered by reza_khalafi
6 votes

You can also increase memory allocation and heap size by adding this to your gradle.properties file:

org.gradle.jvmargs=-Xmx2048M -XX\:MaxHeapSize\=32g

It doesn't have to be 2048M and 32g; make it as big as you want.

answered by John Doe
5 votes

Java heap size descriptions (-Xms, -Xmx, -Xmn)

-Xms size in bytes

Example : java -Xms32m

Sets the initial size of the Java heap. The default size is 2097152 (2MB). The values must be a multiple of, and greater than, 1024 bytes (1KB). (The -server flag increases the default size to 32M.)

-Xmn size in bytes

Example : java -Xmn2m

Sets the initial Java heap size for the Eden generation. The default value is 640K. (The -server flag increases the default size to 2M.)

-Xmx size in bytes

Example : java -Xmx2048m

Sets the maximum size to which the Java heap can grow. The default size is 64M. (The -server flag increases the default size to 128M.) The maximum heap limit is about 2 GB (2048 MB) on a 32-bit JVM.

Java memory arguments (xms, xmx, xmn) formatting

When setting the Java heap size, you should specify your memory argument using one of the letters "m" or "M" for MB, or "g" or "G" for GB. Your setting won't work if you specify "MB" or "GB". Valid arguments look like this:

-Xms64m or -Xms64M

-Xmx1g or -Xmx1G

You can also use 2048m to specify 2 GB.

Also, make sure you use whole numbers when specifying your arguments. Using -Xmx512m is a valid option, but -Xmx0.5g will cause an error.

This reference can be helpful for someone.
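A quick way to check that an -Xms/-Xmx setting actually took effect is to ask the Runtime for its heap figures. A sketch (the class name is mine; run it with e.g. `java -Xmx512m HeapInfo`):

```java
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // maxMemory() roughly reflects -Xmx; totalMemory() is the currently
        // committed heap, and freeMemory() the unused part of that.
        System.out.printf("max heap:  %d MB%n", rt.maxMemory() / mb);
        System.out.printf("committed: %d MB%n", rt.totalMemory() / mb);
        System.out.printf("free:      %d MB%n", rt.freeMemory() / mb);
    }
}
```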

answered by Phoenix
2 votes

To increase the heap size in IntelliJ IDEA, follow these instructions; it worked for me.

For Windows Users,

Go to the location where the IDE is installed and search for the following:

idea64.exe.vmoptions

Edit the file and add the following.

-Xms512m
-Xmx2024m
-XX:MaxPermSize=700m
-XX:ReservedCodeCacheSize=480m

That's it!

answered by Du-Lacoste
1 vote

I'm working in Android Studio and encountered this error when trying to generate a signed APK for release. I was able to build and test a debug APK with no problem, but as soon as I wanted to build a release APK, the build process would run for minutes on end and then finally terminate with "Error java.lang.OutOfMemoryError: GC overhead limit exceeded". I increased the heap sizes for both the VM and the Android DEX compiler, but the problem persisted.

Finally, after many hours and mugs of coffee, it turned out that the problem was in my app-level build.gradle file: I had the minifyEnabled parameter for the release build type set to false, consequently running Proguard steps on code that hadn't been through the code-shrinking process (see https://developer.android.com/studio/build/shrink-code.html). I changed the minifyEnabled parameter to true and the release build executed like a dream :)

In short, I had to change my app-level build.gradle file from:

//...

buildTypes {
    release {
        minifyEnabled false
        proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        signingConfig signingConfigs.sign_config_release
    }
    debug {
        debuggable true
        signingConfig signingConfigs.sign_config_debug
    }
}

//...

to

//...

buildTypes {
    release {
        minifyEnabled true
        proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        signingConfig signingConfigs.sign_config_release
    }
    debug {
        debuggable true
        signingConfig signingConfigs.sign_config_debug
    }
}

//...
1 vote

You can make changes in the server settings to increase the memory size available to the process.

You can also change the Java heap from the command line: open cmd and run set _java_opts=-Xmx2g, using 2g (2 gigabytes) or more depending on the complexity of your program.

Try to use fewer constants and temporary variables.

answered by Rahul Jain
1 vote

I got this error while working with Oracle WebLogic Server. I am sharing my answer for reference in case someone ends up here looking for the solution.

If you are trying to bring up the Oracle WebLogic Server and get this error, you just have to increase the initial and maximum heap size set for running the server.

Go to C:\Oracle\Middleware\Oracle_Home\user_projects\domains\wl_server\bin

Open setDomainEnv.cmd

Check the set USER_MEM_ARGS value; if it is lower, then set

set USER_MEM_ARGS="-Xms128m -Xmx8192m ${MEM_DEV_ARGS} ${MEM_MAX_PERM_SIZE}"

This means that your initial heap size is set to 128 MB and the max heap size to 8 GB. Now just save the file and restart the server. If that doesn't resolve the issue, try increasing the size or look for ways to optimize the service.

For reference, check this link: https://docs.oracle.com/cd/E49933_01/server.770/es_install/src/tins_postinstall_jvm_heap.html

Edit: check whether you can see the updated Java args while running the server. If they appear as before, replace the shown value in setDomainEnv.cmd with a simple search and replace.

0 votes

You need to increase the memory size in JDeveloper. Go to setDomainEnv.cmd and set:

set WLS_HOME=%WL_HOME%\server
set XMS_SUN_64BIT=256
set XMS_SUN_32BIT=256
set XMX_SUN_64BIT=3072
set XMX_SUN_32BIT=3072
set XMS_JROCKIT_64BIT=256
set XMS_JROCKIT_32BIT=256
set XMX_JROCKIT_64BIT=1024
set XMX_JROCKIT_32BIT=1024

if "%JAVA_VENDOR%"=="Sun" (
    set WLS_MEM_ARGS_64BIT=-Xms256m -Xmx512m
    set WLS_MEM_ARGS_32BIT=-Xms256m -Xmx512m
) else (
    set WLS_MEM_ARGS_64BIT=-Xms512m -Xmx512m
    set WLS_MEM_ARGS_32BIT=-Xms512m -Xmx512m
)

and

set MEM_PERM_SIZE_64BIT=-XX:PermSize=256m
set MEM_PERM_SIZE_32BIT=-XX:PermSize=256m

if "%JAVA_USE_64BIT%"=="true" (
    set MEM_PERM_SIZE=%MEM_PERM_SIZE_64BIT%
) else (
    set MEM_PERM_SIZE=%MEM_PERM_SIZE_32BIT%
)

set MEM_MAX_PERM_SIZE_64BIT=-XX:MaxPermSize=1024m
set MEM_MAX_PERM_SIZE_32BIT=-XX:MaxPermSize=1024m
answered by shashi
0 votes

In NetBeans, it may be helpful to set a max heap size. Go to Run => Set Project Configuration => Customise. In the Run section of the window that pops up, go to VM Options and fill in -Xms2048m -Xmx2048m. That could solve the heap size problem.

answered by htlbydgod
0 votes

I don't know if this is still relevant or not, but I just want to share what worked for me:

Update the Kotlin version to the latest available: https://blog.jetbrains.com/kotlin/category/releases/

And it's done.

answered by androidStud
0 votes

@Buhb I reproduced this in a normal Spring Boot web application, within its main method. Here is the code:

public static void main(String[] args) {
    SpringApplication.run(DemoServiceBApplication.class, args);
    LOGGER.info("hello.");
    int len = 0, oldlen=0;
    Object[] a = new Object[0];
    try {
        for (; ; ) {
            ++len;
            Object[] temp = new Object[oldlen = len];
            temp[0] = a;
            a = temp;
        }
    } catch (Throwable e) {
        LOGGER.info("error: {}", e.toString());
    }
}

The sample code that causes the error is also from the Oracle Java 8 language specification.

answered by spike
-6 votes

Rebooting my MacBook fixed this issue for me.

answered by Thomas