
If I deliberately create an application that crunches data while suffering from memory leaks, I notice that the free memory as reported by, say:

Runtime.getRuntime().freeMemory()

starts oscillating between 1 and 2 MB of free memory.

The application then enters a loop that goes: GC, process some data, GC, etc., but because the GC happens so often, the application basically isn't doing much else anymore. Even the GUI takes ages to respond (and, no, I'm not talking about EDT issues here; it's really the VM stuck in some endless GC'ing mode).

And I was wondering: is there a way to programmatically detect that the JVM doesn't have enough memory anymore?

Note that I'm not talking about out-of-memory errors nor about detecting the memory leak itself.

I'm talking about detecting that an application is running so low on memory that it is basically calling the GC all the time, leaving hardly any time to do something else (in my hypothetical example: crunching data).

Would it work, for example, to repeatedly read how much memory is available during, say, one minute, and if the readings have been "oscillating" between values that are all below, say, 4 MB, conclude that there's been a leak and that the application has become unusable?
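
Just to make the idea concrete, here is a rough sketch of the kind of check I have in mind (the 4 MB threshold, the one-second sampling interval and the class name are all arbitrary):

    import java.util.concurrent.TimeUnit;

    // Rough sketch: sample the free heap once per second for a minute and
    // report "probably GC-thrashing" only if every reading stayed below 4 MB.
    public class LowMemoryProbe {

        private static final long THRESHOLD = 4L * 1024 * 1024; // 4 MB, arbitrary
        private static final int SAMPLES = 60;                   // ~1 minute at 1 s intervals

        public static boolean looksStarved() throws InterruptedException {
            Runtime rt = Runtime.getRuntime();
            for (int i = 0; i < SAMPLES; i++) {
                // note: this ignores heap the JVM has not committed yet
                if (rt.freeMemory() > THRESHOLD) {
                    return false; // at least one reading had breathing room
                }
                TimeUnit.SECONDS.sleep(1);
            }
            return true; // every reading was below the threshold
        }
    }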

SyntaxT3rr0r
  • I think a better idea would be to fix your code. – San Jacinto Nov 24 '10 at 23:40
  • @San Jacinto: the shortsightedness of your comment is appealing and doesn't contribute anything to SO. You probably want to re-read the question and answer in case you have anything valuable to provide on SO. – SyntaxT3rr0r Nov 24 '10 at 23:43
  • Only when it's your code that is acting up. Not all third party software is created equal, not by a long shot. I thought there was a GC hook you could plug into... let me see... – Mark Storer Nov 24 '10 at 23:44
  • @Mark Storer: exactly... Moreover, if only for the theory, to me it's an interesting question. But I expected comments like the one San Jacinto made to pop up, this is SO after all ;) – SyntaxT3rr0r Nov 24 '10 at 23:46
  • @webinator: I find your consideration of shortsightedness as appealing to be appalling. :) – Paul Sonier Nov 24 '10 at 23:48
  • @Webinator Honestly, it was a joke. I will remove it if you'd like. Please let me know. – San Jacinto Nov 24 '10 at 23:48
  • @Webinator - How is it a memory leak if the system is able to reclaim it? The JVM is just trying to do its best within the parameters you set for it - your options in this case are to up the allowed memory or switch your garbage collection algorithm. – CurtainDog Nov 24 '10 at 23:49
  • Nope. No such animal. At least none that I could find after poking around the javadocs for a few minutes. – Mark Storer Nov 24 '10 at 23:52
  • @San Jacinto: no, no worries, no need to remove it, it's funny and my non-native-English *appealing*/*appalling* SNAFU made it even funnier :) – SyntaxT3rr0r Nov 25 '10 at 22:40

7 Answers


And I was wondering: is there a way to programmatically detect that the JVM doesn't have enough memory anymore?

I don't think so. You can find out roughly how much heap memory is free at any given instant, but AFAIK you cannot reliably determine when you are running out of memory. (Sure, you can do things like scraping the GC log files, or trying to pick patterns in the free memory oscillations. But these are likely to be unreliable and fragile in the face of JVM changes.)

However, there is another (and IMO better) approach.

In recent versions of Hotspot (version 1.6 and later, I believe), you can tune the JVM / GC so that it will give up and throw an OOME sooner. Specifically, the JVM can be configured to check that:

  • the ratio of free heap to total heap is greater than a given threshold after a full GC, and/or
  • the time spent running the GC is less than a certain percentage of the total.

The relevant JVM parameters are "UseGCOverheadLimit", "GCTimeLimit" and "GCHeapFreeLimit". Unfortunately, Hotspot's tuning parameters are not well documented on the public web, but these ones are all listed here.

Assuming that you want your application to do the sensible thing ... give up when it doesn't have enough memory to run properly anymore ... then just launch the JVM with a smaller "GCTimeLimit" or a larger "GCHeapFreeLimit" than the defaults.
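
For example, something along these lines (the values and the main class name are only illustrative; if I recall correctly, the defaults are 98 and 2 respectively):

    java -XX:+UseGCOverheadLimit -XX:GCTimeLimit=90 -XX:GCHeapFreeLimit=5 MyApp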

EDIT

I've discovered that the MemoryPoolMXBean API allows you to look at the peak usage of individual memory pools (heaps), and set thresholds. However, I've never tried this, and the APIs have lots of hints that suggest that not all JVMs implement the full API. So, I would still recommend the HotSpot tuning option approach (see above) over this one.
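
A rough, untested sketch of what that could look like (the class name is arbitrary, the caller picks the threshold fraction, and whether the collection usage threshold is supported at all depends on the pool and the JVM):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryType;

    // Sketch: arm a collection-usage threshold on every heap pool that supports
    // it, so the application can later poll isCollectionUsageThresholdExceeded()
    // (or register for the threshold-exceeded notification).
    public class CollectionThresholds {

        public static void arm(double fraction) {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if (pool.getType() == MemoryType.HEAP
                        && pool.isCollectionUsageThresholdSupported()) {
                    long max = pool.getUsage().getMax(); // -1 if undefined for this pool
                    if (max > 0) {
                        pool.setCollectionUsageThreshold((long) (max * fraction));
                    }
                }
            }
        }
    }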

Stephen C

You can use getHeapMemoryUsage.
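
For example, a minimal sketch (the class name is arbitrary):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class HeapUsageSample {
        public static void main(String[] args) {
            // Heap usage as seen by the MemoryMXBean (all values in bytes).
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            System.out.println("used=" + heap.getUsed()
                    + " committed=" + heap.getCommitted()
                    + " max=" + heap.getMax()); // max is -1 if undefined
        }
    }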

Martin v. Löwis
  • However, this does not give you the information that you need to know to avoid GC thrashing; i.e. how much memory was free / in-use immediately after the last full GC. For that you'd need to use the threshold / notification methods of MemoryPoolMXBean ... if supported. – Stephen C Nov 25 '10 at 07:13

I see two attack vectors.

The first is to monitor your memory consumption.

When you more or less constantly use a large portion of the available memory, it is very likely that you have a memory leak (or are simply using too much memory). The VM will constantly try to free some memory without much success => constantly high memory usage.

You need to distinguish that from a large zigzag pattern, which often happens without being an indicator of a memory problem. Basically you use more and more memory, but when the GC finds time to do its job it finds lots of garbage to throw out, so everything is fine.

The other attack vector is to monitor how often the GC runs and how much memory it reclaims. If it runs often with only small gains in memory, it is likely you have a problem.

I don't know if you can access this kind of information directly from your program. But if nothing else, I think you can specify parameters at startup that make the GC log its activity to a file, which in turn could be parsed.
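
Something along these lines, for instance (HotSpot flags; the log file name and the main class are just placeholders):

    java -verbose:gc -Xloggc:gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps MyApp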

Jens Schauder
  • +1... About the large zigzag: that's why I was thinking of only considering that there's an issue if the readings are all very close together and all below 'X' MB. – SyntaxT3rr0r Nov 24 '10 at 23:59

What you could do is spawn a thread that wakes up periodically and calculates the amount of used memory and records the result. Then you can do regression analysis on the result to estimate the rate of memory growth in your application. If you know the rate of growth, and the maximum amount of memory, you can predict (with some confidence) when your application will run out of memory.
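
A rough sketch of that idea (the class name, the 5-second sampling interval and the simple least-squares fit are arbitrary choices):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Rough sketch: sample used heap every few seconds and estimate the growth
    // rate with a simple least-squares fit (bytes per millisecond).
    public class MemoryTrend {

        private final List<long[]> samples = new ArrayList<long[]>(); // {timeMillis, usedBytes}

        public void start() {
            ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
            ses.scheduleAtFixedRate(new Runnable() {
                public void run() {
                    Runtime rt = Runtime.getRuntime();
                    long used = rt.totalMemory() - rt.freeMemory();
                    synchronized (samples) {
                        samples.add(new long[] { System.currentTimeMillis(), used });
                    }
                }
            }, 0, 5, TimeUnit.SECONDS); // 5 s interval, arbitrary
        }

        /** Slope of used-memory versus time, in bytes per millisecond. */
        public double growthRate() {
            synchronized (samples) {
                int n = samples.size();
                if (n < 2) {
                    return 0.0;
                }
                double meanT = 0, meanU = 0;
                for (long[] s : samples) { meanT += s[0]; meanU += s[1]; }
                meanT /= n; meanU /= n;
                double num = 0, den = 0;
                for (long[] s : samples) {
                    num += (s[0] - meanT) * (s[1] - meanU);
                    den += (s[0] - meanT) * (s[0] - meanT);
                }
                return den == 0 ? 0.0 : num / den;
            }
        }
    }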

Amir Afghani
  • this is a very nice idea too. I'm not sure the growth is linear, that said, but I like this idea a lot. – SyntaxT3rr0r Nov 25 '10 at 00:07
  • Growth will most likely not be linear :) - but I know from experience that if this is implemented well, the results are meaningful. – Amir Afghani Nov 25 '10 at 00:11

I've been using Plumbr for memory leak detection and it's been a great experience, though the licence is very expensive: http://plumbr.eu/

bsautner

You can pass arguments to your Java virtual machine that give you GC diagnostics, such as:

  1. -verbose:gc This flag turns on the logging of GC information. Available in all JVMs.

  2. -XX:+PrintGCTimeStamps Prints the times at which the GCs happen relative to the start of the application.

If you capture that output in a file, your application can periodically read that file and parse it to know when the GCs have happened, so you can work out the average time between GCs.
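
For instance, a rough sketch of the parsing step, assuming the usual -XX:+PrintGCTimeStamps format where each GC line starts with the seconds since JVM start (the class name is arbitrary):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    // Rough sketch: with -XX:+PrintGCTimeStamps each GC line starts with the
    // seconds since JVM start, e.g. "12.345: [GC ...]".  This computes the
    // average interval between the GC events recorded in the log.
    public class GcLogStats {

        public static double averageIntervalSeconds(String path) throws IOException {
            List<Double> times = new ArrayList<Double>();
            BufferedReader in = new BufferedReader(new FileReader(path));
            try {
                String line;
                while ((line = in.readLine()) != null) {
                    int colon = line.indexOf(':');
                    if (colon > 0) {
                        try {
                            times.add(Double.parseDouble(line.substring(0, colon)));
                        } catch (NumberFormatException notATimestamp) {
                            // not a timestamped GC line; skip it
                        }
                    }
                }
            } finally {
                in.close();
            }
            if (times.size() < 2) {
                return Double.NaN;
            }
            return (times.get(times.size() - 1) - times.get(0)) / (times.size() - 1);
        }
    }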

hhafez

I think the JVM does exactly this for you and throws java.lang.OutOfMemoryError: GC overhead limit exceeded. So if you catch OutOfMemoryError and check for that message then you have what you want, don't you?
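
For example (just a sketch; it relies on the exact wording of the HotSpot message, and crunchData() is a placeholder for the real work):

    public class GcOverheadGuard {

        public static void main(String[] args) {
            try {
                crunchData(); // placeholder for the real work
            } catch (OutOfMemoryError e) {
                String msg = e.getMessage();
                if (msg != null && msg.contains("GC overhead limit exceeded")) {
                    // the JVM has decided it is spending too much time in GC
                    System.err.println("Giving up: " + msg);
                    System.exit(1);
                }
                throw e;
            }
        }

        private static void crunchData() {
            // placeholder for the real work
        }
    }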

See this question for more details

Persimmonium
  • oh no no no... In some cases you'll run into the scenario I described: GC, crunching some data, GC, crunching some data. Technically the JVM is still running but, practically, it is as slow as molasses and you might as well kill the app. Sure, in a lot of cases you'll get the OOM, but in some cases you'll get exactly what I described. – SyntaxT3rr0r Nov 25 '10 at 00:09