5

I have a critical process running in Java (1.6), with a registered shutdown hook. In some instances where I get an OOM issue (more details below), the process stops suddenly, I don't get any of my logs, and my catch(Throwable x) is not catching the exception.

But the shutdown hook works. So if there were a way to know that the process is going to shut down due to some nasty OOM, I could log the necessary info before exiting.

Is there a way to do this?

About the OOM: Not sure what the exception is because, as I said, it does not get caught. I know it's an OOM because I start the process with

-XX:+HeapDumpOnOutOfMemoryError

and I get a heap dump file. In other cases an exception is caught, and it's a java.lang.OutOfMemoryError: GC overhead limit exceeded. But I'm not sure it's always this case.

EDIT:

In case it is not clear: I am not trying to prevent the OOM, as it can happen for valid reasons in some scenarios; I just want to make sure it is clearly reported in the app's log files.

My question is: is it possible to find out, while in the shutdown hook, that the process is shutting down due to an OOM?

I need to do this programmatically and from the same process.

For now the best approach is to check whether a heap dump file java_pid<pid of process>.hprof exists (I know the pid) with a recent date, and deduce from that there was an OOM. I guess I could also try Runtime.getRuntime().freeMemory() and report the issue if the available memory is very low, but I'm not sure how reliable that is; by the time the process is shutting down it may already have released much of its memory. The file-based approach seems best, I think. A sketch follows below.
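Here's a minimal sketch of that check in a shutdown hook. The dump location, the file name pattern, and the 60-second freshness window are my assumptions: -XX:HeapDumpPath would change the location, and the pid would come from e.g. parsing ManagementFactory.getRuntimeMXBean().getName().

    import java.io.File;

    // Sketch: in the shutdown hook, look for a freshly written heap dump
    // file and deduce that the JVM is exiting because of an OOM.
    public class OomAwareShutdownHook extends Thread {

        private final String pid; // e.g. parsed from RuntimeMXBean.getName()

        public OomAwareShutdownHook(String pid) {
            this.pid = pid;
        }

        public void run() {
            // Default dump location is the working directory,
            // unless -XX:HeapDumpPath says otherwise
            File dump = new File("java_pid" + pid + ".hprof");
            if (dump.exists()
                    && System.currentTimeMillis() - dump.lastModified() < 60000L) {
                // Keep this minimal: the heap may still be nearly exhausted
                System.err.println("Process shutting down after OutOfMemoryError, heap dump at "
                        + dump.getAbsolutePath());
            }
        }
    }

You'd register it with Runtime.getRuntime().addShutdownHook(new OomAwareShutdownHook(pid)).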

Persimmonium
  • So your question is: "What happens?" - right? – dacwe Nov 10 '10 at 15:27
  • Don't be angry with the question, but do you catch(Exception x) or catch(Throwable x)? As OOM is a Throwable, I'm sure you know that and it's probably not likely to be caught if it happens, but still worth checking... – Eran Medan Nov 10 '10 at 16:28
  • as I mention (maybe it's not clear, I'll edit again) I have a catch(Throwable x) but it is not having any effect. thanks – Persimmonium Nov 10 '10 at 16:34

7 Answers

3

OOMs are tricky because if the JVM is out of memory it might not run your exception handling code, since a new OOM can be thrown while handling the first.

Try setting a default uncaught exception handler. It will receive all uncaught exceptions.
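A minimal sketch of that idea, meant to be paired with the shutdown hook (the OOM_SEEN flag and install() method are illustrative names, not a standard API); the flag is pre-allocated so the handler itself needs no new memory:

    import java.util.concurrent.atomic.AtomicBoolean;

    public class OomFlag {

        // Pre-allocated so setting it requires no allocation when the heap is full
        public static final AtomicBoolean OOM_SEEN = new AtomicBoolean(false);

        public static void install() {
            Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
                public void uncaughtException(Thread t, Throwable e) {
                    if (e instanceof OutOfMemoryError) {
                        OOM_SEEN.set(true); // checked later by the shutdown hook
                    }
                }
            });
        }
    }

The shutdown hook can then check OomFlag.OOM_SEEN.get() and log accordingly.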

Peter Knego
  • This is new to me. Do you have references? In my experience, OOME is a normal error thrown, propagated and caught like any error. It could of course happen that during handling the error, another OOME occurs. – Christian Semrau Nov 10 '10 at 20:10
  • Exactly. I was not clear enough - it could throw a new OOM and not run the code one intended. I edited the post. – Peter Knego Nov 10 '10 at 20:49
2

You can probably run another process that monitors the log file for OOMs (or monitors whether the process has been killed) and then restarts it.

Perhaps running your app as a Unix daemon or a Windows service would be more appropriate.

But what about investigating the memory leak with profiling tools instead?

jvisualvm is a good one

Alois Cochard
2

You might want to look into the -XX:OnOutOfMemoryError="cmd_with_pid_arg %p" option (the command string is similar to -XX:OnError).
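For example (log_oom.sh is a hypothetical script that would append a marker line to your application's log; the JVM substitutes the pid for %p):

    java -XX:+HeapDumpOnOutOfMemoryError \
         -XX:OnOutOfMemoryError="/path/to/log_oom.sh %p" \
         -jar yourapp.jar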

kschneid
1

Use monitoring tools like jvisualvm or jconsole.

dacwe
0

You can (technically) catch OutOfMemoryErrors, but it's not certain that you'll be able to execute the code in the catch block if there's no memory left.

Maybe it's worth a try to (1) catch the OOM, (2) trigger garbage collection (System.gc()) and (3) try to write something to the log or console. No guarantee, but it won't break anything. A sketch follows below.
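Something like this (doWork() stands in for the real workload; the message string is pre-allocated to improve the odds that the catch block can run with an exhausted heap):

    public class OomCatchSketch {

        // Pre-allocated: no new objects are needed inside the catch block
        private static final String OOM_MSG = "OutOfMemoryError caught, shutting down";

        public static void main(String[] args) {
            try {
                doWork();
            } catch (OutOfMemoryError e) {
                System.gc(); // may reclaim just enough memory to log
                System.err.println(OOM_MSG);
                throw e; // rethrow so the process still terminates abnormally
            }
        }

        private static void doWork() {
            // placeholder for the real, potentially memory-hungry work
        }
    }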

Andreas Dolk
  • As far as I understand, System.gc() should be automatically called before the OOM; the problem is that after or during the gc the OOM still occurs – Eran Medan Nov 10 '10 at 16:29
0

You should solve the problem rather than attempt to compensate for it.

The heap dump will show you the object types that are consuming the most memory. You should be able to figure out where those objects are allocated, or why they're remaining after they should be discarded.

As for the specific error that you're receiving, take a look at this SO question: Error java.lang.OutOfMemoryError: GC overhead limit exceeded -- it seems that the simplest solution will be to increase your heap size.

Anon
  • the problem here is that the logs don't show anything about the OOM issue, and that is what I am trying to solve. The OOM can happen in some valid use cases and I just need to signal it in the log files. – Persimmonium Nov 10 '10 at 16:28
  • @raticulin - No, OOM can *not* happen in "valid" use cases. In any app. If you have use cases that can exceed the amount of memory available, you need to recognize that before actually running out of memory. Perhaps you should ask a question about how to achieve that ... – Anon Nov 10 '10 at 16:50
  • And a hint: Soft References are one way to write code that gracefully handles a "need too much memory" situation. – Anon Nov 10 '10 at 16:52
0

Again, using jvisualvm (JDK 6, in the bin folder) as suggested by others, or other profiling tools, is the best way to solve the issue rather than handle it. But assuming you will separately investigate the OOM causes and try to eradicate them, I would consider the following POC (see also Alois's answer).

How about running a Java process that wraps the call to the OOM-throwing process?

You can capture whatever is sent to the output stream of the called process, and see if there is a consistent exit code / stack trace you can use to identify OOMs. A sketch follows below.
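A rough sketch of such a wrapper (yourapp.jar is a placeholder for however you actually launch the child):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class OomWatchingWrapper {

        public static void main(String[] args) throws Exception {
            ProcessBuilder pb = new ProcessBuilder(
                    "java", "-XX:+HeapDumpOnOutOfMemoryError", "-jar", "yourapp.jar");
            pb.redirectErrorStream(true); // merge stderr into stdout

            Process child = pb.start();
            BufferedReader out = new BufferedReader(
                    new InputStreamReader(child.getInputStream()));

            boolean oomSeen = false;
            String line;
            while ((line = out.readLine()) != null) {
                System.out.println(line); // pass the child's output through
                if (line.contains("java.lang.OutOfMemoryError")) {
                    oomSeen = true;
                }
            }

            int exitCode = child.waitFor();
            if (oomSeen) {
                System.err.println("Child (exit code " + exitCode
                        + ") died with an OutOfMemoryError");
            }
        }
    }

Restart logic (as in Alois's answer) could be added around pb.start().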

I'm sure there are more approaches, but this seems to me like a good, programmatic starting point

Eran Medan