
In one of our Java applications we got an `OutOfMemoryError: GC overhead limit exceeded`.

We use HashMaps in some places to store data. From the logs I can identify that the error is reproduced at the same place every time.

I wanted to ask: does the garbage collector spend more time clearing up these HashMaps?

Upon looking at the code (I can't share it here), I have found that there is a HashMap created like

`HashMap topo = new HashMap();`

but this HashMap is never used. Is this a kind of memory leak in my application?

This HashMap is created inside a method which does some processing, and it is not used elsewhere; the method is also accessed by multiple threads, say 20. In such a case, would creating the HashMap as above cause the garbage collector to spend more time recovering heap and throw an OOME?

Please let me know if you need some more details.

Saurav
  • If the `topo` reference doesn't escape the scope in which it was declared then no, it does not contribute to the creation of a memory leak. – Theodoros Chatzigiannakis Jul 15 '13 at 10:46
  • Will the garbage collector face problems clearing these HashMaps if there are a large number of them? – Saurav Jul 15 '13 at 10:50
  • Use a profiler to look at memory consumption, or increase your maximum memory size. If you don't measure your program you are just guessing. Using lots and lots of HashMaps is not good for performance/memory, but it may not be your biggest problem. – Peter Lawrey Jul 15 '13 at 11:01
  • I think you need to have a look at the contents of the heap so you can see where the leak is happening. In a big system it's generally _really_ hard to find a leak from reading the code. Memory analysis tools will at least point you in the right direction. – DaveH Jul 15 '13 at 11:05
  • @DaveHowes I collected the heap dump on OOME and found that the LinkedBlockingQueue we use in the ThreadPoolExecutor occupies 93% of the total heap allocated to the process (the total heap allocated is -Xmx3500m). Is this happening because tasks are produced much faster than the threads in the pool can execute them? Could this Java bug be responsible: bugs.sun.com/view_bug.do?bug_id=6806875? How should such a situation be handled? – Saurav Aug 29 '13 at 07:10

3 Answers


> In one of our Java applications we got an OutOfMemoryError: GC overhead limit exceeded. We use HashMaps in some places to store data. From the logs I can identify that the error is reproduced at the same place every time.

If the HashMap keeps growing and is most likely marked as static, meaning you keep adding entries to it and never remove any, then one fine day it will lead to an OutOfMemoryError.
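
To illustrate that pattern, here is a minimal sketch of an ever-growing static map (the class, field, and method names are hypothetical, not taken from the application in question):

```java
import java.util.HashMap;
import java.util.Map;

public class RequestTracker {
    // Application-lifetime map: entries are added on every call but never
    // removed, so the map grows until the heap is exhausted.
    private static final Map<String, byte[]> CACHE = new HashMap<>();

    public static void handle(String requestId) {
        // Each request leaves a payload behind that nothing ever clears.
        CACHE.put(requestId, new byte[1024]);
    }
}
```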

> I wanted to ask: does the garbage collector spend more time clearing up these HashMaps?

The garbage collector spends time on objects that are unreferenced, weakly referenced, or softly referenced. Wherever it finds such objects, it clears them depending on the need.
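
As a rough, hypothetical illustration of the difference between a strong and a weak reference (not code from the application in question):

```java
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;

public class ReferenceDemo {
    public static void main(String[] args) {
        // Strongly referenced: stays reachable as long as 'strong' does.
        Map<String, Integer> strong = new HashMap<>();

        // Only weakly referenced: the collector is free to reclaim it.
        WeakReference<Map<String, Integer>> weak =
                new WeakReference<>(new HashMap<String, Integer>());

        System.gc(); // only a hint; behaviour is not guaranteed

        System.out.println(strong);     // always prints {}
        System.out.println(weak.get()); // may print null if the map was reclaimed
    }
}
```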

> Upon looking at the code (I can't share it here), I found that there is a HashMap created like `HashMap topo = new HashMap();`, but this HashMap is never used. Is this a kind of memory leak in my application?

> This HashMap is created inside a method which does some processing, and it is not used elsewhere; the method is also accessed by multiple threads, say 20. In such a case, would creating the HashMap as above cause the garbage collector to spend more time recovering heap and throw an OOME?

If the HashMap is local to a method, and the method exits after doing some processing, then it should be garbage collected soon after the method exits. Since the HashMap is local to the method, each thread has its own separate map, and once a thread finishes executing the method, that map is eligible for GC.
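
A minimal sketch of that situation, assuming a hypothetical `process` method (only the variable name `topo` is taken from the question):

```java
import java.util.HashMap;
import java.util.Map;

public class Processor {
    // May be called concurrently by many threads (e.g. 20).
    public int process(String input) {
        // Local to this invocation: every calling thread gets its own map.
        Map<String, Integer> topo = new HashMap<>();
        topo.put(input, input.length());

        // Once the method returns, nothing references 'topo' any more,
        // so the map is eligible for garbage collection.
        return topo.get(input);
    }
}
```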

Juned Ahsan
  • Thanks... Suppose there are a large number of such unused HashMaps; will the garbage collector spend time recovering heap, and is it possible that it will throw an OOME? – Saurav Jul 15 '13 at 10:53
  • @Saurav Simple references to empty HashMaps will hardly eat substantial heap space. – Juned Ahsan Jul 15 '13 at 10:54
  • Short answer is, you're leaking elsewhere. You will be holding or accumulating actual data for the lifetime of the application. A transitory HashMap, local to the method & not referenced outside, will not be the problem. – Thomas W Jul 15 '13 at 10:56

You need to look for long-lifetime objects & structures, which might be the actual problem, rather than wildly grasping at some clueless manager's idea of a potential problem.


Look out especially for static or application-lifetime Maps or Lists, which are added to during the application's lifetime rather than just at initialization. It will most likely be one, or several, of these that are accumulating.

Note also that inner classes (Listeners, Observers) can capture references to their containing scope & prevent these from being GC'ed indefinitely.
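
A small, hypothetical sketch of how an inner-class listener can pin its enclosing object in memory (all names here are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

class EventBus {
    private final List<Runnable> listeners = new ArrayList<>();

    void register(Runnable listener) {
        listeners.add(listener);
    }
}

class Session {
    private final byte[] bigBuffer = new byte[1_000_000];

    void attach(EventBus bus) {
        // The anonymous inner class holds an implicit reference to this Session,
        // so as long as the (long-lived) EventBus keeps the listener registered,
        // the Session and its bigBuffer cannot be garbage collected.
        bus.register(new Runnable() {
            @Override
            public void run() {
                System.out.println("session holds " + bigBuffer.length + " bytes");
            }
        });
    }
}
```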

Thomas W
  • I collected the heap dump on OOME and found that the LinkedBlockingQueue we use in the ThreadPoolExecutor occupies 93% of the total heap allocated to the process (the total heap allocated is -Xmx3500m). Is this happening because tasks are produced much faster than the threads in the pool can execute them? Could this Java bug be responsible: http://bugs.sun.com/view_bug.do?bug_id=6806875? How should such a situation be handled? – Saurav Aug 27 '13 at 10:22
  • That JDK bug shouldn't cause OOME -- it should just degrade performance and cause "full GC" when a "minor GC" would otherwise be sufficient. – Thomas W Aug 27 '13 at 10:50
  • Why don't you wrap the entry (addition) and exit (removal) points for the LinkedBlockingQueue, and count how many tasks in/out and queued? Log this & see if it's the case, and -- if so -- then you can look into *why* so many tasks are enqueued. – Thomas W Aug 27 '13 at 10:52
  • In order to solve the problem, I have used an ArrayBlockingQueue(50), and there are a maximum of 20 threads in the thread pool. Since I have used a bounded queue, subsequent tasks will get rejected, so I used a RejectedExecutionHandler to handle the rejected tasks with the default handler policy CallerRunsPolicy, since I don't want to lose tasks. I am looking forward to testing this solution. Please provide your comments. Will it be helpful? – Saurav Aug 29 '13 at 07:07

> Please let me know if you need some more details.

You need some more details. You need to profile your application to see what objects are consuming the heap space.

Then, if some of the sizeable objects are no longer actually being used by your application, you have a memory leak. Look at the references to these objects to find out why they're still being held in memory when they're no longer useful, and then modify your code to no longer hold these references.

Alternatively, you may find that all of the objects in memory are what you would expect as your working set. Then either you need to increase the heap size, or refactor your application to work with a smaller working set (e.g. streaming events one at a time rather than reading an entire list; storing the last session details in the database rather than in memory; etc.).
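
As a sketch of one way to keep such a working set bounded in the producer/consumer setup that comes up in the comments below, the executor can be given a bounded queue with `CallerRunsPolicy` so that producers are throttled when the queue fills up (the sizes and names here are illustrative assumptions, not a tested recommendation for this particular application):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedExecutorExample {
    public static void main(String[] args) {
        // At most 20 worker threads and at most 50 queued tasks; when the
        // queue is full, CallerRunsPolicy makes the submitting thread run
        // the task itself, which slows producers down instead of letting
        // pending tasks pile up on the heap.
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                20, 20, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(50),
                new ThreadPoolExecutor.CallerRunsPolicy());

        for (int i = 0; i < 1_000; i++) {
            final int taskId = i;
            executor.execute(() -> System.out.println("task " + taskId));
        }
        executor.shutdown();
    }
}
```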

Andrzej Doyle
  • I collected the heap dump on OOME and found that the LinkedBlockingQueue we use in the ThreadPoolExecutor occupies 93% of the total heap allocated to the process (the total heap allocated is -Xmx3500m). Is this happening because tasks are produced much faster than the threads in the pool can execute them? Could this Java bug be responsible: http://bugs.sun.com/view_bug.do?bug_id=6806875? How should such a situation be handled? – Saurav Aug 27 '13 at 10:31
  • @Saurav Again, this is something that you'll have to look at, but it seems likely that you're adding tasks too fast. Are the tasks in that queue all pending execution? If so, then indeed you've generated 3.3GB worth of data that needs to be run in future but hasn't been yet. There's no simple answer in this case; either you throttle the speed at which tasks are created, increase the speed that they're processed, or increase the heap size. Which one you choose will depend on your requirements and the details of your situation. – Andrzej Doyle Aug 27 '13 at 14:16
  • In order to solve the problem, I have used an ArrayBlockingQueue(50), and there are a maximum of 20 threads in the thread pool. Since I have used a bounded queue, subsequent tasks will get rejected, so I used a RejectedExecutionHandler to handle the rejected tasks with the default handler policy CallerRunsPolicy, since I don't want to lose tasks. I am looking forward to testing this solution. Please provide your comments. Will it be helpful? – Saurav Aug 29 '13 at 07:05
  • @Saurav That really depends on your situation. The issue seems to be that you're generating tasks faster than they can be executed, which (if you don't lose any) will always cause you to run out of memory eventually. You need to throttle the speed that tasks are created - and "tying up" a producer by making them run the task *may* do this. However I don't think it's ideal for two reasons... – Andrzej Doyle Aug 29 '13 at 09:50
  • Firstly you have a thread pool that runs tasks - but now tasks *might* be run in completely arbitrary other threads. This makes reasoning about how your app behaves more difficult. Secondly, if producer threads are created on-demand, getting one producer to execute the task won't slow down the production of new tasks, which is what you really need. I would look at actually throttling task creation. (Or if you want to do something similar to your proposed solution, use a blocking `offer` call to put the task on the queue - it still slows down the producer but tasks are all run on the pool.) – Andrzej Doyle Aug 29 '13 at 09:53
  • @Saurav -- Account for the mismatched number/rate of tasks being generated & consumed first. Your problem is probably there. Otherwise you're putting a bucket under a leak, without checking why there's a hole in the roof. First rule of fixing: understand the problem properly & **identify the actual problem**. Your queue's not the problem, **why you're putting so much stuff in** is. – Thomas W Aug 30 '13 at 06:11
  • I'm voting your question down, since you seem to keep coming up with bitsy technical hacks & stupid crud without ever actually properly analyzing what your application is doing (which probably causes all these problems). -1. – Thomas W Aug 30 '13 at 06:15