
I've set up a glassfish cluster with 1 DAS and 2 Node Agents.

The system has TimedObjects which are batched once a day. Per the GlassFish architecture, only one cluster instance is allowed to trigger the timeout event of each Timer created by the TimerService.

My problem concerns the heap size of the cluster instance that triggers the batch job. VisualVM shows that one instance always has a scalable heap size (it increases when the server is loaded and decreases afterwards), while the other instance's heap size always stays at the maximum and never decreases.

It is acceptable to tell me that the heap size is at the maximum because the batch job is huge. But the one question I have is: why does it not decrease after the job is done?

VisualVM shows that the "Used Heap Memory" of the instance which triggers the timeout event decreases after the batch job. But why is its "Heap Size" not scaled down accordingly?

Thank you for your advice! ^^

tee4cute
  • [This question and the answers](http://stackoverflow.com/questions/324499/java-still-uses-system-memory-after-deallocation-of-objects-and-garbage-collectio) might prove useful. – Vineet Reynolds Oct 07 '11 at 06:52

2 Answers


Presumably you have something still referencing the memory. I suggest getting a copy of MAT (the Eclipse Memory Analyzer) and taking a heap dump. From there you can see what has been allocated and what is referencing it.
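If you can reach the server's JVM at all, one way to take the dump without extra tooling is through the HotSpot diagnostic MBean. This is a sketch, not GlassFish-specific; the class name and file path are just examples, and the API is HotSpot-only:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

// Sketch: trigger a heap dump programmatically on a HotSpot JVM.
// The resulting .hprof file can then be opened in MAT.
public class HeapDumper {
    public static void dump(String filePath, boolean liveObjectsOnly) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // liveObjectsOnly=true dumps only reachable objects,
        // which usually makes the file much smaller
        bean.dumpHeap(filePath, liveObjectsOnly);
    }

    public static void main(String[] args) throws Exception {
        dump("heap.hprof", true); // fails if the file already exists
        System.out.println("Dumped heap to heap.hprof");
    }
}
```

The same thing can be done externally with `jmap -dump:live,format=b,file=heap.hprof <pid>` on JDKs of that era.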

Preston
  • Could you suggest other MAT-like tools for me? The heap dump is quite big (about 4GB), and it is on a cloud server which is not physically accessible, so it is impractical to download the heap dump file and analyze it on a local machine. I've tried jhat to run its web server on the remote machine, but it never started and then hung. Some people report that jhat does not support big dump files. (My cloud server is CentOS without a GUI installed, so I can't install GUI applications on it.) – tee4cute Oct 08 '11 at 12:15
  • Can you run a smaller version of your workload such that you create a smaller heap dump? Your cloud doesn't allow you to shell in or ftp? – Preston Oct 08 '11 at 16:45
  • Yes, it allows FTP. But I think it is better if I can see the real data running on the server. I have tried running the application on a local machine and inspecting its heap dump, but I could not get any useful information from it. So the last resort may be downloading the heap dump file from the remote server (this may take all day!). I think there should be another MAT-like tool, similar to jhat, that runs a web server on the remote machine. Thank you very much for your advice! I'll figure it out and post the result here for others. – tee4cute Oct 09 '11 at 04:09
  • Yes, do the heap dump on the cloud and then ftp the file to your local box for analysis. – Preston Oct 09 '11 at 22:26
  • I've downloaded the heap file and inspected it. I dumped the heap at a time when the batch was not running, so it is only around 200MB. I think my application has no leaks. But VisualVM still shows my application's "Heap Size" at 3.8GB while the "Used Heap" is at 200MB (equal to the dump file). Is "Heap Size" the amount of memory allocated from the OS? If so, why does the JVM not decrease it, since only around 200MB is in use? – tee4cute Oct 10 '11 at 02:20
  • Heap size is the amount of memory you have allocated to the JVM. If you want to change it, look at the -Xmx and -Xms params. What you're describing is normal. – Preston Oct 10 '11 at 03:24
  • I know that I can configure the heap size using -Xmx. But the thing I'm still wondering about is why the JVM does not shrink the heap size to match the used heap. And why does only the node that triggers the batch job show this behavior, while the other node's heap size tracks its used heap? Or could it be the JVM's heap-sizing algorithm preserving the memory for the huge batch job? – tee4cute Oct 10 '11 at 05:11
  • These might help: https://forums.oracle.com/forums/thread.jspa?threadID=2146433 and https://forums.oracle.com/forums/thread.jspa?threadID=2215214 – Preston Oct 10 '11 at 15:09
  • This is a very good article which I got from the links you provided (thanks a lot Preston, you're so nice!): http://www.ibm.com/developerworks/java/library/j-nativememory-linux/index.html – tee4cute Oct 11 '11 at 16:22
  • I've finalized and answered my question in another post. But I clicked to accept your answer! Thanks again. – tee4cute Oct 11 '11 at 16:32
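The distinction discussed in the comments above can be observed directly from `Runtime`'s counters. This is a minimal sketch; the mapping to VisualVM's labels is my own reading, not official terminology:

```java
// Sketch: the numbers VisualVM reports roughly map onto Runtime's counters.
// "Heap Size" ~ totalMemory() (heap committed/reserved from the OS),
// the -Xmx cap ~ maxMemory(),
// "Used Heap" ~ totalMemory() - freeMemory().
public class HeapFigures {
    /** Returns {max, committed, used} in bytes. */
    public static long[] snapshot() {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();                // upper bound set by -Xmx
        long committed = rt.totalMemory();        // what the JVM currently holds ("Heap Size")
        long used = committed - rt.freeMemory();  // live objects + uncollected garbage ("Used Heap")
        return new long[] { max, committed, used };
    }

    public static void main(String[] args) {
        long[] s = snapshot();
        System.out.printf("max=%dMB committed=%dMB used=%dMB%n",
                s[0] >> 20, s[1] >> 20, s[2] >> 20);
    }
}
```

Whether `committed` ever shrinks back toward `used` after a spike is up to the garbage collector, which is exactly the behavior asked about here.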

This is the final answer (thanks Preston ^^).

From the article :

http://www.ibm.com/developerworks/java/library/j-nativememory-linux/index.html

I captured these statements to answer my question.

1: Why the JVM's memory needs cannot be predicted up front.

"Runtime environments (JVM) provide capabilities that are driven by some unknown user code; that makes it impossible to predict which resources the runtime environment will require in every situation"

2: This is why the node which triggers the batch job holds on to the memory at all times.

"Reserving native memory is not the same as allocating it. When native memory is reserved, it is not backed with physical memory or other storage. Although reserving chunks of the address space will not exhaust physical resources, it does prevent that memory from being used for other purposes"

3: And this is why the node which does not trigger the batch job shows the scalable heap-size behavior.

"Some garbage collectors minimise the use of physical memory by decommitting (releasing the backing storage for) parts of the heap as the used area of heap shrinks."
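As a practical follow-up, HotSpot exposes flags that influence how eagerly the heap is grown and shrunk. Whether they are honored depends on the collector in use (the parallel/throughput collector of that era largely ignored the shrink ratios), so treat this as an illustrative config fragment with example values, not a guaranteed fix:

```shell
# Illustrative JVM options for an instance like the batch node.
# -Xms / -Xmx bound the heap; the free-ratio flags tell the GC
# to return memory to the OS when free heap exceeds 40%.
java -Xms512m -Xmx4096m \
     -XX:MinHeapFreeRatio=20 \
     -XX:MaxHeapFreeRatio=40 \
     -jar app.jar
```

Setting -Xms equal to -Xmx does the opposite: it pins the committed heap at the maximum from the start, which is effectively the behavior observed on the batch node.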

tee4cute