
I have an application that writes some data (about 15mb in 80k tuples) into an SQLite database using this jdbc-driver on Mac OS X. This is done using transactions, the largest of which contains about 45k inserts into one table. When profiling the application, several things seem strange:

  1. If I pause the application right at the beginning using System.in.read(), the memory allocated by the process keeps growing slowly. Why is that?
  2. When the application runs, the heap space used is always at around 80mb in the VisualVM monitor. However, when profiling memory usage, I get a total of about 10mb. Can anyone explain this difference?

Thanks for any help.

asked by Björn Pollex (edited by trincot)

2 Answers


The jigsaw pattern in memory usage is due to the profiling results being transmitted over RMI. This is indeed very confusing, and annoying to separate from your program's real memory allocations. See VisualVM profiling is polluting results to find out how to filter these :-)

answered by parasietje
  • I think you mean ["sawtooth,"](http://en.wikipedia.org/wiki/Sawtooth_wave) not ["jigsaw."](http://i.stack.imgur.com/5tkFs.png) – Matt Ball Apr 11 '12 at 15:26
  • please answer my question at http://stackoverflow.com/questions/20112666/how-to-interpret-profiling-results – J888 Nov 29 '13 at 00:54
  • No, the sawtooth pattern is memory being allocated until it hits a threshold that triggers another GC cycle. The question you're linking to observes that the application spikes memory usage because it's sending lots of data to VisualVM, which will make the spikes steeper and faster, but the sawtooth pattern happens even if you monitor an application that is not sending lots of data to VisualVM. – toolforger Feb 28 '21 at 18:51

With regards to your first issue: over how long a time slice did you observe the slow growth? Even when memory usage is quiescent in a Java process, you'll typically see a sawtooth pattern develop as the heap fills and is collected. Did you see any GCs occur in the same time slice? If not, then that's more evidence that supports this idea.

For problem number two, it's really hard to say for certain without more information. You would typically expect the application's behavior to differ when profiling is turned on, because timing windows change, the application has to spend time reporting data on top of doing its normal work, and so on. It could be that when profiling is turned on, more memory allocations happen because your code is now instrumented, and this triggers a GC which lowers the heap usage. Try doing a System.gc() in your application when profiling is turned off and tell us what your heap usage reports.
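As a minimal sketch of that suggestion, using only the standard Runtime API (the class name and the allocation loop are illustrative, not from the original application):

```java
public class HeapReport {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long before = rt.totalMemory() - rt.freeMemory();

        // Create some short-lived garbage so there is something to collect.
        for (int i = 0; i < 100_000; i++) {
            byte[] scratch = new byte[64];
        }

        System.gc(); // a hint, not a guarantee, but HotSpot usually honors it
        long after = rt.totalMemory() - rt.freeMemory();

        System.out.printf("Used before: %d bytes, after GC: %d bytes%n",
                before, after);
    }
}
```

Comparing the two numbers with and without the profiler attached should tell you whether the 80mb figure is mostly uncollected garbage or genuinely live data.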

answered by Amir Afghani (edited by Matt Ball)
  • With or without profiling, the application always runs into an out-of-memory error, so I guess the high heap-usage is correct. – Björn Pollex May 11 '10 at 17:03
  • OK, so the next step is to take a heap dump and interpret the results. You can do this with VisualVM or JConsole. Open the resulting heap dump file using Eclipse MAT or HPjmeter and see what's being kept in memory. – Amir Afghani May 11 '10 at 17:10
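VisualVM and JConsole can take the dump from their GUIs; as a sketch, the same dump can also be triggered programmatically through the HotSpot diagnostic MBean (the output file name here is arbitrary):

```java
import java.io.File;
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        File dump = new File("heap.hprof");
        if (dump.exists()) dump.delete(); // dumpHeap refuses to overwrite

        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                server, "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);

        // true = dump only live objects, which forces a GC first
        diag.dumpHeap(dump.getPath(), true);
        System.out.println("Heap dump written to " + dump.getAbsolutePath());
    }
}
```

The resulting .hprof file is the same format VisualVM produces, so it opens directly in Eclipse MAT.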