
We have a web application deployed on a Tomcat server. There are certain scheduled jobs which we run, after which the heap memory peaks and settles down, and everything seems fine. However, the system admin is complaining that memory usage ('top' on Linux) keeps increasing the more scheduled jobs are run. What's the correlation between heap memory and the memory usage reported by the OS? Can it be controlled by any JVM settings? I used JConsole to monitor the system.
I forced the garbage collection through JConsole and the heap usage came down, however the memory usage on Linux remained high and it never decreased.

Any ideas or suggestions would be of great help.

user546352
  • The server is apparently a 64 bit server. – user546352 Feb 04 '11 at 00:02
  • Do you know what JVM the server is running, and what the difference was between the heap memory usage and the actual memory usage after forcing the garbage collection? – Tim Stone Feb 04 '11 at 00:06
  • "however the memory usage on Linux remained high and it never decreased" – which "memory usage" is that? The GC usually doesn't like to return memory to the system. – bestsss Feb 04 '11 at 00:43

3 Answers


The memory allocated by the JVM process is not the same as the heap size. The used heap size could go down without an actual reduction in the space allocated by the JVM. The JVM has to receive a trigger indicating it should shrink the heap size. As @Xepoch mentions, this is controlled by -XX:MaxHeapFreeRatio.
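This distinction is visible from inside the JVM itself. As a minimal sketch using only the standard `java.lang.Runtime` API (nothing here assumes anything about the questioner's setup): `totalMemory()` is the heap the JVM has reserved from the OS, and it can stay high even after GC drives the *used* figure down.

```java
// Shows the difference between heap *used* and heap *allocated*:
// GC reduces "used", but "allocated" only shrinks if the JVM decides to
// return committed heap to the OS.
public class HeapSnapshot {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long allocated = rt.totalMemory();        // heap committed by the JVM
        long used = allocated - rt.freeMemory();  // portion actually in use
        long max = rt.maxMemory();                // upper bound (-Xmx)
        System.out.printf("used=%dMB allocated=%dMB max=%dMB%n",
                used >> 20, allocated >> 20, max >> 20);
    }
}
```

After a forced GC in JConsole, `used` drops, but `allocated` (which is roughly what the OS sees) often does not.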

However the system admin is complaining that memory usage ('top' on Linux ) keeps increasing the more the scheduled jobs are [run].

That's because you very likely have some sort of memory leak. System admins tend to complain when they see processes slowly chew up more and more space.
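For illustration only (this is not the questioner's actual code), the classic shape of such a leak in scheduled jobs is state that accumulates across runs, e.g. a static collection that nothing ever clears:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a typical scheduled-job leak: each run appends to a static
// collection, so retained heap grows with the number of jobs executed.
public class JobCache {
    static final List<byte[]> RESULTS = new ArrayList<>(); // never cleared

    static void runJob() {
        RESULTS.add(new byte[1024 * 1024]); // ~1MB retained per job run
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) runJob();
        System.out.println("retained entries: " + RESULTS.size());
    }
}
```

A heap dump taken after several job runs would show such a collection dominating the retained size.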

Any ideas or suggestions would of great help?

Have you looked at the number of threads? Is your application creating its own threads and sending them off to deadlock and wait idly forever? Are you integrating with any third-party APIs that may be using JNI?
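A quick, low-risk way to check the thread theory is the standard `ThreadMXBean`. Each lingering thread also pins a native stack (typically 512KB–1MB) *outside* the heap, which shows up in `top` but not in JConsole's heap graph. A sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Reports live/peak thread counts and any deadlocked threads via the
// standard platform MBean -- useful for spotting jobs that leak threads.
public class ThreadCheck {
    public static void main(String[] args) {
        ThreadMXBean tm = ManagementFactory.getThreadMXBean();
        System.out.println("live threads: " + tm.getThreadCount());
        System.out.println("peak threads: " + tm.getPeakThreadCount());
        long[] deadlocked = tm.findDeadlockedThreads(); // null if none
        System.out.println("deadlocked:   "
                + (deadlocked == null ? 0 : deadlocked.length));
    }
}
```

If the live count climbs with every scheduled job run, you have found your non-heap growth.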

Tim Bender

What is likely being observed is the virtual size, not the resident set size, of the Java process(es). If your goal is a small footprint, you may want to omit -Xms (or any minimum heap size argument) and lower the default 70% -XX:MaxHeapFreeRatio= to a smaller number to allow more aggressive heap shrinkage.
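Put together, that suggestion might look like the following startup option line. This is a sketch: the 1280m value comes from the comments below, the 40/20 ratios are illustrative numbers to tune, and whether the heap actually shrinks also depends on the collector in use.

```shell
# No -Xms, so the heap starts small; lower free-ratio bounds encourage
# the JVM to give committed-but-free heap back to the OS.
export JAVA_OPTS="-Xmx1280m -XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=20"
```

Note that -XX:MinHeapFreeRatio must not exceed -XX:MaxHeapFreeRatio (the defaults are 40 and 70).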

In the meantime, please provide more detail on what exactly was observed when you say the Linux memory usage never decreased. Which metric?

Jé Queue
  • With the 'top' command the %MEM remains high and continues to go up as more jobs are scheduled; the %CPU comes down and the heap seems alright. – user546352 Feb 04 '11 at 00:54
  • %MEM is the % of RES in proportion to primary memory, so that generally reports the resident size in primary. What is the `-Xms` and `-Xmx` options? Also calculate the % of RES/VIRT and % RES/SHR and report back. It may report high but the process(es) may have a good portion shared. – Jé Queue Feb 04 '11 at 01:45
  • I assumed SHR is in kilobytes and that is how it is showing. RES/VIRT = 56% and RES/SHR = 117.79. This is the top output with the 'M' option: `PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND` / `14425 root 17 0 1990m 1.1g 9792 S 0.0 54.5 23:01.62 java`. This is after I made the change you suggested to -XX:MaxHeapFreeRatio; it seems to behave slightly better, and I am still running some tests. I noticed that %CPU exceeds 100% when the jobs are being run... – user546352 Feb 04 '11 at 16:42
  • Which means that a little less than half of the memory is actually sitting in primary, the rest is either unused or paged out. If CPU is sufficient (you say it is) and given the nature of the GC traversing most of the heap, I would bet you really are only using that % of allocation. And less than 1% shared so negligible here. – Jé Queue Feb 04 '11 at 16:45
  • @user546352 - It is perfectly normal to exceed 100% CPU (%CPU is a bad metric, really) for multi-process and multi-threaded apps (look at your load averages instead). To what did you set your MaxHeapFreeRatio? – Jé Queue Feb 04 '11 at 16:47
  • The value of -Xmx is set to 1280 and -Xms is left at its default (not set to any number). – user546352 Feb 04 '11 at 16:49
  • I set it to 40, just to see if there is any impact. – user546352 Feb 04 '11 at 16:49
  • As I schedule more and more jobs, the resident memory keeps increasing; it's at 1.5g right now (the job is still running). The other parameters remain the same, so RES/VIRT = 77.5% and RES/SHR is still negligible. I noticed that at one point %MEM increased past 83 and then came down to 77.5%. Do you think there is a possibility that I can bring it further down with any other parameters? Thank you for all the help. – user546352 Feb 04 '11 at 18:50
  • @user546352 - the JVM is just using what is given to it. You'll need to tell it to use less if you want it to use less, whilst keeping GC% overhead in check. Contact me via my profile if you want to discuss in further detail. – Jé Queue Feb 04 '11 at 19:26

You can use -Xmx and -Xms settings to adjust the size of the heap. With tomcat you can set an environment variable before starting:

export JAVA_OPTS="-Xms256m -Xmx512m"

This initially creates a heap of 256MB, with a max size of 512MB.

Some more details: http://confluence.atlassian.com/display/CONF25/Fix+'Out+of+Memory'+errors+by+increasing+available+memory
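With Tomcat specifically, a common place to put these options is a `setenv.sh` file, which `catalina.sh` sources on startup. The file contents below are a sketch; adjust the sizes to your machine.

```shell
# $CATALINA_BASE/bin/setenv.sh -- sourced by catalina.sh on startup.
# CATALINA_OPTS is applied only to the server JVM, not to the stop command.
CATALINA_OPTS="-Xms256m -Xmx512m"
export CATALINA_OPTS
```

This keeps the memory settings out of the stock startup scripts, so they survive a Tomcat upgrade.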

wmacura