
What causes Tomcat memory to grow and fall

    Since memory is cleared successfully this is normal. Tomcat obviously has periodic tasks running in the background that create objects and thus allocate memory. What you could do is monitor the application through VisualVM or other profiler tool to determine what objects are created periodically OR look through the source code. – Zpetkov Feb 12 '18 at 12:45
  • Which version jvm and tomcat are you using? Did you try on a fresh tomcat installation? – Mehmet Sunkur Feb 19 '18 at 23:52
  • @MehmetSunkur tried with Java 7 and tomcat 5.5 and with java 8 and tomcat 8.5 – 124697 Feb 19 '18 at 23:54
  • It's an interesting question. Apparently it is not a memory leak (as the memory returns back with GC). As you told "If I were to give the JVM more memory like 12gb then the zig zag patterns goes from 500mb to 3GB", I guess it is because GC is polite and does not want to bother the JVM if not needed. But what is tomcat doing in the idle time would be good to know. Have you tried to forward your question to the tomcat team? – Honza Zidek Feb 22 '18 at 08:47

5 Answers


It is JConsole itself that causes the zig-zag pattern.

To see the effect of JConsole or jvisualvm, you can write a simple Java program with just a main loop, for example (wrapped in a class here so it can be compiled and run as-is):

// the class name is arbitrary; the point is just a Java process that sits idle
public class IdleLoop {
    public static void main(String[] args) {
        while (true) {
            try {
                Thread.sleep(500);
            } catch (InterruptedException e) {
                // ignore and keep sleeping
            }
        }
    }
}

Execute it with only a little heap: java -Xmx20m .... This makes the heap utilization easier to see, because the jstat tool that I will use next prints the utilization in percent.

Now open a command line window and execute jstat. You will need the PID of the Java process, which you can find using jps -l.

jstat -gcutil <PID>

It will print out something like this:

  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
  0,00   0,00  69,34   0,00  14,48  17,19      0    0,000     0    0,000    0,000

Focus on the eden space column E. The value is the percentage of the space's current capacity that is used. Take a look at the jstat documentation for details.

If you execute the command again and again, you will see that the eden space utilization does not grow; it stays at e.g. 69,34. I use a Linux watch-like command on my Windows machine to rerun the command at a fixed interval. See watch.bat.
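
On Linux the stock watch command does the same thing; for example, to rerun jstat every second:

watch -n 1 jstat -gcutil <PID>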

Now open the jconsole

jconsole <PID>

and execute the jstat command again and again. You will see that the eden space is continuously growing until the maximum is reached and it is garbage collected.

Here is my output of jstat -gcutil <PID> at an interval of 1 second. Focus on the eden space column E.

  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
  0,00   0,00  67,42   0,00  14,48  17,19      0    0,000     0    0,000    0,000
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
  0,00   0,00  67,42   0,00  14,48  17,19      0    0,000     0    0,000    0,000
# jconsole connected
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
  0,00  96,88  81,64   7,57  92,26  84,87      1    0,001     0    0,000    0,001
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
  0,00  96,88  84,66   7,57  92,26  84,87      1    0,001     0    0,000    0,001
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
  0,00  96,88  89,70   7,57  92,26  84,87      1    0,001     0    0,000    0,001
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
  0,00  96,88  91,70   7,57  92,26  84,87      1    0,001     0    0,000    0,001
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
  0,00  96,88  93,70   7,57  92,26  84,87      1    0,001     0    0,000    0,001
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
  0,00  96,88  95,70   7,57  92,26  84,87      1    0,001     0    0,000    0,001
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
  0,00  96,88  96,70   7,57  92,26  84,87      1    0,001     0    0,000    0,001
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
  0,00  96,88  98,71   7,57  92,26  84,87      1    0,001     0    0,000    0,001
# Garbage collected
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
100,00   0,00   1,13  14,06  94,75  89,26      2    0,003     0    0,000    0,003
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
100,00   0,00   3,00  14,06  94,75  89,26      2    0,003     0    0,000    0,003
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
100,00   0,00   5,75  14,06  94,75  89,26      2    0,003     0    0,000    0,003

As you can see, after JConsole connects to the process the eden space grows and grows until it is garbage collected. This causes the zig-zag pattern. Here is a snapshot from jvisualvm.

[Screenshot: jvisualvm heap graph showing the saw-tooth growth and collection while JConsole is connected]

You will discover similar behavior when you do the same with your Tomcat process. The only difference is that the eden space grows very slightly even if no JConsole is connected. But this slight growth is not the reason for the zig-zag pattern that you see.

Here is the jstat output for my tomcat.

  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
  0,00  99,80  70,33   4,81  96,85  90,10      5    0,037     0    0,000    0,037
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
  0,00  99,80  70,33   4,81  96,85  90,10      5    0,037     0    0,000    0,037
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
  0,00  99,80  70,43   4,81  96,85  90,10      5    0,037     0    0,000    0,037
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
  0,00  99,80  70,43   4,81  96,85  90,10      5    0,037     0    0,000    0,037
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
  0,00  99,80  70,43   4,81  96,85  90,10      5    0,037     0    0,000    0,037
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT

This output continues for 36 seconds until you can see the small memory change, from 70,43 to 70,53:

  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
  0,00  99,80  70,43   4,81  96,85  90,10      5    0,037     0    0,000    0,037
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
  0,00  99,80  70,53   4,81  96,85  90,10      5    0,037     0    0,000    0,037
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
  0,00  99,80  70,53   4,81  96,85  90,10      5    0,037     0    0,000    0,037

Thus this small change is the only thing that Tomcat's background processes are responsible for.

PS: You can also use the Memory Analyzer and acquire heap dumps from time to time; you will see that the amount of memory used by byte[] arrays increases because of the RMI network traffic.
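
A heap dump of only the live objects can be taken, for example, with jmap (not part of the original answer, but a standard JDK tool):

jmap -dump:live,format=b,file=heap.hprof <PID>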

René Link
  • Thanks for this answer. It's not letting me award you the full 500; I guess it's because the bounty time is over. I thought it wouldn't make a difference – 124697 Feb 28 '18 at 13:04

Here are the important points for understanding the zig-zag pattern, explained in the linked answer (a small demonstration follows the list):

  • When the collector runs, it first tries a partial collection, which only releases memory that was allocated recently
  • Recently created objects that are still in use get 'promoted'
  • Once an object has been promoted a few times, it will no longer get cleaned up by partial collections, even after it is ready for collection
  • These objects, called tenured, are only cleaned up when a full collection becomes necessary in order to make enough room for the program to continue running
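
As a rough illustration (my own sketch, not from the linked answer), the following loop allocates only short-lived objects and prints the used heap; running it, you can watch the value climb and then drop every time a young collection happens, which is exactly the zig-zag shape:

// SawToothDemo is an arbitrary name; run with a small heap, e.g. java -Xmx20m SawToothDemo
public class SawToothDemo {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        while (true) {
            // short-lived garbage that never survives a young collection
            byte[] garbage = new byte[64 * 1024];
            long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
            System.out.println("used heap: " + usedMb + " MB");
            Thread.sleep(100);
        }
    }
}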

Question: What is causing the memory to grow like this even when the application is completely idle?

Answer: There are internal scheduled jobs (automations, timers, etc.) or some external monitoring process that causes memory to grow while the app is idle. Maybe your application has the same number of objects, but some could be larger; for example, you might have some ArrayList or StringBuilder that keeps growing a cache.
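
As a purely hypothetical sketch of that kind of idle-time allocation (the class, thread name and cache are made up, not taken from Tomcat's source), a background timer that keeps adding to a cache would look like this:

import java.util.ArrayList;
import java.util.List;
import java.util.Timer;
import java.util.TimerTask;

public class IdleGrowthDemo {
    // a cache that only ever grows, even when nobody uses the application
    private static final List<String> cache = new ArrayList<>();

    public static void main(String[] args) throws InterruptedException {
        Timer timer = new Timer("background-job", true); // daemon timer thread
        timer.schedule(new TimerTask() {
            @Override
            public void run() {
                cache.add("sample taken at " + System.currentTimeMillis());
            }
        }, 0, 1000);
        Thread.sleep(Long.MAX_VALUE); // keep the JVM alive so the daemon timer keeps firing
    }
}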



Let's look at the graph. At about 12:08, the working memory is at its maximum (199 MB).

At about 12:09, the garbage collector runs, dead objects are collected, and the working memory drops to about 145 MB.

At about 12:12, the working memory has increased again: many dead objects have been generated, and memory climbs from 145 MB --> 155 MB --> 165 MB --> 180 MB --> ... --> 190 MB.

At about 12:13, the garbage collector runs again, dead objects are collected, and the working memory is back at about 145 MB.

...and so on. It is effectively a periodic graph, as long as no special event happens.

Vy Do
  • You can read more at https://www.javaworld.com/article/2073905/build-ci-sdlc/pick-up-performance-with-generational-garbage-collection.html?page=2 See Figure 7 – Vy Do Feb 13 '18 at 14:05
  • Any idea what causes the memory to grow without the app being used? Is it the jvm or tomcat? – 124697 Feb 13 '18 at 14:44
  • I checked with Apache Tomcat when no web application was running. It still increases, then decreases, then increases memory... it is the garbage collection cycle at work. Tomcat is merely an example of a Java application; you could use any other Java application. – Vy Do Feb 13 '18 at 15:15

To start, here is a useful link on "Tuning Garbage Collection with the 5.0 Java TM Virtual Machine": http://www.oracle.com/technetwork/java/gc-tuning-5-138395.html

This does sound like GC pauses making Tomcat unresponsive. One thing to start with is a "low pause" garbage collector with the option -XX:+UseConcMarkSweepGC.
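
With Tomcat, such JVM options are typically passed via CATALINA_OPTS, for example in bin/setenv.sh (a sketch; the heap size here is just a placeholder, adjust it to your installation):

export CATALINA_OPTS="-Xmx1g -XX:+UseConcMarkSweepGC"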

Muthu

The causes can be diverse without more information about your application. Assuming it is not performing any recurring task on its own, one possible cause is JConsole itself: collecting information and sending it over RMI allocates memory that is then quickly collected from the Eden space.

You can test this assumption by profiling with a less "intrusive" tool (for example by enabling verbose GC output with -verbose:gc) and comparing memory usage profiles in similar settings.

As highlighted in other answers, Tomcat itself can also have its own overhead. You can test this second assumption by serving a "hello, world" servlet and comparing the memory consumption of the two with the same tools.
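
For example, a minimal sketch of such a "hello, world" servlet (assuming a Servlet 3.0+ container such as Tomcat 8.5 and the javax.servlet API; the class name and URL pattern are arbitrary):

import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Deployed on its own, this gives a baseline for Tomcat's memory behaviour
// without any application logic on top.
@WebServlet("/hello")
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/plain");
        resp.getWriter().println("hello, world");
    }
}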

stefanobaghino