
It's a vague question, so please feel free to ask for any specific data.

We have a Tomcat server running two web services. One is built using Spring; it uses MySQL for 90% of tasks and Mongo for caching JSON (10%). The other web service is written using Grails. Both services are medium-sized codebases (about 35k lines of code each).

Computation only happens when there is an HTTP request (no batch processing), with about 2000 database hits per request (I know it's humongous; we are working on it). The request rate is about 30 req/min. One particular request involves image processing, which is quite memory-expensive. There is no JNI anywhere.

We have found some weird behavior. I can confirm that there were no requests to the server for about 12 hours last night, but the memory consumption over that period is very confusing: (graph of heap usage over the idle period)

Without any requests, memory keeps jumping from 500 MB to 1.2 GB (a 700 MB jump is worrisome). There is no computation on the server side, as mentioned. I am not sure if it's a memory leak:

  1. The memory usage comes down. (Things would have been much easier if the memory didn't come down.)
  2. This behavior is reproducible with caches based on SoftReference or the like, combined with full GCs. But I am not using those anywhere (though something else might be).
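To illustrate point 2, here is a minimal sketch of a SoftReference-based cache (purely illustrative, not code from the question): entries stay reachable only while memory is plentiful, and a collection under pressure clears them, which produces exactly the kind of grow-then-drop sawtooth described.

```java
import java.lang.ref.SoftReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustration only: a minimal cache whose values the garbage collector
// may reclaim under memory pressure. Heap usage grows as entries pile up,
// then drops when a GC clears the soft references.
class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> map = new ConcurrentHashMap<>();

    void put(K key, V value) {
        map.put(key, new SoftReference<>(value));
    }

    V get(K key) {
        SoftReference<V> ref = map.get(key);
        return (ref == null) ? null : ref.get(); // null once collected
    }
}
```

A library used transitively (an HTTP client, an ORM cache, an image codec) could maintain such a cache without it appearing anywhere in your own source.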

What else could be the reason? Is it a cause for worry?

PS: We have had Out of Memory crashes (not errors, but JVM crashes) quite frequently very recently.

Jatin
  • connection pool leakage? – blurfus Dec 23 '14 at 07:03
  • Do you use ThreadLocal variables? – shazin Dec 23 '14 at 07:05
  • @shazin No ThreadLocal anywhere. – Jatin Dec 23 '14 at 07:05
  • @ochi Could be. But can it cause this much consumption? – Jatin Dec 23 '14 at 07:08
  • Make sure the third-party libraries you use don't use ThreadLocal either, because ThreadLocal combined with thread pooling may cause unnecessary memory retention like this. – shazin Dec 23 '14 at 07:08
  • @Jatin yes, the leak would consume all memory (up to the allocated max limit) only forced to free some by garbage collection – blurfus Dec 23 '14 at 07:10
  • Edit: It is once every 30 minutes. That is 700MB / 30 minutes = 0.3MB / s. This is not jumping, jumping would be 100MB/s... This could be logging, connection pooling, etc. And: After collection, the memory is "back to normal" again. I'm pretty sure the graph will look the same, if you would undeploy your services... – slowy Dec 23 '14 at 08:29
  • Did you take a heap dump and analyze it? – Andy Dufresne Dec 23 '14 at 09:15
  • @AndyDufresne As soon as I take a heap dump, it triggers a full GC, and hence the information is lost. – Jatin Dec 24 '14 at 08:22
  • Yes I realized that. You could use Java Mission Control or a java profiler as suggested here - http://stackoverflow.com/questions/23393480/can-heap-dump-be-created-for-analyzing-memory-leak-without-garbage-collection. Keep us updated on how the analysis for this goes. I am curious about it :) – Andy Dufresne Dec 24 '14 at 09:10
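To make the ThreadLocal concern raised in the comments concrete, here is a hypothetical sketch (not code from the question): pool threads are reused and long-lived, so a value set via ThreadLocal during one task and never removed remains reachable when later, unrelated tasks run on the same thread.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical illustration: a value parked in a ThreadLocal by one task
// is still reachable (retained) when a later task runs on the same
// pooled thread, because the thread never dies and remove() was not called.
class ThreadLocalRetention {
    static final ThreadLocal<byte[]> BUFFER = new ThreadLocal<>();

    // Returns true if a value set by one task is still visible to a
    // later task on the same pooled thread, i.e. it was retained.
    static boolean retainedAcrossTasks() {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        try {
            pool.submit(() -> BUFFER.set(new byte[1024 * 1024])).get();
            return pool.submit(() -> BUFFER.get() != null).get();
        } catch (Exception e) {
            return false;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println("retained across tasks: " + retainedAcrossTasks());
    }
}
```

Multiply that single pinned buffer by every worker thread in Tomcat's pool and the retained footprint can grow large; calling `BUFFER.remove()` at the end of each task releases it.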

1 Answer


This is actually normal behavior. You're just seeing garbage collection occur.
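One way to confirm this is to correlate the graph with actual collections: enable GC logging (e.g. `-verbose:gc` or `-XX:+PrintGCDetails` on the Java 7/8 JVMs of that era), or poll the heap from inside the JVM. A minimal sketch using the standard `MemoryMXBean`:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Sketch: sample heap usage periodically. If the drops in your monitoring
// graph line up with collections, the sawtooth is ordinary GC, not a leak.
class HeapMonitor {
    // Current heap usage in megabytes.
    static long usedMb() {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        return heap.getUsed() >> 20;
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 3; i++) {
            System.out.println("heap used: " + usedMb() + " MB");
            Thread.sleep(200);
        }
    }
}
```

A true leak would show the *floor* of the sawtooth creeping upward after each collection; here the usage returns to roughly the same baseline, which is the signature of normal allocation and collection (e.g. background timers, connection-pool keepalives, and logging still allocate even with zero requests).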

John Thompson