Have a memory allocation question I'd like your help with. We've analysed some of our services with top, and we've noticed they have a RES value of about 1.8GB, which as far as I understand means they're holding on to 1.8GB of memory at that moment. That would be fine if we'd just started them (they essentially read from a cache, do processing, and push off to another cache), but seeing as we still see this after the CPU-intensive processing is completed, we're wondering if it means something isn't being GC'ed as we expected.
We run the program with the following parameters: -Xms256m -Xmx3096m, which as I understand it means an initial heap size of 256MB and a maximum heap size of 3096MB.
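For reference, here's a minimal sketch of how we've been sanity-checking those figures from inside the process itself, using the standard java.lang.Runtime API (the class name is just something I made up for the example):

```java
public class HeapSettings {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // totalMemory(): heap committed right now (starts near -Xms, grows toward -Xmx)
        // maxMemory():   the -Xmx ceiling the heap may grow to
        // used:          committed minus free, i.e. live objects plus garbage not yet collected
        System.out.println("committed = " + rt.totalMemory() / mb + " MB");
        System.out.println("max       = " + rt.maxMemory() / mb + " MB");
        System.out.println("used      = " + (rt.totalMemory() - rt.freeMemory()) / mb + " MB");
    }
}
```

If I've got this right, the "heap size" jvisualvm shows corresponds to totalMemory() (committed), not to used memory, and RES in top will be larger still since it also covers non-heap memory (metaspace, thread stacks, etc.).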
Now what I'd expect to see is the heap grow as needed initially, and then shrink again as memory is freed (though this could be my first mistake). What we actually see with jvisualvm is the following:
- 3 mins in: used heap is 1GB, heap size is 2GB
- 5 mins in: processing is done, so used heap drops dramatically to near enough zilch; heap size, however, only drops to about 1.5GB
- 7 mins onwards: small bits of real-time processing periodically; used heap only ever between 100-200MB or so, but heap size remains constant at about 1.7GB
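In case it helps, here's a toy program (not our actual service, just an illustration under simple assumptions) that shows the same shape we observe: used heap falls after the work is done, but the committed heap tends to stay near its peak, since most collectors are reluctant to hand memory back to the OS:

```java
public class HeapShrinkDemo {
    // Committed heap: memory the JVM has reserved from the OS for the heap.
    static long committedMB() {
        return Runtime.getRuntime().totalMemory() / (1024 * 1024);
    }

    // Used heap: committed minus free, i.e. what's actually occupied by objects.
    static long usedMB() {
        Runtime rt = Runtime.getRuntime();
        return (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
    }

    public static void main(String[] args) {
        System.out.println("start: used=" + usedMB() + "MB committed=" + committedMB() + "MB");

        // Simulate the CPU-intensive phase: allocate roughly 200MB in 1MB chunks.
        byte[][] chunks = new byte[200][];
        for (int i = 0; i < chunks.length; i++) {
            chunks[i] = new byte[1024 * 1024];
        }
        System.out.println("peak:  used=" + usedMB() + "MB committed=" + committedMB() + "MB");

        chunks = null; // drop every reference so the arrays become garbage
        System.gc();   // a hint, not a guarantee, that a collection should run

        // Typically: used drops back down, but committed stays near its peak.
        System.out.println("after: used=" + usedMB() + "MB committed=" + committedMB() + "MB");
    }
}
```

Running this with something like -Xms64m -Xmx512m shows the committed figure climbing during the allocation loop and, at least with the collectors we've tried, barely moving afterwards.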
My question would be: why hasn't my heap shrunk as I perhaps expected it to? Isn't this robbing other processes on the Linux box of valuable memory, and if so, how could I fix it? We do see out-of-memory errors on the box sometimes, and since these processes hold the largest 'unexpected' amount of memory, I thought it best to start with them.
Cheers, Dave.
(Please excuse any gaps in my understanding of JVM memory tuning!)