I have recently written an application that synchronises values on a remote server with a local HashMap. It works by looping through the HashMap every x milliseconds and fetching a new value for each entry from the server.
Everything works well, but when I looked at the application's memory usage I noticed that garbage collection was not very effective and it quickly grew to several gigabytes. Increasing the number of worker threads, or decreasing how many values it has to fetch, seemed to fix the problem, but I still was not sure what was causing it, so I looked more closely at how the memory was being used.
I noticed a large number of FieldGetTask objects (the Runnable I feed to a thread pool, which fetches the new value for a field when executed), and forcing a garbage collection had almost no impact on their count.
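For context, each refresh cycle submits tasks roughly like this. This is a simplified sketch, not my actual code: FieldGetTask is my real class name, but the field names, the ConcurrentHashMap (used here so the example is safe to run concurrently), and the server fetch are placeholders.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SyncLoop {
    // Stand-in for my real task class: fetches one field's value when run.
    static class FieldGetTask implements Runnable {
        private final String fieldName;
        private final Map<String, String> localValues;

        FieldGetTask(String fieldName, Map<String, String> localValues) {
            this.fieldName = fieldName;
            this.localValues = localValues;
        }

        @Override public void run() {
            // In the real application this fetches the value from the server.
            localValues.put(fieldName, "value-from-server");
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Map<String, String> values = new ConcurrentHashMap<>();
        values.put("a", "stale");
        values.put("b", "stale");

        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Every x milliseconds the real app loops over the map
        // and submits one FieldGetTask per field.
        for (String key : values.keySet()) {
            pool.execute(new FieldGetTask(key, values));
        }

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(values);
    }
}
```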
I am assuming that these objects are being stored in a queue somewhere, and that when there are not enough threads to process them they build up. Am I right in thinking this? And if so, is there a way to make the pool's execute method block until it can accept the task, rather than queueing it?
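To illustrate the behaviour I am after, here is a sketch of a pool whose execute blocks when the queue is full. As I understand it, Executors.newFixedThreadPool is backed by an unbounded LinkedBlockingQueue, which would explain the buildup; the sketch below instead uses a bounded ArrayBlockingQueue and a RejectedExecutionHandler that blocks the submitting thread by calling put on the queue instead of throwing. The class and method names here are illustrative, not from my code.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BlockingPool {
    // A fixed-size pool with a bounded queue; when the queue is full,
    // the rejection handler makes the submitting thread wait for room
    // instead of throwing RejectedExecutionException.
    public static ThreadPoolExecutor newBlockingPool(int threads, int queueCapacity) {
        return new ThreadPoolExecutor(
            threads, threads,
            0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(queueCapacity),
            (task, executor) -> {
                try {
                    // Caveat: this does not re-check for shutdown,
                    // so it is only a sketch of the idea.
                    executor.getQueue().put(task);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    throw new RejectedExecutionException("Interrupted while enqueueing", e);
                }
            });
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = newBlockingPool(2, 4);
        // Submit far more tasks than the queue can hold; execute now
        // blocks instead of letting tasks pile up without bound.
        for (int i = 0; i < 20; i++) {
            pool.execute(() -> {
                try { Thread.sleep(10); } catch (InterruptedException ignored) {}
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("done");
    }
}
```

An alternative I have seen mentioned is ThreadPoolExecutor.CallerRunsPolicy, which runs the rejected task on the submitting thread itself; that also throttles submission, though the task then runs outside the pool.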