I have an odd situation I am trying to figure out.
The Genesis:
I am running my program on a physical machine with 16 cores and 128GB of RAM. I am trying to determine why it is not using all available cores: it typically averages 20-25% CPU (so 4-5 of the 16 cores). The performance counters show on the order of 60-70% for % Time in GC.
For reference, I am using .NET Framework 4 and the TPL (Parallel.ForEach) to thread the performance-intensive portion of my program. I am limiting the number of threads to the number of cores.
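For context, the setup described above can be sketched like this (the collection and the loop body are placeholders, since the question does not show the real workload; the counter exists only to make the sketch runnable):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Placeholder work items; the real program's data is not shown.
        var items = new int[1000];
        long processed = 0;

        var options = new ParallelOptions
        {
            // Cap the worker threads at the core count, as described above.
            MaxDegreeOfParallelism = Environment.ProcessorCount
        };

        Parallel.ForEach(items, options, item =>
        {
            // The performance-intensive work would go here.
            Interlocked.Increment(ref processed);
        });

        Console.WriteLine(processed); // prints 1000
    }
}
```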
The Problem:
I was creating a large number of objects, far too many for the garbage collector to handle efficiently, so the program spent a large amount of its time in garbage collection.
The Simple Solution thus far:
I am introducing object pooling to reduce the pressure on the garbage collector, and I will continue pooling more objects to improve performance. Pooling just some of the objects has already reduced time in garbage collection from 60-70% to 45%, and my program ran 40% faster.
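The pooling approach can be sketched as a minimal thread-safe pool; the type names and the Rent/Return API are my own (the question does not show the real pool, and .NET Framework 4 has no built-in pool type), with ConcurrentBag as a simple backing store:

```csharp
using System;
using System.Collections.Concurrent;

// Minimal object-pool sketch: reuse instances instead of allocating
// new ones, so fewer objects reach the garbage collector.
public sealed class ObjectPool<T>
{
    private readonly ConcurrentBag<T> _items = new ConcurrentBag<T>();
    private readonly Func<T> _factory;

    public ObjectPool(Func<T> factory)
    {
        _factory = factory;
    }

    // Hand out a pooled instance when one is available; otherwise allocate.
    public T Rent()
    {
        T item;
        return _items.TryTake(out item) ? item : _factory();
    }

    // Put the instance back so a later Rent() reuses it instead of allocating.
    public void Return(T item)
    {
        _items.Add(item);
    }
}
```

A returned object is handed back out by the next Rent(), so steady-state use allocates nothing new and the GC sees far fewer short-lived objects.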
The Nagging Question (the one I hope you will answer for me):
While running, my program uses at most 14GB of RAM, which is quite small compared to the 128GB available. Nothing else is running on this machine (it is purely a testbed for me), so there is plenty of RAM free.
- If there is plenty of RAM available, why are any gen2 (or full) collections occurring at all? A fairly large number of these gen2 collections (in the thousands) are occurring. In other words, how does the GC determine the threshold at which to start a gen2 collection?
- Why doesn't the garbage collector simply delay any full collections until pressure on physical RAM reaches a higher threshold?
- Is there any way I can configure the garbage collector to wait for a higher threshold (i.e. not bother collecting at all if it is not necessary)?
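One way to see the gen2 behavior from inside the process, rather than through performance counters, is GC.CollectionCount. This sketch only demonstrates the counting API; the forced GC.Collect() is there to make the demo deterministic and is not something the real program should do:

```csharp
using System;

class GcStats
{
    static void Main()
    {
        int gen2Before = GC.CollectionCount(2);

        // Stand-in for the real Parallel.ForEach workload: allocate
        // enough to give the collector something to do.
        var garbage = new byte[1000][];
        for (int i = 0; i < garbage.Length; i++)
        {
            garbage[i] = new byte[100000];
        }

        // Force a blocking full collection so the demo always shows a delta.
        GC.Collect();

        int gen2After = GC.CollectionCount(2);
        Console.WriteLine("Gen2 collections during run: " + (gen2After - gen2Before));
    }
}
```

Logging this delta around the hot loop shows which phases of the program are triggering the thousands of gen2 collections.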
EDIT:
I am already using the option to enable the server garbage collector ... what I need to know is what is triggering a gen2 collection, not that the server garbage collector is better (I already know that).
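For completeness, the server GC option mentioned above is the standard runtime setting in the application's App.config; I show it here only to make the setup concrete:

```xml
<configuration>
  <runtime>
    <gcServer enabled="true"/>
  </runtime>
</configuration>
```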