Folks - I have a task A which I perform sequentially ~100 times. Each A spawns many tasks B, which are processed in parallel. Each B stores data that is needed after all the A's have completed, so the program's memory footprint grows over time. These are long-running tasks.
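In case the shape of the workload isn't clear, here's a minimal sketch of the structure (`GetBsFor` and `DoB` are hypothetical stand-ins for my real code):

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

class Workload
{
    // Hypothetical stand-ins for the real work items and per-B computation.
    static IEnumerable<int> GetBsFor(int a) => Enumerable.Range(0, 50);
    static object DoB(int b) => new byte[1024]; // B's output, kept until the end

    static void Main()
    {
        var results = new ConcurrentBag<object>(); // grows across all A's
        for (int a = 0; a < 100; a++)              // ~100 A's, run sequentially
        {
            Parallel.ForEach(GetBsFor(a), b => results.Add(DoB(b)));
        }
        Console.WriteLine($"Total B results held in memory: {results.Count}");
    }
}
```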
I found that each successive A took longer to complete than the last. I enabled Server Garbage Collection and saw a dramatic improvement: A's time-to-complete was cut in half. However, it was still growing linearly with each successive A, so by about the 10th A the gains from Server GC had been erased - the full 100 A's would never complete in a reasonable time, and I had to stop the process.
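For reference, I enabled it through the runtime configuration (on .NET Framework that's `<gcServer enabled="true"/>` in app.config), and a quick sanity check like this sketch confirms the runtime actually picked it up:

```csharp
using System;
using System.Runtime;

class GcCheck
{
    static void Main()
    {
        // Confirms the process is really running with Server GC,
        // rather than silently falling back to Workstation GC.
        Console.WriteLine($"Server GC: {GCSettings.IsServerGC}");
        Console.WriteLine($"Latency mode: {GCSettings.LatencyMode}");
    }
}
```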
My hypothesis is that the growing memory footprint is causing GC to do more work over time, slowing everything down.
- Do you have any other hypotheses I can test?
- If my hypothesis is worth exploring, what solutions can I pursue? Should I become more "hands-on" with the Garbage Collector? Or should I flush in-memory data to disk and read it back only when I need it?
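To make that second idea concrete, here's a minimal sketch of what I mean by flushing to disk - assuming each B's result is serializable (the `ResultB` type and the one-file-per-result scheme are hypothetical, and any serializer would do in place of System.Text.Json):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Text.Json;

// Hypothetical shape of one B's output.
public class ResultB
{
    public string Id { get; set; }
    public double[] Values { get; set; }
}

public static class ResultSpill
{
    // Persist each B's result as soon as it's produced, so it
    // doesn't sit on the managed heap for the remaining A's.
    public static void Save(ResultB result, string dir)
    {
        string path = Path.Combine(dir, result.Id + ".json");
        File.WriteAllText(path, JsonSerializer.Serialize(result));
    }

    // Stream the results back in only after the final A completes.
    public static IEnumerable<ResultB> LoadAll(string dir)
    {
        foreach (string path in Directory.EnumerateFiles(dir, "*.json"))
            yield return JsonSerializer.Deserialize<ResultB>(File.ReadAllText(path));
    }
}
```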
EDIT
I forgot to mention that each B calls GC.Collect and GC.WaitForPendingFinalizers, because I'm automating COM and that's the only way I've found to ensure the COM server process is released.
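Concretely, the per-B cleanup is the classic pattern sketched below (the trailing second Collect is a commonly recommended addition, not necessarily in my code). Note that each of these forced collections is a full, blocking collection of all generations, so its cost grows with the heap:

```csharp
using System;

static class ComCleanup
{
    // Runs at the end of every B to force the runtime to drop its
    // COM RCWs so the out-of-process COM server can shut down.
    public static void ReleaseComServer()
    {
        GC.Collect();                  // full, blocking collection of all generations
        GC.WaitForPendingFinalizers(); // let finalizers release the COM references
        GC.Collect();                  // optional second pass, often recommended,
                                       // to reclaim objects freed by those finalizers
    }
}
```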