Was wondering if anyone could shed some light on this.
I have an application with a large memory footprint (and a lot of memory churn). There are no memory leaks, and the GC usually does a good job of freeing up resources.
Occasionally, however, a collection does not happen in time, and the application throws an OutOfMemoryException.
I've used the REDGate profiler, which is very good: memory usage follows the typical 'sawtooth' pattern, and the OOMs happen at the top of the sawtooth. Unfortunately, as far as I know, the profiler can't be used to identify the sources of the memory churn.
Is it possible to set a memory 'soft limit' at which a GC should be forced? At the moment a collection only seems to happen when memory is at its absolute limit, which results in the OOMs. If there's no built-in way to do it, something along the lines of the sketch below is what I had in mind.
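For context, this is roughly the sort of hand-rolled 'soft limit' I was considering (the class name and threshold are made up, and I'm not sure whether calling GC.Collect explicitly like this is a good idea, hence the question):

```csharp
using System;
using System.Threading;

// Rough sketch of a 'soft limit': poll the managed heap size and force
// a collection once it crosses a threshold, well before the OOM point.
static class SoftLimitGc
{
    // Placeholder threshold (1 GB) - would need tuning for the real app.
    private const long SoftLimitBytes = 1024L * 1024 * 1024;

    // Keep a reference so the timer itself isn't collected.
    private static Timer _timer;

    public static void Start()
    {
        _timer = new Timer(_ =>
        {
            // GC.GetTotalMemory(false) reports the current managed heap
            // size without forcing a collection.
            if (GC.GetTotalMemory(false) > SoftLimitBytes)
            {
                GC.Collect();                   // force a full collection
                GC.WaitForPendingFinalizers();  // let finalizers run
                GC.Collect();                   // reclaim finalized objects
            }
        }, null, 0, 5000);                      // check every 5 seconds
    }
}
```

Is there a better (or built-in) way to achieve the same effect?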