GC heap(s) are per process, not per AppDomain - so for a correct measurement you'd need to spawn a separate process.
You can even host the CLR yourself to influence segment sizes and get notifications. However, if nothing else is running in the process and you're OK with an estimate, GC.GetTotalMemory() will do. It would, however, need to execute inside a 'NoGC' region if you are interested in 'total memory ever consumed during the method run' as opposed to 'maximum total memory in use at any point in time' - because the GC can trigger several times during your method run.
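A minimal sketch of that measurement, assuming the method's allocations fit in the NoGC budget (the 64 MB budget and the measuredMethod delegate are placeholders, not from the original code):

using System;

static class MemoryMeter
{
    // Placeholder budget for TryStartNoGCRegion; it must cover everything the
    // measured method allocates, or the runtime exits the region with a GC.
    const long Budget = 64 * 1024 * 1024;

    public static long MeasureAllocatedBytes(Action measuredMethod)
    {
        if (!GC.TryStartNoGCRegion(Budget))
            throw new InvalidOperationException("could not commit the NoGC budget");
        try
        {
            long before = GC.GetTotalMemory(false);
            measuredMethod();
            // No GC could have run in between, so the delta is the total bytes allocated.
            return GC.GetTotalMemory(false) - before;
        }
        finally
        {
            // Throws if allocations exceeded the budget and the runtime
            // already left the NoGC region on its own.
            GC.EndNoGCRegion();
        }
    }
}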
To limit the perf/resource impact of spawning processes, you can spawn N processes - where N is your desired concurrency level - and then have each one pull tasks from a shared work queue in the central process, while the subprocesses process requests synchronously.
A dirty idea of how it could look (you'd need to handle result reporting plus 100 other 'minor' things):
Main process:
// needs System.Collections.Concurrent
ConcurrentQueue<WorkRequest> _workQueue = new ConcurrentQueue<WorkRequest>();

public void EnqueueWork(WorkRequest request)
{
    _workQueue.Enqueue(request);
}

// Exposed to the worker processes (the [OperationContract] attribute implies a WCF endpoint).
[OperationContract]
public bool GetWork(out WorkRequest work)
{
    return _workQueue.TryDequeue(out work);
}
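Since [OperationContract] implies the queue is exposed over WCF, here's a hedged sketch of self-hosting it in the main process (the IWorkService/WorkService names and the pipe address are my assumptions; WorkRequest would also need to be a [DataContract]):

using System;
using System.ServiceModel;

// Contract shared between the main process and the workers.
[ServiceContract]
public interface IWorkService
{
    [OperationContract]
    bool GetWork(out WorkRequest work);
}

// In the main process - WorkService being the class holding _workQueue above:
var host = new ServiceHost(typeof(WorkService), new Uri("net.pipe://localhost/work"));
host.AddServiceEndpoint(typeof(IWorkService), new NetNamedPipeBinding(), "");
host.Open();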
Worker processes:
public void ProcessRequests()
{
    WorkRequest work;
    if (GetWork(out work))
    {
        try
        {
            // This actually preallocates 2 * _MAX_MEMORY - for the ephemeral segment and the LOH.
            // It also performs a full GC collect if needed - so you don't need to call it yourself.
            if (!GC.TryStartNoGCRegion(_MAX_MEMORY))
            {
                // fail
            }

            CancellationTokenSource cts = new CancellationTokenSource(_MAX_PROCESSING_SPAN);
            long initialMemory = GC.GetTotalMemory(false);

            // Watchdog: periodically compare the current heap size against the
            // baseline and cancel the work item once it exceeds the budget.
            Task memoryWatchDog = Task.Factory.StartNew(() =>
            {
                while (!cts.Token.WaitHandle.WaitOne(_MEMORY_CHECK_INTERVAL))
                {
                    if (GC.GetTotalMemory(false) - initialMemory > _MAX_MEMORY)
                    {
                        cts.Cancel();
                        // and error out?
                    }
                }
            });

            DoProcessWork(work, cts);

            cts.Cancel(); // stop the watchdog
            GC.EndNoGCRegion(); // throws if the runtime already left the region on its own
        }
        catch (Exception e)
        {
            // request failed
        }
    }
    else
    {
        // Wait on a signal from the main process
    }
}
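And to get the N workers running in the first place, a rough sketch of spawning them from the main process (Worker.exe and the endpoint argument are placeholders):

using System;
using System.Collections.Generic;
using System.Diagnostics;

int concurrency = Environment.ProcessorCount; // N = desired concurrency level
var workers = new List<Process>();
for (int i = 0; i < concurrency; i++)
{
    workers.Add(Process.Start(new ProcessStartInfo
    {
        FileName = "Worker.exe",                 // hypothetical worker binary
        Arguments = "net.pipe://localhost/work", // where to pull work from
        UseShellExecute = false
    }));
}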