
Yes, I know how to use GC.SuppressFinalize() - it's explained here. I've read many times that calling GC.SuppressFinalize() removes the object from the finalization queue, and this is assumed to be good because it spares the GC the extra work of calling the finalizer.

So I crafted this (mostly useless) code where the class implements IDisposable as in the linked answer:

public class MyClass : IDisposable
{
   ~MyClass()
   {
       Dispose(false);
   }

   public void Dispose()
   {
       Dispose(true);
       GC.SuppressFinalize(this);
   }

   private bool disposed = false;

   protected virtual void Dispose(bool disposing)
   {
       if (!disposed)
       {
           System.Threading.Thread.Sleep(0);
           disposed = true;
       }
   }
}

Here I'm using Sleep(0) to imitate some short piece of unmanaged work. Note that, because of the boolean field in the class, this unmanaged work is never executed more than once - even if I call Dispose() multiple times, or if the object is first disposed and then finalized - in any of these cases the "unmanaged work" runs only once.

Here's the code I use for measurements:

var start = DateTime.UtcNow;
var objs = new List<Object>();
for (int i = 0; i < 1000 * 1000 * 10; i++)
{
    using (var obj = new MyClass())
    {
        objs.Add(obj);
    }
}
objs = null;
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
GC.WaitForPendingFinalizers();
var duration = (DateTime.UtcNow - start).TotalMilliseconds;
Console.WriteLine(duration.ToString());

Yes, I add the objects that were just disposed into the list - this keeps them reachable until the collection at the end.

So I run the code above and it runs in 12.01 seconds (Release, without debugger). I then comment out the GC.SuppressFinalize() call and run the code again and it runs in 13.99 seconds.

The code which calls GC.SuppressFinalize() is about 14 percent faster. Even in this ridiculous case where everything is done to stress the GC (you rarely create ten million objects with finalizers in a row, do you?) the difference is only about 14%.

I guess that in realistic scenarios - where only a fraction of objects have finalizers in the first place, and those objects are not created in huge numbers - the difference in overall system performance would be negligible.

Am I missing something? Is there a realistic scenario where I would see notable benefits from using GC.SuppressFinalize()?

sharptooth
    Unless you've measured that the disposal of objects is a bottleneck in your code I'd just use the standard `IDisposable` pattern and trust that MS has done the relevant optimisations for you in the core code base. For me this is premature optimisation 90% of the time. Optimisations should always be based on measurements; blindly following patterns for no reason is (one) of the roots of all evil. – Liam Feb 15 '18 at 14:09
  • @Liam Well, that's sort of what I'm asking about. In the code above "useful" part of disposal runs once for every object no matter if `SuppressFinalize()` is called. What would be a scenario where I'd see substantial difference? – sharptooth Feb 15 '18 at 14:14
  • It's interesting that the linked question is nearly 10 years old now. A lot of changes happened in the framework since that was asked – Liam Feb 15 '18 at 14:14
  • 1
    It runs Sleep(0) 10 million times extra on the finalizer thread. Why this should take more than a second is not obvious, a tenth of a microsecond for a kernel call is not untypical. Actual behavior of Sleep(0) is highly unpredictable, it yields the processor if there is another thread ready to run. Not something you'd ever want to use in a perf measurement, too random. The more typical perf robbing behavior of a finalizer is a hard page fault in unmanaged memory. – Hans Passant Feb 15 '18 at 14:32
  • 1
    Why would you even use the disposable anti-pattern in the first place? If you directly own unmanaged resources, you want a SafeHandle. If you own them indirectly, you want `Dispose` without a finalizer. There are only some rare special scenarios where you want a traditional finalizer (mostly debugging/logging). – CodesInChaos Feb 15 '18 at 14:34
  • I think it boils down to this line (*The SuppressFinalize optimization is not trivial*) of [this answer](https://stackoverflow.com/a/151244/542251) @CodesInChaos – Liam Feb 15 '18 at 14:38
  • @Liam Honestly I don't get that line even when reading it in context. – sharptooth Feb 20 '18 at 07:12

1 Answer


To give a trivial example:

protected virtual void Dispose(bool disposing)
{
    if(!disposing)
    {
        HardStopSystem(
            $"Critical failure: an instance of '{GetType().Name}' was not disposed");
    }
}

This will only be reached if it hits the finalizer. Yes, it is forced and artificial, but it is based on real code that I've seen in many systems where it is vital that things are correctly disposed.

Another example is ensuring that things like unmanaged pointers aren't released twice (a double release would fail, or worse, corrupt memory). Now yes, you could set a marker to say "don't do this again", but then you get into the topic of immutable types, etc.
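A minimal sketch of that double-release hazard, using `Marshal.AllocHGlobal`/`Marshal.FreeHGlobal` as a stand-in for the unmanaged resource (the zeroed-pointer guard plays the role of the "marker" mentioned above):

```csharp
using System;
using System.Runtime.InteropServices;

public sealed class NativeBuffer : IDisposable
{
    private IntPtr buffer = Marshal.AllocHGlobal(1024);

    ~NativeBuffer()
    {
        Dispose(false);
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    private void Dispose(bool disposing)
    {
        // Without this guard, Dispose() followed by the finalizer
        // (i.e. with SuppressFinalize removed) would free the same
        // pointer twice and corrupt the native heap.
        if (buffer != IntPtr.Zero)
        {
            Marshal.FreeHGlobal(buffer);
            buffer = IntPtr.Zero;
        }
    }
}
```

Here the correctness doesn't depend on `SuppressFinalize` at all - the guard makes the release idempotent - but suppressing the finalizer still avoids pointlessly running it after a successful `Dispose()`.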


the difference in overall system performance would be negligible.

That would depend a lot on the scenario. Ultimately, finalizers are more about correctness than performance. IDisposable is more about performance, since it is about timeliness.
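Echoing the `SafeHandle` suggestion from the comments, here's a sketch of the shape where your own class needs no finalizer at all - the (hypothetical) `HGlobalHandle` wrapper inherits `SafeHandle`'s critical finalizer, so the correctness backstop lives in one small type:

```csharp
using System;
using System.Runtime.InteropServices;

// SafeHandle supplies the finalization backstop, so the owning
// class below does not need a finalizer of its own.
internal sealed class HGlobalHandle : SafeHandle
{
    public HGlobalHandle(int size) : base(IntPtr.Zero, ownsHandle: true)
    {
        SetHandle(Marshal.AllocHGlobal(size));
    }

    public override bool IsInvalid => handle == IntPtr.Zero;

    protected override bool ReleaseHandle()
    {
        Marshal.FreeHGlobal(handle);
        return true;
    }
}

public sealed class Wrapper : IDisposable
{
    private readonly HGlobalHandle handle = new HGlobalHandle(1024);

    // No finalizer here, so there is nothing to suppress:
    // disposal simply forwards to the SafeHandle.
    public void Dispose() => handle.Dispose();
}
```

With this shape the whole `SuppressFinalize` question mostly disappears from your own code, which is why the classic finalizer pattern is rarely needed today.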

Liam
Marc Gravell