
Any time someone asks how to implement a serializable singleton in C#, the standard advice is to implement ISerializable, have GetObjectData set the serialized type to a helper type that implements IObjectReference, and have that helper's GetRealObject method return the singleton instance.

That's actually how it's done in the sample code at this page: https://msdn.microsoft.com/en-us/library/system.runtime.serialization.iobjectreference.aspx

My question is why doesn't anyone recommend that the singleton itself implement IObjectReference? Is it not supposed to work in certain circumstances?

Consider this, for example:

using System;
using System.Diagnostics;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;

class Program {
    // Works:
    [Serializable]
    class Singleton1 : ISerializable {
        public static readonly Singleton1 instance = new Singleton1();

        private Singleton1() {
        }

        public void GetObjectData(SerializationInfo info, StreamingContext context) {
            info.SetType(typeof(Helper));
        }

        [Serializable]
        private class Helper : IObjectReference {
            public object GetRealObject(StreamingContext context) {
                return instance;
            }
        }
    }

    // Works:
    [Serializable]
    class Singleton2 : IObjectReference {
        public static readonly Singleton2 instance = new Singleton2();

        private Singleton2() {
        }

        public object GetRealObject(StreamingContext context) {
            return instance;
        }
    }

    // Does not work, of course:
    [Serializable]
    class Singleton3 {
        public static readonly Singleton3 instance = new Singleton3();

        private Singleton3() {
        }
    }

    static void Main(string[] args) {
        Console.WriteLine("Testing Singleton1");
        TestSingleton(Singleton1.instance);

        Console.WriteLine("Testing Singleton2");
        TestSingleton(Singleton2.instance);

        Console.WriteLine("Testing Singleton3, expect to fail.");
        TestSingleton(Singleton3.instance);
    }

    static void TestSingleton(object singletonInstance) {
        BinaryFormatter binaryFormatter = new BinaryFormatter();
        MemoryStream memoryStream = new MemoryStream();

        binaryFormatter.Serialize(memoryStream, singletonInstance);

        memoryStream.Position = 0;
        object newInstance = binaryFormatter.Deserialize(memoryStream);

        bool shouldBeTrue = object.ReferenceEquals(singletonInstance, newInstance);
        Debug.Assert(shouldBeTrue);
    }
}

Singleton1 is implemented the way that is normally recommended. Singleton2 implements IObjectReference directly. And of course, Singleton3 doesn't do anything special and fails.

I've never seen anyone recommend doing it the way Singleton2 does above. Why is that?

If I had to guess, I'd say it's one of two things:

  1. Maybe it fails in some circumstance, or could theoretically fail in the future for some reason.
  2. Maybe because a second instance briefly exists before the framework calls GetRealObject. But surely that window is so brief that it doesn't matter, right?
DarkTygur
  • The odds that anybody recommends [using singletons](http://stackoverflow.com/questions/137975/what-is-so-bad-about-singletons) multiplied by the odds that anybody recommends [binary serialization](https://blogs.msdn.microsoft.com/dotnet/2016/02/10/porting-to-net-core/) does leave a rather low number. – Hans Passant Jul 19 '16 at 16:39
    Likely because it would require that there be, temporarily, more than one instance of the singleton, which would violate its basic design. Singletons are often heavy. so doing this might cause problems. For instance, if the singleton opens a file in its constructor for caching stuff, the temporary second singleton could try to open the same file a second time, causing an exception. Now, it may be that **your specific singleton** doesn't have this problem, but others might, so official recommendations do not recommend doing this. – dbc Jul 19 '16 at 17:35
  • @dbc that sounds like an excellent reason. I hadn't considered heavy singletons. If you post that as an answer, I'll accept it - unless a better one comes up, of course. – DarkTygur Jul 19 '16 at 19:01

1 Answer


Likely because deserializing an instance of the "real" singleton type would require that there be, temporarily, more than one instance of the singleton, which would violate its basic design principle.

Since singletons are often heavy, doing this might cause practical problems. For instance, if the singleton opens a file in its constructor for caching stuff, the temporary second singleton could try to open the same file a second time, causing an exception.
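You can actually observe that temporary second instance directly: the receiver of GetRealObject is itself the extra object the formatter materialized from the stream. Here's a minimal sketch (hypothetical type name `ObservableSingleton`; assumes BinaryFormatter on a runtime that still supports it):

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
class ObservableSingleton : IObjectReference {
    public static readonly ObservableSingleton instance = new ObservableSingleton();

    private ObservableSingleton() { }

    public object GetRealObject(StreamingContext context) {
        // 'this' is the temporary instance the formatter created from the
        // stream - a second, distinct object that exists until GetRealObject
        // returns and it becomes garbage.
        Console.WriteLine(ReferenceEquals(this, instance)); // False
        return instance;
    }
}

class Demo {
    static void Main() {
        var formatter = new BinaryFormatter();
        var stream = new MemoryStream();
        formatter.Serialize(stream, ObservableSingleton.instance);
        stream.Position = 0;
        object roundTripped = formatter.Deserialize(stream);
        // The formatter substitutes the real singleton for the temporary.
        Console.WriteLine(ReferenceEquals(roundTripped, ObservableSingleton.instance)); // True
    }
}
```

Note that the formatter creates the temporary via FormatterServices.GetUninitializedObject rather than the private constructor, but any state loaded from the stream (or any finalizer the type defines) still applies to that extra object.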

And in the specific case of serialization using BinaryFormatter, serializing the "real" singleton would result in all of its internal state being serialized (i.e. all public and private fields). This is probably not what is desired, since singletons often represent global session state rather than model state. Avoiding serialization of internal state would require marking all fields with [NonSerialized] which could become an easily-overlooked nuisance.
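As a sketch of that nuisance (hypothetical type and field names), every piece of state in the direct-implementation approach needs its own opt-out:

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Serialization;

[Serializable]
class CachingSingleton : IObjectReference {
    public static readonly CachingSingleton instance = new CachingSingleton();

    // Without [NonSerialized], the entire cache would be written into the
    // serialized payload, even though GetRealObject discards the
    // deserialized copy anyway. Every field added later needs the same
    // treatment, and forgetting one silently bloats (or leaks) the payload.
    [NonSerialized]
    private readonly Dictionary<string, string> cache = new Dictionary<string, string>();

    private CachingSingleton() { }

    public object GetRealObject(StreamingContext context) {
        return instance;
    }
}
```

With the helper-type pattern of Singleton1, none of this is necessary: GetObjectData substitutes an empty helper object, so no instance fields are ever written to the stream.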

It may be that your specific singleton has none of the above problems, but others might, so official guidance avoids recommending this. Instead, the more complex helper-type pattern is recommended, which you can simplify yourself once you've verified that doing so won't cause issues like those mentioned above.

dbc