7

I keep on running across code that uses double-checked locking, and I'm still confused as to why it's used at all.

I initially didn't know that double-checked locking is broken, and learning that only magnified the question for me: why do people use it in the first place? Isn't compare-and-swap better?

if (field == null)
    // Publish newValue only if no other thread has beaten us to it;
    // a losing thread simply falls through and returns the winner's value.
    Interlocked.CompareExchange(ref field, newValue, null);
return field;

(My question applies to both C# and Java, although the code above is for C#.)

Does double-checked locking have some sort of inherent advantage compared to atomic operations?
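
For reference, here is (roughly) the double-checked locking shape I keep seeing; `_lock` and `ComputeValue()` are just placeholder names for this sketch:

if (field == null)
{
    lock (_lock)                     // taken only while the field still looks uninitialized
    {
        if (field == null)           // re-check: another thread may have initialized it first
            field = ComputeValue();  // runs at most once
    }
}
return field;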

user541686
  • 205,094
  • 128
  • 528
  • 886
  • I don't think there can be an exact answer for your question, other than that those people probably haven't heard it is broken, and that double-checked locking is a semi-obvious naive solution to circumventing performance hits for synchronization... – Merlyn Morgan-Graham May 23 '11 at 04:17
  • @Merlyn: But is it still broken? I thought it was fixed in some version of Java (Edit: as of JDK5, it apparently works if you also use `volatile`), and I still see the code... – user541686 May 23 '11 at 04:18
  • 1
    @Mehrdad: http://stackoverflow.com/questions/394898/double-checked-locking-in-net/394932#394932. Still, if I had an alternative, I wouldn't use an idiom for a "portable" language that broke on certain versions. I don't know the performance of atomic operations (or much about code at that level, really ;)), but I'm betting it's better than locking, and probably allows multiple threads to continue. – Merlyn Morgan-Graham May 23 '11 at 04:21
  • The **Related** sidebar showed pretty much a duplicate question with a different title I didn't see first, so I'm closing my own question, haha: [Java Concurrency: CAS vs Locking](http://stackoverflow.com/questions/2664172/java-concurrency-cas-vs-locking) – user541686 May 23 '11 at 04:27
  • @Mehrdad - except you said you were interested in C# as well. C# and Java do have quite a few differences. – Damien_The_Unbeliever May 23 '11 at 04:29
  • @Damien: Yes; sorry I accepted an answer right before I saw your post, but I +1'd you since I think your answer's great. :-) I completely didn't notice that double-checked locking prevents multiple initializations. – user541686 May 23 '11 at 04:31
DCL avoids execution of the factory multiple times by establishing an exclusive region. The code shown does not, and thus is only relevant if 'newValue' is trivial to calculate. Also, a primary concern with broken DCL is that a partially-constructed object can be observed (although I've not seen such a "hypothetical" case demonstrated in JDK5 or MSFT CLR implementations; the MSFT CLR goes through extra hoops to allow a standard DCL to work on weak-MM architectures, which is beyond the 'spec'). In this case the "correction" is to add memory barriers, which may or may not be implied with CAS. – user2864740 Oct 17 '18 at 22:17

5 Answers

13

Does double-checked locking have some sort of inherent advantage compared to atomic operations?

(This answer only covers C#; I have no idea what Java's memory model is like.)

The principal difference is the potential race. If you have:

if (f == null)
    Interlocked.CompareExchange(ref f, FetchNewValue(), null);

then FetchNewValue() can be called arbitrarily many times on different threads. One of those threads wins the race. If FetchNewValue() is extremely expensive and you want to ensure that it is called only once, then:

if (f == null)
    lock(whatever)
        if (f == null)
            f = FetchNewValue();

This guarantees that FetchNewValue() is called only once.

If I personally want to do a low-lock lazy initialization then I do what you suggest: I use an interlocked operation and live with the rare race condition where two threads both run the initializer and only one wins. If that's not acceptable then I use locks.
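
As a sketch of that interlocked approach, wrapped in a helper (the names here are placeholders; `LazyInitializer.EnsureInitialized` in .NET 4 wraps up essentially the same pattern for you):

static T LowLockInit<T>(ref T field, Func<T> factory) where T : class
{
    if (field == null)
        // Several threads may each call factory(), but CompareExchange
        // ensures only one result is ever published to the field.
        Interlocked.CompareExchange(ref field, factory(), null);
    return field;
}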

Eric Lippert
  • 647,829
  • 179
  • 1,238
  • 2,067
4

In C#, it's never been broken, so we can ignore that for now.

The code you've posted assumes that newValue is already available, or is cheap to (re)calculate. With double-checked locking, you're guaranteed that only one thread will actually perform the initialization.

That being said, however, in modern C#, I'd normally prefer to just use a Lazy<T> to deal with the initialization.
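
For example, a minimal sketch (where `ExpensiveThing` and `CreateExpensiveThing()` are placeholders for the real type and initialization):

private static readonly Lazy<ExpensiveThing> _instance =
    new Lazy<ExpensiveThing>(CreateExpensiveThing);   // thread-safe by default

public static ExpensiveThing Instance
{
    get { return _instance.Value; }   // initializer runs at most once, on first access
}

The default thread-safety mode (ExecutionAndPublication) guarantees the factory runs only once, which is the same guarantee double-checked locking is after.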

Damien_The_Unbeliever
  • 234,701
  • 27
  • 340
  • 448
  • +1 Oh I got confused first but I see what you mean now; yeah, the point about the cheapness of the initialization is really important, one which I didn't notice at all; thanks! :) – user541686 May 23 '11 at 04:30
  • Awesome catch! This could indeed be a concern. – Matt May 23 '11 at 04:50
  • 1
    see this link: http://en.wikipedia.org/wiki/Double-checked_locking#Usage_in_Microsoft_.NET_.28Visual_Basic.2C_C.23.29. My reading is that DCL **is** broken in C# ... unless you implement it using an explicit write barrier or a `volatile`. That is certainly not how the original DCL idiom was implemented ... so saying "never been broken" is overstating things. – Stephen C May 23 '11 at 04:55
  • 1
@Stephen, DCL without volatile is not required to work properly by the CLI. However, on the CLR it does happen to work correctly. Microsoft has even published the volatile-less version on MSDN, so they really cannot afford to break it. However, it is a bad idea to rely on this, since it could affect portability, and even though they should not, Microsoft might still break it on some future CLR, especially one for a different platform. – Kevin Cathcart May 29 '11 at 16:51
It wasn't that "C# was never broken"; rather, (at least since 2.0) the Microsoft CLR implementation "ensures DCL without an explicit memory barrier cannot observe a partially-constructed object", even when not required by the C# spec. Information found in https://msdn.microsoft.com/en-us/magazine/jj883956.aspx - although a bit dated, it specifically discusses the CLR in the face of weaker memory models (IA64, ARM) and store buffers. Due to the vast amount of code relying on this, I can only [hope] that this same implementation "guarantee" is made on other implementations.. – user2864740 Oct 17 '18 at 22:20
1

Double-checked locking is used when the performance cost of locking the entire method would be significant. In other words, if you do not wish to synchronize on the object (on which the method is invoked) or on the class, you may use double-checked locking.

This may be the case when there is a lot of contention for the lock and the resource it protects is expensive to create; one would like to defer the creation until it is actually required. Double-checked locking improves performance by first checking a condition (the lock hint) to determine whether the lock must be obtained at all.

Double-checked locking was broken in Java until Java 5, when the new memory model was introduced (and even then the field must be declared volatile). Until then, it was quite possible for the lock hint to be true in one thread and false in another. In any case, the Initialization-on-Demand Holder idiom is a suitable replacement for the double-checked locking pattern; I find it much easier to understand.
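
Since the question covers both languages, here is a rough C# sketch of the same holder idea; the runtime runs the nested type's initializer exactly once and in a thread-safe way (`Service` is a placeholder name):

public sealed class Service
{
    private Service() { }

    public static Service Instance
    {
        get { return Holder.Value; }   // first access triggers Holder's type initializer
    }

    private static class Holder
    {
        // The runtime guarantees a type initializer runs exactly once,
        // thread-safely, so no explicit locking is needed.
        internal static readonly Service Value = new Service();

        // An explicit static constructor keeps initialization lazy
        // (it suppresses the beforefieldinit optimization).
        static Holder() { }
    }
}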

Vineet Reynolds
  • 76,006
  • 17
  • 150
  • 174
0

Well, the only advantage that comes to my mind is (the illusion of) performance: you check in a non-thread-safe way first, and only do the (possibly expensive) locking operations to re-check the variable when that first check fails. However, since double-checked locking is broken in a way that precludes drawing any firm conclusions from the non-thread-safe check, and it always smacked of premature optimization to me anyway, I would claim no, no advantage - it is an outdated, pre-Java-days idiom - but I would love to be corrected.

Edit: to be clear(er), I believe double-checked locking is an idiom that evolved as a performance enhancement on locking and checking every time, and, roughly, is close to the same thing as a non-encapsulated compare-and-swap. I'm personally also a fan of encapsulating synchronized sections of code, though, so I think calling another operation to do the dirty work is better.

Matt
  • 10,434
  • 1
  • 36
  • 45
  • Even if it *was* correct (which I believe it is in recent versions of Java, though I could be wrong), would there be any advantage? Doesn't if/CAS do the same thing? – user541686 May 23 '11 at 04:17
  • I don't believe there is any advantage, unless you can argue that the inline operations are more clear than a call somewhere else, or you worry about the performance implications of an external call? Trivial, COBOL-esque concerns, IMHO. – Matt May 23 '11 at 04:23
This misses key differences between DCL and CAS. CAS would be 'more equivalent' to a lock around *only the assignment*, where all the computation has already been done. DCL is designed to evaluate the factory only once. While DCL can be implemented with a CAS (plus a lock or other exclusive region), they are not synonyms. Furthermore, CAS is implemented differently from `volatile`, so using CAS in a DCL (as opposed to a volatile-implied barrier) "may" break the DCL and expose partially-constructed objects. – user2864740 Oct 17 '18 at 22:25
0

It "make sense" on some level that a value that would only change at startup shouldn't need to be locked to be accessed, but then you should add some locking (which you probably aren't going to need) just in case two threads try to access it at start up and it works most of the time. Its broken, but I can see why its an easy trap to fall in to.

Yaur
  • 7,333
  • 1
  • 25
  • 36