
Reading Joseph Albahari's threading tutorial, the following are mentioned as generators of memory barriers:

  • C#'s lock statement (Monitor.Enter/Monitor.Exit)
  • All methods on the Interlocked class
  • Asynchronous callbacks that use the thread pool — these include asynchronous delegates, APM callbacks, and Task continuations
  • Setting and waiting on a signaling construct
  • Anything that relies on signaling, such as starting or waiting on a Task

In addition, Hans Passant and Brian Gideon added the following (assuming they don't already fit into one of the previous categories):

  • Starting or waking up a thread
  • Context switch
  • Thread.Sleep()

I was wondering whether this list is complete (and whether a complete list could even practically be made).
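
To make the first category concrete, here is a minimal sketch (names like `_gate`, `_complete`, and `_answer` are my own, not from the question) showing how the `lock` statement from the first bullet both serializes access and generates the fences that publish writes to another thread:

```csharp
using System;
using System.Threading;

class Program
{
    static bool _complete;
    static int _answer;
    static readonly object _gate = new object();

    static void Main()
    {
        var worker = new Thread(() =>
        {
            lock (_gate)             // Monitor.Enter: implies a memory barrier
            {
                _answer = 42;
                _complete = true;
            }                        // Monitor.Exit: implies a memory barrier
        });
        worker.Start();              // starting a thread also implies a barrier
        worker.Join();

        lock (_gate)                 // re-acquire before reading
        {
            if (_complete)
                Console.WriteLine(_answer); // prints 42
        }
    }
}
```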

EDIT: suggested additions:

  • Volatile (reading implies an acquire fence, writing implies a release fence)
Ohad Schneider
  • This is going to be about [Memory Models](http://msdn.microsoft.com/en-us/magazine/cc163715.aspx). On x86/x64 every write is a fence. Read the part about the Itanium in Albahari's article. This list is not going to be of much practical use. – H H Jul 05 '11 at 11:35
  • Thanks, I'm aware of that article. Actually according to it, in .NET 2 all writes are write fences (regardless of hardware architecture). I'm interested in other .NET implied memory barriers. – Ohad Schneider Jul 05 '11 at 11:41
  • @ohadsc: The x86-like "all writes are write fences" behaviour *is a feature of Microsoft's CLR*. The ECMA CLI spec doesn't provide any such guarantee, and I'm not sure what strong guarantees other implementations provide; for example, Mono. – LukeH Jul 05 '11 at 12:10
  • @LukeH - True, I should have been more specific – Ohad Schneider Jul 05 '11 at 12:18

3 Answers


Here is my take on the subject, attempting to provide a quasi-complete list in one answer. If I run across any others I will edit my answer from time to time.

Mechanisms that are generally agreed upon to cause implicit barriers:

  • All Monitor class methods, including the C# lock statement.
  • All Interlocked class methods.
  • All Volatile class methods (.NET 4.5+).
  • Most SpinLock methods including Enter and Exit.
  • Thread.Join
  • Thread.VolatileRead and Thread.VolatileWrite
  • Thread.MemoryBarrier
  • The volatile keyword.
  • Anything that starts a thread or causes a delegate to execute on another thread, including QueueUserWorkItem, Task.Factory.StartNew, Thread.Start, compiler-supplied BeginInvoke methods, etc.
  • Using a signaling mechanism such as ManualResetEvent, AutoResetEvent, CountdownEvent, Semaphore, Barrier, etc.
  • Using marshaling operations such as Control.Invoke, Dispatcher.Invoke, SynchronizationContext.Post, etc.
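
As a minimal sketch of the signaling bullet (field and event names are my own), setting a signal publishes the writes made before it, and the thread that returns from the wait sees them without any explicit lock or volatile:

```csharp
using System;
using System.Threading;

class SignalExample
{
    static int _data;
    static readonly ManualResetEventSlim _ready = new ManualResetEventSlim(false);

    static void Main()
    {
        var producer = new Thread(() =>
        {
            _data = 123;       // ordinary write...
            _ready.Set();      // ...published by the barrier implied by Set()
        });
        producer.Start();

        _ready.Wait();         // barrier implied on the waiting side
        Console.WriteLine(_data); // prints 123
    }
}
```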

Mechanisms that are speculated (but not known for certain) to cause implicit barriers:

  • Thread.Sleep (proposed by myself and possibly others, because code that exhibits a memory-barrier problem can be fixed with this method)
  • Thread.Yield
  • Thread.SpinWait
  • Lazy<T> depending on which LazyThreadSafetyMode is specified
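
Regarding the Lazy&lt;T&gt; bullet, the mode determines which synchronization (and hence which implicit fences) the type uses internally: ExecutionAndPublication takes a lock, PublicationOnly uses an interlocked operation, and None uses neither, which is presumably why it only belongs in the speculative list. A minimal sketch (the field name is my own):

```csharp
using System;
using System.Threading;

class LazyExample
{
    static readonly Lazy<int[]> _table = new Lazy<int[]>(
        () => new int[] { 1, 2, 3 },
        LazyThreadSafetyMode.ExecutionAndPublication); // factory runs under a lock

    static void Main()
    {
        // Safe to call from many threads: the first caller runs the factory
        // under a lock, and later readers see the fully constructed array.
        Console.WriteLine(_table.Value.Length); // prints 3
    }
}
```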

Other notable mentions:

  • Default add and remove accessors for events in C#, since they use lock or Interlocked.CompareExchange (depending on compiler version).
  • x86 stores have release fence semantics
  • Microsoft's implementation of the CLI has release-fence semantics on writes, despite the fact that the ECMA specification does not mandate it.
  • MarshalByRefObject seems to suppress certain optimizations in subclasses which may make it appear as if an implicit memory barrier were present. Thanks to Hans Passant for discovering this and bringing it to my attention.1

1 This explains why BackgroundWorker works correctly without having volatile on the underlying field for the CancellationPending property.

Brian Gideon
  • Nice! (+1) It was Hans Passant who mentioned the context switch in his comment here: http://stackoverflow.com/q/6574389/67824. Regarding the event handlers, lock(this) was actually replaced with an Interlocked implementation: http://stackoverflow.com/questions/3522361/add-delegate-to-event-thread-safety/3522556#3522556 – Ohad Schneider Aug 04 '11 at 11:36
  • The way I've come to think about memory barriers is that if 2 threads *could* access some shared state at the *same exact time* - a memory barrier (preferably lock) is needed. Otherwise, whatever mechanism that was put in place to prevent the concurrency (e.g. signaling, waiting, starting thread B only after thread A accessed the shared state) has probably brought up the required memory barriers. Would you agree with this approach? – Ohad Schneider Aug 04 '11 at 11:43
  • @ohadsc: I didn't realize add/remove handlers were now implemented with `Interlocked.CompareExchange`. Nice catch! – Brian Gideon Aug 04 '11 at 13:12
  • @ohadsc: Yes, I think I generally agree with that statement. – Brian Gideon Aug 04 '11 at 13:16
  • I'm glad to hear that, far less headaches with this approach :) – Ohad Schneider Aug 04 '11 at 15:34
  • About Thread.Sleep creating a barrier, could you give some references? – thewpfguy May 16 '13 at 04:17
  • @thewpfguy: None sorry. That's why it is staying in my speculation list. Though, knowing how it works behind the scenes I'm nearly 100% convinced that it *always* generates a barrier. I know for certain that it *sometimes* does because it's easy to demonstrate. – Brian Gideon May 16 '13 at 23:38
  • SpinLock.Exit() and SpinLock(true) also generate memory barriers - as they use Interlocked constructs - but it might be helpful to have them explicitly in the list. – Jan May 30 '14 at 07:58
  • Would you have a reference for Join() causing a barrier ? – AfterWorkGuinness Jun 07 '16 at 18:02
  • @AfterWorkGuinness: No, I don't. But, it only makes sense that it does otherwise things wouldn't really work right. Also, no one really disputes that it does so it seems safe to assume it unless contradictory evidence is presented. – Brian Gideon Jun 07 '16 at 18:56

I seem to recall that the implementations of the Thread.VolatileRead and Thread.VolatileWrite methods actually cause full fences, not half fences.

This is deeply unfortunate, as people might have come to rely upon this behaviour unknowingly: they might have written a program that actually requires a full fence while believing they need, and are getting, only a half fence, and they will be in for a nasty surprise if an implementation of these methods ever does provide only a half fence.

I would avoid these methods. Of course, I would avoid everything involving low-lock code, not being smart enough to write it correctly in anything but the most trivial cases.
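
A minimal sketch of the distinction being drawn (method and field names are my own; this assumes the behaviour described above, where Thread.VolatileRead is implemented with a full Thread.MemoryBarrier while Volatile.Read from .NET 4.5+ promises only acquire, half-fence, semantics):

```csharp
using System;
using System.Threading;

class FenceExample
{
    static int _flag;

    static int FullFenceRead()
    {
        // Roughly what Thread.VolatileRead(ref _flag) does today:
        int value = _flag;
        Thread.MemoryBarrier(); // full fence - stronger than the contract requires
        return value;
    }

    static int HalfFenceRead()
    {
        // Acquire semantics only: later loads/stores can't move before this read.
        return Volatile.Read(ref _flag);
    }

    static void Main()
    {
        _flag = 1;
        Console.WriteLine(FullFenceRead() + HalfFenceRead()); // prints 2
    }
}
```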

Eric Lippert
  • Of course, this is for trivial cases (such as the one depicted in the thread I linked to). I can assure you I'm not smart enough as well :) – Ohad Schneider Jul 05 '11 at 15:47
  • Looking at the BCL code for VolatileRead/Write (C#4), it looks like only half-fences are set-up (i.e. a call to Thread.MemoryBarrier() only prior to Reads and only after Writes.) Of course I may just be misunderstanding what you meant by half vs. full fence. – dlev Jul 05 '11 at 23:08
  • @dlev: The full-on MemoryBarrier has a stronger - and more expensive - effect in weak memory models than simply doing a load-with-acquire IL instruction, as you normally would when reading a volatile field. – Eric Lippert Jul 06 '11 at 00:00
  • Personally, I'd like to use those methods (where the behaviour of the access is highlighted at the place where the access happens) and eschew `volatile` (where the behaviour of the access is highlighted where the field is, perhaps many lines of code away). As you say though, the way things stand this is both safer than it appears (and hence perhaps suddenly less safe with an implementation change). It's also more expensive (because while I also avoid low-lock code for real-world use unless I've a definite gain to make, I do like optimising to stupid degrees when experimenting for fun). – Jon Hanna Dec 07 '11 at 15:13

The volatile keyword acts as a memory barrier too. See http://blogs.msdn.com/b/brada/archive/2004/05/12/130935.aspx
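
A minimal sketch of what volatile buys (field names are my own): the acquire fence on reads and release fence on writes mean the assignment to `_value` can't be reordered past the write to `_initialized`, and the reader can't hoist its reads above the loop's volatile read:

```csharp
using System;
using System.Threading;

class VolatileExample
{
    static int _value;
    static volatile bool _initialized;

    static void Main()
    {
        new Thread(() =>
        {
            _value = 7;           // ordinary write
            _initialized = true;  // volatile write: release fence
        }).Start();

        while (!_initialized) { } // volatile read: acquire fence, not hoisted
        Console.WriteLine(_value); // prints 7 once the loop exits
    }
}
```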

Leonard Brünings
  • True, I'll add it to the list – Ohad Schneider Jul 05 '11 at 11:43
  • `volatile` doesn't cause a memory barrier. In the link, a memory barrier is used to prevent reordering, but that doesn't mean that if you prevent reordering you get a memory barrier! – configurator Jul 05 '11 at 12:01
  • AFAIK volatile causes all reads/writes to be executed before the volatile variable is read/written. Or am I mistaken @configurator? – Leonard Brünings Jul 05 '11 at 12:13
  • From Albahari's tutorial: **The volatile keyword instructs the compiler to generate an acquire-fence on every read from that field, and a release-fence on every write to that field** – Ohad Schneider Jul 05 '11 at 12:19
  • I could be wrong. Let me qualify my comment appropriately: _as far as I know_, `volatile` doesn't cause a memory barrier, but does prevent reordering of reads and writes; a memory barrier is in a sense a stronger promise than volatile field reads or writes are. – configurator Jul 05 '11 at 16:19
  • @Damokles, @ohadsc: See Eric's comment to his own answer here - paraphrased: "a memory barrier has a stronger effect than reading a volatile field" – configurator Jul 06 '11 at 17:03
  • @configurator he said "a full-on memory barrier". There's more than one type of memory barrier. Reads and writes from volatiles do have memory barriers, but not the full memory barrier that `Thread.MemoryBarrier` has. – Jon Hanna Dec 07 '11 at 15:05