14

My question is about order of execution guarantees in C# (and presumably .NET in general). I give Java examples, which I know something about, for comparison.

For Java (from "Java Concurrency in Practice")

There is no guarantee that operations in one thread will be performed in the order given by the program, as long as the reordering is not detectable from within that thread, even if the reordering is apparent to other threads.

So the code

  y = 10;
  x = 5;
  a = b + 10;

may actually assign a = b + 10 before assigning y = 10.

And in Java (from the same book)

Everything thread A does in or prior to a synchronized block is visible to thread B when it starts a synchronized block guarded by the same lock.

so in Java

 y = 10;
 synchronized(lockObject) {
     x = 5;
 }
 a = b + 10;

y = 10 and x = 5 are guaranteed to both run before a = b + 10 (I don't know whether y = 10 is guaranteed to run before x = 5).

What guarantees does C# make about the order of execution of the following statements?

 y = 10;
 lock(lockObject) {
     x = 5;
 }
 a = b + 10;

I am particularly interested in an answer that can provide a definitive reference or some other really meaningful justification. Guarantees like this are hard to test, because they are about what the compiler is allowed to do, not what it does every time, and because when they fail you get very hard-to-reproduce intermittent bugs when threads hit things in just the wrong order.

  • If you have a unit of code where order of execution matters, you should synchronize the whole unit instead of relying on what the compiler may or may not optimize. – Justin Morgan May 13 '11 at 18:44
  • The definitive resource is the CLI specification, which specifies the memory model and execution model that .NET is constrained to. – Chris Chilvers May 13 '11 at 18:45
  • The conclusion you draw about your second block of code does not follow from your second book quote. Unless y, x, and a are volatile, the order of operations within the thread isn't coupled to memory visibility in other threads. – Affe May 13 '11 at 18:48
  • Also (at least in older versions of Java) the synchronized block did not guarantee that, because it only had a memory barrier on entering the sync block and not upon leaving. Hence double-checked locking in Java was broken. I don't know if this has changed in newer versions, as I last looked around 2003. So pay attention to where the memory barriers are, and whether things have release vs. acquire semantics. – Chris Chilvers May 13 '11 at 18:50
  • There are several points in the C# Language Specification that you should consider, available here: http://msdn.microsoft.com/en-us/library/ms228593.aspx Specifically, look at sections 3.10, 8.12, and 10.5.3; there may be others. – Anthony Pegram May 13 '11 at 18:50
  • The processor has a tendency to reorder instructions, when it's safe, for performance reasons, so I'd expect that behavior to bubble up to this level. – 3Dave May 13 '11 at 19:18
  • @Affe The quote itself should probably say "everything in the program code prior to"... The surrounding text in the book is clearer. Sorry about that. However, it sounds like you are making a different objection. Could you clarify? (It sounds like you are saying that for variables not in CPU registers the change actually takes place in the L1 cache, so it won't be visible in memory; but shouldn't another thread that tries to use the value have to load the memory into the L1 cache as well, at which point the CPU does a quick search and finds it's already been loaded into L1?) –  May 16 '11 at 14:48

4 Answers

6

I'm worried that you're even asking this, but since you asked:

y = 10;
Thread.MemoryBarrier();
x = 5;
Thread.MemoryBarrier();
a = b + 10;
Thread.MemoryBarrier();
// ...

From MSDN:

Synchronizes memory access as follows: The processor executing the current thread cannot reorder instructions in such a way that memory accesses prior to the call to MemoryBarrier execute after memory accesses that follow the call to MemoryBarrier.
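
To put that in context, here is a minimal sketch of how those fences might be used. The enclosing class, the field declarations, and the reader thread are my own assumptions for illustration; only the three barriered assignments come from the code above.

using System;
using System.Threading;

class FenceExample
{
    static int x, y, a, b;

    static void Writer()
    {
        y = 10;
        Thread.MemoryBarrier(); // full fence: the write to y cannot be moved below this point
        x = 5;
        Thread.MemoryBarrier(); // full fence: the write to x cannot be moved below this point
        a = b + 10;
        Thread.MemoryBarrier(); // full fence: the write to a completes before anything after it
    }

    static void Reader()
    {
        Thread.MemoryBarrier(); // full fence before reading the shared fields
        Console.WriteLine("x={0}, y={1}, a={2}", x, y, a);
    }

    static void Main()
    {
        var writer = new Thread(Writer);
        writer.Start();
        Reader();
        writer.Join();
    }
}

Note that Thread.MemoryBarrier is a full fence: it constrains reordering in both directions, which is stronger than the one-directional acquire and release that lock entry and exit provide.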

Krypes
6

ISO 23270:2006 — Information technology—Programming languages—C#, §10.10 says (and I quote):

10.10 Execution order

Execution shall proceed such that the side effects of each executing thread are preserved at critical execution points. A side effect is defined as a read or write of a volatile field, a write to a non-volatile variable, a write to an external resource, and the throwing of an exception. The critical execution points at which the order of these side effects shall be preserved are references to volatile fields (§17.4.3), lock statements (§15.12), and thread creation and termination. An implementation is free to change the order of execution of a C# program, subject to the following constraints:

  • Data dependence is preserved within a thread of execution. That is, the value of each variable is computed as if all statements in the thread were executed in original program order. (emphasis mine).

  • Initialization ordering rules are preserved (§17.4.4, §17.4.5).

  • The ordering of side effects is preserved with respect to volatile reads and writes (§17.4.3).

Additionally, an implementation need not evaluate part of an expression if it can deduce that that expression’s value is not used and that no needed side effects are produced (including any caused by calling a method or accessing a volatile field). When program execution is interrupted by an asynchronous event (such as an exception thrown by another thread), it is not guaranteed that the observable side effects are visible in the original program order.

The other CLI standards are likewise available gratis from the ISO.

But if you are worried about multi-threading issues, you'll need to dig deeper into the standards and understand the rules about atomicity. Not every operation is guaranteed to be atomic. If you are multi-threaded and invoking methods that reference anything but local variables (e.g., instance or class (static) members) without serializing access via lock, a mutex, a semaphore, or some other serialization technique, you are leaving yourself open to race conditions.
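
For example, here is a small sketch of the kind of race that last paragraph warns about, and the lock that serializes it. The counter field, thread count, and iteration count are made up for illustration.

using System;
using System.Threading;

class RaceExample
{
    // Writes to an int are atomic, but counter++ is a read-modify-write and is not.
    static int counter;
    static readonly object lockObject = new object();

    static void Main()
    {
        var threads = new Thread[4];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(() =>
            {
                for (int n = 0; n < 100000; n++)
                {
                    lock (lockObject) // serializes the read-modify-write; remove it and
                    {                 // the final total becomes unpredictable
                        counter++;
                    }
                }
            });
            threads[i].Start();
        }

        foreach (var t in threads) t.Join();
        Console.WriteLine(counter); // 400000 with the lock; typically less without it
    }
}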

Nicholas Carey
3

What you are looking for is Thread.MemoryBarrier.

However, explicit barriers may not be necessary on Microsoft's current implementation of .NET. See this SO question for more details.
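
If the goal is simply to publish a value from one thread to another with ordering guarantees, a sketch like the following (the field names and the ready-flag pattern are my assumptions, not part of this answer) relies on the volatile keyword instead of explicit barriers; volatile reads and writes are among the critical execution points listed in the specification quoted in another answer (§17.4.3):

class Publication
{
    int data;            // ordinary field
    volatile bool ready; // a volatile write cannot have earlier writes moved after it,
                         // and a volatile read cannot have later reads moved before it

    public void Writer()
    {
        data = 42;       // guaranteed not to be reordered past the volatile write below
        ready = true;    // volatile write (release)
    }

    public void Reader()
    {
        if (ready)       // volatile read (acquire): if it observes true,
        {                // the earlier write to data is also visible
            System.Console.WriteLine(data);
        }
    }
}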

Scott Chamberlain
1

Without having read anything about the .NET memory model, I can assure you that .NET gives you at least those guarantees (i.e., lock behaves like an acquire and unlock like a release), since they are the weakest guarantees that are useful.
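
To make that concrete, here is a hedged sketch based on the code from the question, read through acquire/release semantics. The class wrapper, field declarations, and reader thread are my additions.

class AcquireRelease
{
    static int y, x, a, b;
    static readonly object lockObject = new object();

    static void Writer()
    {
        y = 10;
        lock (lockObject)   // entering the lock is an acquire,
        {                   // leaving it is a release: neither y = 10 nor x = 5
            x = 5;          // can be moved past the release, so a thread that
        }                   // later takes the same lock will see both writes
        a = b + 10;         // this is after the release; it may be moved earlier,
                            // into the locked region, but not above the acquire
    }

    static void Reader()
    {
        lock (lockObject)   // acquire: pairs with the writer's release
        {
            System.Console.WriteLine("x={0}, y={1}", x, y);
        }
    }
}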

ninjalj