
If you have multiple assignments to shared variables inside one lock block, does that necessarily mean that all of those changes are immediately visible to other threads, potentially running on other processors, once they enter a lock statement on the same object - or is there no such guarantee?

A lot of the examples out there show a single "set" or "get" of a common variable and go into detail about memory barriers, but what happens if a more complicated set of statements is inside? Potentially even function calls that do other things?

Something like this:

lock(sharedObject)
{
  x = 10;
  y = 20;
  z = a + 10;
}

If this next block runs on another thread, possibly executing on another processor, are there any guarantees about the "visibility" of those changes?

lock (sharedObject)
{
  if (y == 10)
  {
     // Do something. 
  }
}

If the answer is no - perhaps an explanation of when these changes might become visible?

Dodgyrabbit

2 Answers


A lock block includes a memory fence at the start and at the finish (i.e. at the start and end of the block). This ensures that any changes to memory are visible to other cores (e.g. to other threads running on other cores). In your example, the changes to x, y and z in your first lock block will be visible to any other thread. "Visible" means any values cached in a register will be flushed to memory, and any memory cached in the CPU's cache will be flushed to physical memory. ECMA 334 details that a lock block is a block surrounded by Monitor.Enter and Monitor.Exit, and ECMA 335 details that Monitor.Enter "shall implicitly perform a volatile read operation..." and Monitor.Exit "shall implicitly perform a volatile write operation". This does mean that the modifications aren't guaranteed to be visible to other cores/threads until the end of the lock block (after the Monitor.Exit); but if all your accesses to these variables are guarded by a lock, there can be no simultaneous access to them from different cores/threads anyway.
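
For reference, here is a sketch of roughly what the compiler emits for the first lock block (the fields are taken from the question; newer compilers actually use the Monitor.Enter(object, ref bool lockTaken) overload, but the fence placement is the same):

using System.Threading;

class Shared
{
  private readonly object sharedObject = new object();
  private int x, y, z, a;

  public void Update()
  {
    // Roughly equivalent to: lock (sharedObject) { x = 10; y = 20; z = a + 10; }
    Monitor.Enter(sharedObject);   // acquire: implicit volatile read
    try
    {
      x = 10;
      y = 20;
      z = a + 10;
    }
    finally
    {
      Monitor.Exit(sharedObject);  // release: implicit volatile write
    }
  }
}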

This effectively means that any variables guarded by a lock statement do not need to be declared as volatile in order to have their modifications visible to other threads.

Since the example code relies only on the read and write of a single value (y), which is atomic, you could get the same results with:

try
{
  x = 10;
  y = 20;
  Thread.VolatileWrite(ref z, a + 10);
}
finally { }

and

if(y == 10)
{
// ...
}

The first block guarantees that the write to x is visible before the write to y, and the write to y is visible before the write to z. It also guarantees that if the writes to x or y were sitting in the CPU's cache, that cache would be flushed to physical memory (and thus be visible to any other thread) immediately after the call to VolatileWrite.

If within the if(y == 10) block you do something with x and y, you should return to using the lock keyword.

Further, the following would be identical:

try
{
  x = 10;
  y = 20;
  Thread.MemoryBarrier();
  z = a + 10;
}
finally { }
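
If you also want the acquire side to be explicit when reading, a volatile read of y would look like the sketch below (this assumes y is an int field; as noted above, the plain read is already atomic):

int observedY = Thread.VolatileRead(ref y);  // acquire: forces a fresh read of y
if (observedY == 10)
{
  // ...
}
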
Peter Ritchie
  • @Neil Read ECMA 335; the term "visibility" also refers to whether changes to memory can be seen by other threads/cores. My answer details how changes to memory may not be seen by other threads/cores when the change occurs (register caching and the CPU memory cache); see also http://msdn.microsoft.com/en-us/library/windows/hardware/ff540496(v=vs.85).aspx (it uses "see" rather than "visibility", but you get the idea). Also read ECMA 334 section 17.4.3 Volatile fields, where it details "would be permissible for the store to `result` to be *visible* to the main thread after the store to `finished`" etc. – Peter Ritchie Jul 23 '12 at 14:11
  • This is not a matter of seeing or not seeing the change. The other thread will always see the change so long as the change is on the same instance. This is a matter of when the change will be seen, not if the change can be seen. The lock block on its own does not guarantee that other threads will see an atomic change to x, y and z, nor does it guarantee that x, y or z are modified in the order in which the operations occur - the only thing that can guarantee that the operation is seen as atomic is that other code looking at it respects the mutex and locks on the same lock instance. – Neil Jul 23 '12 at 16:06
  • @Neil no, they won't always, without a volatile write. That's the point. If the variable is not volatile, the write can be cached in a register, or the write to memory could be sitting in the CPU's cache; i.e. the result of the write cannot be "seen" by other cores/threads. Also, lock *does* guarantee that instructions will not be re-ordered - or rather, that changes which can be "seen" (avoiding "visible") from other threads will be seen in the same order as they were written in code. This guarantee is inherited from the try/catch guarantee, since the code in a lock block is wrapped in a try block. – Peter Ritchie Jul 23 '12 at 16:18
  • Which is why I said it is a matter of timing. The value that is in the register or CPU cache will eventually be written back to shared memory. The lock block does whatever it needs to do to make that happen. The implementation of how that is done is irrelevant - a lock block will be viewed as atomic by other threads, but only if they also respect the same mutex. Essentially that is what I am saying. The lock block on its own does nothing to protect x, y and z in particular; only the whole of the application's code respecting the mutex will do that. – Neil Jul 23 '12 at 16:29
  • @Neil You haven't mentioned timing at all. Yes, the lock *does* do what it needs to do to make memory changes visible, but that's not due to synchronization; it's from the implicit memory barriers *and* the try/catch guarantees. You can create synchronized code without lock (or a mutex, etc.), but you won't get those visibility guarantees. – Peter Ritchie Jul 23 '12 at 16:51
  • let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/14298/discussion-between-neil-and-peter-ritchie) – Neil Jul 23 '12 at 17:28
  • +1 from me. I think the excerpts you quote from the ECMA specs pretty authoritatively answer the question. – Dan Tao Jul 23 '12 at 18:36

Forgive me if I'm misunderstanding your question (very possible), but I think you're operating on a confused blend of the concepts of synchronization and visibility.

The whole point of a mutex ("mutual exclusion") is to ensure that two blocks of code will not run simultaneously. So in your example, the first block:

lock(sharedObject)
{
  x = 10;
  y = 20;
  z = a + 10;
}

...and the second block:

lock (sharedObject)
{
  if (y == 10)
  {
     // Do something. 
  }
}

...will never execute at the same time. This is what the lock keyword guarantees for you.

Therefore, any time your code has entered the second block, the variables x, y, and z should be in a state that is consistent with a full execution of the first. (This is assuming that everywhere you access these variables, you lock on sharedObject in the same way you have in these snippets.)

What this means is that the "visibility" of intermediate changes within the first block is irrelevant from the perspective of the second, since there will never be a time when, e.g., the change to the value x has occurred but not to y or z.
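
To put that assumption in code (the class and method names here are invented just for illustration), every access to x, y and z goes through a lock on the same sharedObject:

class Example
{
  private readonly object sharedObject = new object();
  private int x, y, z, a;

  public void Update()
  {
    lock (sharedObject)
    {
      // No other lock (sharedObject) block can run while these assignments execute.
      x = 10;
      y = 20;
      z = a + 10;
    }
  }

  public void Check()
  {
    lock (sharedObject)
    {
      // By the time this block runs, either all of the assignments in Update
      // have taken effect or none of them have - never a partial update.
      if (y == 10)
      {
        // Do something.
      }
    }
  }
}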

Dan Tao
    "Visibility" is relevant. The guarding against two threads running specific bits of code simultaneously is different from acquire and release semantics. Just because two pieces of code can't be run at the same time on different threads (and presumably, different cores) is completely different from whether changes to a variable are "visible" to other threads. "Visibility" has to deal with potentially cached values. See my answer for more details. "lock" also provides visibility guarantees. – Peter Ritchie Jul 23 '12 at 13:55
  • @PeterRitchie: I think I understand your point. I posted this answer because I interpreted the OP's question to be: "Is there a chance that another thread will see `x` equal to `10` but `y` *not* equal to `20`?"—i.e., if it were possible for intermediate changes from within a synchronized block of code to somehow become "visible" in other threads. I wanted to point out that as long as all of the code in question is synchronized, there is no chance for this to happen. It could be that the OP was actually asking the lower-level question, in which case I'd say your answer is more useful. – Dan Tao Jul 23 '12 at 14:11
  • For me, the use of "memory barrier" means the OP was asking about "visibility", not just about synchronization. Yes, there's no chance that those two pieces of code could be running at the same time. The question is whether the other block of code, running on a different core, can *always* see what happened in the first block, whenever it runs. – Peter Ritchie Jul 23 '12 at 14:20
  • Yes, the question is about visibility. Synchronization is understood - multiple threads can't enter the code block enclosed by the lock statement. However, the gist of the question is rather: if thread A exits the block and another thread B enters a lock on the same object, is it *guaranteed* to see the new values immediately? If so, it would imply that any CPUs or cores that might have had the variables cached in registers or their caches would need to discard those copies and reload fresh values from memory. – Dodgyrabbit Jul 23 '12 at 18:13
  • @Dodgyrabbit: Then I did misunderstand your question, and I apologize for telling you what you already knew. For what it's worth—and I fully admit this is purely anecdotal—my experience with .NET has generally been that with respect to memory models, the framework provides a rather strong shield to the developer ensuring the behavior one would intuitively/naively expect. For instance, the well-known shortcoming of double-checked locking in Java does not exist in .NET (last time I checked anyway). I think the same is true in this case (naive expectations prove to be accurate). – Dan Tao Jul 23 '12 at 18:35
  • @Dodgyrabbit has there been enough information in this thread to assure you that those multiple changes are visible to all other threads/cores upon the exit of the lock block? – Peter Ritchie Jul 23 '12 at 19:56