A lock block includes a memory fence at the start and at the end of the block. This ensures that any changes to memory are visible to other cores (e.g. to threads running on other cores). In your example, changes to x, y and z in your first lock block are visible to any other threads. "Visible" means any values cached in a register are flushed to memory, and any memory cached in the CPU's cache is flushed to physical memory. ECMA 334 details that a lock block is a block surrounded by Monitor.Enter and Monitor.Exit. Further, ECMA 335 details that Monitor.Enter "shall implicitly perform a volatile read operation" and Monitor.Exit "shall implicitly perform a volatile write operation". This does mean that the modifications won't be visible to other cores/threads until the end of the lock block (after the Monitor.Exit), but if all your accesses to these variables are guarded by a lock, there can be no simultaneous access to those variables across cores/threads anyway.
This effectively means that any variables guarded by a lock statement do not need to be declared as volatile in order to have their modifications visible to other threads.
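As a minimal sketch of that point (the class and member names here are hypothetical, not from your example), plain fields guarded by a lock on both the write side and the read side need no volatile qualifier:

```csharp
using System.Threading;

class SharedState
{
    private readonly object _sync = new object();
    private int _x, _y, _z;   // plain fields; no 'volatile' required

    public void Update(int a)
    {
        lock (_sync)          // Monitor.Enter: implicit volatile read
        {
            _x = 10;
            _y = 20;
            _z = a + 10;
        }                     // Monitor.Exit: implicit volatile write
    }

    public int ReadZ()
    {
        lock (_sync)          // readers take the same lock, so they
        {                     // always observe the completed writes
            return _z;
        }
    }
}
```

The key design point is that every access, reads included, goes through the same lock object; that is what makes the volatile keyword redundant here.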
Since the example code only relies on a single shared atomic operation (the read and write of a single value, y), you could get the same results with:
{
x = 10;
y = 20;
Thread.VolatileWrite(ref z, a + 10);
}
and
if(y == 10)
{
// ...
}
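Note that for the read side to see the latest value without a lock, it needs a matching volatile read; a sketch of that pairing (the class and field names are made up for illustration):

```csharp
using System.Threading;

class Flag
{
    private int _y;   // plain field; freshness comes from the volatile read/write pair

    public void Publish()
    {
        // Volatile write: flushed so other threads can observe it.
        Thread.VolatileWrite(ref _y, 10);
    }

    public bool Check()
    {
        // Volatile read: guarantees we don't test a stale cached value of _y.
        return Thread.VolatileRead(ref _y) == 10;
    }
}
```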
The first block guarantees that the write to x is visible before the write to y, and the write to y is visible before the write to z. It also guarantees that if the writes to x or y were cached in the CPU's cache, that cache is flushed to physical memory (and thus visible to any other thread) immediately after the call to VolatileWrite.
If within the if(y == 10) block you do something with x and y, you should return to using the lock keyword.
Further, the following would be identical:
{
x = 10;
y = 20;
Thread.MemoryBarrier();
z = a + 10;
}
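On newer frameworks the Volatile class expresses the same intent more directly; a hedged sketch (Volatile.Write is a one-way release fence rather than the full fence Thread.MemoryBarrier emits, which is sufficient for this publish pattern; the class name is hypothetical):

```csharp
using System.Threading;

class Publisher
{
    private int _x, _y, _z;

    public void Publish(int a)
    {
        _x = 10;
        _y = 20;
        // Release semantics: the writes to _x and _y above cannot be
        // reordered past this store to _z.
        Volatile.Write(ref _z, a + 10);
    }

    public int ReadZ()
    {
        // Acquire semantics: a reader that sees the new _z also sees
        // the earlier writes to _x and _y.
        return Volatile.Read(ref _z);
    }
}
```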