8

I was reading this MSDN article on lockless thread syncing. The article seems to imply that as long as you enter a lock before accessing shared variables, those variables will be up to date (in .NET 2.0 at least).

I got to thinking: how is this possible? A lock in .NET is just some arbitrary object that all threads check before accessing memory, but the lock itself has no knowledge of the memory locations being accessed.

If I have a thread updating a variable, or even a whole chunk of memory, how are those updates guaranteed to be flushed from CPU caches when entering/exiting a lock? Are ALL memory accesses effectively made volatile inside the lock?
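
For concreteness, here is roughly the pattern I mean (just a sketch; the class and names are made up):

```csharp
// Sketch of the scenario I'm asking about (illustrative names only).
class Counter
{
    private readonly object _sync = new object(); // arbitrary lock object
    private int _value;                           // shared state on the heap

    public void Increment()
    {
        lock (_sync)       // writer thread takes the lock...
        {
            _value++;      // ...and updates the shared variable
        }
    }

    public int Read()
    {
        lock (_sync)       // reader thread takes the *same* lock
        {
            return _value; // is this guaranteed to see the writer's update?
        }
    }
}
```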

GazTheDestroyer
  • It is not an area I have much experience with, but why does it matter whether the memory location you are accessing is in the CPU cache or not? – Ben Robinson Oct 20 '11 at 09:18
  • @BenRobinson - Suppose you have 2 threads running on different cores accessing 1 integer on the heap. In that case each thread might store a copy of the value in the local core's cache unless appropriate synchronization methods are being applied. – Polity Oct 20 '11 at 09:23

4 Answers

6

Check the work of Eric Lippert: http://blogs.msdn.com/b/ericlippert/archive/2011/06/16/atomicity-volatility-and-immutability-are-different-part-three.aspx

Locks guarantee that memory read or modified inside the lock is observed to be consistent, locks guarantee that only one thread accesses a given hunk of memory at a time, and so on.

So yes, as long as you lock each time before accessing shared resources, you can be pretty sure it's up to date.
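
For example (a quick sketch with made-up names), locking around every access to the shared field gives you both mutual exclusion and visibility:

```csharp
using System;
using System.Threading;

class Program
{
    private static readonly object Gate = new object();
    private static int _counter; // shared state

    static void Main()
    {
        var threads = new Thread[4];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(() =>
            {
                for (int j = 0; j < 100000; j++)
                {
                    lock (Gate)        // every access goes through the same lock
                    {
                        _counter++;
                    }
                }
            });
            threads[i].Start();
        }

        foreach (var t in threads) t.Join();

        lock (Gate)                    // read under the lock as well
        {
            Console.WriteLine(_counter); // prints 400000 every time
        }
    }
}
```

Without the lock (or some other fence) around each increment, both the final count and its visibility to other threads would be unpredictable.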

EDIT: see the following post for more information and a very useful overview: http://igoro.com/archive/volatile-keyword-in-c-memory-model-explained/

Polity
  • Interesting links, thanks. Didn't realise all C# writes are volatile! The second link answers my question: memory accesses are not volatile within the lock, but releasing a lock flushes writes and obtaining a lock flushes the read cache, thus values are current. – GazTheDestroyer Oct 20 '11 at 09:40
  • Does this refer to all kinds of locks that .NET provides, including Mutex, Monitor, ReaderWriterLock, and so on? – Tobias Knauss Aug 14 '18 at 13:41
1

Well, the article explains it:

  1. Reads cannot move before entering a lock.

  2. Writes cannot move after exiting a lock.

And more explanation from the same article:

When a thread exits the lock, the third rule ensures that any writes made while the lock was held are visible to all processors. Before the memory is accessed by another thread, the reading thread will enter a lock and the second rule ensures that the reads happen logically after the lock was taken.
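
In C#, a lock block expands to roughly this Monitor.Enter/Exit pattern (a sketch, not the exact compiler output; the class is made up), which is where those two rules take effect:

```csharp
using System.Threading;

class Account
{
    private readonly object _sync = new object();
    private int _balance; // shared state

    public void Deposit(int amount)
    {
        // lock (_sync) { ... } is roughly equivalent to:
        Monitor.Enter(_sync);
        try
        {
            // Rule 1: this read cannot be moved above Monitor.Enter (acquire).
            int current = _balance;
            // Rule 2: this write cannot be moved below Monitor.Exit (release),
            // so the next thread that enters the same lock is guaranteed to see it.
            _balance = current + amount;
        }
        finally
        {
            Monitor.Exit(_sync);
        }
    }
}
```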

Petar Ivanov
1

Not all C# memory reads and writes are volatile, no. (Imagine the performance cost if that were the case!)

But.

How are those updates guaranteed to be flushed from CPU caches when entering/exiting a lock?

CPU caches are CPU-specific; however, they all implement some form of cache coherence protocol. That is to say, when you access some memory from one core and it is present in another core's cache, the protocol ensures that the data gets delivered to the local core.

What Petar Ivanov alludes to in his answer is, however, very relevant. You should read up on memory consistency models if you want to understand his point more fully.

Now, how C# guarantees that the memory is up-to-date is up to the C# implementers, and Eric Lippert's blog is certainly a good place to understand the underlying issues.
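
To illustrate the difference (just a sketch with made-up names): only a field explicitly declared volatile gets acquire/release semantics on every access; an ordinary field relies on the lock (or some other fence) for visibility.

```csharp
class Flags
{
    private volatile bool _stopRequested; // every read/write of this field is volatile
    private bool _plainFlag;              // ordinary field: no fences on its own
    private readonly object _gate = new object();

    public void RequestStop()
    {
        _stopRequested = true;            // visible to other threads without a lock
    }

    public bool ShouldStop()
    {
        return _stopRequested;
    }

    public void SetPlainFlag()
    {
        lock (_gate) { _plainFlag = true; }   // visibility comes from the lock
    }

    public bool PlainFlagSet()
    {
        lock (_gate) { return _plainFlag; }
    }
}
```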

Bahbar
0

I’m not sure about the state of affairs in .NET, but in Java it is clearly stated that two threads cooperating in this way must use the same object for locking (not just any lock) in order to get the guarantee you describe in your introduction. This is a crucial distinction to make.

A lock doesn’t need to “know” what it protects; it just needs to make sure that everything that has been written by the previous locker is made available to another locker before letting it proceed.
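
In C# terms (a sketch with made-up names; the same idea applies in Java), visibility is only guaranteed between threads that lock on the same object:

```csharp
// Sketch (made-up names): visibility is only guaranteed between threads
// that lock on the SAME object.
class Shared
{
    private static readonly object LockA = new object();
    private static readonly object LockB = new object();
    private static int _value;

    static void Writer()
    {
        lock (LockA) { _value = 42; }     // publishes the write under LockA
    }

    static int ReaderSameLock()
    {
        lock (LockA) { return _value; }   // same lock object: guaranteed to see 42
    }

    static int ReaderDifferentLock()
    {
        lock (LockB) { return _value; }   // different lock object: no visibility guarantee
    }
}
```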

Ringding