When should I use volatile/Thread.MemoryBarrier() for thread safety?
4 Answers
You use volatile/Thread.MemoryBarrier() when you want to access a variable across threads without locking.
Variables that are atomic, like an int for example, are always read and written whole in a single operation. That means you will never get half of the value from before another thread changed it and the other half from after it has changed. Because of that, you can safely read and write the value from different threads without synchronising.
However, the compiler may optimize away some reads and writes, which you prevent with the volatile keyword. If you for example have a loop like this:
sum = 0;
foreach (int value in list) {
    sum += value;
}
The compiler may actually do the calculations in a processor register and only write the value to the sum variable after the loop. If you make the sum variable volatile, the compiler will generate code that reads and writes the variable for every change, so that its value is up to date throughout the loop.
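As an illustration of what that might look like, here is a rough sketch with made-up names (note that in C# the volatile keyword can only be applied to fields, not local variables):

using System.Collections.Generic;

class Totals
{
    // volatile forces every read and write of this field to go to memory
    // instead of being kept in a processor register for the whole loop.
    private volatile int sum;

    public void Add(IEnumerable<int> list)
    {
        sum = 0;
        foreach (int value in list)
        {
            // each iteration re-reads and re-writes the field
            // (note: += is still not atomic; volatile only affects visibility)
            sum += value;
        }
    }

    public int Current
    {
        get { return sum; }   // another thread can poll the running total
    }
}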

- +1, but in which scenarios does it matter that the compiler will generate code that reads and writes the variable for every change? – dtb Aug 25 '09 at 20:24
- That's mostly desired when threading dtb, if not one thread may get a different value than the other and all kinds of crazy things could happen. – Skurmedel Aug 25 '09 at 20:25
- Well, that or use a memory barrier. – Skurmedel Aug 25 '09 at 20:26
- Sure, but I'm looking for a real-world scenario :) I can't come up with any except the last example in my answer. – dtb Aug 25 '09 at 20:27
- "Variables that are atomic, like an int for example..." And for a counter-example: setting a long, double or decimal variable on an x86 machine is *not* thread-safe. Edit: see "Thread Safety" section at the bottom http://msdn.microsoft.com/en-us/library/system.int64.aspx – Serguei Apr 12 '11 at 02:23
What's wrong with
private static readonly object syncObj = new object();
private static int counter;

public static int NextValue()
{
    lock (syncObj)
    {
        return counter++;
    }
}
?
This does all necessary locking, memory barriers, etc. for you. It's well understood and more readable than any custom synchronization code based on volatile and Thread.MemoryBarrier().
EDIT
I can't think of a scenario in which I'd use volatile or Thread.MemoryBarrier(). For example,
private static volatile int counter;

public static int NextValue()
{
    return counter++;
}
is not equivalent to the code above and is not thread-safe (volatile doesn't make ++ magically become thread-safe).
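If you want the counter to be lock-free, the usual tool is Interlocked rather than volatile. A rough sketch (assuming System.Threading is in scope):

private static int counter;

public static int NextValue()
{
    // Interlocked.Increment is atomic across threads; it returns the *new*
    // value, so subtract 1 to match the post-increment semantics of counter++.
    return Interlocked.Increment(ref counter) - 1;
}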
In a case like this:
private static volatile bool done;

void Thread1()
{
    while (!done)
    {
        // do work
    }
}

void Thread2()
{
    // do work
    done = true;
}
(which should work) I'd use a ManualResetEvent to signal when Thread2 is done.
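A rough sketch of that alternative, keeping the same two-method shape as above (ManualResetEventSlim would work just as well):

private static readonly ManualResetEvent done = new ManualResetEvent(false);

void Thread1()
{
    while (!done.WaitOne(0))   // non-blocking check: has Thread2 signalled yet?
    {
        // do work
    }
}

void Thread2()
{
    // do work
    done.Set();   // signal completion; Thread1 sees it without any volatile field
}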

- Nothing 'wrong' with your code. I'm trying to grasp the circumstances in which volatile & MemoryBarrier apply. As in Jon Skeet's answer here: http://stackoverflow.com/questions/395232/we-need-to-lock-a-net-int32-when-reading-it-in-a-multithreaded-code. – Alex Aug 25 '09 at 20:04
- .net also has quite a strict memory model, such a read or write cannot follow another write. – Chris Chilvers Aug 25 '09 at 20:07
- Though the ECMA spec has a fairly weak memory model, so you might want to watch out for that, see http://www.bluebytesoftware.com/blog/2007/11/10/CLR20MemoryModel.aspx and http://blogs.msdn.com/cbrumme/archive/2003/05/17/51445.aspx – Chris Chilvers Aug 25 '09 at 20:18
- It would be if you only did retrieves or assignments though (speaking of second example code) since they are atomic. But yeah, very limited use. The only place I could think of is bool fields. – Skurmedel Aug 25 '09 at 20:19
Basically, if you're using any other kind of synchronization to make your code thread-safe, then you don't need volatile or Thread.MemoryBarrier().
Most of the locking mechanisms (including lock) automatically imply a memory barrier, so that multiple processors see the correct information.
Volatile and MemoryBarrier are mostly used in lock-free scenarios where you're trying to avoid the performance penalty of locking.
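To illustrate that kind of lock-free scenario, here is a rough sketch of a publish/consume pattern built on Thread.MemoryBarrier() (the type and member names are made up for the example; requires System.Threading):

class Publication
{
    private int value;
    private bool ready;

    public void Publish(int v)
    {
        value = v;
        Thread.MemoryBarrier();   // full fence: value must be written before ready
        ready = true;
    }

    public bool TryRead(out int v)
    {
        if (ready)
        {
            Thread.MemoryBarrier();   // full fence: don't read a stale value
            v = value;
            return true;
        }
        v = 0;
        return false;
    }
}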
Edit: You should read this article by Joe Duffy about the CLR 2.0 memory model; it clarifies a lot of things. (If you're really interested, you should read all the articles from Joe Duffy, who is by far the most expert person on parallelism in .NET.)

- +1, but just so it's absolutely clear. If you do what dtb does, volatile/Thread.MemoryBarrier is not needed. There are also situations where volatile alone on a member isn't enough. – Skurmedel Aug 25 '09 at 20:08
As the name implies, volatile guarantees that cached values are flushed to memory so that all threads see the same value. For example, if I have an integer whose latest write is still sitting in a cache, other threads may not see it; they may even see their own cached copy of that integer. Marking a variable as volatile causes it to be read directly from memory.
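For what it's worth, the same visibility guarantee can be obtained without the keyword via the Volatile class (available since .NET 4.5). A rough sketch with made-up method names:

private static bool done;   // deliberately not marked volatile

void Worker()
{
    // Volatile.Read has acquire semantics, so every iteration really reads
    // the field from memory rather than reusing a cached value.
    while (!Volatile.Read(ref done))
    {
        // do work
    }
}

void Finish()
{
    Volatile.Write(ref done, true);   // release semantics: publish the new value
}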