3

I have a volatile bool 'play' flag in my class that is set by one thread and read by another thread.

Do I need to synchronize access to that flag? For example, in this function:

void stop() 
{
   play = false;
}

On Windows there is _InterlockedExchange and OS X has OSAtomicAdd64Barrier. I've seen those functions used with shared primitives; do I need them?

Thank you

kambi

3 Answers

3

Yes, and volatile does not in any way imply thread safety or atomicity. Use std::mutex and std::unique_lock etc. if you can, not the platform specifics. std::atomic is a good choice -- maybe even the best one in this case.
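
For instance, a minimal sketch of the flag as a std::atomic<bool>; the surrounding Player class and playback loop here are hypothetical, just to show the shape:

#include <atomic>

class Player
{
    std::atomic<bool> play{true};   // replaces the volatile bool

public:
    void stop()                     // called from the controlling thread
    {
        play = false;               // atomic store (sequentially consistent by default)
    }

    void run()                      // hypothetical loop in the reading thread
    {
        while (play.load())         // atomic load; the store above is guaranteed to become visible
        {
            // ... do the playback work ...
        }
    }
};

With a plain non-atomic bool the compiler would be free to hoist that load out of the loop; std::atomic rules that out and also gives the cross-thread visibility guarantee, without calling _InterlockedExchange or OSAtomic* directly.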

Brandon
  • Are you sure? Care to show how a data race can occur in this specific situation? – amit Dec 26 '12 at 08:12
  • Unless you're checking the value of `play` in a loop in the thread reading the value, you will miss the update at some point unless you synchronize it somehow. `volatile` is almost orthogonal to atomicity in what it implies. See here for another answer: http://stackoverflow.com/questions/8819095/concurrency-atomic-and-volatile-in-c11-memory-model – Brandon Dec 26 '12 at 08:15
  • I understand what volatile is, but assuming the master thread is NOT reading `play`, and workers are NOT writing `play` (this is what I understand from the question) - I fail to see how a race condition can occur (though I cannot think of a way to prove it won't occur either). You said it is still unsafe -> a data race might occur; I am looking for that killer case that will prove your answer is correct. – amit Dec 26 '12 at 08:18
  • If you're not concerned with knowing _exactly_ when the flag has been updated, then you don't have to use atomics / synchronization. However, the standard makes no guarantee as to when the other thread will see the updated value other than that it will at some point. Here it probably doesn't matter so much. It really depends on what your reader thread(s) is / are doing. – Brandon Dec 26 '12 at 08:22
  • @amit: `[intro.multithread]/4`: "Two expression evaluations conflict if one of them modifies a memory location (1.7) and the other one accesses or modifies the same memory location." and `[intro.multithread]/21`: "The execution of a program contains a data race if it contains two conflicting actions in different threads, at least one of which is not atomic, and neither happens before the other. Any such data race results in undefined behavior." – Mankarse Dec 26 '12 at 08:24
  • `volatile` doesn't ensure hardware level synchronization (at least in the implementations I'm familiar with), which means that it's quite possible that the hardware will not actually go to global memory for each read. (IMHO, this is contrary to the intent of `volatile`, but it does correspond to what most compilers generate.) And of course, `volatile` in no way ensures order with respect to other accesses. If the volatile variable is the _only_ element in memory your program ever accesses, it might work, but only then. – James Kanze Dec 26 '12 at 10:53
1

Depends:

If you are on a CPU with a total store order memory model (e.g. x86/x64), or on any machine with only one CPU core, then the answer to your question is no, given that you state only one thread writes the flag; barrier directives are probably optimized away on x86 anyway. If you compile the same code for a CPU with a relaxed memory model, the situation changes, and you may find that code which works perfectly on x86 develops bizarre and difficult-to-reproduce bugs when compiled and run on ARM or PPC, for example.

The volatile directive prevents the write from being cached somewhere, and may mean that the reading thread sees the write much sooner than it would if the volatile directive weren't there. Whether you want to rely on this depends on how important that interval is.
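
To make that concrete, here is a rough sketch of the kind of pattern that tends to work on x86 but can break on a weakly ordered CPU (the payload/ready names are purely illustrative):

#include <atomic>

int payload = 0;                      // ordinary, non-atomic shared data
std::atomic<bool> ready{false};

void writer()
{
    payload = 42;
    ready.store(true, std::memory_order_relaxed);   // no ordering requested
}

void reader()
{
    if (ready.load(std::memory_order_relaxed))
    {
        // On x86 the hardware keeps stores in order, so this will usually see 42
        // (assuming the compiler has not reordered the two stores). On ARM or PPC
        // the stores can become visible out of order and this may read 0; formally
        // it is a data race on 'payload' either way.
        int x = payload;
        (void)x;
    }
}

Using memory_order_release for the store and memory_order_acquire for the load (or simply the sequentially consistent defaults) makes the ordering explicit on every architecture; release/acquire still compile down to plain loads and stores on x86, which is why the barriers cost essentially nothing there.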

camelccc
1

It depends on whether there is any other data that is shared between the threads. The problem with threads is that one thread may see writes from a different thread in a different order than the thread originally made them in. In that case you need to use some kind of _Interlocked*/Atomic function or locks (in both threads); these guarantee that all changes made before the flag was set become visible to the other thread.

If there is no other shared data (or only read-only shared data), or you are running on x86, using just volatile should also work. However, that it works is in a sense only accidental and not guaranteed by any standard, so if your platform supports it, it is still advisable to use some form of atomic/interlocked/etc. In the future (i.e. once there is good compiler support) you should use C++11's std::atomic, as that will be portable between platforms.

You should not be using a bare non-volatile variable; if you do, the compiler may decide to optimize the check away. However, volatile has very little to do with caching, contrary to what camelccc suggests.
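
A sketch of that guarantee with C++11 atomics (the variable names are only illustrative; the _Interlocked*/OSAtomic* barrier variants serve the same purpose on their respective platforms):

#include <atomic>

int result = 0;                       // other data shared between the threads
std::atomic<bool> play{true};

void controllingThread()
{
    result = 42;                                    // written before the flag
    play.store(false, std::memory_order_release);   // "publishes" everything written above
}

void readingThread()
{
    while (play.load(std::memory_order_acquire))    // pairs with the release store
    {
        // keep working
    }
    // Because the acquire load saw the release store, this read is guaranteed
    // to see 42 rather than a stale value.
    int r = result;
    (void)r;
}

With a volatile bool instead of the atomic there is no such pairing, which is exactly the "different order" problem described above.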

JanKanis
  • if only one thread is writing, he doesn't need an atomic variable, and if performance matters there may be very good reasons not to use one. If you want to be portable across non-x86 CPUs, you may need a memory barrier in the writing thread, and volatile is guaranteed to imply that on a Microsoft compiler (regardless of CPU), though not with gcc. – camelccc Feb 10 '13 at 00:08