
I have two threads that share a common variable.

The code structure is basically this (very simplified pseudo code):

static volatile bool commondata;

void Thread1()
{
   ...
   commondata = true;
   ...
}

void Thread2()
{
   ...
   while (!commondata)
   {
      ...
   }
   ...
}

Both threads run, and at some point Thread1 sets commondata to true. The while loop in Thread2 should then terminate. The important thing here is that Thread2 "sees" the change made to commondata by Thread1.

I know that the naive method using a volatile variable is not correct and is not guaranteed to work on every platform.

Is it good enough to replace volatile bool commondata with std::atomic<bool> commondata?

curiousguy
Jabberwocky

2 Answers


Simple answer: yes! :)

All operations on atomics are data race free and by default sequentially consistent.

mpoeter
    If the OP wants the "relaxed" memory ordering semantics you get from `volatile bool` (in practice on real hardware), use std::atomic with `memory_order_relaxed`. [When to use volatile with multi threading?](https://stackoverflow.com/a/58535118) explains this. – Peter Cordes May 27 '20 at 21:12
  • Yes, but you have to be a bit careful if there are other volatile variables involved as well. `volatile` operations are relaxed with respect to non-volatile operations, but strictly ordered with respect to other `volatile` operations. Relaxed atomic operations can be freely reordered with both non-atomic operations and other relaxed atomic operations. – mpoeter May 28 '20 at 11:53

There is a nice caveat here. While the general answer is 'yes', std::atomic does not, by itself, make the variable volatile. This means that if the compiler can (by some unfathomable means) infer that the variable did not change, it is allowed not to re-read it, since it may assume the read has no side effects.

If you check, there are overloads for both volatile and non-volatile versions of the class: http://eel.is/c++draft/atomics.types.generic

That could become important if atomic variable lives in shared memory, for example.

SergeyA
  • Would `volatile std::atomic commondata` be OK? – Jabberwocky May 27 '20 at 15:55
  • @Jabberwocky, yes, this would be the iron-clad replacement. In all but the most corner cases a straightforward `std::atomic<>` would do equally well. – SergeyA May 27 '20 at 16:01
  • @Jabberwocky -- if there are **other reasons** to make the variable volatile, then you have to do that. If the variable is used only between threads (as it always will be), you don't need volatile. Don't go down a rabbit hole chasing fringe cases that aren't present in your requirements. – Pete Becker May 27 '20 at 16:47
  • I can hardly think of a case where the compiler can safely remove an atomic load, but there are other optimizations the compiler potentially _could_ perform as described in [N4455](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n4455.html) and [P0062R1](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0062r1.html). This has been discussed in more detail here https://stackoverflow.com/questions/61604389/are-these-allowed-optimizations-in-c. – mpoeter May 27 '20 at 17:11
  • *That could become important if atomic variable lives in shared memory* - it's already assumed to be shared between threads of a process. A hypothetical compiler that needs `volatile atomic` to work in normal use-cases would be broken. – Peter Cordes May 27 '20 at 21:10
  • @PeterCordes I am not sure what drives the assumption that atomic variable is shared between threads of a process. It might as well be shared between different processes. – SergeyA May 28 '20 at 23:04
  • Sure, in which case code-gen that was safe for a pure ISO C++ program using only `std::thread` would also be safe for a program that used OS-specific stuff to map shared memory between processes. As ISO C++ says in a note, lock-free atomics should be address-free (so they work across shared memory). – Peter Cordes May 28 '20 at 23:06
  • Are you picturing that an implementation might do whole-program optimization and notice that the program doesn't start multiple threads, and in that case optimize atomic operations as if they were non-atomic? In that case `volatile` wouldn't be sufficient to restore correctness, especially for atomic RMWs. Just plain optimization of atomics to assume that a certain ordering always happened (i.e. that 2 loads happened together, not separated by a store from another thread), per the as-if rule, doesn't break most code. Compilers don't do that because of a few corner cases like mpoeter linked – Peter Cordes May 28 '20 at 23:12
  • @PeterCordes with volatile the compiler won't be able to optimize any access (the as-if rule doesn't apply to volatile). I would not be surprised if this is exactly the use case the designers had in mind when they added the volatile overloads (it is extremely rare for library types to have volatile qualifiers). – SergeyA May 29 '20 at 00:53
  • Right, but my point is you don't need to stop the compiler from doing that optimization; anything it could legally do is already safe for use-cases like this, and in practice current compilers essentially treat `atomic` like `volatile atomic` pending further work by the standards committee on language support for indicating when atomics can safely be optimized. One of the major points of `atomic` is that it can't hoist the load out of the OP's loop, because that would violate the requirement that loads see stores in "a reasonable amount of time". – Peter Cordes May 29 '20 at 01:02
  • You could imagine a compiler unrolling by 4 and only checking the `keep_running` atomic flag once per iteration, but that doesn't make it unsafe. (Again, in practice current compilers don't do that.) I think the original use-case for `volatile atomic` was for MMIO registers or similar hardware access where reads, writes, and atomic RMWs have side-effects beyond what other threads see. Or perhaps it was imagined as being needed for progress-bar type use-cases to prevent sinking stores out of a loop, as discussed in N4455 and P0062R1, before it was decided that any such optimization was problematic. – Peter Cordes May 29 '20 at 01:04