No, it is not possible in this case.
The reason is that the atomic functions you are calling, std::atomic<bool>::exchange and std::atomic<bool>::store, both take a std::memory_order as a second parameter, and the default value is std::memory_order_seq_cst.
https://en.cppreference.com/w/cpp/atomic/atomic/exchange
https://en.cppreference.com/w/cpp/atomic/atomic/store
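To make this concrete, here is a minimal sketch of the kind of spinlock I am assuming you are describing (the class and member names are illustrative, not taken from your code). Both atomic calls rely on the seq_cst default:

    #include <atomic>

    class SpinLock {
        std::atomic<bool> locked_{false};
    public:
        void lock() {
            // exchange() defaults to std::memory_order_seq_cst
            while (locked_.exchange(true)) {
                // spin until the previous value was false, i.e. we took the lock
            }
        }
        void unlock() {
            // store() also defaults to std::memory_order_seq_cst
            locked_.store(false);
        }
    };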
For info about what this means, you can see:
https://en.cppreference.com/w/cpp/atomic/memory_order
SeqCst (sequential consistency) is the most conservative memory ordering and the most commonly recommended one to use, which is why it is the default.
The way the standard is written, inter-thread synchronization is defined in terms of a number of formal relationships between different operations in your program, called "synchronizes with", "happens before", and "carries a dependency". Informally, as long as other threads lock and unlock your lock appropriately, any side effects that happen inside the critical region "happen before" the unlock, and taking the lock "happens before" those changes. And an unlock "synchronizes with" the next lock operation that observes it, which is what carries those guarantees from one thread to another.
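As a rough illustration of what that buys you (again using the hypothetical SpinLock sketch from above, not your actual code), writes to plain, non-atomic data made inside the critical section are guaranteed to be visible to whichever thread takes the lock next:

    #include <thread>

    SpinLock lock;          // the sketch from above
    int shared_value = 0;   // plain non-atomic data protected by the lock

    void writer() {
        lock.lock();
        shared_value = 42;  // side effect inside the critical region;
        lock.unlock();      // it "happens before" this unlock
    }

    void reader() {
        lock.lock();        // this lock "synchronizes with" the writer's unlock,
                            // so if writer() already ran, we must see 42 here
        int v = shared_value;
        lock.unlock();
        (void)v;
    }

    int main() {
        std::thread t1(writer), t2(reader);
        t1.join();
        t2.join();
    }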
(Usually, the standard doesn't talk about what the compiler is allowed to reorder, because that's too low-level, and they don't like to mandate implementation details. Instead they try to define what counts as "observable behavior" and what is undefined behavior, and the compiler is allowed to make any optimizations that don't change the "observable behavior" of programs that do not have undefined behavior.)
The upshot is that the compiler is not allowed to move operations with memory side effects across your locking and unlocking statements, because doing so would change the observable behavior of your program as defined by the C++ standard.
Note that the memory ordering being used here is critical: SeqCst (or anything at least as strong as acquire/release) gives you this guarantee, but if you were using "relaxed" atomics it would not hold, the compiler and the hardware would be allowed to move statements outside the critical section, and you would have a bug.
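For contrast, here is roughly what the broken "relaxed" version would look like next to the weakest ordering that is still correct for a lock (acquire on the successful exchange, release on the unlocking store). This is my own illustration, not code from your question:

    #include <atomic>

    // BROKEN: relaxed ordering creates no "synchronizes with" relationship,
    // so accesses inside the "critical section" can be reordered past the
    // lock and unlock by the compiler or the hardware.
    void bad_lock(std::atomic<bool>& flag) {
        while (flag.exchange(true, std::memory_order_relaxed)) { }
    }
    void bad_unlock(std::atomic<bool>& flag) {
        flag.store(false, std::memory_order_relaxed);
    }

    // Still correct, and the weakest ordering that is: acquire when taking
    // the lock, release when giving it up.
    void good_lock(std::atomic<bool>& flag) {
        while (flag.exchange(true, std::memory_order_acquire)) { }
    }
    void good_unlock(std::atomic<bool>& flag) {
        flag.store(false, std::memory_order_release);
    }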
If you would like to understand the memory model in more detail, I highly recommend Herb Sutter's "atomic<> Weapons" talk: https://www.youtube.com/watch?v=A8eCGOqgvH4
That talk goes into considerable detail, with examples, about what the memory model means and how it connects not only to the compiler but also to modern hardware architectures, and it gives a lot of practical advice about how to think about and use these tools.
One of the main takeaways (IMO) is that you should almost always just use the default of sequential consistency, and your programs and locks will generally just work the way you think they should as long as you don't create data races. You generally have to be doing something very low-level, where performance on the level of nanoseconds is critical, before it becomes worth using a weaker memory ordering. It may also make sense to customize the std::memory_order parameter if you are working on the standard library or something similar, where your lock will be used by an enormous number of other projects with a very wide range of needs. Outside of these cases, the performance benefits you will get from using something more complicated than SeqCst are usually not enough to justify the increased complexity of understanding and maintaining the code.