Say you have a counter that you want to use to keep track of how many times some operation is completed, incrementing the counter each time.
If you run this operation in multiple threads, then unless the counter is std::atomic
or protected by a lock, you will get unexpected results; volatile
will not help.
Here is a simplified example that reproduces the unpredictable results, at least for me:
#include <future>
#include <iostream>
#include <atomic>
volatile int counter{0};
//std::atomic<int> counter{0};
int main() {
    auto task = []{
        for(int i = 0; i != 1'000'000; ++i) {
            // do some operation...
            ++counter;
        }
    };
    auto future1 = std::async(std::launch::async, task);
    auto future2 = std::async(std::launch::async, task);
    future1.get();
    future2.get();
    std::cout << counter << "\n";
}
Live demo.
Here we are starting two tasks with std::async,
using the std::launch::async
launch policy to force them to run asynchronously. Each task simply increments the counter a million times. After the two tasks are complete we expect the counter to be 2 million.
However, an increment is a read-modify-write operation: between reading the counter and writing the new value back, another thread may also have written to it, and increments may be lost. Strictly speaking, a data race like this is undefined behaviour, so absolutely anything could happen!
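To make this concrete, the non-atomic ++counter behaves roughly like the following three steps (the exact instructions depend on the compiler and target, and the helper name is just for illustration); a second thread can run between any of them:

// Rough sketch of what ++counter amounts to without atomicity:
void unsafe_increment() {
    int temp = counter; // 1. read the current value
    temp = temp + 1;    // 2. increment the local copy
    counter = temp;     // 3. write it back - an increment performed by another
                        //    thread in the meantime is silently overwritten
}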
If we change the counter to std::atomic<int>
we get the behaviour we expect.
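For the pure counting case, switching to the commented-out declaration is the only change needed. As a side note, ++counter on a std::atomic<int> is equivalent to counter.fetch_add(1) with the default sequentially consistent ordering; if the counter is only inspected after both futures have completed, the relaxed ordering would also be enough (this helper, and its name, are my illustration, not part of the original example; it assumes the <atomic> header already included above):

std::atomic<int> counter{0}; // every ++counter is now a single atomic
                             // read-modify-write, so no increments are lost

// For pure counting (not the completion-flag use discussed below),
// relaxed ordering is sufficient and can be cheaper on some architectures.
void count_one() {
    counter.fetch_add(1, std::memory_order_relaxed);
}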
Also, say another thread is using counter
to detect whether the operation has been completed. Unfortunately, there is nothing stopping the compiler (or the CPU) from reordering the code so that the counter is incremented before the operation has actually been performed. Again, this is solved by using std::atomic<int>
or by setting up the necessary memory fences.
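Here is a minimal sketch of that completion-flag pattern with explicit release/acquire ordering; the names result and done are illustrative rather than taken from the example above:

#include <atomic>
#include <thread>
#include <iostream>

int result = 0;
std::atomic<bool> done{false};

int main() {
    std::thread worker([]{
        result = 42;                                 // the operation's side effect
        done.store(true, std::memory_order_release); // publish the result
    });
    while (!done.load(std::memory_order_acquire)) {} // wait for completion
    // The acquire load synchronizes with the release store, so the write to
    // result is guaranteed to be visible here and cannot be reordered past
    // the flag.
    std::cout << result << "\n";
    worker.join();
}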
See Effective Modern C++ by Scott Meyers for more information.