`std::atomic`s are not only about the consistency of their own state, but also about consistent state in the surrounding code. Say, for example, that you use an atomic integer to store the number of items in an array. You will probably end up writing something like the following:
```cpp
std::atomic<int> len;
...
array[len] = some_new_object;
len++;
```
In another thread you would wait for `len` to change and access the newly added object afterwards. For this to work properly it is crucial that the `len++;` statement happens strictly after the statement before it. Usually both the compiler and the processor are allowed to reorder instructions, as long as the resulting effect is the same (according to the as-if rule). For inter-thread synchronization you want to restrict this reordering, and that is exactly what the `std::atomic` types do.
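Here is a minimal sketch of that producer/consumer pattern, assuming a fixed-size array, a single producer, and the default `memory_order_seq_cst` ordering (`array`, `some_new_object` and the sizes are placeholders, not part of your code):

```cpp
#include <array>
#include <atomic>
#include <thread>

std::array<int, 16> array{};   // shared storage
std::atomic<int> len{0};       // published element count

void producer() {
    int some_new_object = 42;      // placeholder payload
    array[len] = some_new_object;  // write the element first
    len++;                         // seq_cst read-modify-write: publishes the write above
}

void consumer() {
    while (len == 0) { }           // wait for the producer to signal
    int value = array[len - 1];    // safe: the element write is guaranteed to be visible
    (void)value;
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}
```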
With `memory_order_seq_cst`, for example, the expression `len++`, being a read-modify-write access, will not allow any other instruction to be reordered across it. If you used `memory_order_relaxed`, which does not restrict instruction reordering, the `len` variable could end up being incremented before the `array[len] = some_new_object;` expression has completed. That is obviously not what you want in the example above.
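For illustration, the relaxed variant would be spelled out with explicit memory order arguments; this is the form that permits the problematic reordering (a sketch, not a recommendation for this pattern):

```cpp
#include <atomic>

extern int array[];             // placeholder shared array from the example above
extern std::atomic<int> len;

void append_relaxed(int some_new_object) {
    // relaxed operations do not order the surrounding non-atomic array write
    array[len.load(std::memory_order_relaxed)] = some_new_object;
    len.fetch_add(1, std::memory_order_relaxed);
    // the increment may become visible to other threads before the array
    // element itself, which breaks the signalling pattern described above
}
```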
So, to conclude: in the example you provided in your question, you might as well use `memory_order_relaxed` (the atomicity of the operation is still guaranteed, and the output you depicted will not happen). But as soon as you use the `std::atomic` variable to actually signal some state between threads, you should use `memory_order_seq_cst` (which is the default for good reason).
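Conversely, when the atomic only has to be correct in itself, e.g. a plain event counter that no other data depends on for ordering, `memory_order_relaxed` is sufficient. A sketch, assuming the count is only read after all threads have been joined:

```cpp
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

std::atomic<int> counter{0};

int main() {
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i) {
        workers.emplace_back([] {
            for (int j = 0; j < 100000; ++j)
                counter.fetch_add(1, std::memory_order_relaxed);  // atomicity alone is enough
        });
    }
    for (auto& t : workers) t.join();  // join provides the needed synchronization
    std::printf("%d\n", counter.load(std::memory_order_relaxed)); // always prints 400000
}
```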