1

The standard says that a relaxed atomic operation is not a synchronization operation. But what's atomic about an operation whose result is not seen by other threads?

The example here wouldn't give the expected result then, right?

What I understand by synchronization is that the result of an operation with such a trait would be visible to all threads.

Maybe I don't understand what synchronization means. Where's the hole in my logic?

Hrisip
  • 900
  • 4
  • 13
  • 2
    Not synchronizing with respect to _other_ operations. The (result of the) operation itself is always observed by other threads. – LWimsey Mar 20 '19 at 20:06
  • @LWimsey so synchronization is achieved with memory barriers (but it's called ordering...)? It's not clear what you mean by synchronization with _other_ operations. – Hrisip Mar 20 '19 at 20:26
  • "Synchronize-with" has a specific technical meaning in C++, in terms of creating a "happens-before" relationship between earlier code in one thread (e.g. before a release store) and later code in another thread (often with an acquire load). https://preshing.com/20120913/acquire-and-release-semantics/ – Peter Cordes Jul 21 '22 at 17:51

3 Answers

4

The compiler and the CPU are allowed to reorder memory accesses. This is the as-if rule, and it assumes a single-threaded process.

In multithreaded programs, the memory order parameter specifies how memory accesses are to be ordered around an atomic operation. This is the synchronization aspect (the "acquire-release semantics") of an atomic operation, which is separate from the atomicity aspect itself:

int x = 1;
std::atomic<int> y = 1;

// Thread 1
x++;
y.fetch_add(1, std::memory_order_release);

// Thread 2
while (y.load(std::memory_order_acquire) == 1)
{ /* wait */ }
std::cout << x << std::endl;  // x is 2 now

Whereas with a relaxed memory order we only get atomicity, but not ordering:

int x = 1;
std::atomic<int> y = 1;

// Thread 1
x++;
y.fetch_add(1, std::memory_order_relaxed);

// Thread 2
while (y.load(std::memory_order_relaxed) == 1)
{ /* wait */ }
std::cout << x << std::endl;  // x can be 1 or 2, we don't know

Indeed, as Herb Sutter explains in his excellent atomic<> weapons talk, memory_order_relaxed makes a multithreaded program very difficult to reason about and should be used only in very specific cases, when there is no dependency between the atomic operation and any other operation before or after it in any thread (which is very rarely the case).
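One of those rare cases is a plain event counter whose value is not used to publish any other data. Below is a minimal sketch (the names and counts are made up for illustration): each thread only increments the counter, and the value is read only after the threads have been joined, so only atomicity is required:

#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

std::atomic<int> hits{0};  // hypothetical counter, used only as a counter

void worker() {
    for (int i = 0; i < 1000; ++i)
        hits.fetch_add(1, std::memory_order_relaxed);  // atomicity is enough here
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i)
        threads.emplace_back(worker);
    for (auto& t : threads)
        t.join();  // joining already synchronizes, so the read below is safe
    std::cout << hits.load(std::memory_order_relaxed) << std::endl;  // always prints 4000
}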

rustyx
  • 80,671
  • 25
  • 200
  • 267
0

Yes, the standard is correct. Relaxed atomics are not synchronization operations, as only the atomicity of the operation is guaranteed.

For example,

int k = 5;
void foo() {
    k = 10;
}

int baz() {
    return k;
}

In the presence of multiple threads, the behavior is undefined, as this is a data race. In practice, on some architectures, a caller of baz could see neither 10 nor 5, but some other, indeterminate value. This is often called a torn or dirty read.

If a relaxed atomic load and store were used instead, baz would be guaranteed to return either 5 or 10, as there would be no data race.
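A minimal sketch of that version, reusing the same names (illustration only):

#include <atomic>

std::atomic<int> k{5};

void foo() {
    k.store(10, std::memory_order_relaxed);   // atomic store, no ordering implied
}

int baz() {
    return k.load(std::memory_order_relaxed); // atomic load: sees either 5 or 10, never a torn value
}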

It is worth noting that, for practical purposes, Intel chips and their very strong memory model make a relaxed atomic load or store essentially free on this common architecture (there is no extra instruction cost for it being atomic), as aligned loads and stores are already atomic at the hardware level.
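For illustration, this is what typical x86-64 code generation looks like for these operations (based on common compiler behavior; the exact output depends on the compiler and flags):

#include <atomic>

std::atomic<int> v{0};

int  load_relaxed()       { return v.load(std::memory_order_relaxed); }  // compiles to a plain mov
void store_relaxed(int x) { v.store(x, std::memory_order_relaxed); }     // compiles to a plain mov
void store_seq_cst(int x) { v.store(x, std::memory_order_seq_cst); }     // typically xchg (or mov + mfence)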

SergeyA
  • 61,605
  • 5
  • 78
  • 137
  • On X86, compared to non-atomic operations, there is extra cost for (relaxed) atomic operations because a compiler cannot apply the same level of optimization. For example, an atomic store is always committed to L1-cache (for visibility reasons) while it could stay inside a CPU register if non-atomic. – LWimsey Mar 23 '19 at 19:30
  • @LWimsey not true. – SergeyA Mar 25 '19 at 14:37
0

Suppose we have

std::atomic<int> x = 0;

// thread 1
foo();
x.store(1, std::memory_order_relaxed);

// thread 2
assert(x.load(std::memory_order_relaxed) == 1);
bar();

There is, first of all, no guarantee that thread 2 will observe the value 1 (that is, the assert may fire). But even if thread 2 does observe the value 1, while thread 2 is executing bar(), it might not observe side effects generated by foo() in thread 1. And if foo() and bar() access the same non-atomic variables, a data race may occur.
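To make that concrete, here is one hypothetical choice of bodies for foo() and bar() (not from the original post) that races on a shared non-atomic variable, even when thread 2 has observed x == 1:

#include <atomic>
#include <iostream>

int data = 0;              // shared, non-atomic
std::atomic<int> x{0};

void foo() { data = 42; }                       // thread 1: plain write to data
void bar() { std::cout << data << std::endl; }  // thread 2: plain read of data

// With relaxed operations on x, nothing orders these two accesses to data,
// so they are potentially concurrent: a data race, hence undefined behavior.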

Now suppose we change the example to:

std::atomic<int> x = 0;

// thread 1
foo();
x.store(1, std::memory_order_release);

// thread 2
assert(x.load(std::memory_order_acquire) == 1);
bar();

There is still no guarantee that thread 2 observes the value 1; after all, it could happen that the load occurs before the store. However, in this case, if thread 2 observes the value 1, then the store in thread 1 synchronizes with the load in thread 2. What this means is that everything that's sequenced before the store in thread 1 happens before everything that's sequenced after the load in thread 2. Therefore, bar() will see all the side effects produced by foo() and if they both access the same non-atomic variables, no data race will occur.
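Here is a runnable sketch of that pattern (the bodies of foo() and bar() and the spin loop are added for illustration; the loop guarantees that thread 2 eventually observes the value 1):

#include <atomic>
#include <iostream>
#include <thread>

int data = 0;              // shared, non-atomic
std::atomic<int> x{0};

void foo() { data = 42; }                       // sequenced before the release store
void bar() { std::cout << data << std::endl; }  // sequenced after the acquire load; prints 42

int main() {
    std::thread t1([] {
        foo();
        x.store(1, std::memory_order_release);
    });
    std::thread t2([] {
        while (x.load(std::memory_order_acquire) != 1)
            ;  // spin until the store is observed
        bar();
    });
    t1.join();
    t2.join();
}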

So, as you can see, the synchronization properties of operations on x tell you nothing about what happens to x. Instead, synchronization imposes ordering on surrounding operations in the two threads. (Therefore, in the linked example, the result is always 5, and does not depend on the memory ordering; the synchronization properties of the fetch-add operations don't affect the effect of the fetch-add operations themselves.)

Brian Bi
  • 111,498
  • 10
  • 176
  • 312
  • you say in the second example if both `bar()` and `foo()` access the same non-atomic, no data race will occur. Well, if they happen to change this non-atomic variable at the same time (which is possible?), then there's a race condition by definition – Hrisip Mar 20 '19 at 20:40
  • @DanA. I said that "everything that's sequenced before the store in thread 1 happens before everything that's sequenced after the load in thread 2" --- therefore things that happen in `foo()` are not potentially concurrent with things that happen in `bar()`. – Brian Bi Mar 20 '19 at 22:09