
For mutex lock(), the standard mentions:

Prior unlock() operations on the same mutex synchronize-with (as defined in std::memory_order) this operation.

This answer tries to explain what synchronize-with means according to the standard. However, it looks like the definition is not clearly specified.

My main question is, can I ever get this output:

x: 1
y: 2

for the following code due to memory reordering in thread A? Is the write on x in A guaranteed to be observed by B if B locks after A unlocks?

#include <iostream>
#include <mutex>
#include <thread>

std::mutex mutex;
int x = 0, y = 0;

int main() {
  std::thread A{[] {
    x = 1;
    std::lock_guard<std::mutex> lg(mutex);
    y = 0;
  }};
  std::thread B{[] {
    std::lock_guard<std::mutex> lg(mutex);
    y = x + 2;
  }};

  A.join();
  B.join();
  std::cout << "x: " << x << std::endl;
  std::cout << "y: " << y << std::endl;
}

If not, which section of the standard rules it out? In other words, can we assume there is sequential consistency between lock/unlock?

I have also seen this related question but it is for separate mutexes.

Ari
    Your code has UB since reads from `x` are not exclusive with writes to `x`. So no reasoning can be made about memory ordering, I don't think. Once UB is allowed, standard doesn't apply anymore. But once you fix that problem, then certainly things will be defined and `1 2` output won't be possible. 1.10/10 wouldn't hold otherwise. – Kuba hasn't forgotten Monica Jun 03 '20 at 03:01
  • If there is a race between states S1 and S2, why can't we reason about being in state S3 (i.e 1 2) or not? This question does that i think: https://stackoverflow.com/questions/62164376/are-lock-and-unlock-on-the-same-mutex-sequential-consistent/62169145?noredirect=1# – Ari Jun 04 '20 at 14:16

1 Answer


The synchronize-with relation is clearly defined. The standard states the following:

Certain library calls synchronize with other library calls performed by another thread. For example, an atomic store-release synchronizes with a load-acquire that takes its value from the store. [...] [ Note: The specifications of the synchronization operations define when one reads the value written by another. For atomic objects, the definition is clear. All operations on a given mutex occur in a single total order. Each mutex acquisition "reads the value written" by the last mutex release. — end note ]

And further:

An atomic operation A that performs a release operation on an atomic object M synchronizes with an atomic operation B that performs an acquire operation on M and takes its value from any side effect in the release sequence headed by A.

So in other words, if an acquire operation A "sees" the value stored by a release operation B, then B synchronizes-with A.
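
As a small illustration of that rule (the names flag, data, producer and consumer below are my own, not from the standard or the question), a release store paired with an acquire load looks like this:

#include <atomic>
#include <cassert>
#include <thread>

std::atomic<bool> flag{false};
int data = 0;

void producer() {
  data = 42;                                    // plain, non-atomic write
  flag.store(true, std::memory_order_release);  // release operation B
}

void consumer() {
  // acquire operation A; spin until it reads the value stored by B
  while (!flag.load(std::memory_order_acquire))
    ;
  // B synchronizes-with A, so the write to data happens-before this read
  assert(data == 42);
}

int main() {
  std::thread t1{producer}, t2{consumer};
  t1.join();
  t2.join();
}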

Consider a spin-lock where you only need a single atomic bool flag. All operations operate on that flag. In order to acquire the lock you have to set the flag with an atomic read-modify-write operation. All modifications of an atomic object are totally ordered by its modification order, and it is guaranteed that an RMW operation always reads the last value (in the modification order) written before the write associated with that RMW operation.

Due to this guarantee, it is sufficient to use acquire/release semantics for the lock/unlock operations, because a successful lock operation always "sees" the value written by the previous unlock.
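
A minimal sketch of such a spin-lock could look like this (the class name spin_lock is mine; std::atomic_flag would work just as well):

#include <atomic>

class spin_lock {
  std::atomic<bool> locked{false};
public:
  void lock() {
    // RMW with acquire semantics: when the exchange succeeds (reads false),
    // it has read the value written by the previous unlock's release store
    // (possibly via its release sequence), so it synchronizes-with that unlock.
    while (locked.exchange(true, std::memory_order_acquire)) {
      // spin while the lock is held
    }
  }
  void unlock() {
    locked.store(false, std::memory_order_release);  // release operation
  }
};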

Regarding your question:

Is the write on x in A guaranteed to be observed by B if B locks after A unlocks?

The important part is the "if B locks after A unlocks"! If that is guaranteed, then yes, B's lock operation synchronizes-with A's unlock, thereby establishing a happens-before relation. Thus B will observe A's write. However, your code does not provide the guarantee that B locks after A, so you have a potential data race which would result in undefined behavior as correctly pointed out by @ReinstateMonica.
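
One way to actually provide that guarantee (this is my own sketch, not part of your code; the a_done flag is an addition) is to let B read x only after it has observed, under the same mutex, a flag that A sets inside its critical section:

#include <mutex>
#include <thread>

std::mutex mutex;
int x = 0, y = 0;
bool a_done = false;  // protected by mutex

int main() {
  std::thread A{[] {
    x = 1;
    std::lock_guard<std::mutex> lg(mutex);
    y = 0;
    a_done = true;  // published under the mutex
  }};
  std::thread B{[] {
    for (;;) {
      std::lock_guard<std::mutex> lg(mutex);
      if (a_done) {   // this lock acquisition necessarily follows A's unlock
        y = x + 2;    // safe: x = 1 happens-before this read
        break;
      }
    }
  }};
  A.join();
  B.join();
}

Once B observes a_done == true, that particular lock acquisition comes after A's unlock in the mutex's total order, so the unlock synchronizes-with it and the write x = 1 is visible. (A std::condition_variable would avoid the busy-wait; the plain loop just keeps the sketch short.)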

Update
The write to x is sequenced-before A's unlock. It doesn't matter whether the operation is outside (before) the mutex or not. In fact, theoretically the compiler could reorder the operation so that it ends up inside the mutex (though this is rather unlikely). Sequenced-before is also part of the happens-before definition, so we have the following:

std::thread A{[] {
    x = 1; // a
    std::lock_guard<std::mutex> lg(mutex);
    y = 0;
    // implicit unlock: b
  }};
  std::thread B{[] {
    std::lock_guard<std::mutex> lg(mutex); // c
    y = x + 2;
  }};

Assuming that B locks after A unlocks we have:

  • a is sequenced-before b -> a happens-before b
  • b synchronizes-with c -> b happens-before c

And since the happens-before relation is transitive it follows that a happens-before c. So yes, this is true for all operations that are sequenced before A's unlock - regardless whether they are inside the lock or not.

mpoeter
  • Notice that A changes x before the critical section. So can we say if B locks after A, it will see all changes made by A? Both changes A makes before and inside the critical section? – Ari Jun 03 '20 at 14:45
  • Yes. I have updated my answer to make this more clear. – mpoeter Jun 03 '20 at 14:59