
Given shared data protected by a mutex, what is the appropriate way to read part of the shared data without having to lock the mutex? Is using std::atomic_ref an appropriate way to do this, as shown in the example below?

#include <atomic>
#include <mutex>

struct A
{
  std::mutex mutex;
  int counter = 0;
  void modify()
  {
    std::lock_guard<std::mutex> guard(mutex);
    // do something with counter
  }
  int getCounter()
  {
    // read counter without locking the mutex
    return std::atomic_ref<int>(counter).load();
  }
};
hpc64
    Constructing a single temporary `atomic_ref` is absolutely useless. What specific practical thing are you trying to do? Why don't you want to lock the mutex, and what purpose do you expect the mutex to serve? And what safety guarantees do you need? – Sneftel Jan 12 '21 at 22:38
  • Think about it: would it possibly work with a `long long long int` that isn't atomic anywhere? It *clearly* would not. – curiousguy Jan 15 '21 at 11:20

1 Answer


If you bypass locking the mutex and perform atomic reads from the shared data (for example using std::atomic_ref), then your program invokes undefined behavior if one of the other threads writes to that data using a non-atomic access.
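
For illustration, here is a minimal sketch of that mixed situation (the thread bodies and iteration counts are made up): one thread writes the counter non-atomically while holding the mutex, and another thread reads it through a temporary std::atomic_ref without taking the mutex. The two conflicting accesses are not ordered by any happens-before relationship, so this is a data race.

#include <atomic>  // std::atomic_ref requires C++20
#include <mutex>
#include <thread>

struct A
{
  std::mutex mutex;
  int counter = 0;
  void modify()
  {
    std::lock_guard<std::mutex> guard(mutex);
    ++counter;  // non-atomic write, protected only by the mutex
  }
  int getCounter()
  {
    return std::atomic_ref<int>(counter).load();  // atomic read, mutex not taken
  }
};

int main()
{
  A a;
  // The writer holds the mutex but the reader does not, so the non-atomic
  // write and the atomic_ref load can occur concurrently on the same int.
  std::thread writer([&] { for (int i = 0; i < 100000; ++i) a.modify(); });
  std::thread reader([&] { for (int i = 0; i < 100000; ++i) (void)a.getCounter(); });
  writer.join();
  reader.join();
}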

If all threads use atomic operations to access the shared data, then there is no undefined behavior. However, in that case there is probably no point in protecting the shared data with a mutex, since all accesses are atomic anyway.
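
If the counter really is the only piece of shared state, one possible alternative (just a sketch, assuming nothing else has to stay consistent with the counter) is to drop the mutex entirely and make the counter itself atomic:

#include <atomic>

struct A
{
  std::atomic<int> counter{0};
  void modify()
  {
    counter.fetch_add(1);  // atomic read-modify-write, no mutex needed
  }
  int getCounter() const
  {
    return counter.load();  // atomic read, no mutex needed
  }
};

If, on the other hand, the counter has to stay consistent with other data in the struct, then the mutex is the right tool, and getCounter() should simply lock it as well.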

Andreas Wenzel
  • Thanks for your answer. But doesn't locking and unlocking a mutex imply a memory ordering that will be visible to the atomic access? – hpc64 Jan 12 '21 at 23:02
  • @hpc64: The concept of [memory ordering](https://en.wikipedia.org/wiki/Memory_ordering) only applies to operations in the same thread. In your example, one thread uses atomic access, whereas another thread uses a mutex. You must either make all accesses to the shared data atomic, or you must use some form of thread synchronization (for example a mutex). Otherwise you will have undefined behavior, unless all accesses to the shared data by all threads are read-only. – Andreas Wenzel Jan 13 '21 at 02:20
  • @Andreas_Wenzel: Alok Save's answer in https://stackoverflow.com/questions/11172922/does-stdmutex-create-a-fence would mean that a mutex lock/unlock adds fences which are then visible to the atomic access. – hpc64 Jan 14 '21 at 16:26
  • @hpc64: According to the answer you are referencing, a mutex lock/unlock operation is not a fence. Only synchronization operations without an associated memory location are fences. Synchronization operations with an associated memory location (such as mutex lock/unlock operations and atomic operations) are acquire/release "operations", not "fences". Therefore, mutex lock/unlock operations and atomic operations will only influence the visibility of changes to other threads when both threads modify the same memory location with the lock/unlock/atomic operation (which does not apply in your case). – Andreas Wenzel Jan 14 '21 at 19:25
  • @hpc64: Even if all mutex locking/unlocking operations were acquire/release fences, you would still have the problem that while Thread A performs a non-atomic write to the shared data after locking the mutex, Thread B could perform an atomic read from the shared data. Since Thread A's write is non-atomic, at the time of Thread B's atomic read, the shared data could be in an inconsistent state, because Thread A has not finished its write operation. [continued in next comment] – Andreas Wenzel Jan 14 '21 at 19:28
  • @hpc64: [continued from previous comment]: For example, half of the bytes of the shared data could reflect the new state and half of the bytes of the shared data could represent the old state. That is why this is undefined behavior. You must either make all thread operations atomic or make all threads use a mutex. A mixture of both is not meaningful and will invoke undefined behavior. – Andreas Wenzel Jan 14 '21 at 19:28
  • @Andreas_Wenzel: One comment there says 'Since mutexes also establish synchronizes-with relationships, their effects are the same [as fences]'. In the specific case above, if counter (an int) is written atomically, there would be no danger of the counter value being inconsistent. Using atomic<> for counter would be the appropriate way. The question was rather whether that could be omitted by using atomic_ref to access it without locking the mutex. – hpc64 Jan 17 '21 at 00:38
  • @hpc64: In your question, you are talking about an atomic read. However, in your previous comment, you are now instead talking about an atomic write. If one thread performs an atomic write on the counter variable while another thread is performing a non-atomic read of the counter variable, then there is the possibility that the reader will perform half the read before the atomic write and perform the other half of the read after the atomic write, which will cause the reader to get inconsistent data. This problem would not exist if both the read and the write operations were atomic. – Andreas Wenzel Jan 17 '21 at 04:47
  • @hpc64: Also, if two threads write to the shared data at the same time, and one of the writes is atomic and the other is not, then it is possible that the atomic write will take place at a time when the non-atomic write is only half finished. This will cause the non-atomic write to overwrite half of the data after the atomic write is completed, leaving the other half intact. Afterwards, the shared data will be in an inconsistent state. That is why it is undefined behavior. – Andreas Wenzel Jan 17 '21 at 04:56
  • @hpc64: When Thread A unlocks a mutex, this operation has release semantics. This means that reads and writes that are scheduled to occur before unlocking the mutex cannot be reordered to occur after unlocking the mutex. When Thread B locks the same mutex, this operation has acquire semantics. This means that reads and writes that are scheduled to occur after locking the mutex cannot be reordered to occur before locking the mutex. As a consequence, it is guaranteed that both threads will not be accessing the shared data at the same time, provided both threads use the mutex properly. – Andreas Wenzel Jan 17 '21 at 05:16
  • @hpc64: As shown in my previous comment, the memory order (acquire/release semantics) only directly affects the ordering of instructions in the current thread/CPU. However, when used properly in conjunction with a thread synchronization point (such as a mutex), the memory order also indirectly affects the visibility of changes made by other threads. The problem in your case is that you do not even have a thread synchronization point, as this would require two threads to use the same mutex or for both threads to perform an atomic access on the same memory location. – Andreas Wenzel Jan 17 '21 at 06:20
  • @hpc64: Acquire and release fences only affect the memory order. However, as pointed out in my previous comment, the memory order is not the problem. Therefore, trying to solve the problem with fences would not be meaningful. – Andreas Wenzel Jan 17 '21 at 06:36
  • @hpc64: If you want to read exactly what the C++ standard says about thread synchronization and memory order, you can look at section 6.9.2.1 of the [ISO C++20 standard](https://isocpp.org/files/papers/N4860.pdf). – Andreas Wenzel Jan 17 '21 at 07:32
  • @Andreas_Wenzel: Under the assumption that the int is written atomically and the mutex unlock is a release fence, the change becomes visible at the atomic load, no? – hpc64 Jan 19 '21 at 09:59
  • @hpc64: In your previous comment, you are talking about 3 synchronization operations: atomic read, atomic write and mutex unlock. Please clarify which thread performs which of these 3 operations. Also, are both atomic operations to the same memory location, for example to the counter? This situation is very different from the one mentioned in your original question, because you are now talking about two atomic operations. – Andreas Wenzel Jan 19 '21 at 11:01
  • @hpc64: If Thread A performs an atomic write to the counter and then unlocks the mutex, and if Thread B then performs an atomic read from the counter without locking the mutex beforehand, then the following will apply: The two atomic operations will be indeterminately sequenced, which means that one will occur before the other, but which one is unspecified. Therefore, it is not guaranteed that Thread A's atomic write will be visible in Thread B at the time of its atomic read. – Andreas Wenzel Jan 19 '21 at 12:01
  • @hpc64: If, however, Thread B locks the mutex before performing the atomic read, then Thread A's unlock will synchronize with Thread B's lock, and Thread A's write will be visible when Thread B reads. This is guaranteed irrespective of whether the read and the write are atomic. – Andreas Wenzel Jan 19 '21 at 12:15
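
A minimal sketch of the situation described in the last two comments (the value written and the thread bodies are made up): Thread A writes the counter while holding the mutex, and Thread B also locks the mutex before reading, so A's unlock synchronizes with B's lock and A's write is visible to B, whether or not the individual accesses are atomic.

#include <mutex>
#include <thread>

int counter = 0;
std::mutex m;

void threadA()
{
  std::lock_guard<std::mutex> guard(m);
  counter = 42;  // write under the mutex
}                // unlocking the mutex is a release operation

void threadB()
{
  std::lock_guard<std::mutex> guard(m);  // locking the mutex is an acquire operation
  int value = counter;  // if threadA's critical section ran first, the write is visible here
  (void)value;
}

int main()
{
  std::thread a(threadA);
  std::thread b(threadB);
  a.join();
  b.join();
}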