
An atomic variable (128-bit structure in this case) is being updated, to the surprise of the only thread that would have the ability to update it. How so?

This is a minimal example, so it doesn't do anything useful on its own, but the idea is: an alloc() function returns the same malloc'd buffer 100 times, then allocates a new buffer that it will return 100 times, and so on, even when called from multiple threads.

I have an atomic variable, which is a structure with a pointer, a 32-bit int, and another 32-bit counter meant to avoid ABA problems.

I have a function with two sections. The first section will, if the return count is non-zero, CAS the struct to decrement the return count (and increment the ABA counter), then return the pointer. Otherwise, the second section takes a mutex, allocates memory for a new pointer, and CASes the whole struct with the new pointer, a fresh non-zero return count, and again an increment of the ABA counter.

In short, every thread can update this struct while the counter is above zero. But once it hits zero, the first thread to acquire the mutex will, I think, be the only thread that can again CAS-update this struct.

Except sometimes this CAS fails! "How can it fail" is my question.

Here is a running example. It can be compiled with `g++ lockchange.cxx -o lockchange -latomic -pthread`. It runs on gcc version 9.2.1 20190827 (Red Hat 9.2.1-1) (GCC) on Fedora 31.

#include <algorithm>
#include <atomic>
#include <cassert>
#include <chrono>
#include <cstdint>   // int32_t, uint32_t
#include <cstdio>    // printf
#include <cstdlib>   // malloc, free, abort
#include <cstring>
#include <mutex>
#include <thread>
#include <vector>

using namespace std;


struct MyPair { /* Hungarian: pair */

    char*    pc;         /* a buffer to be used n times */
    int32_t  iRemaining; /* number of times left to use pc */
    uint32_t iUpdates;   /* to avoid ABA problem */
};



const int iThreads{ 200 };
const int iThreadIterations{ 1000000 };
const int iSizeItem{ 128 };

mutex mux;

atomic<MyPair> pairNext{ MyPair{ nullptr, 0, 0 } };  // start empty so the first call falls into CASE 2



char* alloc() {

 TRY_AGAIN:
  MyPair pairCur = pairNext.load();

  // CASE 1: We can use the existing buffer?

  while ( pairCur.iRemaining ) {
      char* pcRV = pairCur.pc;

      MyPair pairNew = { pairCur.pc,
                         pairCur.iRemaining - 1,
                         pairCur.iUpdates + 1 };

      if ( pairNext.compare_exchange_weak( pairCur, pairNew ) )
          return pcRV;

      // Otherwise, pairNext was changed out from under us and pairCur
      // will have been updated.  Try again, as long as iRemaining
      // non-zero.
  }



  // CASE 2: We've used pc as many times as allowed, so allocate a new pc.

  // Get a mutex as we'll be changing too many fields to do atomically.
  lock_guard<mutex> guard( mux );

  // If multiple threads saw iRemaining = 0, they all will
  // have tried for the mutex; only one will have gotten it, so
  // there's a good chance that by the time we get the mutex, a
  // sibling thread will have allocated a new pc and placed it at
  // pairNext, so we don't need to allocate after all.

  if ( pairNext.load().iRemaining ) // <=============================== it's as if this line isn't seeing the update made by the line below in real time.
      goto TRY_AGAIN;

  // Get a new buffer.
  char* pcNew = (char*) malloc( iSizeItem );

  MyPair pairNew = { pcNew, 100, pairCur.iUpdates + 1 };

  if ( pairNext.compare_exchange_strong( pairCur, pairNew ) ) { //<===== the update that's not being seen above in real time
      // *** other stuff with pcNew that needs mutex protection ***;
      return pcNew;

  } else {

      // CASE 2c: after allocating a new page, we find that
      // another thread has beaten us to it.  I CAN'T FIGURE OUT
      // HOW THAT'S POSSIBLE THOUGH.  Our response should be safe
      // enough: put our allocation back, and start all over again
      // because who knows what else we missed.  I see this error
      // like 813 times out of 40 BILLION allocations in the
      // hammer test, ranging from 1 to 200 threads.

      printf( "unexpected: had lock but pairNext changed when iRemaining=0\n" );
      // In fact the following free and goto should and seem to
      // recover fine, but to be clear my question is how we can
      // possibly end up here in the first place.
      abort();
      free( pcNew );
      goto TRY_AGAIN;
  }
}



void Test( int iThreadNumber ) {

  for ( int i = 0; i < iThreadIterations; i++ )
      alloc();
}



int main( int nArg, char* apszArg[] ) {

  vector<thread> athr;

  for ( int i = 0; i < iThreads; i++ )
      athr.emplace_back( Test, i );

  for ( auto& thr: athr )
      thr.join();
}
Swiss Frank
  • 1
    This isn't a [mcve]: it's not complete / reproducible because this fragment isn't in a function. And it doesn't show how it gets called from multiple threads. – Peter Cordes Jun 01 '20 at 08:33
  • 2
    Because there's no [mcve], we can't check things like whether `lock_guard guard( mux );` exits scope correctly. – Richard Critten Jun 01 '20 at 08:46
  • 3
    And (2) in creating the above pseudo code you may have "fixed" the problem so no analysis of the posted code will help (ie the pseudo code may not be an accurate abstraction of the real code). – Richard Critten Jun 01 '20 at 08:54
  • 1
    Also, if you have this inside a mutex critical section, you could just use a relaxed atomic load / store and manual compare. Although since your struct is 16 bytes, it might not be any more efficient that way. – Peter Cordes Jun 01 '20 at 10:20
  • 1
    Your `main` is a bit strange: it exits `main` without detaching or joining the threads, so it exits with `terminate called without an active exception` / aborted. I added a loop that does `athr[i].join();`, which leaves it running indefinitely printing messages. That's unrelated to your problem, but a test-case that crashes is suspicious and worth fixing. (Probably sleeping a few seconds and exiting would be better than waiting forever for threads that don't exit, though.) – Peter Cordes Jun 04 '20 at 01:46
  • 1
    https://godbolt.org/z/tAZbpw sleeps for 2 seconds then exits cleanly, demoing the problem. Oh, I see it's not supposed to be infinite, just a huge number of threads and iterations. 20 threads, 10000 iters, still demos the problem but exits much faster, reaching an assert fail. – Peter Cordes Jun 04 '20 at 02:01
  • OK, my problem isn't joining threads or failed asserts (both artifacts of a rushed job to condense a huge amount of code into a tiny example), but the fact that the "unexpected" printf() is ever reached. As long as iRemaining is non-zero, any thread can CAS in CASE 1. If any thread sees zero, it falls into CASE 2, where I believe only one thread at a time would be able to try the second CAS, and yet that second CAS fails about 1 in 5 million cases. Any explanation why, and what can be done about that? – Swiss Frank Jun 04 '20 at 04:11
  • 1
    I understand that, I'm just helping you clean up your test case so it's not a distraction. It was the first thing I noticed when I copy/pasted your MCVE and ran it. IDK if it's safe to use `goto` from after a mutex_lock to before it. From looking at the asm, I think that goto might unlock, otherwise it would deadlock from taking the lock twice. https://godbolt.org/z/OkBOfs reinits the mutex attribute to PTHREAD_MUTEX_ERRORCHECK so having the same thread try to retake the same mutex should detect an error instead of silently deadlocking. But I don't see any such errors. – Peter Cordes Jun 04 '20 at 04:30
  • 1
    I also fixed your assert to use `unsigned char`; another source of false assertion failures was truncating to unsigned char for `memset`, but then *sign* extending (-128..127) to `int` for the compare against `iThreadNumber % 256`. Again, that's separate from your actual bug, but I wanted to see if that check was finding real problems or not. (It is). – Peter Cordes Jun 04 '20 at 04:33
  • The MRE example now incorporates your suggestions, and I've supplied the solution to the question as an answer. Thanks all for your help! – Swiss Frank Jun 04 '20 at 07:24

2 Answers


The problem is known as the "ABA Problem," which I could summarize as checking a variable in lock-free multithreaded code, thinking it hasn't changed, when in fact it has.

Here, iRemaining is a counter set to 100, then counted down to 0 repeatedly.

After the mutex is locked, an "optimization check" (not needed for correctness, but merely to avoid the overhead of allocating a new buffer and resetting iRemaining etc. if another thread has already done so) naively checks for iRemaining == 0 to conclude that the structure hasn't changed since our snapshot pairCur was taken, across the acquisition of the lock (which may involve a long wait indeed).

What is in fact happening is that while thread A is waiting to get the lock, iRemaining is occasionally decremented an exact multiple of 100 times: rarely, but given the billions of trials, quite a few times. By letting the code run to abort() and then looking at the variables, I see that pairNext holds a value of, say, { pc = XXX, iRemaining = 0, iUpdates = 23700 } while pairNew is { pc = YYY, iRemaining = 100, iUpdates = 23600 }. iUpdates is now 100 higher than we thought! In other words, another 100 updates were made while we were waiting to lock, and that was the exact number needed to bring iRemaining back to 0. That also means pc is different from before.

The structure already has an "update counter," iUpdates, which is the standard solution to the ABA problem. If instead of checking for iRemaining == 0 we check that iUpdates matches our pre-lock atomic snapshot, then the optimization heuristic becomes 100% effective and we never reach the unexpected printf() and abort(). (Well, it could perhaps still happen, but it would now require a thread to be blocked for an exact multiple of 2^32 operations instead of only 100, something that might happen once a year, a decade, or a century, if it is even possible on this architecture.) Here is the improved code:

  if ( pairNext.load().iUpdates != pairCur.iUpdates ) // <=============================== did anything at all change while we waited for the lock?
      goto TRY_AGAIN;
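
For context, here is roughly how that check sits in the CASE 2 section of alloc() (a sketch against the question's code, not a tested drop-in; everything outside this fragment is unchanged):

  // CASE 2: we saw iRemaining == 0, so take the mutex and allocate a new pc.
  lock_guard<mutex> guard( mux );

  // Compare the update counter, not iRemaining: if any sibling thread touched
  // pairNext while we waited for the lock, iUpdates will differ, even when
  // iRemaining has wrapped all the way back down to 0.
  if ( pairNext.load().iUpdates != pairCur.iUpdates )
      goto TRY_AGAIN;

  // Nothing changed since our snapshot, so the CAS below is expected to succeed.
  char* pcNew = (char*) malloc( iSizeItem );
  MyPair pairNew = { pcNew, 100, pairCur.iUpdates + 1 };

  if ( pairNext.compare_exchange_strong( pairCur, pairNew ) )
      return pcNew;
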
Swiss Frank
  • 1
    It's not that rare; it happens with `iUpdate` counts in the low thousands on my system. I didn't notice your answer while I was writing up mine :/ Checking iUpdates makes some sense, but it's actually fine if you have an ABA `iRemaining = 0` as long as the CAS can succeed. You don't need to waste time unlocking and re-locking (or more likely letting another thread re-lock), just update your expected from memory before CAS. – Peter Cordes Jun 04 '20 at 08:06

Note that goto TRY_AGAIN; unlocks the mutex because you're jumping back to before lock_guard<mutex> was constructed. Usually people put {} around a scope with the lock-taking at the top to make this clear (and to control when the unlock happens). I didn't check ISO C++ rules to see if this is required behaviour, but at least the way G++ and clang++ implement it, goto does unlock. (Mixing RAII locking with goto seems like poor design).
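
A small sketch of that convention (illustrative only, not taken from the question's code): the explicit braces make the unlock point visible, and a goto that leaves the scope runs the guard's destructor on the way out:

#include <mutex>

std::mutex mux;

void critical_section_sketch( bool bRetryNeeded ) {
 TRY_AGAIN:
  {   // guard lives only inside these braces
      std::lock_guard<std::mutex> guard( mux );

      if ( bRetryNeeded ) {   // stand-in for "another thread changed the state"
          bRetryNeeded = false;
          goto TRY_AGAIN;     // leaving the scope destroys guard, unlocking mux
                              // before we come back around and lock it again
      }

      // ... work that needs the mutex ...
  }   // guard destroyed here on the normal path; mux is unlocked past this point
}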

Also note that you do reload pairNext once while holding the mutex, but discard that value and keep pairCur as the "expected" value for your CAS attempt.

For the CAS inside the critical section to be reached, pairNext.iRemaining either has to be

  • still zero (e.g. this thread won the race to take the lock). You're assuming this case where CAS succeeds because pairNext == pairCur.
  • or zero again, after another thread or threads set iRemaining to 100 and decremented it all the way back to zero while this thread was asleep. With more threads than cores, this can happen very easily. It's always possible even with lots of cores, though: an interrupt can block a thread temporarily, or its backoff strategy when it finds the mutex locked might lead it to not retry until the counter was zero again.

I added new debug code which makes this clear:

 lock_guard<mutex> guard( mux );    // existing code

 if ( pairNext.load().iRemaining )
      goto TRY_AGAIN;

  // new debugging code
  MyPair tmp = pairNext.load();
  if (memcmp(&tmp, &pairCur, sizeof(tmp)) != 0)
          printf("pairNext changed between retry loop and taking the mutex\n"
                "cur  = %p, %d, %u\n"
                "next = %p, %d, %u\n",
                pairCur.pc, pairCur.iRemaining, pairCur.iUpdates,
                tmp.pc, tmp.iRemaining, tmp.iUpdates);
$ clang++ -g -O2 lc.cpp -o lockchange -latomic -pthread && ./lockchange 
pairNext changed between retry loop and taking the mutex
cur  = 0x7f594c000e30, 0, 808
next =  0x7f5940000b60, 0, 909
unexpected: had lock but pairNext changed when iRemaining=0
Aborted (core dumped)

Fixing this:

Since you're reloading pairNext with the mutex held, just use that value as your "expected" for the CAS. Compilers unfortunately won't optimize foo.load().member into loading just that member: they still load the whole 16-byte object with a lock cmpxchg16b on x86-64, or whatever on other ISAs. So you're paying the whole cost anyway.

  lock_guard<mutex> guard( mux );

  pairCur = pairNext.load();   // may have been changed by other threads
  if ( pairCur.iRemaining )
      goto TRY_AGAIN;

  // then same as before, use it for CAS
  // no other thread can be in the critical section, 
  // and the code outside won't do anything while pairNext.iRemaining == 0

A 16-byte atomic load costs the same as a CAS anyway, but the failure path would have to either free the malloc buffer or spin until the CAS succeeded before leaving the critical section. The latter could actually work if you can avoid wasting too much CPU time and causing contention, e.g. with _mm_pause().
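
A rough sketch of that second option, spinning inside the critical section instead of freeing pcNew and retrying from the top (names follow the question's code; it assumes x86 with #include <immintrin.h> at the top of the file for _mm_pause(), and that installing the new buffer regardless of what iRemaining currently holds is acceptable, as discussed in the comments below):

  // Sketch: replace the free(pcNew) + goto failure path with a spin, so the
  // freshly malloc'd pcNew is always installed before leaving the critical section.
  MyPair pairNew = { pcNew, 100, pairCur.iUpdates + 1 };
  while ( !pairNext.compare_exchange_weak( pairCur, pairNew ) ) {
      // On failure, pairCur has been reloaded with the current value of
      // pairNext; rebuild pairNew from it and try again.
      pairNew = MyPair{ pcNew, 100, pairCur.iUpdates + 1 };
      _mm_pause();   // ease contention while spinning
  }
  return pcNew;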

Peter Cordes
  • Many thanks and all your comments are of course accurate. (Many of the points you make are due to a rush job on a MRE, not the production code itself!) – Swiss Frank Jun 05 '20 at 04:18
  • My original code _and_ my suggested fix are saying: _if the situation is still the one CASE 1 couldn't deal with, allocate a new buffer_. Your version says _regardless of whether this is the situation that CASE 1 couldn't deal with, if this is a situation that needs a new buffer, do it._ In the case where `iRemaining` has rolled around, your version does the needful. My initial version tries to do the needful on the stale pairCur (and fails, then tries CASE 1 and fails, and needs to lock AGAIN). My fix realizes pairCur is stale, but then tries CASE 1 and fails, and needs to lock AGAIN. – Swiss Frank Jun 05 '20 at 04:21
  • 1
    marking this as the answer because of the added efficiency in the (albeit rare) case of iRemaining rolling over and again putting us in a situation where the CASE 2 code is called for, even if it's not the situation that we obtained a lock to handle. – Swiss Frank Jun 05 '20 at 04:24