6

I have a lock-free single-producer multiple-consumer queue implemented with std::atomic, similar to Herb Sutter's CppCon 2014 talk.

Sometimes the producer is too slow to feed all consumers, so consumers can starve. I want to prevent starved consumers from hammering on the queue, so I added a 10 ms sleep. This value is arbitrary and not optimal. I would like to use a signal that the consumer can send to the producer once there is a free slot in the queue again. In a lock-based implementation, I would naturally use std::condition_variable for this task. However, in my lock-free implementation I am not sure whether it is the right design choice to introduce a mutex only to be able to use std::condition_variable.

So my question is: is a mutex the right way to go in this case?

Edit: I have a single producer, which never sleeps, and multiple consumers, which go to sleep when they starve. Thus the whole system always makes progress, so I think it is lock-free. My current solution is to do this in the consumer's GetData function:
std::unique_lock<std::mutex> lk(_idleMutex);
_readSetAvailableCV.wait(lk);

And this in the producer thread once new data is ready:
_readSetAvailableCV.notify_all();
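For reference, a predicate-based variant of the same wait avoids spurious wakeups and notifications that arrive before the consumer enters wait. This is a sketch using the names from the snippet above; the bool _readSetAvailable flag is a hypothetical addition, not part of the original code:

```cpp
#include <condition_variable>
#include <mutex>

// _idleMutex and _readSetAvailableCV are from the snippet above;
// the _readSetAvailable flag is a hypothetical addition.
std::mutex _idleMutex;
std::condition_variable _readSetAvailableCV;
bool _readSetAvailable = false;

// Consumer side: sleeps until the flag is set; the predicate makes the
// wait immune to spurious wakeups and to notifications sent before the
// consumer reached wait().
void WaitForReadSet() {
    std::unique_lock<std::mutex> lk(_idleMutex);
    _readSetAvailableCV.wait(lk, [] { return _readSetAvailable; });
    _readSetAvailable = false;  // consume the signal
}

// Producer side: sets the flag under the lock, then notifies.
void SignalReadSet() {
    {
        std::lock_guard<std::mutex> lk(_idleMutex);
        _readSetAvailable = true;
    }
    _readSetAvailableCV.notify_all();
}
```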

bodzcount
  • 95
  • 1
  • 8
  • 1
A sleeping thread with a signal is not lock-free, pretty much by definition. You do understand that lock-free is mainly about certain guarantees about thread scheduling and progress, and not about "faster", right? Which of the lock-free guarantees do you need, exactly? – Yakk - Adam Nevraumont Oct 04 '15 at 16:53
  • I don’t think I understand. If it’s the consumers who are starving, why would they notify the producer when there’s a free slot in the queue? – Davislor Oct 04 '15 at 16:56
  • But if you do need a variable with multiple writers that means at least one writer is ready to consume an update, that sounds like an atomic flag. – Davislor Oct 04 '15 at 17:00
  • @Lorehead The OP wants a lock-free queue ... but with locks that block idle readers from spinning. And wants to know how to do it without locks. – Yakk - Adam Nevraumont Oct 05 '15 at 02:19
  • Without understanding why you're using a lock-free queue with atomics, it's hard to give you a useful answer about what the right design choice is. It just sounds like you made a bad choice and should have been using locks from the beginning. What advantage of a lock-free design justifies the extra contention in your use case? – David Schwartz Oct 05 '15 at 07:18
@david: for this use case I don't see much advantage of atomics over mutexes, except it's a bit less code and good practice for lock-free design. Why should I use mutexes in this case? Is it possible to have a lock-free design that uses mutexes for concurrency management? I am a little bit confused about the definition. – bodzcount Oct 05 '15 at 19:27
  • 2
    @BenjaminMenkuec You should use mutexes because mutexes minimize contention by descheduling contending threads. Atomics and lock-free algorithms do nothing to avoid contention penalties. – David Schwartz Oct 05 '15 at 20:48
There are definitely use cases for this. For example, in real-time programming, we may want a producer to shuttle information from a real-time thread to a lower-priority thread, for example using a large buffer to avoid dropping data on the low-priority thread. We want the real-time thread to interact with the buffer in a way that is not only lock-free but wait-free, while the consumer(s) sleep until there is data and don't hammer on the queue. This would be a very common use case for service routines and such. – Peter M Feb 07 '19 at 17:27

4 Answers

4

If most of your threads are just waiting for the producer to enqueue a resource, I'm not sure a lock-free implementation is even worth the effort. Most of the time your threads will sleep; they won't fight each other for the queue lock.

That is why I think (from the amount of data you have supplied) that changing everything to work with a mutex + condition_variable is just fine. When the producer enqueues a resource, it notifies just one thread (with notify_one()) and releases the queue lock. The consumer that locks the queue dequeues a resource and returns to sleep if the queue is empty again. There shouldn't be any real "friction" between the threads (if your producer is slow), so I'd go with that.
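The design described above can be sketched as follows. This is a minimal illustration, not the asker's actual queue; the names (BlockingQueue, Push, Pop) are invented for the example:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// Minimal blocking queue: mutex + condition_variable, notify_one on push.
template <typename T>
class BlockingQueue {
public:
    void Push(T value) {
        {
            std::lock_guard<std::mutex> lk(_mutex);
            _queue.push(std::move(value));
        }
        _cv.notify_one();  // wake exactly one sleeping consumer
    }

    T Pop() {
        std::unique_lock<std::mutex> lk(_mutex);
        // Predicate guards against spurious wakeups and early notifies.
        _cv.wait(lk, [this] { return !_queue.empty(); });
        T value = std::move(_queue.front());
        _queue.pop();
        return value;
    }

private:
    std::mutex _mutex;
    std::condition_variable _cv;
    std::queue<T> _queue;
};
```

Consumers that find the queue empty are descheduled inside wait(), so a slow producer causes sleeping rather than contention.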

Richard Walters
  • 1,464
  • 8
  • 16
David Haim
  • 25,446
  • 3
  • 44
  • 78
I agree with you. Before, I had it implemented with mutexes, and there was not much friction between the threads. However, I wanted to convert it to a lock-free design, just for practice :) I measured the speed; it's about the same as before. PS: depending on the system and the speed of the hard drives, sometimes the consumers are faster and sometimes the producer. – bodzcount Oct 05 '15 at 19:14
  • 3
    Making code worse doesn't seem like useful practice. – David Schwartz Oct 13 '15 at 22:46
2

I just watched this CppCon video about the Concurrency TS: Artur Laksberg @ CppCon 2015

Somewhere in the middle of this talk, Artur explains how exactly my problem could be solved with barriers and latches. He also shows an existing workaround using a condition_variable the way I did. He points out some weak points of the condition_variable used for this purpose, like spurious wakeups and notify signals that get lost if they arrive before you enter wait. However, in my application these limitations are no problem, so I think for now I will use the solution I mentioned in the edit of my post, until latches/barriers are available. Thanks everybody for commenting.

bodzcount
  • 95
  • 1
  • 8
1

With minimal design change to what you have, you can simply use a semaphore. The semaphore begins empty and is upped every time the producer pushes to the queue. Consumers first down the semaphore before popping from the queue.

C++11 does not provide a semaphore implementation, although one can be emulated with a mutex, a condition variable, and a counter.
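A sketch of that emulation (the class and method names are illustrative; C++20 later added std::counting_semaphore, but this is the C++11 workaround described above):

```cpp
#include <condition_variable>
#include <mutex>

// Counting semaphore emulated with a mutex, a condition variable,
// and a counter, since C++11 has no standard semaphore.
class Semaphore {
public:
    explicit Semaphore(unsigned initial = 0) : _count(initial) {}

    // Up (post/release): the producer calls this after pushing an item.
    void Up() {
        {
            std::lock_guard<std::mutex> lk(_mutex);
            ++_count;
        }
        _cv.notify_one();
    }

    // Down (wait/acquire): a consumer calls this before popping; it
    // blocks until at least one permit is available.
    void Down() {
        std::unique_lock<std::mutex> lk(_mutex);
        _cv.wait(lk, [this] { return _count > 0; });
        --_count;
    }

private:
    std::mutex _mutex;
    std::condition_variable _cv;
    unsigned _count;
};
```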

If you really want lock-free behavior when the producer is faster than the consumers, you could use double-checked locking.

/* producer */
bool was_empty = q.empty_lock_free();
q.push_lock_free(x);
if (was_empty) {
    scoped_lock l(q.lock());
    if (!q.empty()) {
        q.cond().signal();
    }
}

/* consumers */
for (;;) {
    if (q.empty_lock_free()) {
        scoped_lock l(q.lock());
        while (q.empty()) {
            q.cond().wait();
        }
        x = q.pop();
        if (!q.empty()) {
            q.cond().signal(); /* pass the wakeup on to the next waiter */
        }
        break; /* got an item under the lock; done */
    } else {
        try {
            x = q.pop_lock_free();
        } catch (empty_exception) {
            continue; /* lost the race to another consumer; retry */
        }
        break;
    }
}
jxh
  • 69,070
  • 8
  • 110
  • 193
0

One possibility with pthreads is for a starved thread to sleep in pause() and be woken with SIGCONT. Each thread has its own awake flag. If any thread is asleep when the producer posts new input, wake one up with pthread_kill().

Davislor
  • 14,674
  • 2
  • 34
  • 49