17

Is there a way to ensure that blocked threads get woken up in the same order as they got blocked? I read somewhere that this would be called a "strong lock" but I found no resources on that.

On Mac OS X one can design a FIFO queue that stores all the thread ids of the blocked threads and then use the nifty function pthread_cond_signal_thread_np() to wake up one specific thread - which is obviously non-standard and non-portable.

One way I can think of is to use a similar queue and at the unlock() point send a broadcast() to all threads and have them check which one is the next in line.
But this would induce lots of overhead.

A way around the problem would be to push packaged_tasks onto a queue and have a single thread process them in order. But that seems more like a workaround to me than a solution.
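To illustrate, that workaround could look roughly like this: one worker thread drains a FIFO of packaged_tasks, so they execute strictly in submission order. This is only a sketch; the task_queue class and all its member names are my own invention, not any library API:

```cpp
#include <condition_variable>
#include <deque>
#include <future>
#include <mutex>

// Hypothetical task_queue: a single worker thread drains the deque from
// the front, so submitted tasks run strictly in the order they were enqueued.
class task_queue {
    std::deque<std::packaged_task<void()>> tasks;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;
public:
    // Enqueue a task; the returned future lets the caller wait for completion.
    std::future<void> submit(std::packaged_task<void()> task) {
        std::future<void> f = task.get_future();
        {
            std::lock_guard<std::mutex> lk(m);
            tasks.push_back(std::move(task));
        }
        cv.notify_one();
        return f;
    }
    // Worker loop: runs tasks FIFO until stop() is called and the queue drains.
    void run() {
        for (;;) {
            std::packaged_task<void()> task;
            {
                std::unique_lock<std::mutex> lk(m);
                cv.wait(lk, [&] { return done || !tasks.empty(); });
                if (tasks.empty())
                    return; // stopped and nothing left to do
                task = std::move(tasks.front());
                tasks.pop_front();
            }
            task(); // executed outside the lock, in submission order
        }
    }
    void stop() {
        {
            std::lock_guard<std::mutex> lk(m);
            done = true;
        }
        cv.notify_all();
    }
};
```

A thread that needs the result simply waits on the returned future; ordering then comes from the queue itself rather than from any fairness property of a lock.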

Edit:
As pointed out in the comments, this question may seem ill-posed, since in principle there is no guaranteed ordering of locking attempts.
As a clarification:

I have something I call a ConditionLockQueue which is very similar to the NSConditionLock class in the Cocoa library, but it maintains a FIFO queue of blocked threads instead of a more-or-less random pool.

Essentially any thread can "line up" (with or without the requirement of a specific 'condition' - a simple integer value - to be met). The thread is then placed on the queue and blocks until it is the frontmost element in the queue whose condition is met.

This provides a very flexible way of synchronization and I have found it very helpful in my program.
What I really need now is a way to wake up a specific thread by its id.
But the two problems are closely related.

iolo
  • please explain why you need to preserve order... and how you can guarantee that threads arrive at the lock in the order you want them to arrive. – NoSenseEtAl Feb 09 '13 at 21:54
  • Well, first it's interesting! Then it just fitted well into my program ;) I could design it to be more asynchronous, but if there was another way it would be simpler. – iolo Feb 09 '13 at 21:58
  • my point is that even if you get this solution it is meaningless, since you need to guarantee that threads arrive at the lock in some order, and without previous synchronization you can't do that. I could be wrong, so that is why I asked for an explanation of your design. – NoSenseEtAl Feb 09 '13 at 22:02
  • Well, if you REALLY want to do that, I think the best thing is to have a "who's next" queue built into the locking mechanism. But I still think it seems like a problem looking for a new problem, so now you have TWO problems... ;) – Mats Petersson Feb 09 '13 at 22:03
  • @NoSenseEtAl Nah, that's not really true! If I issue commands with a second in between them, the probability is quite high that they are processed in order. – iolo Feb 09 '13 at 22:04
  • true, Idk you have long delays :) – NoSenseEtAl Feb 09 '13 at 22:11
  • you can just have 2 mtxs, one for the real work and one for the deque where you have FIFO thread IDs; if a thread is not the next to execute it puts itself to sleep with std::yield() and tries to get the mutex again... it looks ugly tbh, having a while loop and 2 mtxs... :D – NoSenseEtAl Feb 09 '13 at 22:20
  • Yeah, and it potentially needs to wake all threads and put them to sleep again. Lots of overhead... – iolo Feb 09 '13 at 22:22
  • So at some point the threads have to be synchronized again and executed 'single-threaded'? Why not, at the point where you would now lock, put `std::function`s (or similar) on a FIFO queue and just process that single-threaded, and meanwhile wait in the thread until that function is completed? It would give the same effect and you need locking only for the FIFO queue, plus maybe some condition/wake-up/future mechanism to continue the thread. – KillianDS Feb 09 '13 at 22:31
  • You should check Herb Sutter's video about C++ Concurrency and Blocking vs. non-blocking threads. It might be interesting for what you are trying to do. Here is the link : http://channel9.msdn.com/Shows/Going+Deep/C-and-Beyond-2012-Herb-Sutter-Concurrency-and-Parallelism. Point is : you should avoid using blocking thread as much as possible. – Cyril Leroux Feb 11 '13 at 12:16
  • Same question, but using pthread: [c++ - pthreads: thread starvation caused by quick re-locking - Stack Overflow](https://stackoverflow.com/questions/12685112/pthreads-thread-starvation-caused-by-quick-re-locking). – user202729 Oct 04 '19 at 04:23

3 Answers

18

It's pretty easy to build a lock object that uses numbered tickets to ensure that it's completely fair (the lock is granted in the order threads first tried to acquire it):

#include <mutex>
#include <condition_variable>

class ordered_lock {
    std::condition_variable  cvar;
    std::mutex               cvar_lock;
    unsigned int             next_ticket, counter;
public:
    ordered_lock() : next_ticket(0), counter(0) {}
    void lock() {
        std::unique_lock<std::mutex> acquire(cvar_lock);
        unsigned int ticket = next_ticket++;  // take the next ticket in line
        while (ticket != counter)             // wait until our number is called
            cvar.wait(acquire);
    }
    void unlock() {
        std::unique_lock<std::mutex> acquire(cvar_lock);
        counter++;          // call the next ticket
        cvar.notify_all();  // all waiters wake, but only one holds the right ticket
    }
};
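Note that because the class exposes lock() and unlock(), it models *BasicLockable* and therefore composes with std::lock_guard. A small usage sketch (the run_workers driver is my own illustration; the class is repeated so the snippet compiles on its own):

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

// The ticket lock from above, repeated so this sketch is self-contained.
class ordered_lock {
    std::condition_variable cvar;
    std::mutex              cvar_lock;
    unsigned int            next_ticket = 0, counter = 0;
public:
    void lock() {
        std::unique_lock<std::mutex> acquire(cvar_lock);
        unsigned int ticket = next_ticket++;
        while (ticket != counter)
            cvar.wait(acquire);
    }
    void unlock() {
        std::unique_lock<std::mutex> acquire(cvar_lock);
        counter++;
        cvar.notify_all();
    }
};

// Hypothetical driver: n threads each take the lock once and bump a counter.
int run_workers(int n) {
    ordered_lock olock;
    int counter = 0;
    std::vector<std::thread> pool;
    for (int i = 0; i < n; ++i)
        pool.emplace_back([&] {
            std::lock_guard<ordered_lock> guard(olock); // works: BasicLockable
            ++counter;
        });
    for (auto &t : pool)
        t.join();
    return counter;
}
```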

Edit

To incorporate Olaf's suggestion:

#include <mutex>
#include <condition_variable>
#include <queue>

class ordered_lock {
    std::queue<std::condition_variable *> cvar;
    std::mutex                            cvar_lock;
    bool                                  locked;
public:
    ordered_lock() : locked(false) {};
    void lock() {
        std::unique_lock<std::mutex> acquire(cvar_lock);
        if (locked) {
            std::condition_variable signal;
            cvar.emplace(&signal);
            signal.wait(acquire);
        } else {
            locked = true;
        }
    }
    void unlock() {
        std::unique_lock<std::mutex> acquire(cvar_lock);
        if (cvar.empty()) {
            locked = false;
        } else {
            cvar.front()->notify_one();
            cvar.pop();
        }
    }
};
Chris Dodd
  • Yes that is a possibility. But as mentioned in my post, the induced overhead is too great. Imagine you have lots of threads. They would all contend for the mutex and before you have the correct thread an estimated 50 threads are going to wake up and put themselves to sleep again. That is not efficient. – iolo Feb 10 '13 at 10:24
  • @iolo Then use a queue of one condition variable per thread instead. – Olaf Dietsche Feb 10 '13 at 12:12
  • @Quirliom: With newer versions of C++, you need `emplace_back` rather than `push_back` to avoid the move/copy, but older versions don't have `emplace_back` – Chris Dodd Jul 19 '15 at 20:55
  • The second version doesn't work in case of spurious wakeups (the wait might sometimes just continue). Also, I wonder if there is a problem when cvar.pop() is called before the other waiting thread continues: then the condition variable is destroyed while the other thread still waits on it to fire... – Jan Rüegg Mar 16 '16 at 08:23
  • To avoid the spurious wakeup and the destroyed condition variable: 1. Move `cvar.pop()` to the `lock` function. 2. It's no longer possible to use a counter variable, but it's still possible to store a pointer to the most recently woken-up condition variable and have the `lock` function compare with that. – user202729 Oct 07 '19 at 07:25
  • the 2nd solution seems not scalable to more than 3 threads, because from the 3rd thread on, a thread trying to use ordered_lock::lock() would wait for 'cvar_lock' to be unlocked in the line "std::unique_lock acquire(cvar_lock)", so the arrival order would not be guaranteed and FIFO would not be maintained. Maybe it's better to make 'locked' an atomic_bool or atomic_flag and get rid of the mutex – Or Hirshfeld May 06 '20 at 15:26
  • @OrHirshfeld: cvar_lock is released by the `wait` call, so this is scalable to any number of threads, and they'll acquire the ordered lock in FIFO order. Yes, the thread getting the ordered lock needs to (briefly) reacquire cvar_lock when it is signalled, but that should not be a bottleneck. – Chris Dodd May 06 '20 at 17:09
  • @ChrisDodd: thanks for the hint; I found the functionality you described in the cppreference entry for condition_variable: "Atomically unlocks lock, blocks the current executing thread", https://en.cppreference.com/w/cpp/thread/condition_variable/wait – Or Hirshfeld May 07 '20 at 06:33
  • I tried the above code, and when I tried to compile it, it returned many errors, including: "/usr/include/c++/6/bits/alloc_traits.h:392: error: forming pointer to reference type 'std::condition_variable&' using pointer = _Tp*;". I read that storing references in a queue is not possible because they are not copyable: https://stackoverflow.com/questions/10455550/queuestring-errors. So I tried using reference_wrapper and it worked well. – Or Hirshfeld May 07 '20 at 09:39
  • Not sure this is necessarily clearer, but as I was trying to understand condition variables, I noticed that `while (ticket != counter) cvar.wait(acquire);` is equivalent to `cvar.wait(acquire, [&]{return ticket == counter;});`. Now you can enjoy learning about lambda expressions ;) – Aaron Swan Jul 01 '21 at 16:54
0

I tried Chris Dodd's solution (https://stackoverflow.com/a/14792685/4834897), but the compiler returned errors because std::queue requires its element type to be copyable, and references (&) are not, as explained in this answer by Akira Takahashi: https://stackoverflow.com/a/10475855/4834897

So I corrected the solution using reference_wrapper, which provides copyable references.

EDIT: @Parvez Shaikh suggested a small alteration to make the code more readable: moving cvar.pop() after signal.wait() in the lock() function.

#include <mutex>
#include <condition_variable>
#include <queue>
#include <functional> // std::reference_wrapper, std::ref

using namespace std;

class ordered_lock {
    queue<reference_wrapper<condition_variable>> cvar;
    mutex                                        cvar_lock;
    bool                                         locked;
    condition_variable                          *signaled; // most recently woken waiter
public:
    ordered_lock() : locked(false), signaled(nullptr) {}
    void lock() {
        unique_lock<mutex> acquire(cvar_lock);
        if (locked) {
            condition_variable signal;
            cvar.emplace(std::ref(signal));
            // the predicate guards against spurious wakeups
            // (see the comments on the accepted answer)
            signal.wait(acquire, [&] { return signaled == &signal; });
            signaled = nullptr;
            cvar.pop();
        } else {
            locked = true;
        }
    }
    void unlock() {
        unique_lock<mutex> acquire(cvar_lock);
        if (cvar.empty()) {
            locked = false;
        } else {
            signaled = &cvar.front().get();
            signaled->notify_one();
        }
    }
};

Another option is to use pointers instead of references, but it seems less safe.

Or Hirshfeld
-1

Are we asking the right questions in this thread? And if so, are they answered correctly?

Or put another way: have I completely misunderstood things here?

Edit: It seems StatementOnOrder (see below) is false. See link1 (C++ threads etc. under Linux are often based on pthreads) and link2 (which mentions the current scheduling policy as the determining factor) -- thanks to Cubbi from cppreference (ref). See also link, link, link, link. If the statement is false, then the method of pulling an atomic ticket, as shown in the code below, is probably to be preferred!

Here goes...

StatementOnOrder: "Multiple threads that run into a locked mutex, and thus "go to sleep" in a particular order, will afterwards acquire ownership of the mutex and continue on in the same order."

Question: Is StatementOnOrder true or false?

std::mutex mut; // assumed to be shared among the threads calling myfunction()

void myfunction() {
    std::lock_guard<std::mutex> lock(mut);

    // do something
    // ...
    // mutex automatically unlocked when leaving the function
}

I'm asking this because all code examples on this page to date seem to be either:

a) a waste (if StatementOnOrder is true)

or

b) seriously wrong (if StatementOnOrder is false).

So why do I say that they might be "seriously wrong" if StatementOnOrder is false?
The reason is that all the code examples think they're being super-smart by utilizing std::condition_variable, but they use locks before that, which will (if StatementOnOrder is false) mess up the order.
Just search this page for std::unique_lock<std::mutex> to see the irony.

So if StatementOnOrder is really false, you cannot run into a lock and then handle tickets and condition_variable business after that. Instead, you have to do something like this: pull an atomic ticket before running into any lock.
Why pull a ticket before running into a lock? Because here we're assuming StatementOnOrder to be false, so any ordering has to be established before the "evil" lock.

#include <mutex>
#include <thread>
#include <limits>
#include <atomic>
#include <cassert>
#include <condition_variable>
#include <map>

std::mutex mut;
std::atomic<unsigned> num_atomic{std::numeric_limits<decltype(num_atomic.load())>::max()};
unsigned num_next{0};
std::map<unsigned, std::condition_variable> mapp;

void function() {
    unsigned next = ++num_atomic; // pull an atomic ticket

    decltype(mapp)::iterator it;

    std::unique_lock<std::mutex> lock(mut);
    if (next != num_next) {
        it = mapp.emplace(std::piecewise_construct,
                          std::forward_as_tuple(next),
                          std::forward_as_tuple()).first;
        // the predicate guards against spurious wakeups
        it->second.wait(lock, [&] { return next == num_next; });
        mapp.erase(it);
    }

    // THE FUNCTION'S INTENDED WORK IS NOW DONE
    // ...
    // ...
    // THE FUNCTION'S INTENDED WORK IS NOW FINISHED

    ++num_next;

    it = mapp.find(num_next); // not necessarily mapp.begin(), since wrap-around occurs on the unsigned
    if (it != mapp.end()) {
        lock.unlock();
        it->second.notify_one();
    }
}

The above function guarantees that execution proceeds in the order of the atomic tickets that are pulled. (Edit: using boost's intrusive map and keeping the condition_variable on the stack (as a local variable) would be a nice optimization that can be used here to reduce free-store usage!)

But the main question is: Is StatementOnOrder true or false?
(If it is true, then my code example above is also a waste, and we can just use a mutex and be done with it.)
I wish somebody like Anthony Williams would check out this page... ;)

ajneu
  • Why are there so many questions in your answer? Right now I cannot really identify whether or not it actually *is* an answer. Please properly format your answer and remove the empty dot-lines and the massive amount of questions. **Or** simply ask a question instead. – luk2302 Jun 12 '16 at 17:01
  • There is just one predicate-based (i.e. true/false) question in my "text". Depending on whether the answer to that predicate-question is true or false, the other answers on this page to date are highly questionable. I'm sorry if the text comes across as spam (that's certainly not the intent), but it's a complicated subject matter, and I'm not willing to compromise; I intend to get this answered correctly. – ajneu Jun 12 '16 at 17:10
  • Given that the OS can suspend your thread indefinitely at any time, you really don't need to worry about which thread acquires a mutex first when multiple threads are competing for a mutex. The OS will choose the one that is "best", and most OSes will also try and ensure that threads aren't starved for too long. – Anthony Williams Jun 13 '16 at 13:03
  • Thanks for chiming in. So if I read your comment correctly... the end of the story would be: if you need to ensure order, ensure order yourself. Order is NOT ensured by when threads call anything. (Ensuring that sleeping threads continue in the same order that they ran into a mutex is NOT at all a good way of handling things.) – ajneu Jun 13 '16 at 15:49
  • Ensuring that sleeping threads continue in the same order that they run into a mutex (which is often arbitrary anyway!!! [from a user's view]) is NOT at all a good way of handling things. INSTEAD use your own locks in the calling code, or "serialize" calls that require order by doing those calls sequentially in a single thread. – ajneu Jun 13 '16 at 15:56
  • StatementOnOrder is false. The examples here ensure ordering by using the tickets as a mutex, and the mutex in the ticket granter is only held to get the ticket. Once a thread gets a ticket, it releases the ticket-granter mutex, allowing the next thread that wants to wait to get the next ticket. A thread never locks the ticket-granter mutex while it holds the logical (overall) mutex we're implementing. You're confusing the two mutexes. – Chris Dodd Oct 07 '19 at 17:12