
Is there a mechanism to have a condition variable use multiple mutexes? I am using pthreads with C++ on Linux.

In an application, I need two mutexes (instead of one) to be atomically acquired and released by pthread_cond_wait(), but the function only accepts one.

I have a class named BlockingManager and it has the method:

blockMeFor(pthread_cond_t* my_cond, pthread_mutex_t* my_lock, set<int> waitees);

and I am using it, assuming that it acquires/releases the mutex just like pthread_cond_wait.

The problem is that to implement BlockingManager I need an internal mutex too, and both of these mutexes should be acquired and released atomically.
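To make the gap concrete, here is roughly what blockMeFor would need to do (internal_lock_ is just a placeholder name for the internal mutex, and the bookkeeping is elided):

#include <pthread.h>
#include <set>

class BlockingManager
{
    pthread_mutex_t internal_lock_;  // placeholder: guards BlockingManager's shared state

public:
    // The caller holds *my_lock when calling this, just as with pthread_cond_wait.
    void blockMeFor(pthread_cond_t* my_cond, pthread_mutex_t* my_lock,
                    std::set<int> waitees)
    {
        pthread_mutex_lock(&internal_lock_);
        // ... record 'waitees' in the shared bookkeeping ...

        // What I want: atomically release BOTH internal_lock_ and *my_lock
        // while beginning to wait on *my_cond.  pthread_cond_wait() only
        // accepts one mutex, so I have to drop internal_lock_ first:
        pthread_mutex_unlock(&internal_lock_);
        // <-- window: another thread may signal *my_cond here, before the
        //     wait actually starts (a lost wakeup)
        pthread_cond_wait(my_cond, my_lock);  // releases/reacquires *my_lock only
    }
};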

There is a somewhat related discussion here; it says that waiting on a condition variable with more than one mutex yields undefined behavior: http://sourceware.org/ml/libc-help/2011-04/msg00011.html

A producer/consumer model for the problem I am facing follows:

We have multiple clients. Each client has some tasks. Each task may have multiple prerequisites (either among the tasks of the same client or those of other clients). Each client has one consumer thread. Tasks are assigned to the clients from one producer thread. A newly assigned task may be eligible to be done before previously assigned ones. There may be no task to do at some moments, but if there is a task to be done, at least one should be done (it should be work-conserving).

I am using one condvar per consumer thread; it blocks when there is no task for that thread to do. The condvar may be signaled by either:

  • The producer thread assigning a new task.

  • Another consumer thread finishing a task.

I am using one mutex per consumer to protect the data structures shared between the producer and that consumer, and one mutex (the internal mutex) to protect the data structures shared between consumers.
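Roughly, the data I am protecting looks like this (the type and member names are just placeholders for illustration):

#include <pthread.h>
#include <deque>
#include <set>

struct Task
{
    int id;
    std::set<int> prerequisites;   // ids of tasks that must finish first
};

struct Consumer
{
    pthread_cond_t  cond;          // blocks this consumer when it has nothing runnable
    pthread_mutex_t lock;          // guards state shared between producer and this consumer
    std::deque<Task> tasks;        // tasks assigned by the producer
};

pthread_mutex_t internal_lock;     // guards state shared between consumers (cross-client dependencies)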

Shayan Pooya
  • I wouldn't try to do that... tell us what you need to do and let's try to find a standard solution. – Tomas Aug 12 '11 at 19:56
  • @Tomas I added a description of the problem. Thanks for your reply – Shayan Pooya Aug 12 '11 at 20:18
  • NO, you can't do that. Please explain what you are actually trying to achieve (bigger picture), not how you are trying to implement it. From your description it sounds like you are implementing a work queue backwards. Put the condition variable/mutex as part of the queue. All the threads will sleep on the same condition variable and will be woken up when a work item is placed in the queue. – Martin York Aug 12 '11 at 20:19
  • @Martin I know the standard does not let me do it. But (IMO) it makes sense to have such a thing. I thought maybe I can model it somehow. I also edited the post and added the "bigger picture". – Shayan Pooya Aug 12 '11 at 20:22
  • @Shayan: No it does not make sense to have such a thing. You have a very contrived corner case. Your description is still what you are trying to implement not what you are trying to do. Explain what you are trying to do and we may be able to help you. – Martin York Aug 12 '11 at 20:24
  • Does it matter which client does which job? Or does it just matter that each job is only processed once all its dependencies are complete? – Martin York Aug 12 '11 at 20:26
  • @Martin A task is assigned to a consumer by the producer. It cannot be done by other consumers. – Shayan Pooya Aug 12 '11 at 20:31

1 Answer


In C++11 (if your compiler supports it) you can use std::lock to lock two mutexes at once (without deadlock). You can use this to build a Lock2 class which references two mutexes. And then you can use std::condition_variable_any to wait on a Lock2. This all might look something like:

#include <mutex>
#include <condition_variable>

std::mutex m1;
std::mutex m2;
std::condition_variable_any cv;

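// Lock2 bundles the two mutexes behind lock()/unlock(), which is all
// (BasicLockable) that condition_variable_any requires of the lock it waits on.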
class Lock2
{
    std::mutex& m1_;
    std::mutex& m2_;

public:
    Lock2(std::mutex& m1, std::mutex& m2)
        : m1_(m1), m2_(m2)
    {
        lock();
    }

    ~Lock2() {unlock();}

    Lock2(const Lock2&) = delete;
    Lock2& operator=(const Lock2&) = delete;

    void lock() {std::lock(m1_, m2_);}
    void unlock() {m1_.unlock(); m2_.unlock();}
};

bool not_ready() {return false;}

void test()
{
    Lock2 lk(m1, m2);
    // m1 and m2 locked
    while (not_ready())
        cv.wait(lk);  // m1 and m2 unlocked
    // m1 and m2 locked
}  // m1 and m2 unlocked

If your compiler does not yet support these tools, you can find equivalents in Boost.
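If you go the Boost route, the same Lock2 idea might look roughly like this (an untested sketch; it assumes Boost.Thread's boost::mutex, boost::condition_variable_any and boost::lock, and avoids C++11-only features such as deleted functions):

#include <boost/thread.hpp>

boost::mutex m1;
boost::mutex m2;
boost::condition_variable_any cv;

class Lock2
{
    boost::mutex& m1_;
    boost::mutex& m2_;

    Lock2(const Lock2&);              // not copyable (declared, never defined)
    Lock2& operator=(const Lock2&);

public:
    Lock2(boost::mutex& a, boost::mutex& b) : m1_(a), m2_(b) {lock();}
    ~Lock2() {unlock();}

    void lock() {boost::lock(m1_, m2_);}     // deadlock-free two-mutex lock
    void unlock() {m1_.unlock(); m2_.unlock();}
};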

Howard Hinnant
  • Thanks for the answer. How does cv.wait(lk) unlock m1 and m2? Does it call the unlock function of the Lock2 class? Is it still atomic? Can we wait on a Lock2 object like that? – Shayan Pooya Aug 12 '11 at 20:45
  • Correct. It calls `lk.unlock()`. `Lock2::lock()` is atomic thanks to `std::lock(m1, m2)`. `std::condition_variable_any` will atomically wait and unlock the `Lock2` on entry. When it receives a signal, it will atomically wake and lock the `Lock2`. – Howard Hinnant Aug 12 '11 at 20:53
  • OK. So all the magic happens in the wait function. We do not need std::lock(m1,m2) if the whole function is atomic. The same way we are unlocking the mutexes one by one. – Shayan Pooya Aug 12 '11 at 21:00
  • Without `std::lock(m1, m2)` you are at risk of deadlock. If one thread locks them in one order, and another thread in another order, you're hosed. `std::lock` fixes that. – Howard Hinnant Aug 12 '11 at 21:03
  • I should clarify "atomic". It is an exaggeration in the way I used it. The `condition_variable_any::wait()` function will be atomic with respect to any other thread that calls `condition_variable_any::wait()`, `condition_variable_any::notify_one()`, or `condition_variable_any::notify_all()`. `std::lock(m1, m2)` will be atomic with respect to any thread that calls `m1.lock()` or `m2.lock()`. These are not truly *atomic* operations. The `condition_variable_any` functions are protected by an internal mutex. The `std::lock` function *should* be implemented as a try-and-back-off algorithm. – Howard Hinnant Aug 12 '11 at 23:34
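To illustrate the last two comments: the try-and-back-off idea behind std::lock can be sketched roughly like this (a simplified illustration, not how any particular standard library implements it). Because a thread never blocks on one mutex while holding the other, two threads locking the pair in opposite orders cannot deadlock each other:

#include <mutex>

void lock_both(std::mutex& a, std::mutex& b)
{
    for (;;)
    {
        a.lock();
        if (b.try_lock())   // got both: done
            return;
        a.unlock();         // back off: give up the one we hold

        b.lock();           // retry, starting from the other mutex
        if (a.try_lock())
            return;
        b.unlock();
    }
}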