
A similar problem is this one: Are threads waiting on a lock FIFO? However, in that problem only one thread at a time executes the protected code once the lock is acquired, and in the end every thread will have executed it.

What I would like to do is execute the protected code only once, but return true for all threads that are queuing for the method call at that moment.

Basically, the protected code is a global checkpoint that is relevant for all threads waiting at that moment, i.e. doing N consecutive checkpoints would not achieve more than doing just one.

Note that while the checkpointing is done, there will be other calls to the method, which themselves need a new checkpoint call.

I believe what I want are "batch-wise" synchronized calls to the global function.

How can I achieve this in C++, perhaps with Boost?


3 Answers


You seem to be looking for try_lock().

Given some Boost.Thread Lockable, a call to Lockable::try_lock() returns true if it can acquire the lock at that moment, and false if it cannot.

When your thread reaches a checkpoint, have it try to acquire this lock. If it fails, another thread is already in the function. If it succeeds, check some bool to see if the checkpoint has already been run. If it has, release the lock and continue. If it hasn't, run the checkpoint function, set the checkpoint bool to true, and then release the lock.
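A rough sketch of that scheme, assuming a boost::mutex shared by all threads; runCheckpoint() and g_checkpointDone are placeholder names, not anything from the answer:

#include <boost/thread/mutex.hpp>

void runCheckpoint();                    // the protected code, defined elsewhere (placeholder)

boost::mutex g_checkpointMutex;
bool         g_checkpointDone = false;   // only touched while holding the mutex

void reachCheckpoint()
{
    if (g_checkpointMutex.try_lock()) {
        // We own the lock: run the checkpoint only if nobody has yet.
        if (!g_checkpointDone) {
            runCheckpoint();
            g_checkpointDone = true;
        }
        g_checkpointMutex.unlock();
    }
    // If try_lock() failed, another thread is already in the function,
    // so this thread simply continues.
}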

  • Say try_lock() returns false for some thread. This means this thread needs to wait for the lock, and regardless of whether the bool is true or false, I can't be sure that the checkpoint was run for the state of this thread? – fatyun Mar 29 '11 at 15:30
  • If `try_lock()` returns false for some thread, it means that the lock is already owned by another thread. Assuming you have one lock for every checkpoint, you can safely assume that another thread is either currently executing the checkpoint code or has already executed the checkpoint code. Am I misunderstanding what you want? – Collin Dauphinee Mar 29 '11 at 16:43

What you seem to want looks like a barrier, which is provided by Boost. If that doesn't help you, you can build something with condition variables, also in Boost.
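For the barrier idea, a minimal sketch with boost::barrier could look like this; it assumes the number of participating threads is known up front, and NUM_THREADS and runCheckpoint() are placeholder names:

#include <boost/thread/barrier.hpp>

void runCheckpoint();                       // the protected code, defined elsewhere (placeholder)

const unsigned NUM_THREADS = 4;             // placeholder: must match the real thread count
boost::barrier g_checkpointBarrier(NUM_THREADS);

void reachCheckpoint()
{
    // wait() blocks until all NUM_THREADS threads have arrived;
    // exactly one of them gets true back and performs the checkpoint.
    if (g_checkpointBarrier.wait()) {
        runCheckpoint();
    }
    // Caveat: the other threads are released immediately, possibly before
    // the checkpoint has finished, so an extra signal (e.g. a condition
    // variable) is needed if they must wait for it to complete.
}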

  • Very interesting. If for all N threads only 1 call to the global function is enough, and given this: "wait() Returns: Exactly one of the N threads will receive a return value of true, the others will receive a value of false." I should be able to combine it with a global variable, that is set by the thread that got True as a response, and that each False-thread waits for? – fatyun Mar 29 '11 at 15:00
  • Also, this seems to assume a fixed N. How about if N is unknown, and could even be very low, say 1 or 2? – fatyun Mar 29 '11 at 15:23
  • After some thought, I guess that barrier is not what you want. Either dauphic's approach is okay and you return as long as the lock is taken, or you need an extra timer to delay and group your accesses, and you group with a condition variable – stefaanv Mar 29 '11 at 15:27
  • Do you mean a watchdog timer over the barrier->wait() -- either the barrier fills up or the timer triggers the function call? – fatyun Mar 29 '11 at 15:49
  • No, I mean a grouping timer to make your batch, so after the delay, maybe 5 threads will be waiting. Then you can notify 1 to perform and then notify the others (broadcast) to return. Did you already have a look at Boost's condition variables? – stefaanv Mar 29 '11 at 15:59
  • Yes, and it seems like it should be doable with condition variables. However, how do I notify 1 AND also all the others? On one variable you can do one or the other, and there is no support for waiting on multiple condition variables... – fatyun Mar 29 '11 at 16:26
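Following up on the condition-variable discussion above, one possible sketch (all names here are illustrative, not taken from the answer) is to let a thread that finds no checkpoint in progress run the checkpoint itself, while threads that arrive during a running checkpoint wait for the next batch and are released with notify_all():

#include <boost/thread/mutex.hpp>
#include <boost/thread/condition_variable.hpp>
#include <boost/thread/locks.hpp>

class BatchedCheckpoint
{
public:
    BatchedCheckpoint() : m_running(false), m_started(0), m_completed(0) {}

    // Every thread calls this; it returns once a checkpoint that started
    // at or after this call has completed.
    template <typename Fn>
    void checkpoint(Fn runCheckpoint)
    {
        boost::unique_lock<boost::mutex> lock(m_mutex);
        if (!m_running) {
            lead(lock, runCheckpoint);      // no checkpoint running: do it ourselves
            return;
        }
        // A checkpoint is running, but it started before we arrived,
        // so we need the next one.
        unsigned needed = m_started + 1;
        while (m_completed < needed) {
            if (!m_running) {
                lead(lock, runCheckpoint);  // nobody leads the next batch yet: do it ourselves
                return;
            }
            m_cond.wait(lock);
        }
    }

private:
    template <typename Fn>
    void lead(boost::unique_lock<boost::mutex> & lock, Fn runCheckpoint)
    {
        m_running = true;
        unsigned batch = ++m_started;
        lock.unlock();
        runCheckpoint();                    // the protected code, run outside the lock
        lock.lock();
        m_running = false;
        m_completed = batch;
        m_cond.notify_all();                // release every thread waiting on this batch
    }

    boost::mutex              m_mutex;
    boost::condition_variable m_cond;
    bool                      m_running;    // is a checkpoint currently executing?
    unsigned                  m_started;    // number of checkpoints started
    unsigned                  m_completed;  // number of checkpoints completed
};

All threads would then call something like cp.checkpoint(&doGlobalCheckpoint) on a shared BatchedCheckpoint instance; there is no separate "notify 1" step because the leading thread selects itself.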

Here is roughly how I would do it. I am assuming a mutex class with lock() and unlock() operations; boost::mutex fits and is used below.

#include <boost/thread/mutex.hpp>

class DoItOnce
{
private:
    bool            m_amFirst;
    boost::mutex    m_mutex;
public:
    class Op;               // declare the nested class before befriending it
    friend class Op;        // give Op access to m_amFirst and m_mutex

    DoItOnce() : m_amFirst(true) {}   // boost::mutex initialises and destroys itself
    void reset()
    {
        m_mutex.lock();
        m_amFirst = true;
        m_mutex.unlock();
    }

    //--------
    // Nested class: a scoped lock that also reports whether the calling
    // thread is the first one in since construction or the last reset().
    //--------
    class Op {
    public:
        Op(DoItOnce & sync)
            : m_sync(sync)
        {
            m_sync.m_mutex.lock();
            m_amFirst = m_sync.m_amFirst;   // remember whether we are first
            m_sync.m_amFirst = false;       // everyone after us is not
        }
        ~Op() { m_sync.m_mutex.unlock(); }
        bool amFirst() { return m_amFirst; }
    private:
        DoItOnce &  m_sync;
        bool        m_amFirst;
    }; // end of nested class
}; // end of outer class

Here is an example to illustrate its intended use. You will implement the doWork() operation and have all your threads invoke it.

class WorkToBeDoneOnce
{
private:
    DoItOnce    m_sync;    
public:
    bool doWork()
    {
        DoItOnce::Op    scopedLock(m_sync);

        if (!scopedLock.amFirst()) {
            // The work has already been done.
            return true;
        }
        ... // Do the work
        return true;
    }
    void resetAmFirstFlag()
    {
        m_sync.reset();
    }
};
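For completeness, driving this from several threads could look roughly like the following; the worker function and the thread count are only illustrative:

#include <boost/thread.hpp>

WorkToBeDoneOnce work;

void worker()
{
    // Many threads call doWork(), but the "Do the work" part runs only in
    // the thread for which amFirst() is true; the others return immediately.
    work.doWork();
}

int main()
{
    boost::thread_group threads;
    for (int i = 0; i < 10; ++i) {
        threads.create_thread(&worker);
    }
    threads.join_all();
    return 0;
}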

If you are confused by my use of the DoItOnce::Op nested class, then you can find an explanation of this coding idiom in my Generic Synchronisation Policies paper, which is available here in various formats (HTML, PDF and slides).
