
I have two threads: one is the producer and the other is the consumer. The consumer is always late (because of a costly function call, simulated in the code below with sleeps), so I use a ring buffer, since I can afford to lose some events.

Questions: Would it be better to use a condition variable instead of what I currently have: continuously polling the ring buffer's size to see whether events have been generated? I know the current while loop that checks the size is expensive, so I could add some yield calls to relax the tight loop, but I want to minimise the chance of dropped events.

Can I get rid of the pointers? In the current code I pass pointers to the ring buffer from main to the threads. Is there a nicer way to do the same thing?

#include <iostream>
#include <thread>
#include <chrono>
#include <vector>
#include <atomic>
#include <boost/circular_buffer.hpp>
#include <condition_variable>
#include <mutex>
#include <functional>

std::atomic<bool> mRunning;
std::mutex m_mutex;
std::condition_variable m_condVar;
long int data = 0;

class Detacher {
    public:
    template<typename Function, typename ... Args>
    void createTask(Function &&func, Args&& ... args) {
        m_threads.emplace_back(std::forward<Function>(func), std::forward<Args>(args)...);
    }

    Detacher() = default;
    Detacher(const Detacher&) = delete;
    Detacher & operator=(const Detacher&) = delete;
    Detacher(Detacher&&) = default;
    Detacher& operator=(Detacher&&) = default;

    ~Detacher() {
        for (auto& thread : m_threads) {
            thread.join();
        }
    }

    private:
    std::vector<std::thread> m_threads;
};

void foo_1(boost::circular_buffer<int> *cb)
{
    while (mRunning) {
        std::unique_lock<std::mutex> mlock(m_mutex);
        if (!cb->size())
            continue;
        int value = cb->front();
        cb->pop_front();
        mlock.unlock();
        if (!mRunning) {
            break;  
        }
        //simulate time consuming function call
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }
}

void foo_2(boost::circular_buffer<int> *cb)
{
    while (mRunning) {
        std::unique_lock<std::mutex> mlock(m_mutex);
        cb->push_back(data);
        data++;
        mlock.unlock();
        //simulate time consuming function call
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
}

int main()
{
    mRunning = true;
    boost::circular_buffer<int> cb(100);
    Detacher thread_1;
    thread_1.createTask(foo_1, &cb);
    Detacher thread_2;
    thread_2.createTask(foo_2, &cb);
    std::this_thread::sleep_for(std::chrono::milliseconds(20000));
    mRunning = false;
}
  • "Better" is a subjective term. Conditional variables will perform better than busy loop (btw you may want to pop more than 1 element from the queue before sleeping). Lock-free will perform better than condionals. But each step has a price: complexity. So it is up to you to decide whether it is worth it. As for "Can I get rid of pointers?" you can use std::ref instead: https://stackoverflow.com/questions/34078208/passing-object-by-reference-to-stdthread-in-c11/34078246 – freakish Jul 29 '19 at 09:45
  • @freakish: but won't the condition variable increase the chance of dropped frames if the other thread is busy with the expensive function call when the wakeup happens from the current thread? In the current code, whenever the expensive call finishes, the consumer can just check whether the size is non-zero and continue popping data. – noman pouigt Jul 29 '19 at 09:54
  • What do you mean by "dropped frame"? What's a frame? Either way, I'm not sure what you mean; the producer wakes up the consumer after push_back. There's no way for a busy sleep loop to outperform that. If you need to synchronize this process with some other one, then that is a different story and a different question. – freakish Jul 29 '19 at 09:58
  • @freakish Maybe my understanding is not right. How will the producer wake up the consumer when the consumer is stuck executing a function call and is not currently waiting for a wakeup? – noman pouigt Jul 29 '19 at 10:02
  • Well, you have one consumer so obviously once it finishes the job it will check the condition variable, see that there is more to do and continue. You just have to ensure that the long running process **does not** hold the lock (you need to unlock before starting the long running function, so you lock only for push/pop). – freakish Jul 29 '19 at 10:04
  • So perhaps there is a misunderstanding: waking up works only for sleeping threads. But a condition variable is more than just a lock. It also contains a "condition". If a consumer hits the condition variable then two things can happen: (1) the condition is not met and the thread goes to sleep (and internally releases the lock), or (2) the condition is met, the thread doesn't go to sleep, and it continues the work. When there is no sleeping consumer, waking up does nothing. But it doesn't matter, because you still have your condition (like "non-empty queue"). – freakish Jul 29 '19 at 10:28
  • @freakish: I was talking about https://docs.oracle.com/cd/E19455-01/806-5257/sync-30/index.html this. – noman pouigt Jul 29 '19 at 10:37
  • 1
    First of all, focus on the C++ standard. You don't know whether threads are implemented via pthread or not. And it doesn't matter. Because this particular issue (the lost wake-up) is solved with "wait with predicate": https://www.modernescpp.com/index.php/c-core-guidelines-be-aware-of-the-traps-of-condition-variables In other words, this issue is fully avoidable with std::conditional_variable. – freakish Jul 29 '19 at 10:57
  • 1
    I would favor cv.wait_for(.., duration) over sleep. This will 'wake up' after the specified duration (just like sleep(duration), or immediately if the condition variable is notified. This gives big improvements over sleep if notify is called by the producer as soon as data is placed in the queue. – ttemple Jul 29 '19 at 11:36

1 Answer


The producer is roughly 16 times faster than the consumer: it pushes one event about every 1 ms, while the consumer handles one event about every 16 ms. The consumer can therefore keep up with only about 1 in 16 events, so roughly 15 out of every 16 events (about 94%) will always be discarded, regardless of how the waiting is implemented.