
I want to use a simple thread-safe std::queue in a program where multiple threads access the same queue. The first thing that came to mind is protecting the queue operations with a mutex, as below:

/*Enqueue*/
mutex.lock();
queue.push(val);
mutex.unlock();

/*Dequeue*/
mutex.lock();
val = queue.front();
queue.pop();
mutex.unlock();
/*some operation on val*/

I've seen many robust implementations of thread-safe queues that use condition variables, for example https://stackoverflow.com/a/16075550/3598205 . Will there be a significant difference in performance if I only have two threads accessing the same queue?

sa_penguin
  • It all depends on the access pattern. You need to profile if you are concerned with performance. – bolov Jun 17 '20 at 17:41

2 Answers


Mutexes and condition variables do two different things, although they are often used together.

To ensure that only one thread can access a resource at a time, use a mutex. The code you posted shows an example of this.
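
For what it's worth, a slightly more idiomatic version of that mutexed code uses std::lock_guard, which releases the mutex automatically even if an exception is thrown. A minimal sketch (the queue_ and mutex_ names are just for illustration):

#include <mutex>
#include <queue>

std::queue<int> queue_;   // shared queue (illustrative)
std::mutex mutex_;        // protects queue_

void enqueue(int val)
{
    std::lock_guard<std::mutex> lock(mutex_);  // unlocks when lock leaves scope
    queue_.push(val);
}

bool dequeue(int& val)
{
    std::lock_guard<std::mutex> lock(mutex_);
    if (queue_.empty())
        return false;          // nothing to take
    val = queue_.front();
    queue_.pop();              // pop while still holding the lock
    return true;
}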

To block a worker thread until there is something for it to do, have it wait on a condition variable (which is then signalled by another thread providing some kind of work item). There is an example of this over at cppreference.
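
A rough sketch of that pattern, in the spirit of the cppreference example and the answer linked in the question (the BlockingQueue name and interface are illustrative, not a standard API):

#include <condition_variable>
#include <mutex>
#include <queue>

template <typename T>
class BlockingQueue {
public:
    void push(T item)
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(item));
        }                         // release the lock before notifying
        cv_.notify_one();         // wake one waiting consumer
    }

    T pop()                       // blocks until an item is available
    {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        T item = std::move(queue_.front());
        queue_.pop();
        return item;
    }

private:
    std::queue<T>           queue_;
    std::mutex              mutex_;
    std::condition_variable cv_;
};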

Your first thought when writing multi-threaded code should be to write robust, safe code. It's very easy to make mistakes, especially if you're new to the area, and bugs are very hard to diagnose since they lead to sporadic, unpredictable errors. Worry about performance later.

Paul Sanders
  • OP seems to hesitate between a safe, simple mutexed queue and a fancy lock-free structure where no lock is needed most of the time. The latter is more advanced but harder to maintain, debug and test. I'd say they need to check their actual needs. – Jeffrey Jun 17 '20 at 17:53

Your application will be limited by one main thing; we call this the bottleneck. In my experience, 90% of the applications I've seen were limited by memory bandwidth, transferring data between main memory and the CPU.

Different applications have different bottlenecks. It could be GPU performance, or disk access. It could be raw CPU power, or maybe contention accessing the queue above.

Very likely, the mutex will be just as good as the fancy lock-free queue. But you don't know until you profile.

It might very well happen that your application is strictly limited by the access to this queue. For example, if your app is a low-latency market data exchange for a financial institution and the queue is holding the buy/sell directives, then it will make a difference. The two threads could be constantly writing to different locations on a queue that has a couple hundred items (so, on different memory pages).

Or it might be that your application is always waiting on the GPU to render frames and the queue holds the player weapon changes that rendering and gameplay threads access just a couple times per frame.

Profile and check.
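
As a starting point, a crude two-thread micro-benchmark of the mutexed queue could look like the sketch below. The item count and the producer/consumer split are arbitrary choices for illustration; a real measurement should use your actual workload and a profiler.

#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

int main()
{
    constexpr int kItems = 1000000;   // arbitrary workload size
    std::queue<int> q;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    const auto start = std::chrono::steady_clock::now();

    std::thread producer([&] {
        for (int i = 0; i < kItems; ++i) {
            {
                std::lock_guard<std::mutex> lock(m);
                q.push(i);
            }
            cv.notify_one();
        }
        {
            std::lock_guard<std::mutex> lock(m);
            done = true;              // tell the consumer no more items are coming
        }
        cv.notify_one();
    });

    std::thread consumer([&] {
        long long sum = 0;
        for (;;) {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [&] { return !q.empty() || done; });
            if (q.empty() && done)
                break;                // queue drained and producer finished
            sum += q.front();
            q.pop();
        }
        (void)sum;                    // result unused in this sketch
    });

    producer.join();
    consumer.join();

    const auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::steady_clock::now() - start);
    std::cout << kItems << " items in " << ms.count() << " ms\n";
}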

Jeffrey
  • 11,063
  • 1
  • 21
  • 42
  • If you look at the question you linked, notice that @quantdev (quantitative finance developer?) suggests exactly a lock-free queue from Boost. But apart from niche cases like that, the mutex will do. – Jeffrey Jun 17 '20 at 18:00
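
For completeness, using the Boost lock-free queue mentioned in that question could look roughly like this (a sketch, assuming Boost is available; the capacity and the int payload are arbitrary, and boost::lockfree::queue only accepts element types with trivial assignment and destruction):

#include <boost/lockfree/queue.hpp>
#include <iostream>
#include <thread>

int main()
{
    boost::lockfree::queue<int> q(1024);  // fixed capacity chosen for illustration

    std::thread producer([&] {
        for (int i = 0; i < 100; ++i)
            while (!q.push(i))            // push fails when the queue is full; retry
                std::this_thread::yield();
    });

    std::thread consumer([&] {
        int received = 0;
        int value;
        while (received < 100) {
            if (q.pop(value))             // pop fails when the queue is empty
                ++received;
            else
                std::this_thread::yield();
        }
        std::cout << "received " << received << " items\n";
    });

    producer.join();
    consumer.join();
}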