
I am planning to use boost::lockfree::queue for my multithreaded application. A Boost example illustrates lock-free queue consumption like this:

boost::atomic<bool> done (false);
void consumer(void)
{
    int value;
    while (!done) {
        while (queue.pop(value))
            ++consumer_count;
    }

    while (queue.pop(value))
        ++consumer_count;
}

My question is about this part:

    while (!done) {
    //do something
    }

I usually use a condition variable for such cases, but the simplicity of the above snippet is far more tempting than working through the complexity of condition variables.

Although the consumer will have its own thread(s), it loops for almost the entire duration of the program. What worries me is that much of the time the //do something part is not invoked (the queue is empty), so this thread wastes a lot of CPU time that could be given to other threads. Am I right? Is this a common practice?

I need someone to tell me I am wrong and that I shouldn't worry about this, for such-and-such reasons, or to suggest a better approach.

Thanks.

rahman
  • The code isn't production code. It's meant for benchmarking. – Kerrek SB Aug 04 '14 at 09:30
  • @KerrekSB So you don't suggest using an infinite loop for such cases? – rahman Aug 04 '14 at 09:35
  • @rahman It's not the loop which causes the problem that you are expecting. In fact, a loop is often necessary to ensure correctness when using a condition variable as well. Looping *without waiting on a lock* is what makes busy waiting wasteful. Also, it's not really *infinite* since it ends when `done` is true. – eerorika Aug 04 '14 at 09:50
  • @rahman: What does "such cases" mean? If you want to benchmark your data structure, then the code is appropriate. – Kerrek SB Aug 04 '14 at 10:49
  • @KerrekSB No, I don't mean to benchmark. I am really going to use a lock-free feature in my application; the latency between production and consumption is not critical. I just didn't want to lock the threads when sending data to this queue. You may also have a look at my comment below the accepted answer and give me your opinion. Thanks. – rahman Aug 04 '14 at 15:37
  • @rahman: I think the answer, and your question, are both missing the point. Lock-free programming solves a very different problem from sleeping and blocking. The former is about synchronization, the latter is about scheduling. Spin locks are about synchronization, too, and not about scheduling. If you need to block, then use a condition variable or semaphore; it's the right tool. – Kerrek SB Aug 05 '14 at 11:41

2 Answers


Whether busy waiting is more or less efficient than blocking depends on how long you are going to wait on average: a few loop iterations can be cheaper than a context switch.

The point of using a lock-free queue is that it is lock-free. If you want to block, you are better off using a condition variable, as you suggested, together with another (locking) queue.

TNA

It is a very common practice in latency-sensitive applications, i.e. applications for which the time spent waking up a thread is not acceptable.

Yes, in that case CPU time is wasted checking the boolean value; this is called "spinning". Spinlocks are implemented in a similar fashion, and they are preferable precisely in scenarios where the expected wait is shorter than the cost of blocking and waking up.

When the latency of the producer-to-consumer path is not critical, you should prefer condition variables (or even explicit sleeping) so that the CPU can be shared with other threads/processes. And anyway, when latency is critical you rarely want a lock-free container, which usually incurs significant overhead to avoid locking.

quantdev
  • -1 - it's not "very common" to let a thread spin most of the time - it's very exceptional - it says this thread is so important it can hog a CPU core regardless of where the program's run, which is usually only appropriate if the hardware's dedicated to that application. In the example, it's done for benchmarking as Kerrek observed. The queue interface in question is better used in combination with periodic polling, not seen as encouraging spinning. "when latency is critical, you rarely want a lock-free structure" - not true at all... many atomic operations are lock free and desirable. – Tony Delroy Aug 04 '14 at 11:01
  • If you read the sentence, I explicitly say "for **latency sensitive** applications"; it's not a general statement. And I maintain it is very common for such cases (server applications that, btw, run on dedicated cores). As for the second statement, I meant "containers" (I edited). A lock-free queue is rarely desirable when latency is critical... – quantdev Aug 04 '14 at 11:24
  • @TonyD after reading quantdev's answer, since `latency of the producer-to-consumer path is not critical...` I resorted to the solution suggested by both of you. If I understood correctly, the two statements are almost the same: `used in combination with periodic polling` and `explicit sleeping`. So I used boost::this_thread::sleep(boost::posix_time::seconds(0.5)) in the outer loop. Do you guys think I am on the right path? – rahman Aug 04 '14 at 12:50
  • @rahman: not really the right path... hardcoding a sleep unnecessarily delays processing of events, and it all gets a bit clumsy - better to just use a queue that's designed to support blocking rather than stick to a lock-free queue for no particular reason, e.g. Intel TBB's [concurrent_bounded_queue](http://www.threadingbuildingblocks.org/docs/help/reference/containers_overview/concurrent_bounded_queue_cls.htm). But if the libraries you're using have no such offerings, sleeping is better than not. – Tony Delroy Aug 05 '14 at 01:56
  • @quantdev re-reading your answer, it's that "latency sensitive" is too weak a term to match "i.e. waking up...not acceptable" - "latency critical" fits better, but I was a bit harsh and will remove my downvote. Regarding lock-free containers having worse latency - that very quickly gets complicated - always good for people who really have to care to benchmark locking and lock-free alternatives with actual workloads, but it's fair enough to warn that it's not all in lock-free's favour. Cheers. – Tony Delroy Aug 05 '14 at 02:17
  • @TonyD thanks; for now, bringing another library into our development just for a lock-free queue might not be welcomed by the team. And the way you are talking about lock-free implementations, it seems they are not going to perform super well (the way I expected from Boost). So I will stick to the hardcoded wait; let's see how it performs in the end. I may update this post later with some results. – rahman Aug 06 '14 at 01:43