I recently heard about these new C++ standard features:

  1. std::latch
  2. std::barrier

I cannot figure out in which situations they are applicable, or when each is useful over the other.

  • If someone could give an example of how to use each of them wisely, it would be really helpful.
Buddhika Chaturanga

  • From the linked `latch` reference: "There is no possibility to increase or reset the counter, which makes the latch a single-use barrier." And from the linked `barrier` reference: "Unlike `std::experimental::latch`, barriers are reusable; once the participating threads are released from a barrier's synchronization point, they can reuse the same barrier." So do you need to reuse the barrier, or could it be a one-time thing? – Some programmer dude Feb 26 '18 at 10:17
  • @Someprogrammerdude reusing barrier ... – Buddhika Chaturanga Feb 26 '18 at 10:19
  • I hope this link might help you http://www.modernescpp.com/index.php/latches-and-barriers –  Feb 26 '18 at 10:26
  • "[...] in which situations that they are applicable and useful [...]" Probably none, the way they are defined now: `std::latch` has the problem that it requires one explicit synchronization to provide a single explicit synchronization, which makes it sort of pointless. `std::barrier` has the problem that the semantics of the first and the second call to `arrive_and_wait()` differ: The first time, it behaves like a `std::latch`, counting down for any thread that may call it. After that, the set of participating threads is fixed and no other thread can join it or replace an existing thread. – cmaster - reinstate monica Feb 26 '18 at 15:33
  • @cmaster thanks for the info :) really helpful. – Buddhika Chaturanga Feb 27 '18 at 04:57
  • @cmaster-reinstatemonica I don't understand what your objection to `std::latch` means. What do you mean by "explicit synchronization"? A latch *is* an explicit synchronization mechanism so your comment seems circular. Do you mean you have to use it with *another* mechanism? I don't think that's true, but even if it were then I don't think it would make it useless e.g. `std::condition_variable` can only be used in combination with a mutex, that doesn't mean it isn't useful. Is there a problem with the example use of `std::latch` in the answer I gave? – Arthur Tacca Jun 29 '20 at 04:53
  • @cmaster-reinstatemonica As for `std::barrier`, almost every application will use the same set of threads for every use of it anyway. For the rare ones that don't, you can just create a fresh object each time (e.g. with a shared pointer to manage lifetime), which admittedly is slightly inconvenient but hardly makes it useless. – Arthur Tacca Jun 29 '20 at 04:54
  • @ArthurTacca Yes, the circularity is exactly the point of my comment: In order to use a `std::latch`, you need to construct it first. Construction can only happen within a single thread. And after construction, the identity of the `std::latch` must be revealed to the other participating threads in a non-racy fashion. I.e., you need to synchronize your threads somehow to set them up in such a way that they can use the `std::latch` to synchronize. That defeats the purpose of the `std::latch`, imho. – cmaster - reinstate monica Jun 29 '20 at 07:08
  • @ArthurTacca As to `std::barrier`, yes, you *can* use it. However, barriers are very expensive, so expensive that you want to avoid their use at all costs anyways: If you sync 100 threads with a barrier, and one thread is just 1ms late (because it was interrupted by the kernel for whatever reasons), you've just burned 99ms CPU time. And that's before considering overhead. Successful parallelization relies on lightweight, fine-grained locks that are typically not contended. No barrier will ever offer that. – cmaster - reinstate monica Jun 29 '20 at 07:14
  • @cmaster-reinstatemonica Thanks for replying. I agree that the situation you describe is a bad use of a barrier. Any synchronisation primitive will cause performance problems if used incorrectly, so it's certainly right to point out those sorts of risks. – Arthur Tacca Jun 29 '20 at 07:34
  • @ArthurTacca After reading your nice, detailed answer, yes you show a use case where `std::latch` seems *convenient*. What I said still applies, though: You are creating the latch in the master, then you push its pointer into the work item queue which does the first synchronization, and finally you use the latch to sync your results back. That's all nice and swell. It's rather expensive from a performance point of view, though: Taking stuff out of the queue potentially contends a lock across all your worker threads. That explodes when you have many workers and small tasks. – cmaster - reinstate monica Jun 29 '20 at 07:36
  • @cmaster-reinstatemonica That's true, but it sounds like a criticism of using a queue to distribute work to threads rather than a criticism of latches. Using a queue to distribute work to threads is definitely something that people do! You're right there are situations that it might not be appropriate (e.g. the work item is so tiny that they take less time than the queue lock) but there are mitigations for this (e.g. lock free queue or batching multiple work items per queue entry) and latches really have no effect on that. Plus, there are situations where it's just not a problem to begin with. – Arthur Tacca Jun 29 '20 at 10:10
  • @ArthurTacca Yes, it's indeed a more fundamental criticism of approaches that perform more synchronization than strictly necessary. If you sync data production/consumption with a barrier, you are forcing the producer to wait on the consumer needlessly. If you sync workers with a barrier, all your workers will have to wait for the last one to arrive. If you use a work distributing queue, workers are going to contend for a single lock or atomic variable. The best communications patterns are those where a single sender sends a message to a single receiver in a non-blocking fashion. – cmaster - reinstate monica Jun 29 '20 at 10:34

1 Answer


Very short answer

They're really aimed at quite different goals:

  • Barriers are useful when you have a bunch of threads and you want to synchronise across all of them at once, for example to do something that operates on all of their data at once.
  • Latches are useful if you have a bunch of work items and you want to know when they've all been handled, and aren't necessarily interested in which thread(s) handled them.

Much longer answer

Barriers and latches are often used when you have a pool of worker threads that do some processing and a queue of work items that is shared between them. It's not the only situation where they're used, but it is a very common one and does help illustrate the differences. Here's some example code that would set up some threads like this:

#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

const size_t worker_count = 7; // or whatever
std::vector<std::thread> workers;
std::vector<Proc> procs(worker_count);
Queue<std::function<void(Proc&)>> queue;
for (size_t i = 0; i < worker_count; ++i) {
    workers.push_back(std::thread(
        [p = &procs[i], &queue]() {
            // Keep popping and running work items until pop_back()
            // returns an empty function, which signals shutdown.
            while (auto fn = queue.pop_back()) {
                fn(*p);
            }
        }
    ));
}

There are two types that I have assumed exist in that example:

  • Proc: a type specific to your application that contains data and logic necessary to process work items. A reference to one is passed to each callback function that's run in the thread pool.
  • Queue: a thread-safe blocking queue. There is nothing like this in the C++ standard library (somewhat surprisingly) but there are a lot of open-source libraries containing them e.g. Folly MPMCQueue or moodycamel::ConcurrentQueue, or you can build a less fancy one yourself with std::mutex, std::condition_variable and std::deque (a minimal sketch follows this list).
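
For illustration, here is a minimal sketch of such a queue built from std::mutex, std::condition_variable and std::deque. The Queue name and its push_back()/pop_back() interface are just the assumptions the snippets in this answer make, not a standard API; a real implementation would want shutdown handling, and would probably pop from the front for FIFO order.

#include <condition_variable>
#include <deque>
#include <mutex>

template <typename T>
class Queue {
public:
    void push_back(T item) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            items_.push_back(std::move(item));
        }
        cv_.notify_one();
    }

    // Blocks until an item is available, then removes and returns it.
    T pop_back() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !items_.empty(); });
        T item = std::move(items_.back());
        items_.pop_back();
        return item;
    }

private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::deque<T> items_;
};

With the worker loop above, the pool can then be shut down by pushing one empty std::function per worker thread.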

Latch

A latch is often used to wait until some work items you push onto the queue have all finished, typically so you can inspect the result.

std::vector<WorkItem> work = get_work();
std::latch latch(work.size()); // std::latch is declared in <latch> (C++20)
for (WorkItem& work_item : work) {
    queue.push_back([&work_item, &latch](Proc& proc) {
        proc.do_work(work_item);
        latch.count_down();
    });
}
latch.wait();
// Inspect the completed work

How this works:

  1. The threads will - eventually - pop the work items off of the queue, possibly with multiple threads in the pool handling different work items at the same time.
  2. As each work item is finished, latch.count_down() is called, effectively decrementing an internal counter that started at work.size().
  3. When all work items have finished, that counter reaches zero, at which point latch.wait() returns and the producer thread knows that the work items have all been processed.

Notes:

  • The latch count is the number of work items that will be processed, not the number of worker threads.
  • The count_down() method could be called zero times, once, or multiple times on each thread, and the number of calls can differ between threads. For example, even if you push 7 work items while there are 7 threads in the pool, it might be that all 7 items are processed by the same thread (rather than one per thread) and that's fine.
  • Other unrelated work items could be interleaved with these ones (e.g. because they were pushed onto the queue by other producer threads) and again that's fine.
  • In principle, it's possible that latch.wait() won't be called until after all of the worker threads have already finished processing all of the work items. (This is the sort of odd condition you need to look out for when writing threaded code.) But that's OK, it's not a race condition: latch.wait() will just immediately return in that case.
  • An alternative to using a latch is to have a second queue, in addition to the one shown here, that contains the results of the work items. The thread pool callback pushes results onto that queue while the producer thread pops results off of it. Basically, it carries data in the opposite direction to the queue in this code. That's a perfectly valid strategy too, in fact if anything it's more common, but there are other situations where the latch is more useful. A rough sketch of it follows this list.
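
Here is a rough sketch of that results-queue alternative, reusing the Queue sketch from earlier. The Result type, and the assumption that do_work() returns one, are hypothetical details for illustration.

std::vector<WorkItem> work = get_work();
Queue<Result> results;  // flows from the workers back to the producer
for (WorkItem& work_item : work) {
    queue.push_back([&work_item, &results](Proc& proc) {
        // Hypothetical: assumes do_work() returns a Result
        results.push_back(proc.do_work(work_item));
    });
}
for (size_t i = 0; i < work.size(); ++i) {
    Result result = results.pop_back(); // blocks until a worker delivers one
    // Inspect each result as it arrives, rather than all at once at the end
}

One difference from the latch version is that the producer can start inspecting results as soon as the first one arrives, instead of waiting for all of them.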

Barrier

A barrier is often used to make all threads wait simultaneously so that the data associated with all of the threads can be operated on simultaneously.

using Fn = std::function<void()>;
Fn completionFn = [&procs]() {
    // Do something with the whole vector of Proc objects
};
auto barrier = std::make_shared<std::barrier<Fn>>(worker_count, completionFn); // std::barrier is declared in <barrier> (C++20)
auto workerFn = [barrier](Proc&) {
    barrier->arrive_and_wait();
};
for (size_t i = 0; i < worker_count; ++i) {
    queue.push_back(workerFn);
}

How this works:

  1. All of the worker threads will pop one of these workerFn items off of the queue and call barrier->arrive_and_wait().
  2. Once all of them are waiting, one of them will call completionFn() while the others continue to wait.
  3. Once that function completes, they will all return from arrive_and_wait() and be free to pop other, unrelated, work items from the queue.

Notes:

  • Here the barrier count is the number of worker threads.
  • It is guaranteed that each thread will pop precisely one workerFn off of the queue and handle it. Once a thread has popped one off the queue, it will wait in barrier->arrive_and_wait() until all the other copies of workerFn have been popped off by other threads, so there is no chance of it popping another one off.
  • I used a shared pointer to the barrier so that it will be destroyed automatically once all the work items are done. This wasn't an issue with the latch, because there we could just make it a local variable in the producer thread function: the producer waits (in latch.wait()) until the worker threads have finished using the latch. Here the producer thread doesn't wait for the barrier so we need to manage the memory in a different way.
  • If you did want the original producer thread to wait until the barrier has finished, that's fine: it can call arrive_and_wait() too, but you will obviously need to pass worker_count + 1 to the barrier's constructor. (And then you wouldn't need to use a shared pointer for the barrier; see the sketch after this list.)
  • If other work items are being pushed onto the queue at the same time, that's fine too, although it will potentially waste time: some threads will just sit there waiting at the barrier while other threads are busy with other work items before they reach it.
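
Here is a minimal sketch of the worker_count + 1 variant from the note above, with the producer participating in the barrier itself; completionFn is the same as before.

std::barrier barrier(worker_count + 1, completionFn);
// Capturing by reference is OK here: the producer waits below, so the
// barrier outlives all copies of workerFn (as the note above explains).
auto workerFn = [&barrier](Proc&) {
    barrier.arrive_and_wait();
};
for (size_t i = 0; i < worker_count; ++i) {
    queue.push_back(workerFn);
}
barrier.arrive_and_wait(); // the producer is the extra participant
// completionFn has run by the time this returns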

!!! DANGER !!!

The last bullet point about other work being pushed onto the queue being "fine" is only the case if that other work doesn't also use a barrier! If you have two different producer threads putting work items with a barrier on to the same queue and those items are interleaved, then some threads will wait on one barrier and others on the other one, and neither will ever reach the required wait count - DEADLOCK. One way to avoid this is to only ever use barriers like this from a single thread, or even to only ever use one barrier in your whole program (this sounds extreme but is actually quite a common strategy, as barriers are often used for one-time initialisation on startup). Another option, if the thread queue you're using supports it, is to atomically push all work items for the barrier onto the queue at once so they're never interleaved with any other work items. (This won't work with the moodycamel queue, which supports pushing multiple items at once but doesn't guarantee that they won't be interleaved with items pushed on by other threads.)
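
If your queue does support atomic batch pushes, the mitigation might look something like this; push_all() is a hypothetical method (not part of the Queue sketch earlier) that would have to insert the whole batch under a single lock:

// Hypothetical: push_all() inserts the whole batch atomically, so another
// producer's items can never end up interleaved within it.
std::vector<std::function<void(Proc&)>> batch(worker_count, workerFn);
queue.push_all(std::move(batch));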

Barrier without completion function

At the point when you asked this question, the proposed experimental API didn't support completion functions. Even the current API allows you to omit them, so I thought I should show an example of how barriers can be used like that too.

auto barrier = std::make_shared<std::barrier<>>(worker_count);
auto workerMainFn = [&procs, barrier](Proc&) {
    barrier->arrive_and_wait();
    // Do something with the whole vector of Proc objects
    barrier->arrive_and_wait();
};
auto workerOtherFn = [barrier](Proc&) {
    barrier->arrive_and_wait();  // Wait for work to start
    barrier->arrive_and_wait();  // Wait for work to finish
};
queue.push_back(std::move(workerMainFn));
for (size_t i = 0; i < worker_count - 1; ++i) {
    queue.push_back(workerOtherFn);
}

How this works:

The key idea is to wait on the barrier twice in each thread, and do the work in between. The first waits serve the same purpose as in the previous example: they ensure any earlier work items in the queue are finished before this work starts. The second waits ensure that any later items in the queue don't start until this work has finished.

Notes:

The notes are mostly the same as the previous barrier example, but here are some differences:

  • One difference is that, because the barrier is not tied to a specific completion function, it's more likely that you can share one barrier between multiple uses and, as in the latch example, avoid the use of a shared pointer.
  • This example makes it look like using a barrier without a completion function is much more fiddly, but that's just because this situation isn't well suited to them. Sometimes, all you need is to reach the barrier. For example, whereas we initialised a queue before the threads started, maybe you have a queue for each thread but initialised in the threads' run functions. In that case, maybe the barrier just signifies that the queues have been initialised and are ready for other threads to pass messages to each other. In that case, you can use a barrier with no completion function without needing to wait on it twice like this (a rough sketch follows this list).
  • You could actually use a latch for this, calling count_down() and then wait() in place of arrive_and_wait(). But using a barrier makes more sense, both because calling the combined function is a little simpler and because using a barrier communicates your intention better to future readers of the code.
  • In any case, the "DANGER" warning from before still applies.
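
Here is a rough sketch of that single-wait initialisation pattern. The Message type, the per_thread_queues vector and the thread_main() function are hypothetical names for this illustration; the point is just the single arrive_and_wait() call per thread.

std::vector<std::unique_ptr<Queue<Message>>> per_thread_queues(worker_count);
std::barrier<> barrier(worker_count);

void thread_main(size_t thread_index) {
    // Each thread sets up its own queue...
    per_thread_queues[thread_index] = std::make_unique<Queue<Message>>();
    // ...then waits once. After this returns, every queue is known to
    // exist, so the threads can safely pass messages to each other.
    barrier.arrive_and_wait();
    // ... main loop: pop messages, push messages to other threads ...
}
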
Arthur Tacca
  • 8,833
  • 2
  • 31
  • 49