
I've been told several times that I should use std::async with the std::launch::async policy for fire & forget type tasks (so that it does its magic, preferably on a new thread of execution).

Encouraged by these statements, I wanted to see how std::async compares to:

  • sequential execution
  • a simple detached std::thread
  • my simple async "implementation"

My naive async implementation looks like this:

template <typename F, typename... Args>
auto myAsync(F&& f, Args&&... args) -> std::future<decltype(f(args...))>
{
    std::packaged_task<decltype(f(args...))()> task(std::bind(std::forward<F>(f), std::forward<Args>(args)...));
    auto future = task.get_future();

    std::thread thread(std::move(task));
    thread.detach();

    return future;
}

Nothing fancy here: it packs the functor f and its arguments into a std::packaged_task, launches the task on a new std::thread which is immediately detached, and returns the std::future obtained from the task.
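For reference, a minimal usage sketch (my own illustration; the lambda and its argument are placeholders, and <future>, <thread>, <functional> and <iostream> are assumed to be included):

int main()
{
    // Run a placeholder task on a detached thread and keep the future for the result.
    std::future<int> result = myAsync([](int x) { return x * 2; }, 21);

    // get() blocks until the detached thread has finished running the packaged_task.
    std::cout << result.get() << std::endl; // prints 42
}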

And now the code measuring execution time with std::chrono::high_resolution_clock:

int main(void)
{
    constexpr unsigned short TIMES = 1000;

    auto start = std::chrono::high_resolution_clock::now();
    for (int i = 0; i < TIMES; ++i)
    {
        someTask();
    }
    auto dur = std::chrono::high_resolution_clock::now() - start;

    auto tstart = std::chrono::high_resolution_clock::now();
    for (int i = 0; i < TIMES; ++i)
    {
        std::thread t(someTask);
        t.detach();
    }
    auto tdur = std::chrono::high_resolution_clock::now() - tstart;

    std::future<void> f;
    auto astart = std::chrono::high_resolution_clock::now();
    for (int i = 0; i < TIMES; ++i)
    {
        f = std::async(std::launch::async, someTask);
    }
    auto adur = std::chrono::high_resolution_clock::now() - astart;

    auto mastart = std::chrono::high_resolution_clock::now();
    for (int i = 0; i < TIMES; ++i)
    {
        f = myAsync(someTask);
    }
    auto madur = std::chrono::high_resolution_clock::now() - mastart;

    std::cout << "Simple: " << std::chrono::duration_cast<std::chrono::microseconds>(dur).count() <<
    std::endl << "Threaded: " << std::chrono::duration_cast<std::chrono::microseconds>(tdur).count() <<
    std::endl << "std::sync: " << std::chrono::duration_cast<std::chrono::microseconds>(adur).count() <<
    std::endl << "My async: " << std::chrono::duration_cast<std::chrono::microseconds>(madur).count() << std::endl;

    return EXIT_SUCCESS;
}

Where someTask() is a simple function that waits a little to simulate some work being done:

void someTask()
{
    std::this_thread::sleep_for(std::chrono::milliseconds(1));
}

Finally, my results (in microseconds):

  • Sequential: 1263615
  • Threaded: 47111
  • std::async: 821441
  • My async: 30784

Could anyone explain these results? It seems like std::async is much slower than my naive implementation, or than plain and simple detached std::threads. Why is that? After seeing these results, is there any reason to use std::async?

(Note that I ran this benchmark with both clang++ and g++, and the results were very similar.)

UPDATE:

After reading Dave S's answer I updated my little benchmark as follows:

std::future<void> f[TIMES];
auto astart = std::chrono::high_resolution_clock::now();
for (int i = 0; i < TIMES; ++i)
{
    f[i] = std::async(std::launch::async, someTask);
}
auto adur = std::chrono::high_resolution_clock::now() - astart;

So the std::futures are no longer destroyed (and therefore joined) on every iteration. With this change, std::async produces results similar to my implementation and to detached std::threads.

manlio
krispet krispet
  • I'm sure this is not the issue, but I just have to ask for completeness of information. Are you measuring a debug (unoptimized) or release (optimized) build? I assume an optimized build as otherwise any measurements would be pointless, but I have to ask. – Jesper Juhl May 21 '16 at 12:29
  • @JesperJuhl Totally valid question, but I am measuring with -O2. – krispet krispet May 21 '16 at 12:35

2 Answers


One key difference is that the future returned by async joins the thread when the future is destroyed, or in your case, replaced with a new value.

This means it has to execute someTask() and join the thread on every iteration, both of which take time. None of your other tests do that; they simply spawn the threads and let them run independently.
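A minimal sketch (my addition, not part of the original answer) of where that blocking happens, using someTask() from the question:

{
    auto f = std::async(std::launch::async, someTask);
}   // ~future blocks here until someTask() has finished and the thread is joined

{
    std::thread t(someTask);
    t.detach();
}   // nothing blocks here; the detached thread keeps running on its own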

Dave S
  • "*the future returned by async joins the thread when the future is destroyed*" AKA: the primary reason to never, *ever* use `std::async`. – Nicol Bolas May 21 '16 at 12:32
  • @KerrekSB: If you wanted that, you should call `future::wait` explicitly, just like you would do for every other `future`. The problem is that the `future` returned by `async` has different behavior from *every other `future` object*. It's the inconsistency that's the problem, not the behavior. The inconsistent, and non-user-reproducible, behavior of this makes it very easy to accidentally do the wrong thing, as was done here. – Nicol Bolas May 21 '16 at 12:46
  • Thank you, this solved the mystery. But still, what is the reason behind this behaviour? Could you point me to an article on this matter, or something like that? – krispet krispet May 21 '16 at 12:52
  • @krispetkrispet part of this is answered [in this answer](https://stackoverflow.com/questions/18143661/what-is-the-difference-between-packaged-task-and-async/18143844#18143844) (disclaimer: I wrote that one). [Scott Meyers also wrote an article](http://scottmeyers.blogspot.de/2013/03/stdfutures-from-stdasync-arent-special.html). Either way, that part of future was heavily discussed back in 2012/2013; I'm not sure what the final verdict was, though. – Zeta May 21 '16 at 14:44

std::async returns a special std::future. This future has a ~future that does a .wait().

So your examples are fundamentally different. The slow ones actually complete the tasks during your timing. The fast ones just queue up the tasks and then lose any way of ever knowing that a task is done. As the behaviour of a program that lets threads run past the end of main is unpredictable, this should be avoided.

The right way to compare the tasks is to store the resulting futures as they are generated and, before the timer ends, either .wait()/.join() them all, or avoid destroying the objects until after the timer expires. This last option, however, makes the sequential version look worse than it is.

You do need to join/wait before starting the next test, as otherwise the leftover threads steal resources from its timing.

Note that moved futures remove the wait from the source.
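For the std::async case, storing and waiting could look roughly like this (my sketch of the suggestion, assuming <vector> is included):

std::vector<std::future<void>> futures;
futures.reserve(TIMES);

auto astart = std::chrono::high_resolution_clock::now();
for (int i = 0; i < TIMES; ++i)
{
    // Keep every future alive so no destructor joins inside the loop.
    futures.push_back(std::async(std::launch::async, someTask));
}
// Wait for all tasks before stopping the clock, so completion time is measured as well.
for (auto& f : futures)
{
    f.wait();
}
auto adur = std::chrono::high_resolution_clock::now() - astart;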

Yakk - Adam Nevraumont