
Recently I discovered that, for an unknown reason, std::this_thread::sleep_for can sleep 10 times longer than intended. It is not just one or a few accidental delays, but a very noticeable effect that can last for minutes and also affects std::condition_variable::wait_for.

Here is my code:

#include <thread>
#include <iostream>
#include <chrono>

using namespace std;

int main(int argc, char const *argv[])
{
    unsigned t = 0;
    auto start = ::std::chrono::steady_clock::now();
    for(unsigned i = 0; i < 100; ++i) {
        ::std::this_thread::sleep_for(chrono::milliseconds(1));
        t ^= i; // prevent overeager optimization
    }
    auto stop = ::std::chrono::steady_clock::now();
    auto elapsed = ::std::chrono::duration_cast<::std::chrono::milliseconds>(stop - start);
    ::std::cout << elapsed.count() << "\n";
    ::std::cerr << t << "\n";

    return 0;
}

I compiled this with Visual Studio Command line tools as follows:

cl /EHsc /std:c++17 /O2 sleep_for.cpp

When I execute it, the reported time is sometimes close to 150 ms and sometimes close to 1500 ms. Nothing in the environment changes between executions. What may be the cause of such an effect?

UPDATE 1. It looks like this is an operating-system problem. Besides std::this_thread::sleep_for and std::condition_variable::wait_until, I also tried timers based on the uvw library, with the same result. Whatever I do, I can't make a thread sleep for less than about 15 milliseconds. This effect is very noticeable right after system startup, and then it appears less often.

SUMMARY. Some folks suggest: read the docs! They warned you! std::this_thread::sleep_for may be unstable! On the other hand, according to the documentation, there is nothing wrong if it works more stably; one just needs to know how to achieve that. Jeremy Friesner proposed a solution; I tested it and it worked, showing 10 times better performance and good average stability. Whether to use std::this_thread::sleep_for, and how, is up to you. I definitely would not recommend it if you are developing life-saving equipment and the like, but in many cases it can be quite useful.

A good article that discusses the price of increased stability and resolution: https://randomascii.wordpress.com/2013/07/08/windows-timer-resolution-megawatts-wasted/

Fedorov7890
    When you give up your CPU quantum you are not sure how soon you'll get it back; the time you ask for is just a hint. If the system is busy, you may have to wait arbitrarily long. – Matteo Italia Feb 02 '19 at 12:31
    Sleeping a thread is somewhat of an anti-pattern. What are you actually trying to do here? What problem are you trying to solve? See also: [Windows 7: overshoot C++ std::this_thread::sleep_for](https://stackoverflow.com/questions/32904371/windows-7-overshoot-c-stdthis-threadsleep-for) `sleep_for` in Microsoft's CRT is based on the system clock, so if the clock changes, you may see longer sleep times. Are you *sure* "nothing changes in the environment"? – Cody Gray - on strike Feb 02 '19 at 12:31
  • @Cody Gray That is a very interesting link. At least I know that someone else encountered the same problem. – Fedorov7890 Feb 02 '19 at 12:37
    The only guarantee you get is that the thread will sleep for *at least* as long as you ask for. It may sleep longer and you get *no* guarantees about an upper bound on "longer". – Jesper Juhl Feb 02 '19 at 15:44
    It might be worth trying a call to `timeBeginPeriod(1)` at the top of your `main()` to see if that improves things any. – Jeremy Friesner Feb 02 '19 at 16:48
  • @Jeremy Friesner That works! I created two instances of the above-mentioned program; the second has timeBeginPeriod(1) as you suggested, and it performs 10 times faster! I'd appreciate it if you made an answer out of your comment so that I can mark it as accepted. – Fedorov7890 Feb 02 '19 at 17:16

2 Answers


This answer is created from Jeremy Friesner's comment. He suggests trying timeBeginPeriod(1) at the beginning of main(). That worked! I created a modified version of the program, and it runs 10 times faster. Here is the code:

#include <thread>
#include <iostream>
#include <chrono>
#include <windows.h>

using namespace std;

int main(int argc, char const *argv[])
{
    timeBeginPeriod(1);
    unsigned t = 0;
    auto start = ::std::chrono::steady_clock::now();
    for(unsigned i = 0; i < 100; ++i) {
        ::std::this_thread::sleep_for(chrono::milliseconds(1));
        t ^= i; // prevent overeager optimization
    }
    auto stop = ::std::chrono::steady_clock::now();
    auto elapsed = ::std::chrono::duration_cast<::std::chrono::milliseconds>(stop - start);
    ::std::cout << elapsed.count() << "\n";
    ::std::cerr << t << "\n";

    return 0;
}

It can be compiled using the following command:

cl /EHsc /std:c++17 /O2 sleep_for.cpp winmm.lib

Thanks, Jeremy!

P.S. Increasing timer frequency has its price: https://randomascii.wordpress.com/2013/07/08/windows-timer-resolution-megawatts-wasted/

Fedorov7890

Looking at the documentation:

Blocks the execution of the current thread for at least the specified sleep_duration. This function may block for longer than sleep_duration due to scheduling or resource contention delays.

JVApen
  • Formally it's correct, but it doesn't explain the cause. It also doesn't help much, except that it makes me rethink the design and seek an alternative to std::this_thread::sleep_for – Fedorov7890 Feb 02 '19 at 12:44
  • You can think of `sleep_for` as closer to `yield_for_at_least`, @Fedorov7890. But yes, you should definitely rethink whatever design relies on threads sleeping for deterministic amounts of time. – Cody Gray - on strike Feb 02 '19 at 12:51
    I'd say this answers your question. It doesn't give you what you want to know, but that's the fault of not asking precise questions... – Ulrich Eckhardt Feb 02 '19 at 12:54
  • @Ulrich Eckhardt Yes, it does. But as ob1 said, "this is not the answer you're looking for" – Fedorov7890 Feb 02 '19 at 13:22