
I am working on a project that requires precise delay times across a number of computers. This is the code I am currently using; I found it on a forum:

{
    LONGLONG timerResolution;
    LONGLONG wantedTime;
    LONGLONG currentTime;

    // Counter ticks per second, converted to ticks per millisecond
    QueryPerformanceFrequency((LARGE_INTEGER*)&timerResolution);
    timerResolution /= 1000;

    // Target time, in milliseconds since the counter started
    QueryPerformanceCounter((LARGE_INTEGER*)&currentTime);
    wantedTime = currentTime / timerResolution + ms;

    // Busy-wait until the target time is reached
    currentTime = 0;
    while (currentTime < wantedTime)
    {
        QueryPerformanceCounter((LARGE_INTEGER*)&currentTime);
        currentTime /= timerResolution;
    }
}

The issue I am having is that this uses a lot of CPU, around 16-20%, when I start calling the function. The usual Sleep() uses zero CPU but is extremely inaccurate. From what I have read on multiple forums, that's the trade-off: you gain accuracy at the cost of CPU usage. But I thought I had better raise the question before I settle for this sleep method.

  • It would be better to say how accurate you need to be. Is a maximum miss of 4 ms acceptable for the program, or 4 µs, 4 ns, 4 ps? It also depends on the hardware the program is running on. Could you take a look at https://stackoverflow.com/questions/13397571/precise-thread-sleep-needed-max-1ms-error; maybe it is what you are looking for. – calynr Feb 10 '20 at 07:56
  • Maybe [std::this_thread::sleep_for](https://en.cppreference.com/w/cpp/thread/sleep_for) is what you are looking for? – Jesper Juhl Feb 10 '20 at 08:37
  • I agree that this misses a little context: What OS are you running and what range are you aiming for? – Frederik Juul Feb 10 '20 at 08:39
  • @FrederikJuul OP could have stated it explicitly. However, AFAIK, [QueryPerformanceCounter()](https://learn.microsoft.com/en-us/windows/win32/api/profileapi/nf-profileapi-queryperformancecounter) is part of the Windows API. Hence, OP is surely on Windows. And, using the Windows API, portability is probably not an intention. – Scheff's Cat Feb 10 '20 at 08:49
  • @Scheff Good catch - I think you're right. – Frederik Juul Feb 10 '20 at 08:56
  • @JesperJuhl At least on Visual Studio, the standard function uses the operating system `Sleep()`, and therefore cannot be more accurate than it. – VLL Feb 10 '20 at 09:35

2 Answers


The reason it's using 15-20% CPU is likely that it's using 100% of one core, as there is nothing in the loop to slow it down.

In general, this is a "hard" problem to solve, as PCs (more specifically, the OSes running on them) are generally not made for running real-time applications. If that is absolutely required, you should look into real-time kernels and OSes.

For this reason, the guarantee usually made about sleep times is that the system will sleep for at least the specified amount of time.
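
To see this effect, here is a small sketch (assuming Windows and the default timer resolution) that measures how long Sleep(1) actually takes, using the same QueryPerformanceCounter timing as in the question. On most systems it reports something close to the ~15 ms scheduler quantum rather than 1 ms:

#include <windows.h>
#include <cstdio>

int main()
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&start);
    Sleep(1);                         // ask for 1 ms
    QueryPerformanceCounter(&end);

    double elapsedMs = (end.QuadPart - start.QuadPart) * 1000.0 / freq.QuadPart;
    printf("Sleep(1) actually took %.3f ms\n", elapsedMs);
    return 0;
}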

If you are running Linux, you could try the nanosleep function (http://man7.org/linux/man-pages/man2/nanosleep.2.html), though I don't have any experience with it.
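
As a rough sketch of what that could look like (assuming a POSIX system; the function name sleep_ms is just for illustration):

#include <time.h>    // nanosleep, struct timespec
#include <errno.h>   // errno, EINTR

// Sleep for roughly `ms` milliseconds; retry if interrupted by a signal.
void sleep_ms(long ms)
{
    struct timespec req = { ms / 1000, (ms % 1000) * 1000000L };
    struct timespec rem = { 0, 0 };
    while (nanosleep(&req, &rem) == -1 && errno == EINTR)
        req = rem;   // continue sleeping for the remaining time
}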

Alternatively you could go with a hybrid approach where you use sleeps for long delays, but switch to polling when it's almost time:

#include <thread>
#include <chrono>
using namespace std::chrono_literals;

...

wantedTime = currentTime / timerResolution + ms;
currentTime = 0;
while (currentTime < wantedTime)
{
    QueryPerformanceCounter((LARGE_INTEGER*)&currentTime);
    currentTime /= timerResolution;
    if (wantedTime - currentTime > 100) // more than 100 ms of waiting left
    {
        // Sleep for a value significantly lower than the 100 ms, to ensure that we don't "oversleep"
        std::this_thread::sleep_for(50ms);
    }
}

Now this is a bit prone to oversleeping, as it assumes that the OS will hand control back to the program within 50 ms after the sleep_for is done. To combat this further, you could turn the sleep down (to, say, 1 ms).
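
For reference, here is a self-contained sketch of the same hybrid idea (the helper name preciseSleepMs and the 2 ms spin threshold are just illustrative choices): sleep coarsely while plenty of time remains, then spin on QueryPerformanceCounter for the last stretch.

#include <windows.h>
#include <thread>
#include <chrono>

// Coarse-sleep while more than `spinThresholdMs` remains,
// then busy-wait on QueryPerformanceCounter until the deadline.
void preciseSleepMs(LONGLONG ms, LONGLONG spinThresholdMs = 2)
{
    LARGE_INTEGER freq, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&now);
    const LONGLONG target = now.QuadPart + ms * freq.QuadPart / 1000;

    for (;;)
    {
        QueryPerformanceCounter(&now);
        const LONGLONG remainingMs = (target - now.QuadPart) * 1000 / freq.QuadPart;
        if (remainingMs <= 0)
            break;                                    // deadline reached
        if (remainingMs > spinThresholdMs)
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
        // else: keep spinning until the deadline
    }
}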

Frederik Juul
    I wouldn't worry too much about `sleep_for` not returning soon enough - the OS is free to take your thread away even if you're spinning. You shouldn't expect realtime accuracy from a non-realtime OS in the first place. – Luaan Feb 21 '20 at 09:18

You can set the Windows timer resolution to its minimum (usually 1 ms) to make Sleep() accurate to about 1 ms. By default it is only accurate to about 15 ms. See the Sleep() documentation.

Note that your execution can be delayed if other programs are consuming CPU time, but this could also happen if you were waiting with a timer.

#include <windows.h>
#include <timeapi.h>   // link with winmm.lib

// Sleep() takes about 15 ms (or whatever the default timer resolution is)
Sleep(1);

TIMECAPS caps_;
timeGetDevCaps(&caps_, sizeof(caps_));
timeBeginPeriod(caps_.wPeriodMin);   // request the minimum timer resolution (usually 1 ms)

// Sleep() now takes about 1 ms
Sleep(1);

timeEndPeriod(caps_.wPeriodMin);     // restore the previous resolution
VLL