
I need to execute some function accurately 20 milliseconds after some event (for RTP packet sending). I have tried the following variants:

```cpp
std::this_thread::sleep_for(std::chrono::milliseconds(20));
boost::this_thread::sleep_for(std::chrono::milliseconds(20));
Sleep(20);
```

I have also tried busy-wait workarounds such as:

```cpp
auto a = GetTickCount();
while ((GetTickCount() - a) < 20) continue;
```

I have also tried micro- and nanosecond intervals. All of these methods show an error in the range of -6 ms to +12 ms, which is not acceptable. How can I make this work correctly?

In my opinion, ±1 ms of error is acceptable, but no more.

UPDATE1: To measure elapsed time I use std::chrono::high_resolution_clock::now();

Dmitry
  • Are you using a thread and a timer to do this? – ChrisF Apr 14 '16 at 12:48
  • Are you sure the timer to measure the time waited is accurate? – MikeCAT Apr 14 '16 at 12:50
  • Maybe [this answer](http://stackoverflow.com/questions/16299029/resolution-of-stdchronohigh-resolution-clock-doesnt-correspond-to-measureme) on SO can be of help. –  Apr 14 '16 at 12:54
  • @ChrisF I'm currently testing it in an empty project in one (main) thread. – Dmitry Apr 14 '16 at 12:54
  • @MikeCAT I found out that GetTickCount() is not accurate. But initially I used std::chrono::high_resolution_clock::now(); – Dmitry Apr 14 '16 at 12:56
  • I was timing some simple thing the other day and got an accuracy of about 0.1 ms on Windows 7. I used `chrono::steady_clock`, which I think is one of the best choices for accurate timing. – antiHUMAN Apr 14 '16 at 12:57
  • @antiHUMAN I'll check this. – Dmitry Apr 14 '16 at 12:57
  • You are getting good accuracy already, and you are unlikely to get any significant improvement in a user-mode app ([some background info](https://blogs.msdn.microsoft.com/mediasdkstuff/2009/07/02/why-are-the-multimedia-timer-apis-timesetevent-not-as-accurate-as-i-would-expect/) - "This is just the way the OS works"). I suppose you will end up understanding that you don't need better accuracy and this is already sufficient, especially for RTP. – Roman R. Apr 14 '16 at 13:01
  • @RomanR. I have tried audio with such a delay; it's bad. – Dmitry Apr 14 '16 at 13:09
  • Then you are perhaps doing something wrong elsewhere. Real-time audio apps do RTP without reaching a timer accuracy of ±1 ms. – Roman R. Apr 14 '16 at 13:10
  • @Dmitry Looks like you are programming on Windows (GetTickCount). Windows has a multimedia timer API that is made for this: https://msdn.microsoft.com/de-de/library/windows/desktop/dd742877%28v=vs.85%29.aspx. – Jens Apr 15 '16 at 07:43

3 Answers


Briefly: because of how OS kernels manage time and threads, you won't get much better accuracy with that method. You also can't rely on sleeping alone for a static interval, or your stream will quickly drift off your intended send clock rate, because the thread could be interrupted or rescheduled well after your sleep expires. For this reason, you should check the system clock on each iteration to determine how long to sleep for (i.e. somewhere between 0 ms and 20 ms).

Without going into too much detail, this is also why RTP streams use a jitter buffer: it absorbs variations in packet reception (due to network jitter or send jitter). Because of this, you likely won't need ±1 ms accuracy anyway.

mark

In C, there is the nanosleep function in time.h.

The nanosleep() function causes the current thread to be suspended from execution until either the time interval specified by the rqtp argument has elapsed or a signal is delivered to the calling thread and its action is to invoke a signal-catching function or to terminate the process.

The program below sleeps for 20 milliseconds.

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec tim;
    tim.tv_sec = 0;
    tim.tv_nsec = 20000000; /* 20 milliseconds converted to nanoseconds */

    if (nanosleep(&tim, NULL) < 0)
    {
        printf("nanosleep system call failed\n");
        return -1;
    }

    printf("nanosleep successful\n");
    return 0;
}
```
Vijay
  • What platform is this for? – Dmitry Apr 14 '16 at 13:03
  • While this code may answer the question, providing additional context regarding why and/or how it answers the question would significantly improve its long-term value. Please [edit] your answer to add some explanation. – CodeMouse92 Apr 15 '16 at 02:38

Using std::chrono::steady_clock, I got about 0.1 ms accuracy on Windows 7.

That is, simply:

```cpp
auto a = std::chrono::steady_clock::now();
while ((std::chrono::steady_clock::now() - a) < WAIT_TIME) continue;
```

This should give you accurate "waiting" (about 0.1 ms, as I said), at least. We all know that this kind of busy-waiting is "ugly" and should be avoided, but as a hack it might still do the trick just fine.

You could use high_resolution_clock, which might give even better accuracy on some systems, but it is not guaranteed to be unaffected by OS clock adjustments, and you don't want that. steady_clock is guaranteed to be monotonic and often has the same resolution as high_resolution_clock.

As for "sleep()" functions that are very accurate, I don't know. Perhaps someone else knows more about that.

antiHUMAN
  • You are using `this_thread::sleep_for(std::chrono::milliseconds(n));` or something else? – Dmitry Apr 14 '16 at 13:05
  • No, I wasn't using it for sleeping, but for measuring the execution speed of some code. The point is that you have the accuracy you need if you use this. When it comes to accurate "sleeps", I don't know, but they can be avoided, right? Your second version would give you the accuracy you need. – antiHUMAN Apr 14 '16 at 13:07