
I am trying to call a function every 1 ms, and I'd like to do this on Windows. So I tried the multimedia timer API.

Multimedia timer API

Source

idTimer = timeSetEvent(
    1,                                    // delay: 1 ms
    0,                                    // resolution: as accurate as possible
    TimerProc,
    0,
    TIME_PERIODIC | TIME_CALLBACK_FUNCTION);

My result was that most of the time the 1 ms was OK, but sometimes I got double the period. See the little bump at around 1.95 ms in the histogram: multimediatimerHistogram http://www.freeimagehosting.net/uploads/8b78f2fa6d.png

My first thought was that maybe my method was running too long, but I had already measured that, and it was not the case.

Queued Timers API

My next try was using the queued timers API with

hTimerQueue = CreateTimerQueue();
if (hTimerQueue == NULL)
{
    printf("Error creating queue: 0x%x\n", GetLastError());
}

BOOL res = CreateTimerQueueTimer(
    &hTimer,
    hTimerQueue,
    TimerProc,
    NULL,
    0,                 // start immediately
    1,                 // period: 1 ms
    WT_EXECUTEDEFAULT);

But again the result was not as expected: now I get a cycle time of 2 ms most of the time. queuedTimer http://www.freeimagehosting.net/uploads/2a46259a15.png

Measurement

For measuring the times I used QueryPerformanceCounter and QueryPerformanceFrequency.
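As an illustration, here is a minimal sketch (not the original code) of how such a measurement could look inside a timeSetEvent callback; the histogram binning itself is left out:

```c
/* Sketch: measure the cycle time between TimerProc invocations with
 * the high-resolution performance counter. Windows-only. */
#include <windows.h>
#include <stdio.h>

static LARGE_INTEGER g_freq;   /* counts per second, fixed at boot      */
static LARGE_INTEGER g_last;   /* counter value at the previous callback */

void CALLBACK TimerProc(UINT uTimerID, UINT uMsg, DWORD_PTR dwUser,
                        DWORD_PTR dw1, DWORD_PTR dw2)
{
    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    if (g_last.QuadPart != 0)
    {
        double ms = 1000.0 * (double)(now.QuadPart - g_last.QuadPart)
                           / (double)g_freq.QuadPart;
        printf("cycle time: %.4f ms\n", ms);   /* bin this into the histogram */
    }
    g_last = now;
}

/* Before starting the timer, call once: QueryPerformanceFrequency(&g_freq); */
```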

Question

So now my question is: has somebody encountered similar problems under Windows, and maybe even found a solution?

Thanks.

schoetbi
  • +1 because everyone loves neat graphs :) – ereOn Jul 29 '10 at 09:34
  • You've got pretty high expectations of Windows. – Greg Hewgill Jul 29 '10 at 09:36
  • @Bob Moore added an answer that contained this [link](http://www.flounder.com/time.htm). The answer has been deleted, but I found the link an interesting read on timing issues in operating systems (the OSes discussed are old Windows versions, but the concepts most probably still apply to current ones) – David Rodríguez - dribeas Jul 29 '10 at 10:12
  • timeSetEvent() creates a *very* good timer. But it cannot pre-empt a high-priority kernel thread. – Hans Passant Jul 29 '10 at 10:24
  • @Hans: Thanks! Is that then the reason for the bump at around 2 ms? The funny thing is that the distribution is not random but discrete, at exactly double the period. – schoetbi Jul 29 '10 at 11:04
  • @schoe: that's not my experience, I've only ever seen it jitter. – Hans Passant Jul 29 '10 at 11:12
  • schoetbi: On XP, you're going to have drivers which disable interrupts for hundreds of milliseconds at a time. Heck, several motherboards will suspend the OS for milliseconds. Trying to get isoch at this level of resolution is very hard on a general purpose OS. Vista and Win7 made it easier to get isoch but they're still not perfect. – Larry Osterman Jul 29 '10 at 19:48
  • where is the graph gone? – Arno Apr 24 '14 at 15:51
  • @Arno: Seems the picture hoster deleted the graphs. – schoetbi Apr 25 '14 at 07:59

3 Answers


Without going to a real-time OS, you cannot expect to have your function called every 1 ms.

On Windows, which is NOT a real-time OS (Linux is similar), a program that repeatedly reads the current time with microsecond precision and stores consecutive differences in a histogram will have a non-empty bin for >10 ms! This means that you will sometimes get 2 ms between your calls, but you can also get much more.

Didier Trosset
  • What exactly do you mean by having a non-empty bin for >10 ms? Do you mean that getting the time takes more than 10 ms? – schoetbi Jul 29 '10 at 09:55
  • That is correct. Doing nothing but calling a time function twice in a row can result in a time difference above 10 ms. Very rare (once a minute or once an hour, depending on the actual computer), but it happens! – Didier Trosset Jul 29 '10 at 09:59
  • A non-real-time OS doesn't provide timing guarantees, but that doesn't mean you are guaranteed to eventually get long delays. It's often possible to control the work tightly enough that you have a very high probability, i.e. near certainty, of getting your work done in a timely manner. Although I think the question here has more to do with timer granularity and performance than with real-time. – bames53 Jul 13 '12 at 17:24

A call to NtQueryTimerResolution() returns a value for ActualResolution. In your case the actual resolution is almost certainly 0.9765625 ms. This is exactly what you see in the first plot. The second occurrence at about 1.95 ms is, more precisely, Sleep(1) = 1.9531 ms = 2 x 0.9765625 ms.

I guess the interrupt period runs at something close to 1 ms (0.9765625 ms).
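NtQueryTimerResolution is an undocumented ntdll export, so it has to be resolved at run time. A sketch (the reported values are in 100 ns units, which is the only conversion needed):

```c
/* Sketch: query the actual timer resolution via the undocumented
 * ntdll export NtQueryTimerResolution. Windows-only; values are in
 * 100 ns units, so e.g. 9766 corresponds to roughly 0.9766 ms. */
#include <windows.h>
#include <stdio.h>

typedef LONG (NTAPI *NtQueryTimerResolution_t)(PULONG min, PULONG max, PULONG cur);

int main(void)
{
    /* ntdll.dll is always loaded, so GetModuleHandle suffices. */
    HMODULE ntdll = GetModuleHandleA("ntdll.dll");
    NtQueryTimerResolution_t pNtQueryTimerResolution =
        (NtQueryTimerResolution_t)GetProcAddress(ntdll, "NtQueryTimerResolution");
    if (!pNtQueryTimerResolution)
        return 1;

    ULONG min, max, cur;                        /* 100 ns units */
    pNtQueryTimerResolution(&min, &max, &cur);
    printf("actual resolution: %.7f ms\n", cur / 10000.0);
    return 0;
}
```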

And now the trouble begins: The timer signals when the desired delay expires.

Say ActualResolution is set to 0.9765625 ms: the interrupt heartbeat of the system then runs at 0.9765625 ms periods (1024 Hz), and a call to Sleep is made with a desired delay of 1 ms. Two scenarios have to be looked at:

  1. The call was made < 1ms (ΔT) ahead of the next interrupt. The next interrupt will not confirm that the desired period of time has expired. Only the following interrupt will cause the call to return. The resulting sleep delay will be ΔT + 0.9765625 ms.
  2. The call was made >= 1ms (ΔT) ahead of the next interrupt. The next interrupt will force the call to return. The resulting sleep delay will be ΔT.

So the result depends a lot on when the call was made and therefore you may observe 0.98ms events as well as 1.95ms events.

Edit: Using CreateTimerQueueTimer pushes the observed delay to 1.95 ms because the timer tick (interrupt period) is 0.9765625 ms. At the first occurrence of the interrupt, the requested duration of 1 ms has not quite expired, so the TimerProc is only triggered after the second interrupt (2 x 0.9765625 ms = 1.953125 ms > 1 ms). Consequently, the queueTimer plot shows the peak at 1.953125 ms.

Note: This behavior strongly depends on the underlying hardware.

More details can be found at the Windows Timestamp Project.

Arno

You can try calling timeBeginPeriod(1) at program start and timeEndPeriod(1) before quitting. This can probably improve timer precision.
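A minimal sketch of that pattern (the timer work itself is a placeholder):

```c
/* Sketch: request 1 ms scheduler granularity for the lifetime of the
 * timer. Windows-only; link against winmm.lib. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    if (timeBeginPeriod(1) != TIMERR_NOERROR)   /* request 1 ms resolution */
    {
        printf("1 ms period not supported\n");
        return 1;
    }

    /* ... create and run the multimedia or queue timer here ... */

    timeEndPeriod(1);   /* must match every successful timeBeginPeriod */
    return 0;
}
```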

n0rd