
I am implementing a timer and need it to run every 50 ms or so and would like the resolution to be 1 ms or less. I started by reading these two articles:

http://www.codeproject.com/Articles/1236/Timers-Tutorial

http://www.virtualdub.org/blog/pivot/entry.php?id=272

Oddly enough they seem to contradict one another. One says queue timers are good for high resolution, the other posts results from a Windows 7 system showing resolution around 15ms (not good enough for my application).

So I ran a test on my system (Win7 64-bit, i7-4770 CPU @ 3.4 GHz). I started at a period of 50 ms and this is what I see (time since beginning on the left, gap between executions on the right; all in ms):

150   50.00
200   50.01
250   50.00
...
450   49.93
500   50.00
550   50.03
...
2250  50.10
2300  50.01

I see that the maximum error is about 100 us and that the average error is probably around 30 us or so. This makes me fairly happy.

So I started dropping the period to see at what point it gets unreliable. I started seeing bad results once I decreased the period to 5 ms or less.

With a period of 5 ms it was not uncommon to see some gaps jump between 3 and 6 ms every few seconds. If I reduce the period to 1 ms, gaps of 5, 10, even 40 ms can be seen. I presume the jumps up to 40 ms may be due to the fact that I'm printing stuff to the screen, but I'm not sure.

This is my timer callback code:

VOID CALLBACK timer_execute(PVOID p_parameter,
   BOOLEAN p_timer_or_wait_fired)
{
   LARGE_INTEGER l_now_tick;

   QueryPerformanceCounter(&l_now_tick);

   // d_start, d_last_tick and d_frequency are globals initialized before the
   // timer is started. Both values below are in microseconds.
   double now = ((l_now_tick.QuadPart - d_start.QuadPart) * 1000000) / d_frequency.QuadPart;
   double us = ((l_now_tick.QuadPart - d_last_tick.QuadPart) * 1000000) / d_frequency.QuadPart;

   //printf("\n%.0f\t%.2f", now / 1000.0f, us / 1000.0f);

   // Report (in ms) only gaps longer than 2 ms or shorter than 0.1 ms.
   if (us > 2000 ||
       us < 100)
   {
      printf("\n%.2f", us / 1000.0f);
   }

   d_last_tick = l_now_tick;
}
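For context, this is roughly how the timer and the globals above can be set up with CreateTimerQueueTimer. This is a minimal sketch: the 50 ms due time/period matches the test above, while the WT_EXECUTEDEFAULT flag and the cleanup call are illustrative assumptions, not necessarily the exact setup used.

#include <windows.h>
#include <stdio.h>

// Globals referenced by timer_execute() (defined above it in the real file).
LARGE_INTEGER d_frequency;   // counts per second from QueryPerformanceFrequency
LARGE_INTEGER d_start;       // counter value when the timer was started
LARGE_INTEGER d_last_tick;   // counter value at the previous callback

VOID CALLBACK timer_execute(PVOID p_parameter, BOOLEAN p_timer_or_wait_fired);

int main(void)
{
   QueryPerformanceFrequency(&d_frequency);
   QueryPerformanceCounter(&d_start);
   d_last_tick = d_start;

   // Fire timer_execute every 50 ms on the default timer queue.
   // WT_EXECUTEDEFAULT runs the callback on a thread-pool worker thread,
   // which is why a slow callback can overlap with the next invocation.
   HANDLE l_timer = NULL;
   if (!CreateTimerQueueTimer(&l_timer, NULL, timer_execute,
                              NULL, 50, 50, WT_EXECUTEDEFAULT))
   {
      printf("CreateTimerQueueTimer failed: %lu\n", GetLastError());
      return 1;
   }

   Sleep(10000);   // let the timer run for a while

   // Waits for any in-flight callback to finish before tearing down.
   DeleteTimerQueueTimer(NULL, l_timer, INVALID_HANDLE_VALUE);
   return 0;
}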

Anyway, it looks to me as if queue timers are very good tools so long as you're executing at 100 Hz or slower. Are the bad results posted in the second article I linked to (accuracy of about 15 ms) possibly due to a slower CPU, or a different config?

I'm wondering if I can expect this kind of performance across multiple machines (all as fast as or faster than my machine running 64-bit Win7)? Also, I noticed that if your callback doesn't exit before the period elapses, the OS will run another instance of it on a second thread-pool thread. This may be obvious, but it didn't stand out to me in any documentation, and it has significant implications for the client code.
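Regarding that last point: if a callback can outlive its period, one simple way to keep invocations from running concurrently is an interlocked guard that makes late ticks bail out early. A minimal sketch; the guard variable name is mine, not from the original code.

#include <windows.h>

// 0 = no callback running, 1 = a callback is currently executing.
static volatile LONG g_in_callback = 0;

VOID CALLBACK timer_execute_guarded(PVOID p_parameter, BOOLEAN p_timer_or_wait_fired)
{
   // If a previous invocation is still running on another pool thread,
   // skip this tick rather than run two callbacks concurrently.
   if (InterlockedCompareExchange(&g_in_callback, 1, 0) != 0)
      return;

   // ... timed work goes here ...

   InterlockedExchange(&g_in_callback, 0);
}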

Ian
  • I have also been looking into this and can say that CreateWaitableTimer/SetWaitableTimer may offer much better precision; its due-time parameter is a LARGE_INTEGER specified in 100 ns intervals. Have you tried it? (OK, it has been a long time since your post, but you may remember.) – tolgayilmaz Oct 07 '18 at 00:17
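A minimal sketch of the waitable-timer approach mentioned in the comment above: the due time is indeed a LARGE_INTEGER in 100 ns units (negative for a relative time), but note that the repeating period argument of SetWaitableTimer is still specified in whole milliseconds, and the actual wake-up granularity remains bounded by the system timer resolution discussed in the answers below.

#include <windows.h>
#include <stdio.h>

int main(void)
{
   HANDLE l_timer = CreateWaitableTimer(NULL, FALSE, NULL);   // auto-reset
   if (l_timer == NULL)
      return 1;

   // Due time in 100 ns units; negative = relative to now.
   // -500000 means the timer first fires after 50 ms.
   LARGE_INTEGER l_due;
   l_due.QuadPart = -500000LL;

   // The repeating period (third argument) is in milliseconds.
   if (!SetWaitableTimer(l_timer, &l_due, 50, NULL, NULL, FALSE))
      return 1;

   for (int i = 0; i < 20; ++i)
   {
      WaitForSingleObject(l_timer, INFINITE);
      printf("tick %d\n", i);
   }

   CancelWaitableTimer(l_timer);
   CloseHandle(l_timer);
   return 0;
}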

2 Answers


The Windows default timer resolution is 15.625 ms. That is the granularity you observe. However, the system timer resolution can be modified as described by MSDN: Obtaining and Setting Timer Resolution. This allows the granularity to be reduced to about 1 ms on most platforms. This SO answer describes how to obtain the current system timer resolution.
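For example, the multimedia timer API can be used to request a finer system timer resolution for the lifetime of your process. A minimal sketch, with error handling trimmed; the 1 ms target is an assumption based on the typical platform minimum.

#include <windows.h>
#pragma comment(lib, "winmm.lib")

int main(void)
{
   // Ask the multimedia timer API for the finest supported resolution,
   // but never below 1 ms.
   TIMECAPS l_caps;
   if (timeGetDevCaps(&l_caps, sizeof(l_caps)) != TIMERR_NOERROR)
      return 1;

   UINT l_period = (l_caps.wPeriodMin > 1) ? l_caps.wPeriodMin : 1;

   // Raise the system timer resolution for the duration of this request.
   timeBeginPeriod(l_period);

   // ... create and run the timer-queue timers here ...

   // Every timeBeginPeriod() must be matched by a timeEndPeriod()
   // with the same value.
   timeEndPeriod(l_period);
   return 0;
}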

The hidden function NtSetTimerResolution(...) even allows setting the timer resolution to 0.5 ms when supported by the platform. See this SO answer to the question "How to setup timer resolution to 0.5 ms?"
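A sketch of that route, loading the functions dynamically from ntdll.dll. Since NtQueryTimerResolution and NtSetTimerResolution are undocumented, the prototypes below (values in 100 ns units, "maximum" being the finest resolution) are assumptions based on common usage rather than an official header.

#include <windows.h>
#include <stdio.h>

// Undocumented ntdll exports; prototypes assumed, resolutions in 100 ns units.
typedef LONG (NTAPI *NtQueryTimerResolution_t)(PULONG MinimumResolution,
                                               PULONG MaximumResolution,
                                               PULONG CurrentResolution);
typedef LONG (NTAPI *NtSetTimerResolution_t)(ULONG DesiredResolution,
                                             BOOLEAN SetResolution,
                                             PULONG CurrentResolution);

int main(void)
{
   HMODULE l_ntdll = GetModuleHandleW(L"ntdll.dll");
   if (l_ntdll == NULL)
      return 1;

   NtQueryTimerResolution_t l_query =
      (NtQueryTimerResolution_t)GetProcAddress(l_ntdll, "NtQueryTimerResolution");
   NtSetTimerResolution_t l_set =
      (NtSetTimerResolution_t)GetProcAddress(l_ntdll, "NtSetTimerResolution");
   if (l_query == NULL || l_set == NULL)
      return 1;

   ULONG l_min, l_max, l_cur;
   l_query(&l_min, &l_max, &l_cur);
   printf("coarsest %lu, finest %lu, current %lu (100 ns units)\n",
          l_min, l_max, l_cur);

   // Request the finest reported resolution (5000 = 0.5 ms where supported).
   ULONG l_actual = 0;
   l_set(l_max, TRUE, &l_actual);
   printf("now running at %lu x 100 ns\n", l_actual);
   return 0;
}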

...a different config? It depends on the underlying hardware and OS version. Check the timer resolution with the tools mentioned above.

...all as fast or faster than my machine running 64bit Win7)? Yes, you can. However, other applications are also allowed to set the timer resolution; Google Chrome is a known example. Such an application may also change the timer resolution only temporarily. Therefore you can never rely on the timer resolution being constant across platforms or over time. The only way to be sure that the timer resolution is controlled by your application is to set the timer granularity to the minimum of 1 ms (or 0.5 ms) yourself.

Note: Reducing the system timer granularity causes the system's interrupt frequency to increase. It reduces the thread quantum (time slice) and increases power consumption.

Arno
  • In my specific case I'm running the Prepar3d flight simulator application with my software alongside. I'm guessing P3d messes with the timer resolution. Setting the timer resolution to 46.875 (3 * 15.625) would seem to give me the execution period I need no matter what has happened to the timer resolution (unless an app has decreased the resolution to like 20ms). – Ian Nov 25 '14 at 15:32
  • @Ian: The highest timer resolution requested by any application will be active. No other application can reduce the resolution. Thus there is no guarantee that your selected 3 x 15.625 ms stays, no matter what. Another application may - also only temporarily - go for a higher resolution. Have you checked the actual resolution? I'd suspect the resolution is being set higher than 46.875 ms by Lockheed Martin's Prepar3D. – Arno Nov 25 '14 at 18:31
  • At Arno: Oops, I meant setting my event period to 46.875 ms, not setting the OS timer resolution to that. Then the only time the timer period would be too far off is if another app set the resolution to 10 ms or so, but I'll see what happens on our production PCs. To be safe I may force the resolution to 5 ms. Thanks. BTW: when I try to use the @ symbol for @Arno, it disappears from the comment when I save? – Ian Nov 25 '14 at 20:00

I believe the differences are due to how the system manages resources. I just learned about this in a presentation I had to do for my operating systems class. Since there are many processes running, the system might not be able to queue your process fast enough when the period is too short. On the other hand, when it has more time, the process gets queued in time; it also has to do with priority. I hope this was somewhat helpful.
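If scheduling delays like the ones described here are the suspect, one thing worth trying is raising the priority of the thread that does the timed work (for a timer-queue timer that would have to happen inside the callback itself, since it runs on a pool thread). A minimal, illustrative sketch; the choice of THREAD_PRIORITY_TIME_CRITICAL is an assumption, not something taken from this answer.

#include <windows.h>

// Raise the calling thread's priority so the scheduler is less likely to
// delay it behind other runnable threads when a timer tick comes due.
static void boost_current_thread(void)
{
   SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);
}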