5

I'd like to characterize the accuracy of a software timer. I'm not concerned so much about HOW accurate it is, but do need to know WHAT the accuracy is.

I've investigated the C function clock(), and the WinAPI functions QueryPerformanceCounter (QPC) and timeGetTime(), and I know that they're all hardware dependent.

I'm measuring a process that could take around 5-10 seconds, and my requirements are simple: I only need 0.1 second precision (resolution). But I do need to know what the accuracy is, worst-case.

While more accuracy would be preferable, I would rather know that the accuracy was poor (500 ms) and account for it than believe that the accuracy was better (1 ms) but not be able to document it.

Does anyone have suggestions on how to characterize software clock accuracy?

Thanks

rob johnson
  • Make a tight loop calling a time function and see how much the timer steps by when it changes. – brian beuning Aug 21 '13 at 00:15
  • @brianbeuning: I'm apparently stealing your idea, although your brainpower transmission is clearly faster than your typing, as I had already started writing it when your comment turned up. – Mats Petersson Aug 21 '13 at 00:17
  • I don't know of anyone who uses software timers on a preemptive, multithreaded, desktop OS. The concept is a bit alien. – Martin James Aug 21 '13 at 00:21
  • @mats I knew I should leave my tin foil hat on :) – brian beuning Aug 21 '13 at 00:22
  • Try lead-foil next time - a bit heavier, and prevents radiation better... ;) – Mats Petersson Aug 21 '13 at 00:28
  • I hope you're not only asking yourself about the precision of the timer but also whether the measurement is giving you the information you want, since your thread could get pre-empted between completing your task and querying the timer, etc. – Ben Voigt Aug 21 '13 at 00:34
  • See some details in [Are Timers and Loops in .Net accurate?](http://stackoverflow.com/a/11537483/1504523) and maybe [Windows 7 timing functions ...](http://stackoverflow.com/a/11743614/1504523). – Arno Aug 21 '13 at 08:42
  • @MartinJames et al., all I need is a gross measurement for a process, to verify that the upper allowable time limit of 10 sec is not exceeded. If clock() is accurate to +/- 500 ms worst case counting latency, then I'll just reduce my upper allowable limit by that amount. I don't want to have to use a counter/timer with awesome precision and accuracy that costs $1000-$2000 and needs calibration and all that jazz. This is just a gross test to screen out clearly defective parts, but it must be quantifiable and not based on ad hoc estimates or anecdotal experience. – rob johnson Aug 21 '13 at 18:29
  • Thanks @Arno, I will look at the links you gave here and in the comments to Hans' answer.... right after lunch :-D – rob johnson Aug 21 '13 at 18:32

3 Answers

11

You'll need to distinguish accuracy, resolution and latency.

clock(), GetTickCount() and timeGetTime() are derived from a calibrated hardware clock. Resolution is not great: they are driven by the clock tick interrupt, which by default ticks 64 times per second, or once every 15.625 msec. You can use timeBeginPeriod() to drive that down to 1.0 msec. Accuracy is very good: the clock is calibrated from an NTP server, and you can usually count on it not being off by more than a second over a month.
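A minimal sketch of using that (my illustration, not from the original answer; assumes Windows and linking against winmm.lib):

 // Sketch: raise the clock-tick interrupt to 1 ms and time a dummy workload
 // with timeGetTime(). Assumes Windows; link with winmm.lib.
 #include <windows.h>
 #include <mmsystem.h>
 #include <iostream>

 int main()
 {
     timeBeginPeriod(1);              // request a 1 ms tick interrupt period
     DWORD before = timeGetTime();    // milliseconds since system start
     Sleep(50);                       // stand-in for the work being measured
     DWORD after = timeGetTime();
     std::cout << "Elapsed: " << (after - before) << " ms\n";
     timeEndPeriod(1);                // always pair with timeBeginPeriod
     return 0;
 }

Note that timeBeginPeriod() raises the tick rate for the whole machine, so it should always be paired with a matching timeEndPeriod().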

QPC has a much higher resolution, always better than one microsecond and as little as half a nanosecond on some machines. It does, however, have poor accuracy: the clock source is a frequency picked up from the chipset somewhere. It is not calibrated and has typical electronic tolerances. Use it only to time short intervals.
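For completeness, a minimal QPC sketch (again my illustration, assuming Windows): the resolution is 1/frequency seconds, where the frequency comes from QueryPerformanceFrequency() and is fixed at boot.

 // Sketch: time a short interval with QPC. Resolution = 1/frequency seconds.
 #include <windows.h>
 #include <iostream>

 int main()
 {
     LARGE_INTEGER freq, start, end;
     QueryPerformanceFrequency(&freq);   // counts per second, fixed at boot
     QueryPerformanceCounter(&start);
     Sleep(100);                         // stand-in for the measured work
     QueryPerformanceCounter(&end);
     double seconds = double(end.QuadPart - start.QuadPart) / double(freq.QuadPart);
     std::cout << "Elapsed: " << seconds << " s\n";
     return 0;
 }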

Latency is the most important factor when you deal with timing. You have no use for a highly accurate timing source if you can't read it fast enough. And that's always an issue when you run code in user mode on a protected-mode operating system, which always has code that runs with higher priority than your code. Device drivers in particular are trouble-makers, video and audio drivers especially. Your code is also subject to being swapped out of RAM, requiring a page fault to get it loaded back. On a heavily loaded machine, not being able to run your code for hundreds of milliseconds is not unusual. You'll need to factor this failure mode into your design. If you need guaranteed sub-millisecond accuracy then only a kernel thread with real-time priority can give you that.
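You cannot get a real-time kernel thread from user mode, but a rough user-mode mitigation (my sketch, not something this answer prescribes) is to raise the process and thread priority and accept that the scheduler can still preempt you:

 // Sketch: reduce (not eliminate) scheduling latency by boosting priority.
 // REALTIME_PRIORITY_CLASS can starve the rest of the system; use sparingly.
 #include <windows.h>

 void boost_timing_priority()   // hypothetical helper for short measurements
 {
     SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);
     SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);
 }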

A pretty decent timer is the multimedia timer you get from timeSetEvent(). It was designed to provide good service for the kind of programs that require a reliable timer. You can make it tick at 1 msec, and it will catch up with delays when possible. Do note that it is an asynchronous timer: the callback is made on a separate worker thread, so you have to take care of proper thread synchronization.
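A minimal sketch of such a 1 msec periodic timer (my illustration, assuming Windows and winmm.lib; a real callback would do whatever bookkeeping your measurement needs):

 // Sketch: a 1 ms periodic multimedia timer. The callback runs on a separate
 // worker thread, so anything it touches must be thread-safe.
 // Assumes Windows; link with winmm.lib.
 #include <windows.h>
 #include <mmsystem.h>
 #include <iostream>

 volatile LONG g_ticks = 0;

 void CALLBACK TickProc(UINT, UINT, DWORD_PTR, DWORD_PTR, DWORD_PTR)
 {
     InterlockedIncrement(&g_ticks);   // keep the callback short and lock-free
 }

 int main()
 {
     timeBeginPeriod(1);
     MMRESULT id = timeSetEvent(1, 1, TickProc, 0, TIME_PERIODIC);
     Sleep(1000);                      // let the timer run for about a second
     timeKillEvent(id);
     timeEndPeriod(1);
     std::cout << "Ticks in ~1 s: " << g_ticks << "\n";
     return 0;
 }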

Hans Passant
  • Clock drift is typically in the range of a few ppm. That's a few microseconds per second. This will build up a total drift of many seconds per month, and Microsoft states in [Support boundary to configure the Windows Time service for high accuracy environments](http://support.microsoft.com/kb/939322/en-us) that _"The W32Time service cannot reliably maintain sync time to the range of 1 to 2 seconds. Such tolerances are outside the design specification of the W32Time service."_ – Arno Aug 21 '13 at 08:56
  • And: ... heavily loaded machine ... If you tack down the hands of a clock, you won't get decent timing either. Even real-time operating systems won't be able to react in a timely manner when loaded too heavily. – Arno Aug 21 '13 at 09:05
  • Hans and Arno, thanks. Resolution is not important, and I don't need especially high accuracy, but I need to be able to specify what the worst-case accuracy (and possibly latency) is. I think I will forget about QPC for now. Hans, you say clock() is accurate to 1 s per month. @Arno, you give a number that is 'typically' in the area of a few ppm. This is what I need to know. Do you know where I can find a resource to cite, or even to help me calculate what this timebase accuracy actually IS, in either ppm or some other quantity? Thanks again – rob johnson Aug 21 '13 at 18:05
  • I doubt if `clock` is defined to be accurate to a few ppm on a Windows machine, for example. I'm not at all sure it's accurate even on a Linux machine UNLESS it's constantly calibrated using NTP daemon (which may or may not be configured in a given system). – Mats Petersson Aug 21 '13 at 18:21
2

Since you've asked for hard facts, here they are:

A typical frequency device controlling HPETs is the CB3LV-3I-14M31818, which specifies a frequency stability of +/- 50 ppm between -40 °C and +85 °C. A cheaper chip is the CB3LV-3I-66M6660. This device has a frequency stability of +/- 100 ppm between -20 °C and +70 °C.

As you see, 50 to 100 ppm will result in a drift of 50 to 100 µs/s, 180 to 360 ms/hour, or 4.32 to 8.64 s/day!

Devices controlling the RTC are typically somewhat better: the RV-8564-C2 RTC module provides tolerances of +/- 10 to 20 ppm. Tighter tolerances are typically available in military versions or on request. The deviation of this source is a factor of 5 less than that of the HPET. However, it is still 0.86 s/day.

All of the above values are maximum values as specified in the data sheets. Typical values may be considerably less; as mentioned in my comment, they are in the few-ppm range.
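Scaled to the 5-10 second measurement in the question (my arithmetic, using the worst-case data-sheet figure above):

 100 ppm × 10 s = 100 × 10⁻⁶ × 10 s = 1 ms

so the oscillator itself contributes only about a millisecond over the interval being measured; the tick granularity and read latency discussed in the other answers matter far more at that scale.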

The frequency values are also subject to thermal drift. The result of QueryPerformanceCounter() may be heavily influenced by thermal drift on systems operating with the ACPI Power Management Timer chip (example).

More information about timers: Clock and Timer Circuits.

Arno
0

For QPC, you can call QueryPerformanceFrequency to get the rate it updates at. Unless you are using time(), you will get better than 0.5 s timing accuracy anyway, but clock() isn't all that accurate - quite often 10 ms granularity [although CLOCKS_PER_SEC is apparently standardized at 1 million, making the numbers APPEAR more accurate].

If you do something along these lines, you can figure out how small a gap you can measure [although at REALLY high frequency you may not be able to notice how small, e.g. a timestamp counter that updates every clock cycle but takes 20-40 clock cycles to read]:

 // Measures the granularity of clock(): wait for time() to tick over to a
 // fresh second, then spin for one full second recording the smallest step
 // clock() makes and how often it changes.
 #include <ctime>
 #include <iostream>
 using namespace std;

 int main()
 {
     time_t t, t1;

     t = time(nullptr);
     // wait for the next "second" to tick on
     while (t == (t1 = time(nullptr)))  /* do nothing */ ;

     clock_t old = 0;
     clock_t min_diff = 1000000000;
     clock_t start, end;
     start = clock();
     int count = 0;
     while (t1 == time(nullptr))
     {
         clock_t c = clock();
         if (old != 0 && c != old)
         {
             count++;
             clock_t diff = c - old;
             if (min_diff > diff) min_diff = diff;
         }
         old = c;
     }
     end = clock();
     cout << "Clock changed " << count << " times" << endl;
     cout << "Smallest difference " << min_diff << " ticks" << endl;
     cout << "One second ~= " << end - start << " ticks" << endl;
     return 0;
 }

Obviously, you can apply the same principle to other time sources - for example QPC, as sketched below.

(Not compile-tested, but hopefully not too full of typos and mistakes)
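Applying the same principle to QPC, a minimal sketch (my addition, not part of the original answer; assumes Windows) reads the counter in a tight loop and records the smallest non-zero step, which bounds both the resolution and the cost of reading it:

 // Smallest observable step of QueryPerformanceCounter, in ticks and ns.
 #include <windows.h>
 #include <climits>
 #include <iostream>

 int main()
 {
     LARGE_INTEGER freq, prev, cur;
     QueryPerformanceFrequency(&freq);    // ticks per second, fixed at boot
     QueryPerformanceCounter(&prev);

     LONGLONG min_step = LLONG_MAX;
     for (int i = 0; i < 1000000; ++i)
     {
         QueryPerformanceCounter(&cur);
         LONGLONG diff = cur.QuadPart - prev.QuadPart;
         if (diff > 0 && diff < min_step) min_step = diff;
         prev = cur;
     }

     std::cout << "Smallest step: " << min_step << " ticks ("
               << 1e9 * double(min_step) / double(freq.QuadPart) << " ns)\n";
     return 0;
 }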

Edit: So, if you are measuring times in the range of 10 seconds, a timer that runs at 100 Hz would give you 1000 "ticks". But it could be 999 or 1001, depending on your luck and whether you catch it just right/wrong, so that's 2000 ppm there - the clock input may vary too, but it's a much smaller variation, ~100 ppm at most. On Linux, clock() is updated at 100 Hz (the actual timer that runs the OS may run at a higher frequency, but clock() on Linux will update at 100 Hz or 10 ms intervals, and it only counts CPU time actually used, so sitting 5 seconds waiting for user input is 0 time).
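To put rough numbers on that error budget (a sketch of mine, assuming a 100 Hz clock() source and a 100 ppm oscillator - neither is guaranteed on any particular machine):

 // Rough worst-case error budget for a 100 Hz clock() over a 10 s measurement.
 #include <iostream>

 int main()
 {
     const double tick_s = 0.01;    // one clock() step on a 100 Hz source
     const double ppm    = 100.0;   // assumed worst-case oscillator drift
     const double span_s = 10.0;    // the measurement from the question
     double quantization = 2.0 * tick_s;          // +/- one tick at each end
     double drift        = ppm * 1e-6 * span_s;   // oscillator contribution
     std::cout << "Worst case ~" << (quantization + drift) * 1000.0
               << " ms over " << span_s << " s\n";   // prints ~21 ms
     return 0;
 }

In other words, the one-tick quantization dominates; the oscillator adds only about a millisecond of it.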

On Windows, clock() measures actual elapsed time, same as your wrist watch does, not just the CPU time used, so 5 seconds waiting for user input is counted as 5 seconds of time. I'm not sure how accurate it is.

The other problem that you will find is that modern systems are not very good at repeatable timing in general - no matter what you do, the OS, the CPU and the memory all conspire together to make life a misery for getting the same amount of time for two runs. CPUs these days often run with an intentionally variable clock (it's allowed to drift about 0.1-0.5%) to reduce the electromagnetic radiation spikes that can "sneak out" of that nicely sealed computer box during EMC (electromagnetic compatibility) testing.

In other words, even if you can get a very standardized clock, your test results will vary up and down a bit, depending on OTHER factors that you can't do anything about...

In summary, unless you are looking for a number to fill into a form that requires you to have a ppm number for your clock accuracy, and it's a government form that you can't NOT fill that information into, I'm not entirely convinced it's very useful to know the accuracy of the timer used to measure the time itself, because other factors will play AT LEAST as big a role.

Mats Petersson
  • Thanks Mats, I understand what you're saying. I need something a little more definitive than empirical tests. What I'm looking for is something more like a published timebase accuracy for "clock()" in ppm, even for the worst case, and I don't even care if it's half a second or more, as long as it's quantified. Such a thing may well not exist because, as you imply, it's all hardware dependent and may vary with how the machine is loaded. But if I could find a definitive source to either quote or derive from, based on hardware, OS and whatever else, that would be immensely helpful. – rob johnson Aug 21 '13 at 18:10
  • Uh, I don't think there is such a thing, as there are often choices to be made on a system configuration/system initialization basis (e.g. "Is this system using chip X or chip Y? If it is chip Y, and the chip revision is C8, then we can't use it, so still use the version for X", that sort of thing). And if you mean "how accurate is the `clock` vs. e.g. an atomic clock for long-term timekeeping", then I think the answer would have to be "not very good at all". If you KNOW that the time-base is using the 8254 timer chip, and the motherboard is brand B, perhaps you can find it... But not general... – Mats Petersson Aug 21 '13 at 18:15
  • Thanks again. I'm not looking for "long term" timekeeping, just a timer on the order of 10 seconds or so. I would think one would be able to fairly confidently declare some sort of ppm accuracy for a given time range, even if it was relatively crappy. But I admit I don't have any evidence or experience to support this; I guess I'm just being (probably naively) hopeful. – rob johnson Aug 21 '13 at 23:29