
The idea is that an existing project uses timeGetTime() (for Windows targets) quite frequently.

milliseconds = timeGetTime();

Now, this could be replaced with

double tmp = (double)lpPerformanceCount.QuadPart / lpFrequency.QuadPart;
milliseconds = rint(tmp * 1000);

with lpPerformanceCount.QuadPart and lpFrequency.QuadPart being taken from single calls to QueryPerformanceCounter() and QueryPerformanceFrequency(), respectively.

I know Windows' internals are kind of voodoo, but can someone decipher which of the two is more accurate and/or has more overhead?

I suspect the accuracy is the same, but QueryPerformanceCounter might have less overhead. I have no hard data to back that up, though.

Of course I wouldn't be surprised if the opposite is true.

If the overhead is tiny either way, I would be more interested in whether there's any difference in accuracy.

j riv

5 Answers


The accuracy of timeGetTime() is variable, based on the last used timeBeginPeriod. It will never be better than one millisecond. QueryPerformanceCounter is variable too, depending on hardware support. It will never be worse than about a microsecond.

Neither of them has notable overhead; QPC is probably a bit heavier. Whether that's significant to you is quite unclear from your question. I doubt it, but measure. With QPC.

Hans Passant
  • If microseconds are converted to milliseconds would it be more accurate? – j riv Aug 04 '10 at 20:51
  • Well, that's a very deep question. I'll take the high road on that one: yes. There is no way that timing code execution down to the *microsecond* level on common operating systems will ever give you an accurate value. The last 4 digits are just noise, changing constantly when you repeat the timing test over and over again. So, yes, just throwing away the noise digits gives you a more stable number. – Hans Passant Aug 04 '10 at 21:10
  • Continued: more stable. But not more accurate. The relative error is about the same, a wee bit more for timing values in milliseconds. Very wee. – Hans Passant Aug 04 '10 at 21:30

We have updated the documentation for QueryPerformanceCounter, and it should help answer the questions above. Please see

http://msdn.microsoft.com/en-us/library/windows/desktop/dn553408(v=vs.85).aspx

Ed Briggs Microsoft Corporation

Ed Briggs

Be careful: QueryPerformanceCounter may be processor dependent. If your thread grabs the perf counter on one CPU, and ends up on another CPU before it grabs again, the results may not be reliable. See the MSDN entry.

Michael Kohne
  • That doesn't appear to have a clean solution since forcing something on 1 CPU is not good at all for performance, in common cases. – j riv Aug 04 '10 at 20:50
  • the MSDN entry you linked to says that this is only an issue with buggy HAL or BIOS. Funnily enough, "it works, unless there's a bug" is true for timegetTime as well. And for every other piece of software ever written. – jalf Aug 04 '10 at 21:37
  • @jalf: Buggy BIOSes are, unfortunately, rather common. – caf Aug 05 '10 at 00:12
  • @Lela: The problem is that the performance counters are something that low-end vendors will NOT necessarily test, and which don't show up much in production software. Therefore, bugs in them DO NOT GET FIXED. Take your chances as you will, but I avoid the perf counters on multi-core or multi-CPU systems except for debugging (and then I'm careful). – Michael Kohne Aug 05 '10 at 01:21

Accuracy is better on QPC. timeGetTime is accurate within the 1-10ms range (and its resolution is no finer than 1ms), whereas QPC can give you accuracy in the microsecond range.

The overhead varies. QPC uses the best hardware timer available. That may be some lightweight one built into the CPU, or it may have to go out to the motherboard which adds significant latency. And it might be made more expensive by having to go through a driver correcting for the timer hardware being buggy.

But neither is prohibitively expensive. If you're not going to call the timer millions of times per second, the overhead is insignificant for both.

jalf
  • But would it be more accurate even if it's converted to milliseconds? – j riv Aug 04 '10 at 20:48
  • Maybe. Because then at least it'd give you the time to the nearest ms, which timeGetTime might not be able to do on all systems. But if you don't need the accuracy, and you don't need the resolution, and you're not calling it often enough for the performance to be critical, **why are you wasting your time worrying about which timer to use**? Every timer provided by the OS is good enough then, and you could have saved yourself several hours by *just picking a timer*. – jalf Aug 04 '10 at 21:39

QueryPerformanceCounter does not quite give you a time. To convert its values into time measures you have to use QueryPerformanceFrequency, which is supposed to tell you the rate at which the counter increments. But the frequency value is more or less an estimate. The actual frequency of the counter can vary with the underlying hardware and with the version of the OS, and it should not be treated as a constant: it has an offset and is sometimes accompanied by thermal drift. That said, I would recommend using QueryPerformanceCounter with care.

Some still confuse accuracy with granularity: QueryPerformanceCounter has the finer granularity, while timeGetTime has the better accuracy.

However, the fastest source is GetSystemTimeAsFileTime, which returns a time value in 100 ns units. But its granularity is not 100 ns; it depends on the result of timeGetDevCaps and the setting of timeBeginPeriod. Setting the latter properly can result in a granularity of about 10000 units, which corresponds to about 1 ms.

I've written some more details here.

Arno