
I know that GetTickCount() and timeGetTime() have different resolutions and that the timer resolution of timeGetTime() can be set via calls to timeBeginPeriod().

My understanding is that increasing the timer's resolution using timeBeginPeriod() reduces the system's sleep time between successive increments of the counter behind timeGetTime().
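
As a quick illustration (not part of the test program further down, and assuming winmm.lib is linked), a minimal check of the increment size of timeGetTime() could look like the sketch below; with the period set to 1 ms, the step should shrink accordingly:

#include <windows.h>
#include <iostream>

#pragma comment(lib, "winmm.lib")

// returns the size of one increment of timeGetTime(), in ms
DWORD measureStep() {
    DWORD start = timeGetTime();
    while (timeGetTime() == start) ;   // wait for the next increment
    DWORD t1 = timeGetTime();
    while (timeGetTime() == t1) ;      // wait for one more increment
    return timeGetTime() - t1;
}

int main() {
    std::cout << "default step: " << measureStep() << " ms" << std::endl;

    timeBeginPeriod(1);
    std::cout << "1 ms period step: " << measureStep() << " ms" << std::endl;
    timeEndPeriod(1);

    return 0;
}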

Let's say the resolution of GetTickCount() is 16 ms (its value is incremented by 16 every 16 ms), and I have set the resolution of timeGetTime() to 1 ms (its value is incremented by 1 every 1 ms). My question is about the point in time at which the tick counter is updated. I wrote a small test program to see how far timeGetTime() lags behind the tick counter at the moment the tick counter is incremented. By lag I mean the difference GetTickCount() - timeGetTime() right when GetTickCount() updates. For example, a lag of 0 would mean the tick counter jumps from 16 to 32 at the moment timeGetTime() returns 32, while a lag of 4 means the tick counter jumps from 16 to 32 when timeGetTime() returns 28. Here's the code:

#include <windows.h>
#include <iostream>
#include <vector>

// timeBeginPeriod()/timeGetTime() require linking winmm.lib
#pragma comment(lib, "winmm.lib")

int main(void) {

    // set timer resolution to 1ms
    timeBeginPeriod(1);

    // record the lag GetTickCount() - timeGetTime() right after each
    // increment of the tick counter
    std::vector<int> difftime(200);
    DWORD lasttick;

    for (int i = 0; i < 200; ++i) {
        lasttick = GetTickCount();
        // busy-wait until the tick counter changes
        while (!(GetTickCount() - lasttick)) ;
        difftime[i] = static_cast<int>(GetTickCount() - timeGetTime());
    }

    // reset timer resolution
    timeEndPeriod(1);

    // printout
    std::cout << "timediff" << std::endl;
    for (int i = 0; i < 200; ++i) {
        std::cout << difftime[i] << std::endl;
    }

    return 0;
}

What surprised me was that while the lag between the two functions is constant during one run of my program, it varies widely between repeated executions of the program. I expected the counters behind those two functions to always run in the background, so I figured the lag should be constant between executions.

At first I thought that increasing the resolution of the timer behind timeGetTime() might cause this by introducing some random lag between the two, but even when I leave the resolution at 1 ms for every execution, the lag still varies from one execution to the next.

Does anybody know what mechanism causes this kind of behavior?

  • The accuracy of GetTickCount() is affected the exact same way as timeGetTime() by a call to timeBeginPeriod(). And Sleep(), GetSystemTime(), etcetera. It all boils down to the same underlying kernel code; timeBeginPeriod increases the clock interrupt rate. Using a hot wait loop is a bad idea; the OS scheduler will put you in the dog house for a while after you burned through the thread quantum. timeSetEvent() or CreateTimerQueueTimer() do not have that problem. – Hans Passant Feb 15 '18 at 17:55
  • If GetTickCount() is affected by timeBeginPeriod() in the same way, why does its resolution stay at 15.6 ms no matter what I set the timer resolution to (in contrast to timeGetTime())? – d0d0 Feb 15 '18 at 18:08
  • Why do you ask this question? I'm smelling an XY problem. – zett42 Feb 16 '18 at 08:48
  • I want to find out which timer function to compare to the result of GetMessageTime() to get the most accurate time difference. GetMessageTime() appears to be based on GetTickCount(), so if there is a fixed lag between that and timeGetTime(), comparing GetMessageTime() to timeGetTime() would still be more accurate than comparing it to GetTickCount(). If the lag is not fixed, however, that brings no advantage. – d0d0 Feb 16 '18 at 11:05
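
Regarding the hot wait loop: a sketch of the CreateTimerQueueTimer() alternative mentioned in the comment above could look like the following. It does not try to catch the exact moment GetTickCount() increments; it just samples the difference roughly once per millisecond from a callback, and assumes winmm.lib is linked:

#include <windows.h>
#include <iostream>

#pragma comment(lib, "winmm.lib")

static volatile LONG g_samples = 0;

// called on a thread-pool thread roughly once per millisecond
VOID CALLBACK SampleLag(PVOID /*param*/, BOOLEAN /*timerFired*/) {
    LONG n = InterlockedIncrement(&g_samples);
    if (n <= 10) {
        std::cout << "lag: "
                  << static_cast<LONG>(GetTickCount() - timeGetTime())
                  << " ms" << std::endl;
    }
}

int main() {
    timeBeginPeriod(1);                         // 1 ms interrupt period

    HANDLE timer = nullptr;
    // fire after 0 ms, then every 1 ms
    if (!CreateTimerQueueTimer(&timer, nullptr, SampleLag, nullptr,
                               0, 1, WT_EXECUTEDEFAULT)) {
        std::cerr << "CreateTimerQueueTimer failed" << std::endl;
        timeEndPeriod(1);
        return 1;
    }

    Sleep(50);                                  // let some callbacks run

    // INVALID_HANDLE_VALUE makes the call wait for pending callbacks
    DeleteTimerQueueTimer(nullptr, timer, INVALID_HANDLE_VALUE);

    timeEndPeriod(1);
    return 0;
}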
