6

By default, GetTickCount and timeGetTime have the same resolution -- 15.625 ms. But after I call timeBeginPeriod(1), GetTickCount still updates every 15.625 ms, while timeGetTime does update every 1 ms. Why is this?
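
A minimal sketch that reproduces the observation: spin until each counter returns a new value and print the step size, before and after timeBeginPeriod(1) (link against winmm.lib):

    #include <windows.h>
    #include <mmsystem.h>   // timeGetTime, timeBeginPeriod/timeEndPeriod
    #include <stdio.h>
    #pragma comment(lib, "winmm.lib")

    // Spin until the value returned by fn changes and return the observed step in ms.
    static DWORD MeasureStep(DWORD (WINAPI *fn)())
    {
        DWORD start = fn(), next;
        while ((next = fn()) == start) { /* busy wait */ }
        return next - start;
    }

    int main()
    {
        printf("GetTickCount step: %lu ms\n", MeasureStep(GetTickCount));
        printf("timeGetTime  step: %lu ms\n", MeasureStep(timeGetTime));

        timeBeginPeriod(1);   // request a ~1 ms clock interrupt period
        printf("after timeBeginPeriod(1):\n");
        printf("GetTickCount step: %lu ms\n", MeasureStep(GetTickCount));
        printf("timeGetTime  step: %lu ms\n", MeasureStep(timeGetTime));
        timeEndPeriod(1);     // always pair with timeEndPeriod
        return 0;
    }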

In "Bug in waitable timers?", the author mentioned:

RTC based timer

I am wondering: if GetTickCount and timeGetTime come from the same RTC, why do they have two different resolutions?

thanks!

ajaxhe
  • 571
  • 1
  • 5
  • 13
  • Related reading: http://blogs.msdn.com/b/larryosterman/archive/2009/09/02/what-s-the-difference-between-gettickcount-and-timegettime.aspx – Cody Gray - on strike Mar 13 '13 at 12:56
  • I don't get this, why on Earth would you expect two *different* winapi functions to behave the same? If it was *designed* to be the same then of course there wouldn't have been any need to add a separate function. – Hans Passant Mar 13 '13 at 14:45
  • @CodyGray thanks for your information. As the author mentioned: "KeGetTickCount get tick count; KeQueryInterruptTime get interrupt time count". What is the difference between the tick count and the interrupt time count? I am also puzzled about how many underlying timers Windows uses. – ajaxhe Mar 14 '13 at 01:17
  • @HansPassant Thanks for your reply. I know they are different WinAPI functions, but what causes the difference, from the Windows kernel's point of view? – ajaxhe Mar 14 '13 at 01:23
  • A Microsoft programmer, no doubt. Hard to see why *any* of this is relevant. – Hans Passant Mar 14 '13 at 01:25
  • @HansPassant In http://forum.sysinternals.com/topic16229.html, the author mentioned: "timeGetTime, takes into account the increased resolution brought about by timeBeginPeriod, while GetTickCount, IMHO, is just counting the number of "global quantum intervals" (see above) aka "ticks" which are independent of the resolution." What do "quantum intervals" or "ticks" mean? Thanks! – ajaxhe Mar 14 '13 at 02:06
  • 1
    With that link, it looks like you have your answer. There are several great explanations posted by the user dirbase. Have you read them all? Any of those would be suitable for summarizing and posting to this question as an answer--did you know that you can submit and accept your own answer to your question? Like Hans, I wonder what problem you're trying to solve where this information is relevant. Polling is almost always the wrong solution to a problem, and there are better functions for performance profiling, so it's rare to need highly accurate results from either of these APIs. – Cody Gray - on strike Mar 14 '13 at 03:27
  • @CodyGray Maybe I have some idea now, from forum.sysinternals.com/topic16229.html: "By default, the clock interrupt and timer tick are the same, but the OS or applications can change the clock interrupt period, while the timer tick period never changes." – ajaxhe Mar 16 '13 at 06:29

3 Answers

1

I think the OP is getting confused between timers, interrupts, and timer ticks.

The quantum interval is the timer tick period. This is hardwired into the system at 18.2 ticks/sec. This never varies for any reason, and is not based on the system CPU clock (obviously!).

You can ask the system for two different things: the date and time (GetTime), or the amount of time the system has been running (GetTickCount/GetTickCount64).

If you're interested in the uptime of the system, use GetTickCount. From my limited understanding, GetInterruptTime only returns the amount of time spent during real-time interrupts (as opposed to time spent running applications or idle).
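
For example, a minimal sketch that reads the uptime with GetTickCount64 (which, unlike GetTickCount, does not wrap after roughly 49.7 days):

    #include <windows.h>
    #include <stdio.h>

    int main()
    {
        ULONGLONG ms = GetTickCount64();   // milliseconds since boot
        printf("Uptime: %llu ms (~%.1f hours)\n", ms, ms / 3600000.0);
        return 0;
    }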

I'm not sure that telling a new programmer to stop asking "why?" is going to help them. Yes, the OP hasn't seen or read the comments on the page mentioned; but asking here shouldn't be a privilege granted only to searchers who have exhausted all other avenues (possibly including the Seeing Stones of C). We ALL learn here. Stop telling people their question is pointless without telling them why. And there is no reason not to ask. Timers can be confusing!

1

Actually, the table you quote is wrong for QueryPerformanceCounter. QPC (for short) is implemented in terms of 3 possible timing sources: 1) the HPET, 2) the ACPI PM timer, 3) RDTSC. The choice is made by heuristics depending on conditions, kernel boot options, bugs in the BIOS, and bugs in the ACPI code provided by the chipset. All of these bugs are discovered on a per-piece-of-hardware basis in Microsoft's labs. Linux and BSD programmers have to find them by themselves the hard way and usually must rewrite the ACPI code to work around them. The Linux community has come to hate RDTSC as much as ACPI, for different reasons. But anyway...
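
Whichever source the kernel picks, the API is used the same way; a minimal sketch (error checking omitted):

    #include <windows.h>
    #include <stdio.h>

    int main()
    {
        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);   // counts per second, fixed at boot
        QueryPerformanceCounter(&t0);
        Sleep(100);                         // stand-in for the work being measured
        QueryPerformanceCounter(&t1);
        printf("QPC frequency: %lld Hz\n", freq.QuadPart);
        printf("elapsed: %.3f ms\n",
               1000.0 * (t1.QuadPart - t0.QuadPart) / freq.QuadPart);
        return 0;
    }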

timeGetTime is different from GetTickCount because GetTickCount's behavior is specified by the documentation and, for stability, could not be changed. However, Windows needed a finer tick resolution in some cases to allow better timer functions. (A timer works by sending messages to the application, retrieved by GetMessage or PeekMessage, which then branches to the right callback to handle the timer.) This is needed for multimedia, such as sound/audio sync.
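
A minimal sketch of that message-pump mechanism (illustrative only): SetTimer posts WM_TIMER to the thread's queue, GetMessage retrieves it, and DispatchMessage routes it to the callback.

    #include <windows.h>
    #include <stdio.h>

    // TIMERPROC callback; dwTime is the GetTickCount value at the time of the tick.
    static void CALLBACK OnTimer(HWND, UINT, UINT_PTR, DWORD dwTime)
    {
        printf("tick at %lu ms\n", dwTime);
    }

    int main()
    {
        // Thread timer (no window); ask for roughly 15 ms between ticks.
        UINT_PTR id = SetTimer(NULL, 0, 15, OnTimer);

        MSG msg;
        for (int i = 0; i < 10 && GetMessage(&msg, NULL, 0, 0) > 0; ++i)
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);   // calls OnTimer for WM_TIMER
        }
        KillTimer(NULL, id);
        return 0;
    }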

Obviously, game or real-time programming sometimes needs even better precision and cannot use timers. Instead it uses busy waiting, or it sleeps on only one occasion: the VSync, through a call to OpenGL or DirectX upon backbuffer/frontbuffer swapping. The video driver wakes the waiting thread upon the VSync signal from the screen itself. This is an event-based sleep: like a timer, but not based on a timer interrupt.
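
A minimal sketch of the busy-waiting variant (a common pattern, not vendor code): sleep away the coarse part of the interval, then spin on QueryPerformanceCounter for the remainder.

    #include <windows.h>

    // Wait roughly target_ms with sub-millisecond accuracy.
    void PreciseWait(double target_ms)
    {
        LARGE_INTEGER freq, start, now;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&start);

        // Let the scheduler have most of the interval; keep ~2 ms of margin for the spin.
        if (target_ms > 2.0)
            Sleep(static_cast<DWORD>(target_ms - 2.0));

        const LONGLONG target_ticks =
            static_cast<LONGLONG>(target_ms * freq.QuadPart / 1000.0);
        do {
            QueryPerformanceCounter(&now);   // burn CPU until the deadline
        } while (now.QuadPart - start.QuadPart < target_ticks);
    }

    int main()
    {
        PreciseWait(16.7);   // e.g. one 60 Hz frame
        return 0;
    }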

It should be noted that modern kernels have dynamic ticking (a tickless kernel, since Windows 8 and Linux 2.6.18). The finest tick-interrupt frequency cannot be brought under 1 ms to avoid choking the system, but there is no upper limit on the period. If no application is running and posting timing events, the machine may sleep indefinitely, allowing the CPU to go down to the deepest sleep state (Intel Enhanced SpeedStep C7 state). After that, the next wake-up event most of the time happens because of a device interrupt, mostly USB (a mouse move or other input).

v.oddou
  • 6,476
  • 3
  • 32
  • 63
  • *"The finest frequency of tick interruption cannot be brought under 1ms to avoid to choke, but there is no upper limit."* It can actually, you have to use `NtSetTimerResolution`. I've seen it go down to `0.496 ms`. – user541686 Jul 30 '18 at 02:04
1

For those of you curious as to why the system tick used to run at 18.2 Hz, here is the explanation: the original IBM PC released in 1981 had a clock speed of 4.77 MHz and used an Intel 8253 programmable interval timer. The timer had a prescaler of 4 and was programmed with a value of 0 for the system timer, which gave a count interval of 65536. So system timer frequency = (4770000 / 4) / 65536 = 18.2 Hz. All modern PCs still use this configuration, even though the clocks are now much higher and the 8253 is now obsolete (but still implemented within the chipset).
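
The same arithmetic, spelled out:

    #include <stdio.h>

    int main()
    {
        const double input_clock = 4770000.0;   // original IBM PC clock, Hz
        const double prescaler   = 4.0;         // divide-by-4 into the 8253
        const double divisor     = 65536.0;     // programming a count of 0 means 65536
        printf("system tick = %.1f Hz\n", input_clock / prescaler / divisor);   // ~18.2 Hz
        return 0;
    }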

Modern operating systems like Windows 10 and Linux, however, program this timer to generate a system tick at 1000 Hz instead of the old 18.2 Hz.

Rajiv
  • 11
  • 1