
I know that the default is 15.6 ms per tick, but some other process may change it, and keep flipping it back and forth, and I need to poll the current value to perform valid QueryPerformanceCounter synchronization.

So is there an API way to get the timer resolution?

I'm on C++ BTW.

alemjerus

4 Answers


Windows timer resolution is provided by the hidden API call:

NTSTATUS NtQueryTimerResolution(OUT PULONG MinimumResolution, 
                                OUT PULONG MaximumResolution, 
                                OUT PULONG ActualResolution);

NtQueryTimerResolution is exported by the native Windows NT library NTDLL.DLL.

Common hardware platforms report 156,250 or 100,144 for ActualResolution (the values are in 100-ns units, i.e. 15.625 ms and roughly 10.014 ms); older platforms may report even larger numbers; newer systems, particularly when HPET (High Precision Event Timer) or constant/invariant TSC is supported, may report 156,001 for ActualResolution.

Calls to timeBeginPeriod(n) are reflected in ActualResolution.
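
The following is a minimal C++ sketch (my illustration, not part of the original answer) that ties these pieces together: it resolves the hidden export from NTDLL.DLL at run time, since the public SDK headers do not declare it, prints the three values, and shows a timeBeginPeriod(1) call being reflected in ActualResolution. All three values are in 100-ns units; link against winmm.lib for timeBeginPeriod:

#include <windows.h>
#include <winternl.h>   // NTSTATUS
#include <mmsystem.h>   // timeBeginPeriod / timeEndPeriod, link winmm.lib
#include <iostream>

// NtQueryTimerResolution is not declared in the SDK headers,
// so declare a matching function-pointer type ourselves.
typedef NTSTATUS (NTAPI *NtQueryTimerResolutionPtr)(PULONG MinimumResolution,
                                                    PULONG MaximumResolution,
                                                    PULONG ActualResolution);

static void PrintResolution(NtQueryTimerResolutionPtr query, const char *label)
{
    ULONG minimum = 0, maximum = 0, actual = 0;
    if (query(&minimum, &maximum, &actual) == 0)        // STATUS_SUCCESS
        std::cout << label << ": actual " << actual / 10000.0
                  << " ms (min " << minimum / 10000.0
                  << " ms, max " << maximum / 10000.0 << " ms)\n";
}

int main()
{
    // ntdll.dll is mapped into every process, so GetModuleHandle suffices.
    NtQueryTimerResolutionPtr query =
        reinterpret_cast<NtQueryTimerResolutionPtr>(
            GetProcAddress(GetModuleHandleW(L"ntdll.dll"),
                           "NtQueryTimerResolution"));
    if (!query)
        return 1;

    PrintResolution(query, "default");
    timeBeginPeriod(1);                 // request a 1 ms timer period
    PrintResolution(query, "after timeBeginPeriod(1)");
    timeEndPeriod(1);                   // always pair begin/end
    return 0;
}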

More details in [this](http://stackoverflow.com/a/11628374/1504523) answer.

Arno
  • Finally a correct answer. I've also made a small GitHub project which implements this method in C# with P/Invoke: https://github.com/tebjan/TimerTool – thalm Jan 22 '14 at 16:48
  • It should be noted that the tool will only establish a new timer period (_Begin Timer Period_) when the new value is smaller than the actual value. The _Timer End Period_ will only show an effect when no other process has established a timer period of _Current Period_. Side note: _Current_ is not necessarily within the range of _Min_ and _Max_; e.g. systems running at 1024 interrupts/s show Current: 0.9766 with Min: 15.625 and Max: 1. This is due to the limited accuracy the parameters of `NtQueryTimerResolution` can hold. More details in [this](http://stackoverflow.com/a/11628374/1504523) answer. – Arno Jan 23 '14 at 12:40

This won't be helpful; another process can change the resolution while you are calibrating.

This falls in the "if you can't beat them, join them" category. Call timeBeginPeriod(1) before you start calibrating. This ensures that you have a known rate that nobody can change. Getting the improved timer accuracy surely doesn't hurt either.

Do note that it is pretty unlikely that you can do better than QueryPerformanceFrequency(). Unless you calibrate for a very long time, the clock rate just isn't high enough to give you extra accuracy since you can never measure better than +/- 0.5 msec. And the timer event isn't delivered with millisecond accuracy, it can be arbitrarily delayed. If you calibrate over long periods then GetTickCount64() is plenty good enough.
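
A sketch of this approach (my illustration; the one-second calibration span is an arbitrary choice): pin the interrupt period with timeBeginPeriod(1), spin to a tick boundary, and then count QPC ticks across a known span of GetTickCount64() time. Link against winmm.lib:

#include <windows.h>
#include <mmsystem.h>   // timeBeginPeriod / timeEndPeriod, link winmm.lib
#include <iostream>

int main()
{
    timeBeginPeriod(1);                     // force a known 1 ms interrupt rate

    LARGE_INTEGER freq, start, stop;
    QueryPerformanceFrequency(&freq);

    // Spin to a tick-counter edge so the measurement starts on a boundary.
    ULONGLONG base = GetTickCount64();
    while (GetTickCount64() == base) { }
    base = GetTickCount64();

    QueryPerformanceCounter(&start);
    const ULONGLONG spanMs = 1000;          // calibrate over roughly one second
    while (GetTickCount64() - base < spanMs) { }
    QueryPerformanceCounter(&stop);

    ULONGLONG elapsedMs = GetTickCount64() - base;
    double qpcTicks = double(stop.QuadPart - start.QuadPart);
    std::cout << "reported QPC frequency: " << freq.QuadPart << " Hz\n"
              << "measured QPC ticks per ms: " << qpcTicks / elapsedMs << "\n";

    timeEndPeriod(1);
    return 0;
}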

Hans Passant
  • I need to know exactly how many CPU ticks are in a single system timer tick to sync one with the other. This gives me ~300 ns timer resolution. – alemjerus Jan 16 '14 at 09:27
  • What does "CPU ticks" mean? A CPU has a clock cycle and runs at a few gigahertz. It changes dynamically on modern processors and doesn't have anything to do with QueryPerformanceCounter or timeBeginPeriod. – Hans Passant Jan 16 '14 at 09:31
  • OK, I meant: how many QPC ticks are in a system timer tick. I know the default is about 47,800, but I need to be sure even when the system timer precision has been changed. It does not matter if I miss the change once or twice; I want microseconds, after all. – alemjerus Jan 16 '14 at 09:33
  • Asked and answered. Since the system clock interrupt rate changes completely outside of your control, the number of QPC ticks per clock tick is undetermined unless you force the rate yourself. You can calibrate it with some odds that you get a consistent number, but it won't be the same anymore after you start Chrome. If your code has a dependency on the interrupt rate then it has a bug that you need to fix. – Hans Passant Jan 16 '14 at 09:39
  • I don't really need it to be consistent. I just need to know what it is NOW, and a couple of misses is not a problem, since the main goal is to know whether a specific tick change is a "real" one or just a thread slice. – alemjerus Jan 16 '14 at 09:52
  • @HansPassant: _...unlikely that you can do better than QueryPerformanceFrequency()_. Note: Newer versions of Windows do a calibration of the frequency (`QueryPerformanceFrequency`), but they never repeat that calibration during operation. Older versions (e.g. XP) even assume this to be a constant fixed value (e.g. 3579545 ticks/s), which it is not. That all gives plenty of room for improvement. A calibration can help here. – Arno Jan 21 '14 at 17:18

The RDTSC instruction may be used to read the CPU time-stamp counter. In most cases (if not all), this counter will change at the CPU clock rate. If you want to be picky, you can also use an instruction like CPUID to serialize instructions. Refer to the Intel manuals for more details.

You can work RDTSC against APIs like QueryPerformanceCounter, et al. In other words, use RDTSC before and after a call to make measurements.
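
A sketch of that pattern using the MSVC intrinsics (my illustration; __cpuid is used as the serializing instruction the answer mentions, and the QueryPerformanceCounter call is just an arbitrary thing to measure):

#include <windows.h>
#include <intrin.h>     // __rdtsc, __cpuid (MSVC; GCC/Clang have x86intrin.h)
#include <iostream>

// Serialize the instruction stream with CPUID, then read the TSC.
static unsigned __int64 SerializedRdtsc()
{
    int cpuInfo[4];
    __cpuid(cpuInfo, 0);                // CPUID acts as a serializing barrier
    return __rdtsc();
}

int main()
{
    LARGE_INTEGER qpc;

    unsigned __int64 before = SerializedRdtsc();
    QueryPerformanceCounter(&qpc);      // the call being measured
    unsigned __int64 after = SerializedRdtsc();

    std::cout << "QueryPerformanceCounter took about "
              << (after - before) << " TSC ticks\n";
    return 0;
}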

DednDave

WINAPI function GetSystemTimeAdjustment
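
For reference, a minimal call (my sketch); both output intervals are reported in 100-ns units, and whether the second value really tracks the timer interrupt period is exactly what the comments below dispute:

#include <windows.h>
#include <iostream>

int main()
{
    DWORD adjustment = 0, increment = 0;
    BOOL disabled = FALSE;

    if (GetSystemTimeAdjustment(&adjustment, &increment, &disabled))
    {
        // Both intervals are in 100-ns units.
        std::cout << "adjustment per period: " << adjustment << " x 100 ns\n"
                  << "time increment: " << increment / 10000.0 << " ms\n"
                  << "adjustment disabled: " << (disabled ? "yes" : "no") << "\n";
    }
    return 0;
}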

Wojtek Surowka
  • No, that's not it. That is a facility for correcting the Windows clock. – alemjerus Jan 16 '14 at 09:25
  • The second argument is "Pointer to a DWORD that the function sets to the interval, counted in 100-nanosecond units, between periodic time adjustments. This interval is the time period between a system's clock interrupts." – Wojtek Surowka Jan 16 '14 at 09:33
  • Are you absolutely sure? Logic says it could be an hour or so if the time change itself is minimal. And more than that, time adjustments may be disabled altogether! – alemjerus Jan 16 '14 at 09:36
  • Adjustments may be minimal, but the second argument is the "time period between system clock interrupts", which sounds like the Windows tick interval. – Wojtek Surowka Jan 16 '14 at 09:39
  • _For each lpTimeIncrement period of time that actually passes, lpTimeAdjustment will be added to the time of day_. This does not at all reflect any relation with a timer resolution/interrupt period. In fact it is independent; this independence was invisible on Windows XP, but it became more and more visible with Windows Vista / Windows 7. Only with Windows 8/8.1 did the independence become very obvious. The only thing it tells is the adjustment gain (the ratio of the two). – Arno Jan 21 '14 at 18:00