
I'm currently curious about using the HPET timer to get microsecond resolution timing. There seems to be very little information about using this device online. I did find that Linux provides an HPET driver, with an example in the kernel source demonstrating a user mode API, and an old mailing list thread that seems to indicate there is (was?) a kernel mode API for using it as well, but little documentation outside that.

So far, I have been unable to find any sort of equivalent Windows HPET driver. Does Windows provide any sort of interface, user mode or kernel mode, for accessing and using the HPET on x86 platforms? Google is failing me here, as it seems mostly flooded with forum posts and articles inquiring about enabling/disabling HPET for performance reasons.

James Parsons
  • You cannot use [QueryPerformanceCounter API](https://learn.microsoft.com/en-us/windows/win32/sysinfo/acquiring-high-resolution-time-stamps)? OK, it's sometimes based on HPET, but if you just need microsecond resolution timing, then maybe you don't care from which source it comes? – Chris O Feb 06 '21 at 19:37

1 Answer


Most of the reason for operating systems to exist is to abstract low level hardware details, so that software (e.g. applications) get the advantages of newer/better hardware instead of breaking every time any piece of hardware changes.

For example: you get "files" (and don't have to care about SCSI vs. SATA vs. NVMe, or FAT vs. NTFS vs. whatever else), "sockets" (and don't have to care about wired Ethernet vs. Wi-Fi vs. InfiniBand vs. whatever else), "threads" (and don't have to care much about literal CPUs), and "virtual memory" (and don't have to care about actual physical RAM).

In the same way; each OS will provide some kind of high performance/high precision timer API. This API may or may not use HPET internally (but you have no reason to care if it does or doesn't because you don't want broken code that breaks constantly).

For modern 80x86 systems; that high performance/high precision timer API will most likely use the CPU's TSC and local APIC timer (because it's better/more precise/lower overhead) and won't use HPET. For extremely old 80x86 computers it will probably use PIT (simply because better options, including HPET, don't exist in hardware). For other architectures (ARM, Sparc, PowerPC, ...) the same API will use whatever actually makes sense for that architecture.

Essentially; if any OS exists that does give direct "un-abstracted" access to the underlying HPET device; then that OS is a fragile mess that failed to do its job and should be abandoned ASAP.

For Windows; the API is in 3 parts:

a) High precision time stamps (QueryPerformanceCounter(), GetSystemTimePreciseAsFileTime()). Note that these may be deliberately "nerfed" for security reasons (to make timing side-channel attacks a little harder, because the CPU's TSC is a little too good).

b) High precision time delays (Sleep(), Waitable Timer Objects - see https://learn.microsoft.com/en-us/windows/win32/sync/waitable-timer-objects ).

c) "High enough" precision time events (SetTimer() and WM_TIMER messages - see https://learn.microsoft.com/en-us/windows/win32/winmsg/using-timers ). Note that the precision here doesn't need to be awesome (e.g. nanosecond precision) because message delivery latency (e.g. how long a message sits in a queue waiting for you to receive it while you're handling other messages) would make "excessive precision" unusable anyway.

Brendan
    You might want to clarify exactly how b) and c) can be used "to get microsecond resolution timing", those just seem very odd to me. – Chris O Feb 08 '21 at 15:07