34

I know that 10 years ago, typical clock precision equaled a system tick, which was in the range of 10-30 ms. Over the past years, precision has been increased in multiple steps. Nowadays, there are ways to measure time intervals in nanoseconds. However, the usual frameworks still return time with a precision of only around 15 ms.

Which steps decrease the precision? How is it possible to measure in nanoseconds? Why are we still often getting worse-than-microsecond precision, for instance in .NET?

Boris Verkhovskiy
mafu
  • Maybe you are experiencing the [Microsoft Minute](http://www.userfriendly.org/cartoons/archives/99mar/19990318.html). – jww Apr 29 '18 at 19:52
  • For clock *accuracy*, I enjoyed [this podcast from Jane Street](https://signalsandthreads.com/clock-synchronization/) where they talk about how they made their data center to be within 100 microseconds of UTC. – Boris Verkhovskiy May 18 '22 at 21:39

3 Answers

27

It really is a feature of the history of the PC. The original IBM PC used a chip called the Real Time Clock, which was battery-backed (do you remember needing to change the batteries on these?). It operated while the machine was powered off and kept the time. Its frequency was 32.768 kHz (2^15 cycles/second), which made it easy to calculate time on a 16-bit system. This real-time clock was then written to CMOS, which was available via an interrupt system in older operating systems.

A newer standard from Microsoft and Intel, the High Precision Event Timer (HPET), specifies a clock speed of 10 MHz (http://www.intel.com/hardwaredesign/hpetspec_1.pdf). Even newer PC architectures take this and put it on the Northbridge controller, where the HPET can run at 100 MHz or even greater. At 10 MHz we should be able to get a resolution of 100 nanoseconds, and at 100 MHz a resolution of 10 nanoseconds.
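
To see what resolution actually reaches your code, here is a minimal sketch of mine (not from the original answer), assuming a .NET runtime where `System.Diagnostics.Stopwatch` wraps the OS high-resolution counter (QueryPerformanceCounter on Windows, which the kernel may back with the TSC, HPET, or ACPI timer; the choice is up to the OS):

```csharp
using System;
using System.Diagnostics;

class TimerResolution
{
    static void Main()
    {
        // Stopwatch exposes whatever high-resolution counter the OS provides.
        Console.WriteLine($"High resolution: {Stopwatch.IsHighResolution}");
        Console.WriteLine($"Frequency:       {Stopwatch.Frequency} ticks/s");

        // One tick lasts 1e9 / frequency nanoseconds:
        // a 10 MHz counter gives 100 ns, a 100 MHz counter gives 10 ns.
        double nsPerTick = 1e9 / Stopwatch.Frequency;
        Console.WriteLine($"Resolution:      {nsPerTick:F1} ns per tick");
    }
}
```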

The following operating systems are known not to be able to use HPET: Windows XP, Windows Server 2003, and earlier Windows versions, as well as older Linux versions.

The following operating systems are known to be able to use HPET: Windows Vista, Windows 2008, Windows 7, x86-based versions of Mac OS X, Linux operating systems using the 2.6 kernel, and FreeBSD.

With a Linux kernel, you need the newer "rtc-cmos" hardware clock device driver rather than the original "rtc" driver.

All that said, how do we access this extra resolution? I could cut and paste from previous Stack Overflow answers, but I won't; just search for HPET and you will find the answers on how to get finer timers working.
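
For completeness, a small sketch (again mine, not from this answer) of measuring an interval with `Stopwatch`, which rides on that high-resolution counter rather than the coarse ~15 ms system tick:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class IntervalTiming
{
    static void Main()
    {
        // Stopwatch uses the high-resolution counter discussed above, so it
        // can resolve intervals far below the ~15 ms system tick interval.
        var sw = Stopwatch.StartNew();
        Thread.Sleep(1);   // placeholder for the work being measured
        sw.Stop();

        Console.WriteLine($"Elapsed: {sw.ElapsedTicks} ticks " +
                          $"({sw.Elapsed.TotalMilliseconds:F3} ms)");
    }
}
```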

Romain Hippeau
  • http://en.wikipedia.org/wiki/High_Precision_Event_Timer#Problems Apparently some implementations of HPET ironically have precision issues, due to being slow to read or having drift, among other issues. Should still be fine to use in most cases though, especially for media playback/syncing (since that's what it was originally meant for). – Alex May 15 '15 at 20:43
  • The HPET specification link is broken. Here's an archived version: https://web.archive.org/web/20090204075023/http://www.intel.com/hardwaredesign/hpetspec_1.pdf – mndrix Nov 12 '21 at 14:50
5

I literally read a blog post on MSDN about this today (read it here); it covers the topic pretty well. It has an emphasis on C#'s DateTime, but it's universally applicable.
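
As a rough illustration of the point that post makes, here is a sketch of mine (not from the article; on newer Windows versions and runtimes the step can be much smaller) that spins on `DateTime.UtcNow` and reports how coarsely the clock actually advances:

```csharp
using System;

class DateTimeGranularity
{
    static void Main()
    {
        // DateTime stores 100 ns ticks, but the underlying clock is only
        // updated on each timer interrupt, classically every ~10-15 ms.
        // Spin until the value changes and report the step size.
        long first = DateTime.UtcNow.Ticks;
        long next = first;
        while (next == first)
            next = DateTime.UtcNow.Ticks;

        // 10,000 ticks of 100 ns each make one millisecond.
        Console.WriteLine($"Clock advanced by {(next - first) / 10_000.0:F3} ms");
    }
}
```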

Quentin
Chris
  • I just read the same. It raised the question, since Eric did not go into detail. His article is only about the basics. – mafu Apr 09 '10 at 12:28
  • @mafutrct :) Measuring time isn't an exact science, because what is time? Time is defined as the period over which events occur. The atomic clock uses an atomic resonance frequency standard as its timekeeping element, making it very accurate. But computers cannot use such accurate measurements, so they use other methods, which are less accurate. This is how, over time, clocks become out of sync. – Chris Apr 09 '10 at 12:33
  • 1
    Well yea, but that does not quite answer the question. Computers can provide ns precision for timediffs, so there should be a way to improve the ms precision we usually get. Also, I'd like to know about the ways this ns precision is already (sometimes) achieved. – mafu Apr 09 '10 at 12:41
  • 2
    Broken link. The new one is https://learn.microsoft.com/en-us/archive/blogs/ericlippert/precision-and-accuracy-of-datetime or you can use the wayback machine here https://web.archive.org/web/20100411072449/http://blogs.msdn.com/ericlippert/archive/2010/04/08/precision-and-accuracy-of-datetime.aspx – Nate Cook Feb 04 '20 at 07:30
2

Well, so far I haven't seen any PC which would keep accurate (real) time to better than, say, 100 ms/day. In all my PCs over the past 40 years or so, the real time is always either fast or slow and can drift as much as 2 or 3 seconds/day. The main reason for this is the accuracy of the PC's crystal oscillator (regardless of frequency), which is driving the clock circuitry. In run-of-the-mill computers those oscillators are NEVER calibrated to their nominal frequency, and there is not even rudimentary compensation for frequency drift caused by changing temperatures within the PC enclosure.

OH2AXE