
Consider a machine whose time is smeared during a leap second with a noon-to-noon linear smear.

I'm wondering how the system clock provides accurate Epoch time during the smear period.

Example:

  • A leap second is scheduled for 31 December 2016.

  • On the machine, the Unix timestamp at 11:59:00 UTC on 31 December is 1483185540

  • At noon the smearing starts, so by 1:30 pm the system's local clock is already tens of milliseconds behind TAI and UTC. The Epoch timestamp should be 1483191000 (exactly 1 hour 31 minutes later), which no longer matches TAI/UTC, since Epoch time doesn't account for leap seconds
  • At midnight, UTC adds an extra second (23:59:60); the local smeared clock continues normally
  • By noon on 1 January, global UTC and the local clock are in sync again, and the local Epoch clock is now an entire second behind TAI
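
The arithmetic in the bullets above can be sketched in a few lines of Python; the smear window and the helper function are purely illustrative, not any operating system's real API:

```python
# Hypothetical sketch of a noon-to-noon linear smear around the
# 2016-12-31 leap second; not any OS's actual implementation.
SMEAR_START = 1483185600   # 2016-12-31 12:00:00 UTC as a Unix timestamp
SMEAR_LEN = 86400          # noon to noon: 24 hours

def smear_offset(unix_ts):
    """Seconds the smeared local clock lags true UTC at a given true Unix time."""
    elapsed = min(max(unix_ts - SMEAR_START, 0), SMEAR_LEN)
    return elapsed / SMEAR_LEN   # grows linearly from 0 s to a full 1 s

# 1:30 pm is 5400 s into the smear: the clock lags by 5400/86400 s = 62.5 ms
print(smear_offset(1483191000))   # 0.0625
```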

How is this inaccuracy resolved? Does the local Epoch time skip a second once the system knows a leap second happened, or how is the issue handled otherwise?
Does it depend on the implementation of the clock used to calculate the time? If so, how does GNU coreutils' `date` handle this?

Nicolai Schmid

1 Answer


The inaccuracy is not resolved. The Unix Time remains a count of seconds since 1970-01-01 00:00:00 UTC excluding the inserted leap seconds. This has the benefit of making the count of seconds easy to convert to {year, month, day, hour, minute, second} form.
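
As a quick illustration (Python here purely for demonstration): because the inserted leap second is excluded from the count, consecutive UTC midnights are always exactly 86400 apart in Unix Time, even across 2016-12-31:

```python
from datetime import datetime, timezone

# Every UTC day is exactly 86400 s of Unix Time, leap second or not.
midnight_dec31 = 1483142400   # 2016-12-31 00:00:00 UTC
print(datetime.fromtimestamp(midnight_dec31, tz=timezone.utc))
print(datetime.fromtimestamp(midnight_dec31 + 86400, tz=timezone.utc))
# 2016-12-31 00:00:00+00:00 and 2017-01-01 00:00:00+00:00, even though
# that particular day really lasted 86401 SI seconds
```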

It has the problem that the subtraction of two Unix Time time points that straddle a leap second insertion will result in a time duration that is one second less than reality.
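
For example (again Python, for illustration), take two timestamps one second either side of the 2016 leap second insertion:

```python
import calendar

# Unix timestamps immediately before and after the 23:59:60 leap second
before = calendar.timegm((2016, 12, 31, 23, 59, 59))   # last second of 2016
after = calendar.timegm((2017, 1, 1, 0, 0, 0))         # first second of 2017
print(after - before)   # 1 -- but 23:59:60 occurred in between: really 2 SI seconds
```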

Howard Hinnant
  • So, just to understand it correctly: on the system, the Epoch time is not seconds since 1970-01-01 but rather since 1970-01-01 + 1 second, which will result in inaccuracies when calculating the aforementioned diffs against "real-world" Epoch timestamps? I chose Epoch to avoid leap conflicts and inaccuracies; do I have to add the constraint that local clocks cannot be smeared to provide perfectly accurate timing and diffs? – Nicolai Schmid Jul 02 '19 at 19:14
  • Some people interpret the smear as an epoch shift. I do not. I know of no API that does not format a Unix Time of 0s as 1970-01-01 00:00:00 UTC. If there was an epoch shift, the formatter would have to take that into account and format 0s as 1970-01-01 00:00:27 UTC. – Howard Hinnant Jul 02 '19 at 19:25
  • Computer systems have monotonic clocks which can be used to avoid the timing inaccuracies of leap second smears. Typically these monotonic clocks have no relationship whatsoever to the civil calendar. They are much like a hand held stopwatch: good for timing stuff, not good for telling time. – Howard Hinnant Jul 02 '19 at 19:27
  • @NicolaiSchmid, the Unix timestamp, which is essentially the same as the POSIX timestamp, always counts from 00:00 1 January 1970 and always excludes leap seconds from the count. Most operating systems and programming language libraries, when they make the POSIX timestamp or any similar count-of-seconds timestamp available to an executing process, give a value that excludes leap seconds. When you write "I chose Epoch to avoid leap conflicts and inaccuracies", that phrase can't be understood; you would have to give a code example, with the details of the language version and operating system. – Gerard Ashton Jul 03 '19 at 15:37
  • Then let me elaborate: in general, I prefer Epoch timestamps over UTC timestamps because I don't have to worry about incorrect time-interval lengths between two stamps, since Epoch doesn't have a concept of leap seconds. But now, with leap-smeared clocks, the OS and most libraries don't account for the occurred leap second, so my Epoch timestamp is off by one second compared to TAI time. That's the problem I was referring to. – Nicolai Schmid Jul 03 '19 at 16:11
  • Fwiw, I've proposed a solution to the problem @NicolaiSchmid mentions for the new C++2a `<chrono>` library. There will be an array of clock/time_point sets that the user can choose from, including the classic Unix Time, including a true UTC clock that doesn't ignore leap seconds, and even including clocks based on the TAI and GPS standards. The existing `<chrono>` library also supports a monotonic clock for those timestamps that don't require a relationship with the civil calendar. – Howard Hinnant Jul 03 '19 at 16:28
  • It depends: the "correct" approach is to add a :60 timestamp for each positive leap second and to think in terms of different time scales ("UTC timescale has marched backward relative to the TAI timescale exactly one second on scheduled occasions recorded in the institutional memory" [of] iers.org – Dr. Mills), one for each leap. Smearing spreads out the leap, since virtually all human clocks don't have :60... – Andrew Aug 11 '21 at 12:22
  • ...NTP'd systems with no :60 will either slowly adjust or step the clock per NTP rules, or add an extra :00 or :59 (e.g. in logs). Either way, delta times (e.g. "uptime") or historic timestamps (e.g. "boottime"), or the manual math for those, will be off by one second. – Andrew Aug 11 '21 at 12:22
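
The monotonic-clock approach mentioned in the comments above can be sketched as follows (using Python's `time.monotonic` as one example of such a clock):

```python
import time

# time.monotonic() never jumps backward and is unaffected by wall-clock
# smears or steps, so it is the right tool for measuring durations.
start = time.monotonic()
time.sleep(0.05)                    # the work being timed
elapsed = time.monotonic() - start
print(f"elapsed: {elapsed:.3f} s")  # correct even across a leap-second smear
```

Note that, as the comment says, the value of `time.monotonic()` itself has no defined relationship to the calendar; only differences between two readings are meaningful.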