14

I have a server running in TZ=UTC and I have code like this:

#include <time.h>

time_t t = time(NULL);   /* current calendar time as a time_t */
struct tm tm;
gmtime_r(&t, &tm);       /* convert to broken-down UTC */

The question is: will tm.tm_sec == 60 when the server is within a leap second?

For example, if I were in the following time span:

1998-12-31T23:59:60.00 - 915 148 800.00
1998-12-31T23:59:60.25 - 915 148 800.25
1998-12-31T23:59:60.50 - 915 148 800.50
1998-12-31T23:59:60.75 - 915 148 800.75
1999-01-01T00:00:00.00 - 915 148 800.00

would gmtime() return tm == 1998-12-31T23:59:60 for time_t = 915148800 and, once out of the leap second, return tm == 1999-01-01T00:00:00 for the same time_t?

Yuki
  • [Leap Second Smearing](https://developers.google.com/time/smear) may also be worth consideration. – user2864740 Feb 17 '18 at 22:27
  • @user2864740 Great, thank you for bringing this up; it would also be important to know how `gmtime` will handle this. – Yuki Feb 17 '18 at 22:35

6 Answers

13

The short answer is no: practically speaking, gmtime_r will never fill in tm_sec with 60. This is unfortunate, but unavoidable.

The fundamental problem is that time_t is, per the POSIX standard, a count of seconds since 1970-01-01 UTC assuming no leap seconds.

During the most recent leap second, the progression was like this:

1483228799    2016-12-31 23:59:59
1483228800    2017-01-01 00:00:00

Yes, there should have been a leap second, 23:59:60, in there. But there's no possible time_t value in between 1483228799 and 1483228800.
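
You can verify this on any ordinary POSIX system (a minimal demonstration using the two timestamps above; it assumes a standard timezone configuration rather than a "right" zone, of which more below):

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* The two time_t values bracketing the 2016-12-31 leap second. */
    time_t ts[] = { 1483228799, 1483228800 };

    for (int i = 0; i < 2; i++) {
        struct tm tm;
        gmtime_r(&ts[i], &tm);
        printf("%ld -> %04d-%02d-%02d %02d:%02d:%02d\n",
               (long)ts[i], tm.tm_year + 1900, tm.tm_mon + 1,
               tm.tm_mday, tm.tm_hour, tm.tm_min, tm.tm_sec);
    }
    return 0;
}

On a system keeping ordinary POSIX time this prints 23:59:59 followed immediately by 00:00:00; tm_sec is never 60.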

I know of two ways for a gmtime variant to return a time ending in :60:

  1. You can run your OS clock on something other than UTC, typically TAI or TAI-10, and use the so-called "right" timezones to convert to UTC (or local time) for display. See this web page for some discussion on this.

  2. You can use clock_gettime() and define a new clkid value, perhaps CLOCK_UTC, which gets around the time_t problem by using deliberately nonnormalized struct timespec values when necessary. For example, the way to get a time value in between 1483228799 and 1483228800 is to set tv_sec to 1483228799 and tv_nsec to 1000000000. See this web page for more details.

Way #1 works pretty well, but nobody uses it because nobody wants to run their kernel clock on anything other than the UTC it's supposed to be. (You end up having problems with things like filesystem timestamps, and programs like tar that embed those timestamps.)

Way #2 is a beautiful idea, IMO, but to my knowledge it has never been implemented in a released OS. (As it happens, I have a working implementation for Linux, but I haven't released my work yet.) For way #2 to work, you need a new gmtime variant, perhaps gmtime_ts_r, which accepts a struct timespec instead of a time_t.
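
To make way #2 concrete, here is a hedged sketch. CLOCK_UTC and gmtime_ts_r are hypothetical names from the proposal above and exist in no released OS, so all this can do is show the non-normalized representation itself:

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* HYPOTHETICAL: a leap-second-aware clock could return this
     * deliberately non-normalized value during the 2016-12-31 leap
     * second. No released clock_gettime() clock does so today. */
    struct timespec ts;
    ts.tv_sec  = 1483228799;               /* 2016-12-31 23:59:59 UTC   */
    ts.tv_nsec = 1000000000L + 250000000L; /* i.e. 0.25 s into 23:59:60 */

    /* A hypothetical gmtime_ts_r() would see tv_nsec >= 1e9 and report
     * tm_sec == 60 instead of normalizing into the next day. */
    printf("tv_sec=%lld tv_nsec=%ld\n", (long long)ts.tv_sec, ts.tv_nsec);
    return 0;
}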


Addendum: I just reread your question title. You asked, "Will gmtime() report 60 for seconds when the server is on a Leap Second?" We could answer that by saying "yes, but", with the disclaimer that since most servers can't represent time during a leap second properly, they're never "on" a leap second.


Addendum 2: I forgot to mention that scheme #1 seems to work better for local times -- that is, when you're calling one of the localtime variants -- than for UTC times and gmtime. Clearly the conversions performed by localtime are affected by the setting of the TZ environment variable, but it's not so clear that TZ has any effect on gmtime. I've observed that some gmtime implementations are influenced by TZ and can therefore do leap seconds in accordance with the "right" zones, and some cannot. In particular, the gmtime in GNU glibc seems to pay attention to the leap second information in a "right" zone if TZ specifies one, whereas the gmtime in the IANA tzcode distribution does not.

Steve Summit
  • Regarding "gmtime_r will never fill in tm_sec with 60", this is not true. See http://coliru.stacked-crooked.com/a/622da23fd57dabca. – Yuki Feb 17 '18 at 23:51
  • @Yuki Right. Looks like you got your comment in before I completed my answer. That's way #1. – Steve Summit Feb 18 '18 at 00:04
  • @SteveSummit And it illustrates what's wrong with that approach! Normally `gmtime()` shouldn't care what `$TZ` is set to; "right" time zones make for a bizarre exception. –  Feb 18 '18 at 03:20
  • The "right" timezones are horribly wrong. It's not just "normally"; `gmtime` is not even *permitted* to depend on `$TZ`. – R.. GitHub STOP HELPING ICE Feb 18 '18 at 07:12
5

The question is: will tm.tm_sec == 60 when the server is within a leap second?

No. On a typical UNIX system, time_t counts the number of non-leap seconds since the epoch (1970-01-01 00:00:00 GMT). As such, converting a time_t to a struct tm will always yield a time structure with a tm_sec value between 0 and 59.

Ignoring leap seconds in time_t reckoning makes it possible to convert a time_t to a human-readable date/time without full knowledge of all leap seconds before that time. It also makes it possible to unambiguously convert time_t values that lie in the future; including leap seconds would make that impossible, as leap seconds aren't announced more than about six months ahead.

There are a few ways that UNIX and UNIX-like systems tend to handle leap seconds. Most typically, either:

  1. One time_t value is repeated for the leap second. (This is the result of a strict interpretation of standards, but will cause many applications to malfunction, as it appears that time has gone backwards.)

  2. System time is run slightly slower for some time surrounding the leap second to "smear" the leap second across a wider period; a sketch of the arithmetic follows this list. (This solution has been adopted by many large cloud platforms, including Google and Amazon. It avoids any local clock inconsistencies, at the expense of leaving the affected systems up to half a second out of sync with UTC for the duration.)

  3. The system time is set to TAI. Since this doesn't include leap seconds, no leap second handling is necessary. (This is rare, as it will leave the system several seconds out of sync with UTC systems, which make up most of the world. But it may be a viable option for systems which have little to no contact with the outside world, and hence have no way of learning of upcoming leap seconds.)

  4. The system is completely unaware of leap seconds; its NTP client will step the clock when, after the leap second, the system is left one second off from the correct time. (This is what Windows does.)
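
To make option 2 concrete, here is a minimal sketch of the smearing arithmetic, assuming the 24-hour linear smear Google has described for the 2016-12-31 leap second (noon to noon, clock slowed by a factor of 86400/86401); it illustrates the idea only and is not Google's implementation:

#include <stdio.h>

#define SMEAR_START 1483185600.0 /* 2016-12-31 12:00:00 UTC, as time_t */
#define SMEAR_LEN   86400.0      /* smeared seconds in the window      */

/* SI seconds the smeared clock has absorbed by smeared reading t:
 * grows linearly from 0 at the window start to 1 at the window end. */
double smear_absorbed(double t)
{
    double x = (t - SMEAR_START) / SMEAR_LEN;
    if (x < 0.0) x = 0.0;
    if (x > 1.0) x = 1.0;
    return x;
}

int main(void)
{
    /* At the midnight boundary the smeared clock is half a second away
     * from true UTC, hence "up to half a second out of sync" above. */
    printf("absorbed at 1483228800: %.2f s\n", smear_absorbed(1483228800.0));
    return 0;
}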

  • "converting a time_t to a struct tm will **always** yield..." this is not true; otherwise there would not be a value of 60 for seconds in the specification. – Yuki Feb 17 '18 at 23:43
  • "time_t is TAI", this is also not true: `time_t` is not TAI, `time_t` is a Unix timestamp; those are different things. TAI is always monotonic, whereas `time_t` is not. – Yuki Feb 17 '18 at 23:45
  • @Yuki 1) This is the result of a historical mistake. Some older systems stored a leap second table _in time zone data_, which would have theoretically allowed `localtime()` to perform leap second correction. This was, thankfully, deemed insane and is no longer commonly used. –  Feb 17 '18 at 23:51
  • @Yuki 2) I think you're misunderstanding my comment about TAI. What I'm saying is that some systems use TAI for the system clock (and hence `time_t`), making leap second handling unnecessary. Since this makes conversion to UTC difficult, it's uncommon. –  Feb 17 '18 at 23:54
  • Modern Linux given a live source of leap second updates from ntpd (eg via gpsd) maintains an accurate `CLOCK_TAI`. This can be retrieved using `clock_gettime()`. The time_t you get back from that can be compared to the normal time_t to work out the current leap second offset. – bazza Feb 18 '18 at 06:40
  • `time_t` is essentially UT1 (descendant of historical GMT), up to some disagreement about how it flows near a leap second. It's definitely not TAI. – R.. GitHub STOP HELPING ICE Feb 18 '18 at 07:14
  • @R.. Since this seems to be causing a lot of confusion, I've rewritten this as "system time is set to TAI". –  Feb 18 '18 at 07:24
  • Where does system time come from? In particular, can you find any source which says that it is supplying TAI for your system? The answer to that question is No. So CLOCK_TAI is an artificial time scale which is unlikely to agree with any other system that thinks it is using CLOCK_TAI. – Steve Allen Feb 19 '18 at 20:08
2

POSIX specifies the relationship between time_t "Seconds Since the Epoch" values and broken-down (struct tm) time exactly, in a way that does not admit leap seconds or TAI. So essentially (up to some ambiguity about what should happen near leap seconds), POSIX time_t values are UT1, not UTC, and the results of gmtime reflect that. There is really no way to adapt or change this that is compatible with existing specifications and existing software based on them.
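
The POSIX formula ("Seconds Since the Epoch" in the XBD volume) can be transcribed directly; note that no term in it accounts for leap seconds, which is why tm_sec == 60 has no time_t representation:

#include <stdio.h>

/* The exact POSIX relationship between broken-down UTC and time_t. */
long long posix_seconds(int tm_sec, int tm_min, int tm_hour,
                        int tm_yday, int tm_year)
{
    return tm_sec + tm_min * 60LL + tm_hour * 3600LL
         + tm_yday * 86400LL + (tm_year - 70) * 31536000LL
         + ((tm_year - 69) / 4) * 86400LL
         - ((tm_year - 1) / 100) * 86400LL
         + ((tm_year + 299) / 400) * 86400LL;
}

int main(void)
{
    /* 2017-01-01 00:00:00 UTC: tm_year = 117, tm_yday = 0. */
    printf("%lld\n", posix_seconds(0, 0, 0, 0, 117)); /* 1483228800 */
    return 0;
}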

The right way forward is almost certainly a mix of what Google has done with leap second smearing and a standardized formula for converting back and forth between "smeared UTC" and "actual UTC" times (and thus also TAI) in the 24-hour window around a leap second and APIs to perform these conversions.

R.. GitHub STOP HELPING ICE
  • It is the standard that's wrong, and it dates from a time when sorting out the issue properly was too difficult (no Internet, no GPS time sources, etc). However things have changed, and it is a soluble problem nowadays (Linux has an accurate CLOCK_TAI now, so could other OSes). The standard could be updated, and computers should keep CLOCK_TAI as their system clock. It'd mean changing a ton of software. What we do have is a move led by the USA to abolish leap seconds in UTC, meaning UTC (i.e. Civil Time) would diverge from UT1 over the decades and centuries. The UK isn't in favour of that! – bazza Feb 18 '18 at 11:38
  • Regarding CLOCK_TAI being the standard, TAI itself has been standardised for nearly 50 years. So it's not like it's a recent innovation. Essentially the necessary bodges used by the software industry when keeping TAI was too hard became accepted standards, and at no point has anyone with enough authority seen fit to question the standard. Sigh. – bazza Feb 18 '18 at 11:41
  • If you don't care about leap seconds (as most people and most programs don't), then yes, smearing is the way to go. But if you *do* care about leap seconds, trying to undo the smear probably isn't good enough, because it isn't possible to reverse the smearing perfectly. (This was discussed on the LEAPSECS mailing list; sorry I don't have a cite to the exact message that concluded this.) – Steve Summit Feb 18 '18 at 17:50
  • @SteveSummit: It is possible to reverse perfectly provided you have sufficient precision in the values you're working with. The smearing transformation Google uses is clearly invertible; it's just a matter of quantization losses, which lose at most a couple bits of nanoseconds assuming you're using `timespec` with nanoseconds. If you're using `double` for time, you really have no business complaining about any of this because you already threw away so much precision. – R.. GitHub STOP HELPING ICE Feb 19 '18 at 21:30
  • @bazza: TAI is not interesting or useful for civil use. `time_t` values in TAI have no fixed relations to calendar time without looking up leap second tables, and TAI will naturally diverge from the solar year (and thus calendar time). What **is** useful is to have a well-defined way to convert back and forth between TAI and calendar time (whatever form of calendar time you're using) so that timestamps are comparable in specialty fields where they need to be. [cont] – R.. GitHub STOP HELPING ICE Feb 19 '18 at 21:34
  • ... UTC gives us this but at the expense of calendar time having to be aware of and account for leap seconds, which breaks all sorts of invariants that normal people expect. UT1 gives us this but with an ongoing need for realignment data if you care about high level of precision (if not, no issue). "UTC with leap second smearing" gives you all the properties that are desirable for "calendar time", is just as easy to convert to/from TAI as normal UTC, and only requires any knowledge of leap-second data for the conversion (nor for any other use). – R.. GitHub STOP HELPING ICE Feb 19 '18 at 21:35
  • @R, the point is that most computer systems do not give us 100% accurate civil time, despite their use of the name "UTC". Their inaccurate representation of time is an ever moving quantity; the effect is that the epoch moves one UTC second every time there is a leap second. The various fudges around a leap seconds (including smearing) make time calculations unreliable at that moment, and permanently inaccurate if they straddle a leap second. Steve Summit's work is an excellent start at making it plausible to have a TAI system time, with accurate and updated conversions to civil time. – bazza Feb 19 '18 at 23:05
  • @bazza: "the effect is that the epoch moves one UTC second every time there is a leap second" is only true if you interpret `time_t` as SI seconds, which it is not. The epoch does not "move". `time_t` simply is not SI seconds, it's units on a calendar roughly equivalent to UT1. – R.. GitHub STOP HELPING ICE Feb 19 '18 at 23:09
  • Synchronizing a system clock with an external reference time to within 1 microsecond requires huge investment in clock hardware and telecom gear that is far more effort than hacking a system to use a workaround to handle leap seconds in UTC. Getting international regulatory agencies to redefine the calendar day by adopting UTC without leap seconds requires far more effort than hacking a system to use a time scale with no leaps. Getting general agreement on these issues requires far more effort than creating new time scales to satisfy systems that do not need precise time and frequency. – Steve Allen Feb 19 '18 at 23:20
  • For systems that do need precise time and frequency there is no help from regulators and standards bodies, and until that happens they have to hack their own solution. – Steve Allen Feb 19 '18 at 23:21
  • @R, the tick rate of time_t is a UTC second, not a UT1 second. It is defined as such by POSIX. The epoch does move; if you ask almost any library to tell you how many seconds between now and 1/1/1970 00:00:00 UTC, they get it wrong, by 30-odd seconds. A computer's view of when (i.e. how long ago) the date 1/1/1970 00:00:00 UTC happened differs from when UTC says it happened. Worse, that becomes more wrong every time there is another leap second. Libraries like SOFA get it right. – bazza Feb 19 '18 at 23:22
  • @bazza, effectively no. Over long periods of time POSIX has chosen to count calendar days, and those are currently mean solar days of UT1, so over long periods POSIX seconds are mean solar seconds. – Steve Allen Feb 19 '18 at 23:23
  • @SteveAllen, it's relatively easily and cheaply done with most GPS receivers. You need a real serial port so that the 1PPS signal can be used to discipline the system clock frequency. The correct use of gpsd and ntpd means that Linux can maintain an accurate CLOCK_TAI. CLOCK_REALTIME is still broken... – bazza Feb 19 '18 at 23:24
  • @bazza, Please point to a publication which demonstrates clock sync to 1 microsecond using GPS. – Steve Allen Feb 19 '18 at 23:27
  • @SteveAllen, one can indeed consider the time difference returned by POSIX routines as being in UT1 seconds, but that's not completely sane given that the system clock is ticking away in UTC seconds. Confusing! – bazza Feb 19 '18 at 23:29
  • @SteveAllen: 1us is easy with a GPS receiver that provides a PPS. All you need is hardware to record the timestamp (based on a local TCXO) at the PPS edge and a decent control loop. Down to 10ns or better is possible but a lot harder to achieve. – R.. GitHub STOP HELPING ICE Feb 19 '18 at 23:35
  • @SteveAllen, apologies, we seem to be getting to within 6us this way. GPS 1PPS is generally good to 1us, by the time we've bounced up and down a software stack we're generating output signals with 6us jitter wrt the GPS's 1PPS. Quite pleased with that. – bazza Feb 19 '18 at 23:35
  • @bazza: Locally in time and space, UT1 and UTC seconds are essentially the same (i.e. not different within the precision you can measure, at least without extreme equipment). So it's largely meaningless to talk about whether POSIX time "flows in UT1" or "flows in UTC". – R.. GitHub STOP HELPING ICE Feb 19 '18 at 23:37
  • @R, yes, but computers don't give us UT1. They give us a bodged version of UTC that either goes slow or steps (maybe an hour later on Windows) when a leap second turns up. You don't need "extreme equipment" to notice that your Windows PC's time is a whole second different from a Linux PC's time. And you don't need rocket science to wonder why two successive times are back to front. – bazza Feb 19 '18 at 23:39
  • @bazza: With smearing, they're UT1 with a maximum error under 12us, probably under 6us if I felt like working out the math. They're also UTC with a maximum error of 6us (during the 24 hours surrounding a leap second) and 0 at all other times. – R.. GitHub STOP HELPING ICE Feb 19 '18 at 23:43
  • @bazza: My comment about "extreme equipment" was very clear that it was referring to measuring the difference in rate of flow of UT1 vs UTC, not difference in current time reported by different clock systems. UT1 seconds are marginally longer than UTC seconds (== SI seconds) but the difference is extremely small. – R.. GitHub STOP HELPING ICE Feb 19 '18 at 23:44
  • @R, yes, but being "close" is still not UTC. None of this matters for day-to-day stuff, but if you're running, say, a radio protocol, it matters a lot. – bazza Feb 19 '18 at 23:45
  • @bazza: In that case all you need is a formula for conversion from whatever time system the clock is reported in to TAI. Which you have. – R.. GitHub STOP HELPING ICE Feb 19 '18 at 23:45
  • @R, re measureable difference between UT1 and UTC seconds, yes I see what you mean. – bazza Feb 19 '18 at 23:46
  • @R, yes we do have and it's great that some kind souls in the Linux world have gone to the effort to do a great job, and Steve Summit's idea of a leap second table in the kernel looks great. I just wish other OSes would catch up, and that libraries would start getting amended to take advantage and allow the software devs to not have to worry about time going backwards / slow ever again. "Time" in computers gives me belly ache. – bazza Feb 19 '18 at 23:48
  • @R, it will be interesting to see if the move to amend the formal definition of UTC to no long have leap seconds gets adopted before or after the software world starts solving the problems encountered today. I'm not going to put any money on that either way! – bazza Feb 19 '18 at 23:52
1

There is absolutely no easy answer to this. For tm_sec to be 60 during a leap second, you need 1) something in the OS to know there is a leap second due, and 2) the C library you're using to also know about the leap second, and do something with it.

An awful lot of OSes and libraries don't.

The best I've found is modern versions of the Linux kernel teamed up with gpsd and ntpd, using a GPS receiver as the time reference. GPS advertises leap seconds in its system datastream, and gpsd, ntpd and the Linux kernel can maintain CLOCK_TAI whilst the leap second is happening, keeping the system clock correct too. I don't know if glibc does anything sensible with the leap second.
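
Here is a minimal sketch of reading CLOCK_TAI and recovering the current leap-second offset by comparison with CLOCK_REALTIME, assuming a Linux kernel whose TAI offset has been set (e.g. by ntpd fed from gpsd); CLOCK_TAI is Linux-specific:

#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec tai, utc;

    if (clock_gettime(CLOCK_TAI, &tai) != 0 ||
        clock_gettime(CLOCK_REALTIME, &utc) != 0) {
        perror("clock_gettime");
        return 1;
    }

    /* The two reads are only microseconds apart, so the whole-second
     * difference is the current TAI-UTC (leap second) offset. On a
     * kernel that was never told the offset this prints 0. */
    printf("TAI - UTC = %ld s\n", (long)(tai.tv_sec - utc.tv_sec));
    return 0;
}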

On other UNIXes your mileage will vary. Considerably.

Windows is a ******* disaster area. For example the DateTime class in C# doesn't know about historical leap seconds. The system clock will jump 1 second next time a network time update is received.

bazza
1

I read this about gmtime at www.cplusplus.com: "Uses the value pointed by timer to fill a tm structure with the values that represent the corresponding time, expressed as a UTC time (i.e., the time at the GMT timezone)".

So there's a contradiction: UTC has seconds of absolutely constant length and therefore needs leap seconds, while GMT has days of exactly 86,400 seconds of very slightly varying length. gmtime() cannot work in UTC and GMT at the same time.

When we are told that gmtime() returns "UTC assuming no leap seconds", I would assume this means GMT. That would mean no leap seconds are recorded, and the time slowly diverges from UTC until the difference is about 0.9 seconds and a leap second is added in UTC, but not in GMT. That's easy for developers to handle, but not quite accurate.

One alternative is to have constant seconds until you are close to a leap second, and then adjust the length of, say, the 1000 seconds around that leap second. That's also easy to handle: 100% accurate most of the time, with a 0.1% error in the length of a second for those 1000 seconds.

The second alternative is to have constant seconds, have leap seconds, and then forget them. gmtime() would then return the same second twice in a row: going from x seconds 0 nanoseconds to x seconds 999999999 nanoseconds, then again from x seconds 0 nanoseconds to x seconds 999999999 nanoseconds, then on to x+1 seconds. That will cause trouble.

Of course, having another clock that returns exact UTC including leap seconds, with exactly accurate seconds, would be useful; translating its "seconds since epoch" to year, month, day, hours, minutes and seconds requires knowledge of all leap seconds since the epoch (or before the epoch, if you handle times before that). So would a clock that returns guaranteed exact GMT, with no leap seconds and seconds that are almost but not quite constant in length.

gnasher729
  • If I remember correctly, UTC is a time standard in the GMT timezone. So regarding "gmtime() cannot at the same time work in UTC and GMT", it seems like gmtime() does work in UTC in the GMT timezone. – Yuki Feb 18 '18 at 16:16
  • @Yuki The GMT Timezone (London, Dublin, Lisbon) on a computer is nearly UTC, except that most implementations get it wrong around the time of a leap second and in calculating the number of UTC seconds between times either side of a leap second. The use of the letters "GMT" to describe this timezone on a computer is, strictly speaking, inaccurate; GMT is more like UT1, not UTC. The choice they made all those years ago to call the function `gmtime` is unfortunate, given its specification... UK Civil Time in the winter is UTC, not GMT. – bazza Feb 18 '18 at 16:45
  • In a discussion like this, there's no useful difference between the terms "UTC" and "GMT". UTC is a very precisely-defined term. "GMT" can mean one of two things: (1) exactly the same as UTC; (2) the time zone they use in Britain in the winter time, just like "EST" is the time zone they use in New York in the wintertime. The C `gmtime` function would, indeed, more properly be named `utctime`. – Steve Summit Feb 18 '18 at 17:37
  • @gnasher729 If you want "another clock that will return exact UTC including leap seconds, with exactly accurate seconds", you can have it, *but* you cannot use `time_t` to represent its value! – Steve Summit Feb 18 '18 at 17:46
  • Also, if you're using a count of seconds since an epoch (that is, if you're using something like `time_t`), and if you want to do it right, with leap seconds, please *don't* imagine that you're doing UTC! If you're doing a strict count of seconds since an epoch, you're probably doing TAI. (And then, yes, if you want to convert from TAI to UTC year, month, day, hours, minutes, seconds, you're going to need that leap second table.) – Steve Summit Feb 18 '18 at 17:47
0

Another angle to this problem is having a library that knows about leap seconds. Most libraries don't, and so the answers you get from functions like gmtime are, strictly speaking, inaccurate during a leap second. Time-difference calculations also often produce inaccurate results when they straddle a leap second. For example, the time_t value given to you at the same UTC time yesterday is exactly 86400 seconds smaller than today's value, even if there was actually a leap second in between.
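
A small demonstration of that 86400-second claim across the 2016-12-31 leap second; timegm() is a common glibc/BSD extension, not standard C:

#define _GNU_SOURCE /* for timegm(), a glibc/BSD extension */
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Noon UTC on the days either side of the 2016-12-31 leap second. */
    struct tm before = { .tm_year = 116, .tm_mon = 11, .tm_mday = 31, .tm_hour = 12 };
    struct tm after  = { .tm_year = 117, .tm_mon = 0,  .tm_mday = 1,  .tm_hour = 12 };

    /* 86401 SI seconds actually elapsed, but POSIX arithmetic says: */
    printf("%.0f\n", difftime(timegm(&after), timegm(&before))); /* 86400 */
    return 0;
}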

The astronomy community has solved this. The SOFA library has proper time routines within; see their manual (PDF), in particular the section on timescales. If made part of your software and kept up to date (a new version is needed for each new leap second), you have accurate time calculations, conversions and display.

bazza
  • Having a good library (such as the SOFA you cite) is important, of course, but it isn't necessarily sufficient if the timestamps you're dealing with came from your operating system, and your OS doesn't support leap seconds properly. If `gmtime` gives you an inaccurate time during a leap second, that's partly because the OS couldn't give you an accurate timestamp during that leap second, and partly because the `time_t` value that's input to `gmtime` has no way of representing a leap second. SOFA and other leapsecond-aware libraries necessarily use time representations other than `time_t`. – Steve Summit Feb 18 '18 at 17:29
  • @SteveSummit, that's all true, but the situation is better than it used to be. Linux + GPSD + NTPD + `clock_gettime(CLOCK_TAI)` + SOFA is one way of sidestepping all the problems with the standard POSIX library and the 'traditional' clock / time functions in *nix. This combination gives you correct time, correct localised representations of time, and also correct time calculations. This is a good solution if one is developing a new program in C/C++ on modern Linux kernels with the right GPS hardware attached and avoids glibc's routines. It cannot fix existing software, other languages, etc. – bazza Feb 18 '18 at 20:29
  • Ah, does SOFA use `clock_gettime(CLOCK_TAI)`? You're right, that's a good path. (`CLOCK_TAI` isn't quite bulletproof yet -- see for example the puzzles in [this question](https://stackoverflow.com/questions/32652688/what-is-the-epoch-of-clock-tai) -- but it's definitely coming along.) – Steve Summit Feb 18 '18 at 20:37
  • @SteveSummit, I'm not sure; I don't think that it has any means of fetching time, it's just able to properly handle, convert, and calculate across all the major timescales. Though I believe that the values one gets back from clock_gettime(CLOCK_TAI) could be used to generate a SOFA TAI time object, after which everything else is easy. We've successfully used GPS + gpsd + ntpd to get a believable result from clock_gettime(CLOCK_TAI), though one has to wait for the GPS almanac to come in (about 12 minutes from power on!). – bazza Feb 18 '18 at 20:53
  • @SteveSummit, basing an application round SOFA does have a problem though. You have to update SOFA and recompile one's application every time a new leap second is announced; there's no automatic way of injecting leap second data into SOFA, it's all hardwired. – bazza Feb 18 '18 at 20:54
  • Yup, all those updates are a pain. (1) Personally, I think `clock_gettime(CLOCK_TAI)` should return -1 if it doesn't know taioffset. (It's even worse if the kernel doesn't know taioffset until NTP tells it there's a leap second, after which it thinks taioffset is 1.) (2) I've got an experimental kernel with the leap second table hard-compiled in, but exposed -- for reading *and* updating -- via a special file in `/proc`. But there are still a bunch of details to work out. – Steve Summit Feb 18 '18 at 21:09
  • @SteveSummit, Wow, that sounds pretty good! So, some sort of daemon (ntpd?) would keep it up to date, software libraries would use it when they're doing conversions, calculations, etc? That sounds pretty cool. With that in place, there'd no longer be any excuse for other software to do time "wrong". It'd take a while for that to filter through to languages like Java, and to be replicated on other OSes. I suppose if other OSes had a similar thing, that'd make adoption more likely. I wonder if the FreeBSD guys would do something similar, and MS / Apple... – bazza Feb 18 '18 at 22:06
  • @SteveSummit, FYI we've switched to using chrony - seems to be easier to get going. Just checking that it absorbs leap second data from gpsd. – bazza Feb 19 '18 at 23:08