59

Regarding Unix (POSIX) time, Wikipedia says:

Due to its handling of leap seconds, it is neither a linear representation of time nor a true representation of UTC.

But the Unix date command does not actually seem to be aware of them:

$ date -d '@867715199' --utc
Mon Jun 30 23:59:59 UTC 1997
$ date -d '@867715200' --utc
Tue Jul  1 00:00:00 UTC 1997

Yet there should be a leap second there, at Mon Jun 30 23:59:60 UTC 1997.

Does this mean that only the date command ignores leap seconds, while the concept of Unix time doesn't?
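
For reference, the same observation can be reproduced in Python, assuming a typical system whose time functions follow the leap-second-free POSIX rules:

import time

# The two timestamps from the date examples above, straddling the
# leap second inserted at the end of 1997-06-30.
for ts in (867715199, 867715200):
    print(ts, time.strftime("%Y-%m-%d %H:%M:%S UTC", time.gmtime(ts)))

# 867715199 1997-06-30 23:59:59 UTC
# 867715200 1997-07-01 00:00:00 UTC
# The two values differ by exactly 1, so no timestamp is left over that
# could map to 23:59:60.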

Campa
  • 9
    There's a nice recap of utc/tai at http://www.madore.org/~david/computers/unix-leap-seconds.html – loreb May 15 '13 at 14:23
  • 2
    This page explains how OSes deal/should deal with leap second information from ntp, https://www.meinbergglobal.com/english/info/leap-second.htm#os – Silver Moon Jan 28 '17 at 12:53
  • `Mon Jun 30 23:59:60 UTC 1997.` makes no sense. This is an invalid time. – Sebi2020 Mar 26 '21 at 22:12
  • 2
    IANA time zone stuff: https://www.iana.org/time-zones The current IETF leap seconds list https://raw.githubusercontent.com/eggert/tz/main/leap-seconds.list Beware! The hex encoded SHA-1 hash in that file omits leading zeros. – PM 2Ring Aug 03 '22 at 16:41

4 Answers

42

The number of seconds per day is fixed with Unix timestamps.

The Unix time number is zero at the Unix epoch, and increases by exactly 86400 per day since the epoch.

So it cannot represent leap seconds. The OS will slow down the clock to accommodate this. Leap seconds simply do not exist as far as Unix timestamps are concerned.
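
The POSIX definition of "seconds since the Epoch" spells this out as plain calendar arithmetic in which every day contributes exactly 86400 seconds. A minimal Python transcription of that formula (the function name is mine):

def seconds_since_epoch(tm_year, tm_yday, tm_hour=0, tm_min=0, tm_sec=0):
    # The arithmetic from the POSIX "Seconds Since the Epoch" definition;
    # tm_year is years since 1900, tm_yday is the zero-based day of the year.
    return (tm_sec + tm_min * 60 + tm_hour * 3600 + tm_yday * 86400
            + (tm_year - 70) * 31536000
            + ((tm_year - 69) // 4) * 86400
            - ((tm_year - 1) // 100) * 86400
            + ((tm_year + 299) // 400) * 86400)

# 1997-07-01 00:00:00 UTC (tm_year=97, tm_yday=181) gives 867715200,
# exactly the value shown by date in the question, leaving no room for
# the leap second at 1997-06-30 23:59:60.
print(seconds_since_epoch(97, 181))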

Nat
Thomas Jung
  • Isn't that the definition of the *original* concept of Unix time, or anyway what is currently the TAI-based Unix time, which is `a pure linear count of seconds elapsed since 1970-01-01T00:00:00 TAI` (and a TAI day is fixed to 86400 s)? – Campa May 14 '13 at 12:42
  • The day has 86400s in any case. The difference is whether your system ignores the fact that there is a real Earth in orbit or not (i.e. fixes the length of the day). The second case is the common one. – Thomas Jung May 14 '13 at 13:02
  • Ok, but UTC does not ignore the Earth's orbit, and on one side you cite that Unix time is fixed to 86400 s per day, but on the other hand I cite that Unix time handles leap seconds? – Campa May 15 '13 at 08:01
  • 3
    A unix second is allowed to be different from a "real" second. The unix day has 86400s which are actually 86401 real seconds (for days with leap seconds). Otherwise the clock is ahead like TAI: _TAI has been exactly 35 seconds ahead of UTC. The 35 seconds results from the initial difference of 10 seconds at the start of 1972, plus 25 leap seconds in UTC since 1972._ – Thomas Jung May 15 '13 at 08:05
  • 8
    @ThomasJung, Your answer is misleading / wrong. **1)** The Unix time can and **does** represent leap seconds, albeit not unambiguously. For example, UTC 1998-12-31 23:59:60.25 is represented as Unix time 915148800.25. If the number of seconds were fixed with Unix timestamps, we would have 86400 unique integer Unix timestamps per day, every day. Although Unix timestamps *increase* by 86400 per day, it doesn't mean that we *have* 86400 seconds in Unix timestamps per day. Not every real number is a valid Unix timestamp due to negative leap seconds, and not every real.. – Pacerier Jun 16 '13 at 17:31
  • 3
    ..number is a unique Unix timestamp due to positive leap seconds. **2)** Leap seconds do exist as far as Unix timestamps are concerned. If they didn't, Unix time will freeze during a leap second, which it doesn't. – Pacerier Jun 16 '13 at 19:37
  • 2
    @Campa: TAI time continues to tick during a leap second as always (each intercalary leap second increases the [TAI - UTC difference](http://hpiers.obspm.fr/eop-pc/index.php?index=TAI-UTC_tab&lang=en)). POSIX time is UTC time (excluding moments *during* leap seconds). TAI can tell actual elapsed UTC (SI) seconds, POSIX -- UT (Earth rotation) seconds (because UTC is within 0.9 seconds from UT). To find the actual number of elapsed seconds between two events given in UTC, you need to know the corresponding leap counts at each event e.g., 2010-01-01 -- 34 leap seconds, 2013-01-01 -- 35 leap seconds. – jfs Sep 15 '14 at 00:22
  • @Pacerier: Unix timestamp is *not* unique unless "leap smear" or similar technique is used. Look at [POSIX and Mills-style transitions](http://en.wikipedia.org/wiki/Unix_time#Leap_seconds). – jfs Sep 15 '14 at 00:25
  • @J.F.Sebastian, I think you @ the wrong guy because **that's exactly what I said**. – Pacerier Sep 16 '14 at 17:55
  • @Pacerier: sorry. I've misread your second comment. I've noticed *"unique Unix timestamp"* in it and stopped reading. – jfs Sep 16 '14 at 19:03
  • 2
    @Pacerier: Can you provide a reference stating that Unix time represents leap seconds? [This page](http://pubs.opengroup.org/onlinepubs/9699919799/xrat/V4_xbd_chap04.html) explicitly says it doesn't: *However, in POSIX time (seconds since the Epoch), leap seconds are ignored (not applied)*. [This page](http://pubs.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap04.html#tag_04_14) also says *each and every day shall be accounted for by exactly 86400 seconds*. – dreamlax Oct 29 '14 at 22:47
  • @dreamlax, Yes, see the table at http://en.wikipedia.org/wiki/Unix_time#leapsecondinserted . The leap second 1998-12-31T23:59:60.00 is **represented** as 915,148,800.00 in Unix time. The first page you quote does not contradict that. Their definition of "ignored" is "not applied", and their definition of "applied" is "affects the value of the subsequent second". The second page is saying that each and every day will **increase** the timestamp's seconds-count by exactly 86400. It doesn't state that leap seconds are unrepresented in Unix time, because they are. – Pacerier Oct 29 '14 at 23:11
  • 1
    @Pacerier: What you quoted shows that the same Unix timestamp is used for both 1998-12-31T23:59:60.00 as well as 1999-01-01T00:00:00.00 – dreamlax Oct 30 '14 at 00:14
  • @dreamlax, The same Unix timestamp is supposed to be used for both 1998-12-31T23:59:60.00 and 1999-01-01T00:00:00.00. Re-read the 2 points above: "The Unix time can and does represent leap seconds, **albeit not unambiguously**." "Leap seconds do exist as far as Unix timestamps are concerned. If they didn't, Unix time will freeze during a leap second, **which it doesn't.**" – Pacerier Oct 30 '14 at 06:45
  • 2
    @Jung : could you please rephrase to "So it cannot represent leap seconds _unambiguously_ " ? – Campa Apr 23 '15 at 07:25
  • 4
    Here's the definitive answer from the POSIX spec: http://pubs.opengroup.org/onlinepubs/9699919799/xrat/V4_xbd_chap04.html#tag_21_04_15 Quote: "in POSIX time (seconds since the Epoch), leap seconds are ignored (not applied)" – Kenton Varda Jun 12 '15 at 06:37
  • 1
    @KentonVarda: read [comments above yours](http://stackoverflow.com/questions/16539436/unix-time-and-leap-seconds#comment41890693_16539483) – jfs Aug 02 '15 at 19:40
  • 2
    @Pacerier "If they didn't, Unix time will freeze during a leap second, which it doesn't." - No, it doesn't freeze, it jumps backwards but to software that gets the timestamp as an integer value, it will look like a freeze as for two consecutive seconds the same integer value is reported. – Mecki Nov 14 '21 at 01:01
36

Unix time is easy to work with, but some timestamps are not real times, and some timestamps are not unique times.

That is, there are some duplicate timestamps representing two different seconds in time, because in unix time the sixtieth second of a minute may have to repeat itself (as there is no room for a sixty-first second). Theoretically, there could also be gaps in the future, because the sixtieth second doesn't have to exist, although no negative (skipped) leap seconds have been issued so far.

Rationale for unix time: it's defined so that it's easy to work with. Adding support for leap seconds to the standard libraries is very tricky. For example, suppose you want to represent 1 Jan 2050 in a database. No-one on earth knows how many seconds away that date is in UTC! The date can't be stored as a UTC timestamp, because the IAU doesn't know how many leap seconds we'll have to add in the next decades (they're as good as random). So how can a programmer do date arithmetic when the length of time which will elapse between any two dates in the future isn't known until a year or two before? Unix time is simple: we know the timestamp of 1 Jan 2050 already (namely, 80 years * the number of seconds in a year). UTC is extremely hard to work with all year round, whereas unix time is only hard to work with in the instant a leap second occurs.
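
A quick sketch of that point in Python: calendar.timegm applies the leap-second-free POSIX rules, so the timestamp of a future UTC date is already fixed today, while the number of SI seconds until then is not.

import calendar

# The Unix timestamp of 2050-01-01 00:00:00 UTC is computable right now,
# because every day counts as exactly 86400 seconds:
print(calendar.timegm((2050, 1, 1, 0, 0, 0)))   # 2524608000

# By contrast, the number of *SI* seconds between now and that instant
# also depends on however many leap seconds get announced in between.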

For what it's worth, I've never met a programmer who agrees with leap seconds. They should clearly be abolished.

Nicholas Wilson
  • what do you mean that Unix time is *easy* to work with? In the Wikipedia page, it says that `the Unix time scale was originally intended to be a simple linear representation of time elapsed since an epoch`, but then the POSIX committee decided to sync it with the UTC time scale, hence *with* leap seconds, if I understand correctly? – Campa May 14 '13 at 12:40
  • 4
    No, unix time never has leap seconds. It's _synced_ with UTC, that is, unix time ticks at the same moment as UTC ticks: the second has exactly the same length, and they line up. Sometimes when unix time ticks though its value goes up by two, whereas UTC only ever goes up by one. Unix time is massively, massively easier than any system with leap seconds because leap seconds are a total joke. – Nicholas Wilson May 14 '13 at 13:29
  • 6
    Sorry, stupid mistake in last comment. Third sentence should read: "Sometimes when unix time ticks though its value repeats the previous second, whereas UTC only ever goes up by one." – Nicholas Wilson May 14 '13 at 13:40
  • Do you mean Unix time acknowledges leap seconds by repeating a same tick, while UTC uses the 60-th second? – Campa May 15 '13 at 07:59
  • @NicholasWilson. UTC 1 Jan 2050 **can** be stored as a UTC timestamp right now, albeit not with 100% certainty. Leap seconds are not *random* (at least as of current knowledge), regardless of whether or not the world is indeterministic. If they were *random*, we will not be able to predict future UTC timestamps above near-zero success rate, but we can. – Pacerier Jun 16 '13 at 17:32
  • 6
    @Pacerier OK, so 1 Jan 2050, 00:00:00 is 2524629600. What's the UTC timestamp for that? No-one knows. That's a big issue for programmers: either you write a lot of code, or do some really sloppy programming (which might not even be legal, depending on any regulation the software has to comply with). Leap seconds are as good as random, in that we don't know when they'll come. We don't have any models that accurately predict the earth's wobble, we just have to wait and see each year. Of course it's deterministic, but that's no help either to programmers or the IAU. – Nicholas Wilson Jun 17 '13 at 09:51
  • @NicholasWilson, we don't have to write a lot of code or do sloppy programming. Allow me to repeat: Leap seconds are not *random* (at least as of current knowledge), regardless of whether or not the world is indeterministic. If they were *random*, we will not be able to predict future UTC timestamps above near-zero success rate, but we can. What it means is that if they were *random*, we wouldn't know the amount of leap seconds that would occur tomorrow or even today. Even though we do not have *complete* information, we do have *some* information and thus leap seconds are ..... – Pacerier Jun 21 '13 at 02:20
  • .....not *random*. It's not like `rand()` and oh we have a leap second now, `rand()` and oh we don't. UTC is not a big issue for programmers. It is only a big issue for programmers who do not account for it, or use it without understanding it. Like everything else, *that* is a big issue, regardless of whether it's UTC or not. – Pacerier Jun 21 '13 at 02:20
  • 1
    You seem to be stuck to the idea that a *timestamp* must be a reflection of a count of seconds, though in fact http://goo.gl/QWsOo a timestamp is simply a sequence of characters or encoded information. In other words, the string "1 Jan 2050, 00:00:00" *is* in fact a UTC timestamp by itself if we can decode it to the UTC time "1 Jan 2050, 00:00:00". If you don't like alphanumerics, we can always convert this to any arbitrary encoded integer like "2498729866093" and it's still a UTC timestamp as long as the receiver........ – Pacerier Jun 21 '13 at 02:21
  • ...... can decode it to a UTC time, even when the encoded information does not directly reflect the number of seconds passed. – Pacerier Jun 21 '13 at 02:22
  • @J.F.Sebastian, That's exactly what I said. We don't know with 100% certainty the exact number of seconds we will be off by, but we know that we are **guaranteed** to be within the **range** of 50 seconds, or 25 seconds, or 10 seconds, or 5 seconds, etc. – Pacerier Sep 16 '14 at 17:55
  • @J.F.Sebastian, For a random ball, we cannot guess a smaller range than the possible range. But because leap seconds are not random (it's approximated that we get 1 positive leap second per ~1.5 yrs), we can guess a smaller range than the possible range. **Simple experiment: How many leap seconds will there be from 2050 to 2150?** The answer is 100/1.5 = ~66.6 positive leap seconds and ~0 negative leap seconds. Now if leap seconds *were* random once per yr (IERS does it twice a yr), the answer would be 33.3 positive **and 33.3 negative**. I'll................................................. – Pacerier Sep 17 '14 at 14:25
  • 1
    ..........................bet you 10 yrs of salary that we will not have anywhere near 33 *negative* leap seconds within 2050 to 2150. Do you think that it's a coincidence that we have (2011-1972)/1.5 = 26 leap seconds from 1972 to 2011? No, because the speed of the Earth's rotation changes **non-randomly**, this has been so since ages and ages ago, and is exactly how scientists estimate that over a wide period, we will get **1 leap second per ~1.5 yr** for the current era. – Pacerier Sep 17 '14 at 14:26
  • @J.F.Sebastian, "random" doesn't mean what you think it means. While you are free to define "random" however way you like, when I say "random", I mean "random" as defined by [Bruce Schneier](http://goo.gl/ioPRDJ): "results that are **unpredictable** and cannot be reliably **reproduced**". Per that definition, it certainly means ~33.3 positive and ~33.3 negative seconds over a significant span due to [LLN](http://goo.gl/5zyoif). The bounds of the box is non-debatable as it has already ................................................................................‌​............................. – Pacerier Oct 29 '14 at 23:46
  • ..................................... been pre-set by IERS: 0 to 2 positive/negative leap seconds per year, no more no less. There is exactly 6 members in the set {(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (2, 0)} and due to [LLN](http://goo.gl/5zyoif), the definition of "random" as provided by Bruce Schneier requires that each member in the set occur the same number of times as each other member. The fact that no negative seconds had been introduced so far **proves** that it's likely the results are **non-random**, thus asserting my stand. – Pacerier Oct 29 '14 at 23:46
  • @J.F.Sebastian, You may start with the first error you spot. Or perhaps, there isn't even a single error which you could speak of? – Pacerier Oct 30 '14 at 06:47
  • 3
    I've deleted my comments. You win. If you want to learn, start by learning about probability distributions. – jfs Oct 30 '14 at 09:19
  • 3
    Wouldn't a typical Unix system be ignorant of the leap second, and simply slow down its clock when it finds it's ahead of the NTP server it's linked to? In that case there's never an ambiguous time stamp, just inaccurate ones for a while. – Mark Ransom Mar 29 '18 at 18:22
  • 4
    Yes, that's the obvious implementation. With the nasty consequence that for many minutes every year (while the clock slew is happening) your computer's clock is hundreds of millis out of sync with civil time. Applications that want a steady seconds hand are foiled (although they should of course be using a monotonic clock), and applications that want accurate civil time are foiled. But there's no sane alternative... Conclusion: whether you ignore or account for leap seconds, they're just bad. – Nicholas Wilson Mar 30 '18 at 00:05
  • Well I'm a programmer who agrees with leap seconds – C-Y Jan 26 '22 at 09:29
  • What if an asteroid hit Earth and accelerated the spin? We would have lots of positive leap seconds. – Zhaolin Feng Aug 18 '22 at 06:06
31

There is a lot of discussion here and elsewhere about leap seconds, but it isn't a complicated issue, because it doesn't have anything to do with UTC, or GMT, or UT1, or TAI, or any other time standard. POSIX (Unix) time is, by definition, that which is specified by the IEEE Std 1003.1 "POSIX" standard, available online from The Open Group (pubs.opengroup.org).

The standard is unambiguous: POSIX time does not include leap seconds.

Coordinated Universal Time (UTC) includes leap seconds. However, in POSIX time (seconds since the Epoch), leap seconds are ignored (not applied) to provide an easy and compatible method of computing time differences. Broken-down POSIX time is therefore not necessarily UTC, despite its appearance.

The standard goes into significant detail unambiguously stating that POSIX time does not include leap seconds, in particular:

It is a practical impossibility to mandate that a conforming implementation must have a fixed relationship to any particular official clock (consider isolated systems, or systems performing "reruns" by setting the clock to some arbitrary time).

Since leap seconds are decided by committee, it is not just a "bad idea" to include leap seconds in POSIX time, it is impossible given that the standard allows for conforming implementations which do not have network access.

Elsewhere in this question @Pacerier has said that POSIX time does include leap seconds, and that each POSIX time may correspond to more than one UTC time. While this is certainly one possible interpretation of a POSIX timestamp, this is by no means specified by the standard. His arguments largely amount to weasel words that do not apply to the standard, which defines POSIX time.

Now, things get complicated. As specified by the standard, POSIX time may not be equivalent to UTC time:

Broken-down POSIX time is therefore not necessarily UTC, despite its appearance.

However, in practice, it is. In order to understand the issue, you have to understand time standards. GMT and UT1 are based on the astronomical position of the Earth in the universe. TAI is based on the actual amount of time that passes in the universe as measured by physical (atomic) reactions. In TAI, each second is an "SI second," which are all exactly the same length. In UTC, each second is an SI second, but leap seconds are added as necessary to readjust the clock back to within .9 seconds of GMT/UT1. The GMT and UT1 time standards are defined by empirical measurements of the Earth's position and movement in the universe, and these empirical measurements cannot through any means (neither scientific theory nor approximation) be predicted. As such, leap seconds are also unpredictable.
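
One practical consequence (also raised in the comments under the first answer): to recover the true number of elapsed SI seconds between two UTC instants from their POSIX timestamps, you need a leap-second table. A toy sketch with a hard-coded excerpt of that table (the authoritative list is the IERS/IANA leap-seconds.list mentioned in the comments on the question):

# (first POSIX timestamp at which the new offset applies, TAI - UTC from then on)
LEAP_TABLE = [
    (867715200, 31),    # 1997-07-01
    (915148800, 32),    # 1999-01-01
    (1136073600, 33),   # 2006-01-01
]

def tai_minus_utc(ts):
    offset = 30         # TAI - UTC in force from 1996-01-01 (this toy excerpt starts there)
    for start, value in LEAP_TABLE:
        if ts >= start:
            offset = value
    return offset

# Elapsed SI seconds = POSIX difference + leap seconds inserted in between.
t1, t2 = 867715199, 915148800   # 1997-06-30 23:59:59 -> 1999-01-01 00:00:00
print((t2 - t1) + (tai_minus_utc(t2) - tai_minus_utc(t1)))   # 47433603, not 47433601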

Now, the POSIX standard also specifies that the intention is for all POSIX timestamps to be interoperable (that is, mean the same thing) in different implementations. One solution is for everyone to agree that each POSIX second is one SI second, in which case POSIX time is equivalent to TAI (with the specified epoch), and nobody need contact anyone except for their atomic clock. We didn't do that, however, probably because we wanted POSIX timestamps to be UTC timestamps.

Using an apparent loophole in the POSIX standard, implementations intentionally slow down or speed up seconds -- so that POSIX time no longer uses SI seconds -- in order to remain in sync with UTC time. Reading the standard it is clear this was not what was intended, because this cannot be done with isolated systems, which therefore cannot interoperate with other machines (their timestamps, without leap seconds, mean something different for other machines, with leap seconds). Read:

[...] it is important that the interpretation of time names and seconds since the Epoch values be consistent across conforming systems; that is, it is important that all conforming systems interpret "536457599 seconds since the Epoch" as 59 seconds, 59 minutes, 23 hours 31 December 1986, regardless of the accuracy of the system's idea of the current time. The expression is given to ensure a consistent interpretation, not to attempt to specify the calendar. [...] This unspecified second is nominally equal to an International System (SI) second in duration.

The "loophole" allowing this behavior:

Note that as a practical consequence of this, the length of a second as measured by some external standard is not specified.

So, implementations abuse this freedom by intentionally changing it to something which cannot, by definition, be interoperable among isolated or nonparticipating systems. Alternatively, the implementation may simply repeat POSIX times as if no time had passed. See this Unix StackExchange answer for details on all modern implementations.
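
That "slow down or speed up seconds" trick is commonly called leap smearing. A minimal sketch of the idea, under assumed parameters (a 24-hour smear window ending at the 1997 leap second; real deployments pick their own window length and placement):

LEAP = 867715200        # POSIX time of 1997-07-01 00:00:00 UTC
WINDOW = 86400          # assumption: smear over the 24 h leading up to it

def smeared_posix_time(si_elapsed_in_window):
    # 86401 SI seconds of real time are reported as 86400 slightly longer
    # "POSIX seconds": every reported second is stretched by 1/86400
    # (about 11.6 microseconds), and no timestamp repeats or jumps.
    return (LEAP - WINDOW) + si_elapsed_in_window * WINDOW / (WINDOW + 1)

print(smeared_posix_time(0))        # 867628800.0, start of the window
print(smeared_posix_time(86401))    # 867715200.0, back in sync after the leap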

Phew, that was confusing alright... A real brain teaser!

  • You mean "In UTC, each second is an SI second, but leap seconds are subtracted* as necessary..." – Carlo Wood Jan 27 '21 at 03:04
  • I think UNIX time should be interpreted as imaginary system where the Earth magically rotates according to TAI. Leap seconds are ignored and time delta computed from UNIX timestamps doesn't match actual interval in monotonic SI compatible seconds if the interval contains leap seconds. For time periods outside leap seconds, UNIX time totally matches SI seconds. UNIX systems are logically stalled for one second during the leap second and in practice timestamps generated during the leap second will repeat the previous second. – Mikko Rantalainen May 08 '23 at 13:58
11

Since both of the other answers contain lots of misleading information, I'll throw this in.

Thomas is right that the number of Unix Epoch timestamp seconds per day is fixed. What this means is that on days where there is a leap second, the leap second itself (the 61st second of the UTC minute before midnight, labeled 23:59:60) ends up sharing its timestamp with the first second of the following day.

That timestamp is "replayed", if you will, so the same Unix timestamp is used for two real-world seconds. This also means that if you're reading fractional Unix timestamps, the whole second will repeat:

X86399.0, X86399.5, X86400.0, X86400.5, X86400.0, X86400.5, then X86401.0.

So unix time can't unambiguously represent leap seconds - the timestamps used during the leap second are also the timestamps of the real-world second that follows it.
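
The collision is easy to see with Python's calendar.timegm, which just applies the leap-second-free POSIX arithmetic and does not reject a seconds value of 60 (this is the 1998-12-31 leap second discussed in the comments above):

import calendar

# The leap second and the first second of the next day collapse onto the
# same Unix timestamp:
print(calendar.timegm((1998, 12, 31, 23, 59, 60)))   # 915148800
print(calendar.timegm((1999, 1, 1, 0, 0, 0)))        # 915148800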

B T
  • 3
    It also means that the phrase "the number of seconds since the epoch" is extremely misleading. Those are NOT S.I. seconds, but some kind of mysterious and not really precisely defined "POSIX seconds" whose length varies as more leap seconds are (more or less randomly) applied. – Carlo Wood Jan 27 '21 at 03:14
  • 2
    @CarloWood it's seconds since 1970. With a second being defined as exactly 1/86400 of a day. To get the exact seconds since 1970 in SI seconds you will need to add back the leap seconds as defined by UTC. It's not that hard to understand. – Gellweiler Jul 18 '22 at 05:10
  • 3
    @Gellweiler I understand it perfectly :P. I was just clarifying that when they say "the number of seconds since the epoch" they are not talking about real (S.I.) seconds, but merely return "number of (fractional) days times 86400" - which fails to give the real number of seconds, since some days are 86401 seconds. But yes, if you know exactly when leap seconds were added you can correct the result - so for the past they aren't "mysterious". They kinda are for the future though, because I don't think it is set in stone when leap seconds will be added for all eternity. – Carlo Wood Jul 18 '22 at 10:45
  • 2
    I always find it frustrating when someone says "its not that hard to understand" when they fail to see the complications in the topic. – B T Jul 19 '22 at 23:52
  • this answer agrees with: "increment the system clock during the leap second and step the clock backward one second at the end of the leap second. This is the approach taken by the POSIX" --https://www.eecis.udel.edu/~mills/leap.html – Andrew Aug 10 '22 at 12:23
  • It is not as misleading if you consider the epoch as a function of total # of leap seconds added so far rather than as a constant. "In effect, a new timescale is reestablished after each new leap second. Thus, all previous leap seconds, not to mention the apparent origin of the timescale itself, lurch backward one second as each new timescale is established." --https://www.eecis.udel.edu/~mills/leap.html – Andrew Aug 10 '22 at 12:27