4

I think the following program should output the number of seconds relative to 1970 for the first day of every year from 1 AD to 1970, preceded by the size of time_t on the system it's compiled on (CHAR_BIT is a macro, so I don't think you can just copy the compiled executable around and assume the printed size is correct, though in practice everything uses 8-bit chars these days).

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

void do_time(int year)
{
  struct tm tp;  /* 00:00:00 on January 1st of the given year (local time) */

  memset(&tp, 0, sizeof(tp));

  tp.tm_sec = 0;
  tp.tm_min = 0;
  tp.tm_hour = 0;
  tp.tm_mday = 1;
  tp.tm_mon = 0;
  tp.tm_year = year - 1900;
  tp.tm_wday = 1;
  tp.tm_yday = 0;
  tp.tm_isdst = -1;

  printf("%d %ld\n",year, mktime(&tp));
}

int main(){
  printf("time_t is %lu bits\n",sizeof(time_t)*CHAR_BIT);
  for (int i = 1; i<1971; i++)
    do_time(i);
  exit(0);
}

However, on OS X (10.11.3 15D21) this only works for years >= 1902, despite time_t being a 64-bit signed integer. I could understand it if the programmers at Apple had been lazy and not supported any years before 1970, but correct behaviour going back to 1902 and then stopping looks more like an error on my part.

Camden Narzt
    The OS X implementation limits to 32-bits for dates before 1970, but uses 64-bits for dates after 1970. Weird... – user3386109 Feb 09 '16 at 20:00
    The implementation is open source: http://www.opensource.apple.com/source/Libc/Libc-997.1.1/stdtime/FreeBSD/localtime.c. The interesting part seems to be in time2sub(). One can see that mktime never returns a date before 1900: `if (yourtm.tm_year < 0) return WRONG` with the mysterious (at least for me) comment `/* Don't go below 1900 for POLA */`. That still does not fully explain your observation though ... – Martin R Feb 09 '16 at 20:10
  • This was also observed in the Apple Developer Forum: https://devforums.apple.com/message/756830#756830 (developer login required). – Martin R Feb 09 '16 at 20:22
  • Are you sure it's dates >= 1902? It might not work for 1900 because it's using `mktime` instead of `gmtime`. `mktime` is timezone-dependent and 1900/01/01 00:00:00 in your timezone may fall out of the 1900+ range for GMT. But that wouldn't explain the problem with 1901. – Kurt Stutsman Feb 09 '16 at 20:22
  • @KurtStutsman: I can confirm that mktime fails for year <= 1901 on OS X 10.11. – Martin R Feb 09 '16 at 20:24
  • Note: setting `tp.tm_wday` and `tp.tm_yday` not needed before calling `mktime()`. No harm either. – chux - Reinstate Monica Feb 09 '16 at 20:56
  • Btw, the "workaround" on OS X is to use the Core Foundation framework (CFDate, CFCalendar, ...) or Foundation (NSCalendar, NSDate, ...) – Martin R Feb 09 '16 at 21:14
  • Sadly I'm not in a position to change what library is used, so I have to keep the number of seconds between 0001-01-01 and 1970-01-01 in a const in my program since I can't rely on libc to figure it out. – Camden Narzt Feb 09 '16 at 21:24
    @Camden Narzt determining the precise number of seconds back to `0001-01-01` is tricky concerning the evolution of calendars (Gregorian, Julian, Roman Republic) etc. Better to think of `1970-01-01` as having some seconds value(epoch) and reference time from that. IOWs, converting `time_t` to ancient dates is a can full of worms. Try [Washington's birthday](http://www.livescience.com/33022-when-is-george-washingtons-real-birthday.html) – chux - Reinstate Monica Feb 09 '16 at 21:45
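
Regarding the constant mentioned in the last two comments: the number of seconds between 0001-01-01 and 1970-01-01 can be derived with a standard civil-calendar day count. Below is a minimal sketch, not from the original post, assuming a proleptic Gregorian calendar and UTC (i.e. deliberately ignoring the Julian/Gregorian transition that chux warns about); `days_from_civil` is a hypothetical helper name, not a libc function.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: days from 1970-01-01 to the given proleptic-Gregorian
   civil date (negative for dates before the epoch). */
static int64_t days_from_civil(int64_t y, int m, int d)
{
  y -= m <= 2;                          /* shift so the "year" starts in March */
  int64_t era = (y >= 0 ? y : y - 399) / 400;
  int64_t yoe = y - era * 400;                                   /* [0, 399] */
  int64_t doy = (153 * (m + (m > 2 ? -3 : 9)) + 2) / 5 + d - 1;  /* [0, 365] */
  int64_t doe = yoe * 365 + yoe / 4 - yoe / 100 + doy;           /* [0, 146096] */
  return era * 146097 + doe - 719468;   /* 719468 = days from 0000-03-01 to 1970-01-01 */
}

int main(void)
{
  int64_t days = days_from_civil(1, 1, 1);      /* -719162 */
  printf("%lld\n", (long long)(days * 86400));  /* -62135596800 */
  return 0;
}

Note that this gives a UTC figure, whereas mktime works in local time, so the two can differ by the historical UTC offset of your time zone.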

2 Answers

4

Consulting the C standard:

The range and precision of times representable in clock_t and time_t are **implementation-defined**. [..]

[N1570 §7.27.1/4] (emphasis mine)

And further down, regarding mktime:

The mktime function returns the specified calendar time encoded as a value of type time_t. If the calendar time cannot be represented, the function returns the value (time_t)(-1).

[N1570 §7.27.2.3/3]

As such, as long as the return value of mktime is (time_t)(-1) for the years where it's not working ... you're on your own.
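
For illustration, here is a minimal sketch (not part of the answer) of the question's do_time() adapted to check that return value. Note the ambiguity: (time_t)(-1) is itself a representable calendar time, one second before the epoch, so this check cannot distinguish that particular instant from failure.

#include <stdio.h>
#include <string.h>
#include <time.h>

static void do_time_checked(int year)
{
  struct tm tp;
  memset(&tp, 0, sizeof(tp));   /* 00:00:00, and tm_mon = 0 means January */
  tp.tm_mday = 1;
  tp.tm_year = year - 1900;
  tp.tm_isdst = -1;

  time_t t = mktime(&tp);
  if (t == (time_t)(-1))
    printf("%d: not representable by this implementation\n", year);
  else
    printf("%d %lld\n", year, (long long)t);
}

On OS X, where mktime reportedly fails for years <= 1901, this should report those years as unrepresentable instead of printing -1 as if it were a timestamp.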


Actually, IMO, the standard is a bit quiet about all of this:

[..] int tm_year; // years since 1900 [..]

[N1570 §7.27.1/4]

This could mean (positive) years since 1900, but then why use a signed integer?


As a side note: On my system (Linux 3.14.40 x86_64 glibc-2.21), I get ...

time_t is 64 bits
1 -62135600008
...
1969 -31539600
1970 -3600

Regarding the workaround part: you can of course look at libc implementations that do what you want and try to use their code (if that's possible with respect to any licences you need to obey). Here's the one my system uses.

Daniel Jour
  • He mentioned that time_t is 64 bits, which is sufficient to handle dates all the way back to the Big Bang. – Kurt Stutsman Feb 09 '16 at 19:54
  • @KurtStutsman That may be true, but the quote is not about the values representable by the types, it's about the **times** representable. Since they're implementation-defined, you cannot (portably) expect them to have any particular valid (time) range. – Daniel Jour Feb 09 '16 at 19:56
  • Dates are calculated mathematically, so it would be strange not to support them to the full bit-width of the return/input types. Also the spec is wrong about the 1900. It's years since 1970, not 1900. They even contradict themselves earlier in the man page. And negative dates definitely do work in general. – Kurt Stutsman Feb 09 '16 at 19:59
    @KurtStutsman It says *years since 1900* in the official C11 Standard. That of course doesn't imply that the years before 1970 (or any arbitrary value) must be supported. – 2501 Feb 09 '16 at 20:05
  • @2501 Sorry you are right about 1900. I thought their page was wrong, but upon searching around all the man pages say the same thing. – Kurt Stutsman Feb 09 '16 at 20:09
  • @KurtStutsman No problem, having just dealt with mktime I was also confused at first. – 2501 Feb 09 '16 at 20:11
  • @Kurt Stutsman Regarding "time_t is 64 bits which is sufficient to handle dates all the way back to the Big Bang": `time_t` need not be in units of seconds. If the units were nanoseconds, then 64 bits would be good for about 600 years. Of course each OS can have its own specification on this, as C does not specify the units. – chux - Reinstate Monica Feb 09 '16 at 20:55
  • @chux: As far as I know, OS X is POSIX compliant, and POSIX specifies that time_t is used for the time in seconds: http://pubs.opengroup.org/onlinepubs/009695399/basedefs/sys/types.h.html (and it does on OS X). – Martin R Feb 09 '16 at 21:01
  • @Martin R As the post is not tagged with a specific OS, but with [libc], commenting about the general possibilities of `time_t` made sense, as [C] does not specify seconds as implied in Kurt Stutsman's comment. OTOH, your comment is useful too. – chux - Reinstate Monica Feb 09 '16 at 21:05
  • @chux: You are right. – On the other hand I wonder if the question should be tagged [osx], as it seems to be about a problem observed on OS X only. – Martin R Feb 09 '16 at 21:08
    The extended discussion about other libc implementations has been helpful in deciding how to work around my problem, and while I am developing on OS X, I'll probably deploy on linux as well as macs. – Camden Narzt Feb 09 '16 at 21:27
0

On UNIX systems there are often 64-bit-enabled versions of the time functions. OS X may have something similar, though I couldn't find it in my quick searching. See 64 bit unix timestamp conversion for more information.

EDIT: I found a Mac to test this on, and it does not appear to have a mktime64 function, unfortunately. I did find this library that might work as a workaround, though I haven't tested it personally.
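
As Martin R notes in the comments on the question, Core Foundation is one possible workaround on OS X. A rough sketch along those lines, untested and offered under the assumption that the CFCalendar route accepts such early years at all (compile with -framework CoreFoundation); note that its Gregorian calendar may apply the 1582 Julian/Gregorian cutover, so very old years need not match a proleptic-Gregorian count.

#include <CoreFoundation/CoreFoundation.h>
#include <stdio.h>

int main(void)
{
  /* Gregorian calendar pinned to GMT so the results are time-zone independent. */
  CFTimeZoneRef gmt = CFTimeZoneCreateWithTimeInterval(kCFAllocatorDefault, 0.0);
  CFCalendarRef cal = CFCalendarCreateWithIdentifier(kCFAllocatorDefault,
                                                     kCFGregorianCalendar);
  if (!cal || !gmt)
    return 1;
  CFCalendarSetTimeZone(cal, gmt);

  for (int year = 1; year <= 1970; year++) {
    CFAbsoluteTime at;  /* seconds relative to 2001-01-01 00:00:00 GMT */
    if (CFCalendarComposeAbsoluteTime(cal, &at, "yMd", year, 1, 1))
      printf("%d %.0f\n", year, at + kCFAbsoluteTimeIntervalSince1970);
    else
      printf("%d: could not compose\n", year);
  }

  CFRelease(cal);
  CFRelease(gmt);
  return 0;
}

kCFAbsoluteTimeIntervalSince1970 shifts Core Foundation's 2001-based reference date back to the Unix epoch, so the printed numbers are directly comparable to mktime's output (modulo time zone).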

Kurt Stutsman