225

I searched my Linux box and saw this typedef:

typedef __time_t time_t;

But I could not find the __time_t definition.

Mark Amery
kal

11 Answers

196

The time_t Wikipedia article sheds some light on this. The bottom line is that the type of time_t is not guaranteed by the C specification.

The time_t datatype is a data type in the ISO C library defined for storing system time values. Such values are returned from the standard time() library function. This type is a typedef defined in the standard <time.h> header. ISO C defines time_t as an arithmetic type, but does not specify any particular type, range, resolution, or encoding for it. Also unspecified are the meanings of arithmetic operations applied to time values.

Unix and POSIX-compliant systems implement the time_t type as a signed integer (typically 32 or 64 bits wide) which represents the number of seconds since the start of the Unix epoch: midnight UTC of January 1, 1970 (not counting leap seconds). Some systems correctly handle negative time values, while others do not. Systems using a 32-bit time_t type are susceptible to the Year 2038 problem.

Zeta
William Brendel
  • 7
    Note, however, that time_t values are usually only stored in memory, not on disk. Instead, time_t is converted to text or some other portable format for persistent storage. That makes the Y2038 problem not really a problem. –  Jan 23 '09 at 19:21
  • @Lars Wirzenius - I thought dirents contained time_ts? – Heath Hunnicutt May 20 '11 at 01:39
  • 11
    @Heath: on a specific system, where the same people create the operating system and C library, using `time_t` in the on-disk data structure may happen. However, since filesystems are often read by other operating systems, it'd be silly to define the filesystem based on such implementation-dependent types. For example, the same filesystem might be used on both 32-bit and 64-bit systems, and `time_t` might change size. Thus, filesystems need to be defined more exactly ("32-bit signed integer giving number of seconds since the start of 1970, in UTC") rather than just as `time_t`. –  May 22 '11 at 08:44
  • 1
    As a note: the linked Wikipedia article has been removed, and it now redirects to list of `time.h` contents. That article links to cppreference.com but the cited content is nowhere to be found… – Michał Górny Aug 30 '12 at 21:06
  • 3
    @MichałGórny: Fixed, as long as articles aren't deleted you can always have a look at the history in order to find the correct version. – Zeta Mar 13 '13 at 07:51
  • Do you think the article meant unsigned_int? or did it really mean signed? – Noitidart Mar 28 '15 at 20:26
  • 1
    @Noitidart It meant signed, because otherwise it would be impossible to encode time before the origin point (epoch on Unix/Linux). – BlackJack Apr 13 '18 at 13:06
  • They don't count leap years in Unix timestamps. That is stupid. Linux is very poorly programmed. We need to replace this antiquated code with code that does not suck. –  Jun 25 '18 at 02:19
  • 1
    @Cale: Leap seconds, not leap years… – Ry- Jul 08 '18 at 17:13
  • 7
    -1; the claim quoted from Wikipedia that POSIX guarantees that `time_t` is signed is incorrect. http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/sys_types.h.html dictates that various things must be a "signed integer type" or "unsigned integer type", but of `time_t` it says merely that it *"shall be an integer type"*. An implementation can make `time_t` unsigned and still be POSIX-compliant. – Mark Amery Apr 06 '19 at 19:30
  • Interestingly, https://pubs.opengroup.org/onlinepubs/9699919799/functions/time.html which covers `time_t` in more detail does _not_ require that it be an integral type and explicitly notes that they're leaving their options open for a future revision to either specify 64-bit integral or require that a specific value be representable. This looks like an internal inconsistency in POSIX. – Phil P Feb 17 '20 at 16:03
  • @PhilP Nothing in the documentation of the `time` function is inconsistent with the definition of `time_t` being some sort of integer type. The future-directions section leaves open whether a future version of POSIX will require `time_t` to have a specific size, or just a size that can represent a certain minimum date. – chepner Mar 08 '20 at 23:16
  • It's not inconsistent with it being integral, it's inconsistent as to whether or not integral is mandatory. – Phil P Mar 10 '20 at 00:47
123

[root]# cat time.c

#include <time.h>

int main(int argc, char** argv)
{
        time_t test;
        return 0;
}

[root]# gcc -E time.c | grep __time_t

typedef long int __time_t;

It's defined in $INCDIR/bits/types.h through:

# 131 "/usr/include/bits/types.h" 3 4
# 1 "/usr/include/bits/typesizes.h" 1 3 4
# 132 "/usr/include/bits/types.h" 2 3 4
Quassnoi
  • 1
    I see both `typedef __int32_t __time_t;` and `typedef __time_t time_t;` in a `FreeBSD freebsd-test 8.2-RELEASE-p2 FreeBSD 8.2-RELEASE-p2 #8: Sun Aug 7 18:23:48 UTC 2011 root@freebsd-test:/usr/obj/usr/src/sys/MYXEN i386`. Your results are explicitly set like that in Linux (at least on 2.6.32-5-xen-amd64 from Debian). – ssice May 10 '12 at 17:21
  • 1
    @Viet also possible with a one-liner, without creating a file: http://stackoverflow.com/a/36096104/895245 – Ciro Santilli OurBigBook.com Mar 18 '16 at 23:48
  • 1
    Why grep for `__time_t` and not `time_t` to find the underlying type of `time_t`? Omitting a step? – chux - Reinstate Monica Aug 24 '19 at 16:12
  • @chux-ReinstateMonica - The OP said that he had found the typedef from time_t to __time_t. This answer is just addressing the question asked of what __time_t is defined as. But I agree that for a generic case (where time_t may not be typedeffed to __time_t), you would need to grep for time_t first, and then possibly grep again for what that returns – Michael Firth May 28 '20 at 09:10
  • @MichaelFirth Fair enough. I recall my concern as even though OP found `typedef __time_t time_t;`, examination of the surrounding code is also needed to ensure that typedef was in fact used and not only part of a conditional compile. `typedef long time_t;` may have been found too. – chux - Reinstate Monica May 28 '20 at 14:20
49

Standards

William Brendel quoted Wikipedia, but I prefer it from the horse's mouth.

C99 N1256 standard draft 7.23.1/3 "Components of time" says:

The types declared are size_t (described in 7.17); clock_t and time_t which are arithmetic types capable of representing times

and 6.2.5/18 "Types" says:

Integer and floating types are collectively called arithmetic types.

POSIX 7 sys_types.h says:

[CX] time_t shall be an integer type.

where [CX] is defined as:

[CX] Extension to the ISO C standard.

It is an extension because it makes a stronger guarantee: floating points are out.

gcc one-liner

No need to create a file as mentioned by Quassnoi:

echo | gcc -E -xc -include 'time.h' - | grep time_t

On Ubuntu 15.10 GCC 5.2 the top two lines are:

typedef long int __time_t;
typedef __time_t time_t;

Command breakdown with some quotes from man gcc:

  • -E: "Stop after the preprocessing stage; do not run the compiler proper."
  • -xc: Specify C language, since input comes from stdin which has no file extension.
  • -include file: "Process file as if "#include "file"" appeared as the first line of the primary source file."
  • -: input from stdin
Ciro Santilli OurBigBook.com
12

The answer is definitely implementation-specific. To find out definitively for your platform/compiler, just add this output somewhere in your code:

printf("sizeof time_t is: %zu\n", sizeof(time_t));

If the answer is 4 (32 bits) and your data is meant to go beyond 2038, then you have 25 years to migrate your code.

Your data will be fine if you store it as a string, even if it's something simple like:

FILE *stream = /* stream file pointer that you've opened correctly */;
time_t now = time(NULL);
fprintf(stream, "%lld\n", (long long)now);

Then just read it back the same way (fread, fscanf, etc. into a matching integer type), and you have your epoch offset time. A similar workaround exists in .NET. I pass 64-bit epoch numbers between Windows and Linux systems with no problem (over a communications channel). That brings up byte-ordering issues, but that's another subject.

To answer paxdiablo's query, I'd say that it printed "19100" because the program was written this way (and I admit I did this myself in the '80s):

time_t now;
struct tm local_date_time;
now = time(NULL);
// convert, then copy internal object to our object
memcpy (&local_date_time, localtime(&now), sizeof(local_date_time));
printf ("Year is: 19%02d\n", local_date_time.tm_year);

The printf statement prints the fixed string "Year is: 19" followed by the zero-padded "years since 1900" value (the definition of tm->tm_year). In 2000, that value is 100, obviously. "%02d" pads to two digits with zeros but does not truncate values longer than two digits.

The correct way is (change to last line only):

printf ("Year is: %d\n", local_date_time.tm_year + 1900);

New question: What's the rationale for that thinking?

oHo
pwrgreg007
6

With glibc, time_t is long int on 64-bit machines; under the x32 ABI (__x86_64__ with __ILP32__) it is long long int.

You could verify this in these header files:

time.h: /usr/include
types.h and typesizes.h: /usr/include/x86_64-linux-gnu/bits

(The statements below are not contiguous in the files; each can be found in the respective header with a Ctrl+F search.)

1) In time.h

typedef __time_t time_t;

2) In types.h

# define __STD_TYPE     typedef  
__STD_TYPE __TIME_T_TYPE __time_t;  

3) In typesizes.h

#define __TIME_T_TYPE       __SYSCALL_SLONG_TYPE  
#if defined __x86_64__ && defined __ILP32__  
# define __SYSCALL_SLONG_TYPE   __SQUAD_TYPE  
#else
# define __SYSCALL_SLONG_TYPE   __SLONGWORD_TYPE
#endif  

4) Again in types.h

#define __SLONGWORD_TYPE    long int
#if __WORDSIZE == 32
# define __SQUAD_TYPE       __quad_t
#elif __WORDSIZE == 64
# define __SQUAD_TYPE       long int  

#if __WORDSIZE == 64
typedef long int __quad_t;  
#else
__extension__ typedef long long int __quad_t;
Sibren
abcoep
6

Under Visual Studio 2008, it defaults to an __int64 unless you define _USE_32BIT_TIME_T. You're better off just pretending that you don't know what it's defined as, since it can (and will) change from platform to platform.

Eclipse
  • 2
    That usually works, but if your program is meant to keep track of things that will happen 30 years from now, it's pretty important that you *not* have a signed 32-bit time_t. – Rob Kennedy Jan 23 '09 at 00:35
  • 4
    @Rob, bah, leave it! We'll just start running around like headless chickens in 2036, the same as we did for Y2K. Some of us will make a bucketload of money from being Y2k38 consultants, Leonard Nimoy will bring out another hilarious book about how we should all go and hide in the forest... – paxdiablo Jan 23 '09 at 00:40
  • 1
    ... and it'll all blow over, the public wondering what all the fuss was about. I may even come out of retirement to make some money for the kids' inheritance :-). – paxdiablo Jan 23 '09 at 00:40
  • 2
    BTW, we only found one Y2K bug and that was a web page which listed the date as Jan 1, 19100. Exercise for the reader as to why... – paxdiablo Jan 23 '09 at 00:42
  • 9
    If the event to happen in 30 years is "expire this backup," then you might be in trouble NOW, not in 2038. Add 30 years to today's 32-bit time_t, and you get a date in the past. Your program looks for events to process, finds one that's overdue (by 100 years!), and executes it. Oops, no more backup. – Rob Kennedy Jan 23 '09 at 05:05
  • Y2K was a non-event *because* of the efforts of programmers and engineers working specifically to avoid it. If there had been a "nothing will go wrong" attitude about it then, it *would* have had disastrous results. But also remember there were far fewer "computers" then: "smart" appliances and the Internet of Things weren't happening. – bonsaiviking Oct 10 '17 at 16:10
  • Given the OP specifically tagged the question with Linux, a VS2008 answer is probably not what they were wanting – Michael Firth May 28 '20 at 09:32
4

It's a 32-bit signed integer type on most legacy platforms. However, that causes your code to suffer from the year 2038 bug. So modern C libraries should be defining it to be a signed 64-bit int instead, which is safe for a few billion years.

3

Typically you will find these underlying implementation-specific typedefs for gcc in the bits or asm header directory. For me, it's /usr/include/x86_64-linux-gnu/bits/types.h.

You can just grep, or use a preprocessor invocation like that suggested by Quassnoi to see which specific header.

poolie
1

What is time_t ultimately a typedef for?

Robust code does not care what the type is.

C specifies time_t to be a real type, like double, long long, int64_t, int, etc.

It could even be unsigned, as the error return of many time functions is not -1 but (time_t)(-1). This implementation choice is uncommon.

The point is that the "need-to-know" the type is rare. Code should be written to avoid the need.


Yet a common "need-to-know" occurs when code wants to print the raw time_t. Casting to the widest integer type will accommodate most modern cases.

time_t now = 0;
time(&now);
printf("%jd", (intmax_t) now);
// or 
printf("%lld", (long long) now);

Casting to a double or long double will work too, yet could produce inexact decimal output:

printf("%.16e", (double) now);
chux - Reinstate Monica
  • I'm in a need-to-know situation because I need to transfer time from an ARM system to an AMD64 system. time_t is 32 bits on the ARM, and 64 bits on the server. If I translate the time into a format and send a string, it's inefficient and slow. Therefore it's much better to just send the entire time_t and sort it out on the server end. However, I need to understand the type a bit more because I don't want the number to get mangled by differing endianness between systems, so I need to use htonl... but first, on a need-to-know basis, I want to find out the underlying type ;) – Owl Feb 14 '19 at 17:00
  • 1
    Another "need to know" case, at least for signed vs unsigned is whether you need to take care when subtracting times. If you just "subtract and print the result", then you will possibly get what you expect on a system with a signed time_t, but not with an unsigned time_t. – Michael Firth May 28 '20 at 09:31
  • @MichaelFirth Cases exist for both integer signed time_t and unsigned time_t where raw subtraction will result in unexpected results. C provides `double difftime(time_t time1, time_t time0)` for a uniform subtraction approach. – chux - Reinstate Monica May 28 '20 at 14:30
0

You could use typeid to find out how time_t is defined in your system.

#include <iostream> // cout
#include <ctime>    // time_t
#include <typeinfo> // typeid, name

using namespace std;

int main()
{
    cout << "Test 1: The type of time_t is: \t\t" 
         << typeid(time_t).name() << endl;
    cout << "Test 2: time_t is a signed long?: \t"
         << (typeid(time_t) == typeid(signed long) ? "true" : "false") << endl;
    cout << "Test 3: time_t is an unsigned long?: \t" 
         << (typeid(time_t) == typeid(unsigned long) ? "true" : "false") << endl;
    return 0;
}

In the case of my system, the output is:

Test 1: The type of time_t is:          l
Test 2: time_t is a signed long?:       true
Test 3: time_t is an unsigned long?:    false
programmar
-4

time_t is just a typedef for 8 bytes (long long/__int64), which all compilers and OSes understand. Back in the day, it used to be just long int (4 bytes), but not now. If you look at time_t in crtdefs.h you will find both implementations, but the OS will use long long.

eeerahul
  • 5
    all compilers and OSes? No. On my Linux system the compiler takes the 4-byte signed implementation. – Vincent Feb 17 '14 at 13:51
  • On Zynq 7010 systems time_t is 4 bytes. – Owl Feb 14 '19 at 17:02
  • 1
    On the embedded systems I work on time_t is almost always 32-bits or 4 bytes. The standard specifically states that it is implementation specific which makes this answer just wrong. – Cobusve Jan 07 '20 at 17:09