I searched my Linux box and saw this typedef:
typedef __time_t time_t;
But I could not find the __time_t
definition.
The time_t Wikipedia article sheds some light on this. The bottom line is that the type of time_t
is not guaranteed in the C specification.
The time_t datatype is a data type in the ISO C library defined for storing system time values. Such values are returned from the standard time() library function. This type is a typedef defined in the standard header. ISO C defines time_t as an arithmetic type, but does not specify any particular type, range, resolution, or encoding for it. Also unspecified are the meanings of arithmetic operations applied to time values.

Unix and POSIX-compliant systems implement the time_t type as a signed integer (typically 32 or 64 bits wide) which represents the number of seconds since the start of the Unix epoch: midnight UTC of January 1, 1970 (not counting leap seconds). Some systems correctly handle negative time values, while others do not. Systems using a 32-bit time_t type are susceptible to the Year 2038 problem.
[root]# cat time.c
#include <time.h>
int main(int argc, char** argv)
{
time_t test;
return 0;
}
[root]# gcc -E time.c | grep __time_t
typedef long int __time_t;
It's defined in $INCDIR/bits/types.h
through:
# 131 "/usr/include/bits/types.h" 3 4
# 1 "/usr/include/bits/typesizes.h" 1 3 4
# 132 "/usr/include/bits/types.h" 2 3 4
Standards
William Brendel quoted Wikipedia, but I prefer it from the horse's mouth.
C99 N1256 standard draft 7.23.1/3 "Components of time" says:
The types declared are size_t (described in 7.17) clock_t and time_t which are arithmetic types capable of representing times
and 6.2.5/18 "Types" says:
Integer and floating types are collectively called arithmetic types.
POSIX 7 sys_types.h says:
[CX] time_t shall be an integer type.
where [CX] is defined as:
[CX] Extension to the ISO C standard.
It is an extension because it makes a stronger guarantee: floating points are out.
gcc one-liner
No need to create a file as mentioned by Quassnoi:
echo | gcc -E -xc -include 'time.h' - | grep time_t
On Ubuntu 15.10 GCC 5.2 the top two lines are:
typedef long int __time_t;
typedef __time_t time_t;
Command breakdown with some quotes from man gcc:
-E: "Stop after the preprocessing stage; do not run the compiler proper."
-xc: Specify C language, since input comes from stdin, which has no file extension.
-include file: "Process file as if "#include "file"" appeared as the first line of the primary source file."
-: input from stdin

The answer is definitely implementation-specific. To find out definitively for your platform/compiler, just add this output somewhere in your code:
printf ("sizeof time_t is: %zu\n", sizeof(time_t));
If the answer is 4 (32 bits) and your data is meant to go beyond 2038, then you have 25 years to migrate your code.
Your data will be fine if you store your data as a string, even if it's something simple like:
FILE *stream = [stream file pointer that you've opened correctly];
fprintf (stream, "%d\n", (int)now);  /* 'now' is your time_t variable */
Then just read it back the same way (fread, fscanf, etc. into an int), and you have your epoch offset time. A similar workaround exists in .Net. I pass 64-bit epoch numbers between Win and Linux systems with no problem (over a communications channel). That brings up byte-ordering issues, but that's another subject.
To answer paxdiablo's query, I'd say that it printed "19100" because the program was written this way (and I admit I did this myself in the '80s):
time_t now;
struct tm local_date_time;
now = time(NULL);
// convert, then copy internal object to our object
memcpy (&local_date_time, localtime(&now), sizeof(local_date_time));
printf ("Year is: 19%02d\n", local_date_time.tm_year);
The printf
statement prints the fixed string "Year is: 19" followed by the zero-padded "years since 1900" value (the definition of tm->tm_year
). In 2000, that value is 100, obviously. "%02d"
pads to a minimum of two digits with zeros but does not truncate values longer than two digits.
The correct way is (change to last line only):
printf ("Year is: %d\n", local_date_time.tm_year + 1900);
New question: What's the rationale for that thinking?
time_t
is of type long int
on 64-bit machines; otherwise it is long long int
.
You could verify this in these header files:
time.h
: /usr/include
types.h
and typesizes.h
: /usr/include/x86_64-linux-gnu/bits
(The statements below do not appear one after another; each can be found in the respective header file with a Ctrl+F search.)
1) In time.h
typedef __time_t time_t;
2) In types.h
# define __STD_TYPE typedef
__STD_TYPE __TIME_T_TYPE __time_t;
3) In typesizes.h
#define __TIME_T_TYPE __SYSCALL_SLONG_TYPE
#if defined __x86_64__ && defined __ILP32__
# define __SYSCALL_SLONG_TYPE __SQUAD_TYPE
#else
# define __SYSCALL_SLONG_TYPE __SLONGWORD_TYPE
#endif
4) Again in types.h
#define __SLONGWORD_TYPE long int
#if __WORDSIZE == 32
# define __SQUAD_TYPE __quad_t
#elif __WORDSIZE == 64
# define __SQUAD_TYPE long int
#if __WORDSIZE == 64
typedef long int __quad_t;
#else
__extension__ typedef long long int __quad_t;
Under Visual Studio 2008, it defaults to an __int64
unless you define _USE_32BIT_TIME_T
. You're better off just pretending that you don't know what it's defined as, since it can (and will) change from platform to platform.
It's a 32-bit signed integer type on most legacy platforms. However, that causes your code to suffer from the year 2038 bug. So modern C libraries should be defining it to be a signed 64-bit int instead, which is safe for a few billion years.
Typically you will find these underlying implementation-specific typedefs for gcc in the bits
or asm
header directory. For me, it's /usr/include/x86_64-linux-gnu/bits/types.h
.
You can just grep, or use a preprocessor invocation like that suggested by Quassnoi to see which specific header.
What is time_t ultimately a typedef for?
Robust code does not care what the type is.
C specifies time_t
to be a real type like double, long long, int64_t, int
, etc.
It could even be unsigned
, since the error return value of many time functions is not -1
but (time_t)(-1)
. This implementation choice is uncommon.
The point is that the "need-to-know" the type is rare. Code should be written to avoid the need.
Yet a common "need-to-know" occurs when code wants to print the raw time_t
. Casting to the widest integer type will accommodate most modern cases.
time_t now = 0;
time(&now);
printf("%jd", (intmax_t) now);
// or
printf("%lld", (long long) now);
Casting to a double
or long double
will work too, yet could produce inexact decimal output:
printf("%.16e", (double) now);
You could use typeid
to find out how time_t
is defined in your system.
#include <iostream> // cout
#include <ctime> // time_t
#include <typeinfo> // typeid, name
using namespace std;
int main()
{
cout << "Test 1: The type of time_t is: \t\t"
<< typeid(time_t).name() << endl;
cout << "Test 2: time_t is a signed long?: \t"
<< (typeid(time_t) == typeid(signed long) ? "true" : "false") << endl;
cout << "Test 3: time_t is an unsigned long?: \t"
<< (typeid(time_t) == typeid(unsigned long) ? "true" : "false") << endl;
return 0;
}
In the case of my system, the output is:
Test 1: The type of time_t is: 		l
Test 2: time_t is a signed long?: 	true
Test 3: time_t is an unsigned long?: 	false
time_t
is just a typedef
for 8 bytes (long long/__int64
), which all compilers and OSes understand. Back in the day it was just long int
(4 bytes), but not anymore. If you look at time_t
in crtdefs.h
you will find both implementations, but the OS will use long long
.