3

I noticed that some programmers use unsigned long for the tv_sec and tv_usec members of struct timeval (when copying them or doing arithmetic with them), even though they are defined as plain long.

It does make me wonder why they were defined as signed types in the first place, given that time usually goes forward.

j riv

3 Answers

4

Using long int for those variables will work until the year 2038; after that, tv_sec will overflow on machines where long is 4 bytes.

POSIX requires timeval to be defined as follows:

The <sys/time.h> header shall define the timeval structure that includes at least the following members:

time_t         tv_sec      Seconds. 
suseconds_t    tv_usec     Microseconds. 

Notice that the time_t type is used rather than long. time_t is a 32-bit type on some systems and a 64-bit type on others. To avoid the overflow, 32-bit systems will probably move time_t to an unsigned 32-bit integer or to a 64-bit one.

That is why some programmers use unsigned long: it postpones the overflow until after the year 2100. You should use the time_t type instead, and then you won't need to think about how long your program is supposed to keep running.

Milan
    I don't buy this reason (using unsigned values to delay the Y2038 problem until 2100). By 2038 all machines will have 64bit (or larger) `long` and the problem will be gone. – R.. GitHub STOP HELPING ICE Dec 19 '10 at 14:29
  • @R.. Not all embedded and legacy systems will have that. Better safe than sorry. – Milan Dec 19 '10 at 14:32
  • `time_t` is `long` on all Unix systems, and that is 64 bits wide on 64-bit Unix systems, so it will never overflow in practice. – Philipp Dec 19 '10 at 15:19
  • 28 years is a long time, roughly 70% of the lifetime of Unix up to now. @Philipp: Unix does not require `time_t` to be `long`. It could be `long long`. Sadly binary compatibility is more important to most people than fixing legacy limitations, so in practice we're stuck with 32-bit `time_t` on 32-bit machines. – R.. GitHub STOP HELPING ICE Dec 19 '10 at 16:08
  • It's obvious `time_t` will be upgraded in anything newly released for 2038+; it's *time*_t. So there is no problem for any OS aimed at, say, desktop users, or even business users with their own workstations. Unless one is targeting truly legacy devices, it's ugly not to use the standard type. – j riv Dec 19 '10 at 17:09
2

When Unix time was invented, negative times probably made sense: AT&T needed usable timestamps for events that happened before 1970, back in the 1960s.

As for microseconds: if you subtract two values, a signed type lets the result go negative, while an unsigned type wraps around to four billion or more. Comparing against 0 is the more intuitive check.

Dallaylaen
  • Are you sure? My understanding is that `(time_t)-1` has always been an error indicator, in which case the use of negative time values seems dubious... – R.. GitHub STOP HELPING ICE Dec 19 '10 at 16:09
  • time(2) can only return error if you specify a pointer for returning the result, which means return value is just a success indicator. Gettimeofday does not return time_t to begin with. That said, FreeBSD's manpage specifically warns about the -1. – Dallaylaen Dec 19 '10 at 16:39
2

tv_sec has type time_t. tv_usec has type long, and needs to be signed because you will (on average 50% of the time) get negative results in tv_usec when subtracting timeval values to compute an interval of time, and you have to detect this and convert it to a borrow from the tv_sec field. The standard (POSIX) could have instead made the type unsigned and required you to detect wrapping in advance, but it didn't, probably because that would be harder to use and contradict existing practice.

There is also no reason, range-wise, for tv_usec to be unsigned, since the maximum range it really needs to be able to represent is -999999 to 1999998 (or several times that if you want to accumulate several additions/subtractions before renormalizing).

R.. GitHub STOP HELPING ICE