
Morning!

I'm trying to get a nanosecond timestamp (uint64_t) from a timeval struct.

    struct timeval t;
    // initialized so that ....
    printf("t sec: %ld, micro sec: %ld\n", t.tv_sec, t.tv_usec);
    // .... prints "t sec: 4, micro sec: 130728"

    long int sec = (long int) t.tv_sec;
    // just did that because I thought that using time_t in calculations may cause errors
    // printf("%ld", sec) -> 4, so this seems to work as expected

    uint64_t t_int = (sec * 1000000 + t.tv_usec) * 1000; // should be nanoseconds
    printf("t_int (uint64_t) %llu\n", t_int);
    // prints "t_int (uint64_t) 18446744073545312320"

My timevals should definitely fit into uint64_t. Also, the cast to long int from time_t works, so all I'm doing is a simple calculation with two long ints, putting the result into a uint64_t, so I'm not sure why I'm getting an obviously wrong value. Any ideas? Is this just a wrong way of formatting (with %llu)?

Noel93
    What size is `long int` on your machine? What value do you get for `sec`? Are you aware that your calculation `sec * 1000000` is done using `long int`, not `uint64_t`? As well as all other calculations in that snippet. – Gerhardh Apr 21 '20 at 14:54
  • `printf("t_int (uint64_t) %"PRIu64, t_int);` uses the correct format specifier for `uint64_t`. See [here](https://stackoverflow.com/questions/16859500/mmh-who-are-you-priu64). – Gerhard Apr 21 '20 at 14:57

2 Answers


Either:

    uint64_t t_int = ((uint64_t)t.tv_sec * 1000000 + t.tv_usec) * 1000;

or:

    uint64_t t_int = (t.tv_sec * 1000000ULL + t.tv_usec) * 1000;

The latter skips the cast and instead forces 64-bit arithmetic by making the right-hand operand of the first `*` an `unsigned long long` constant, so the usual arithmetic conversions widen `tv_sec` before the multiplication (instead of casting the left-hand side up). Either works; it's a matter of which style you prefer.

Do not go through a temp var of type `long` like you did: `long` may be a 32-bit type and cannot represent times past January 2038. If you need a temp var, make it `time_t` or `int64_t` or similar. But it's not needed.

Also, you may want to consider using `int64_t` (and changing the `ULL` to `LL`) instead of `uint64_t`, so that you can represent times before 1970.

R.. GitHub STOP HELPING ICE
  • "consider using `int64_t`" is a good idea as [specified](https://pubs.opengroup.org/onlinepubs/7908799/xsh/systypes.h.html) `.tv_usec` and certainly `.tv_sec` are signed quantities. – chux - Reinstate Monica Apr 21 '20 at 15:24
  • Thanks, worked flawlessly. I'll keep it unsigned though, since I'm using this to just count time from a start to a stop time. – Noel93 Apr 21 '20 at 15:28

You are calculating with `long int` width. The multiplication overflows, which is undefined behavior for signed types; on your architecture it wraps to a negative value, which then converts to a huge positive value when assigned to the `uint64_t`. Calculate with `uint64_t` or `unsigned long long` precision instead. To be pedantic, cast all values:

    uint64_t t_int = ((uint64_t)sec * 1000000ull + (uint64_t)t.tv_usec) * 1000ull;

But thanks to the usual arithmetic conversions, casting just the first operand to `uint64_t` is enough (the rest of the expression is then widened automatically):

    uint64_t t_int = ((uint64_t)sec * 1000000 + t.tv_usec) * 1000;
KamilCuk
    Even storing `long int sec = (long int) t.tv_sec;` to begin with is a bug. It will overflow Y2038 on implementations with 32-bit long. Instead you want to just use `t.tv_sec` directly, or make the useless temp var `sec` have type `time_t` if you insist on having it. – R.. GitHub STOP HELPING ICE Apr 21 '20 at 15:07