
I'd like to get the current number of nanoseconds since midnight, with the lowest latency.

My platform is Linux/CentOS 7 with Clang. I do not care about portability.

I found this `<chrono>` struct, but it divides by seconds/milliseconds etc. to get the result.

I also found this which could be modified for nanoseconds:

#include <sys/time.h>

struct timeval tv;
int msec = -1;
if (gettimeofday(&tv, NULL) == 0)
{
    /* seconds-of-day scaled to ms, plus microseconds truncated to ms */
    msec = ((tv.tv_sec % 86400) * 1000 + tv.tv_usec / 1000);
}

https://stackoverflow.com/a/10499119/997112

but again it uses a division. Is there anything quicker that avoids modulus and division?

I would assume the fastest way would be:

  • Get the time now
  • Multiply the number of hours, minutes and seconds by the necessary nanoseconds, then add the current number of nanoseconds to the total

Is this correct? Something like the sketch below:
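(Untested; `clock_gettime(CLOCK_REALTIME)` is just my assumption for the time source, and `midnight_epoch_sec` is a value I'd compute once at startup rather than on every call.)

#include <stdint.h>
#include <time.h>

/* Epoch second of the most recent UTC midnight, computed once at startup
 * (and refreshed if the process runs across midnight). */
static time_t midnight_epoch_sec;

static void init_midnight(void)
{
    midnight_epoch_sec = (time(NULL) / 86400) * 86400;
}

/* Nanoseconds since midnight, with no division or modulus in the hot path. */
static inline uint64_t ns_since_midnight(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);
    return (uint64_t)(ts.tv_sec - midnight_epoch_sec) * 1000000000ULL
         + (uint64_t)ts.tv_nsec;
}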

user997112
  • I remember the last time I did this, there was a kernel configuration option to service `gettimeofday()` entirely in user-space from the CPU's timebase register. If you're worried about the division, perhaps you could just store the raw value and process it later? – marko Nov 17 '19 at 22:01
  • Isn't integer division pretty fast these days? Why are you so worried about it? – Nicol Bolas Nov 17 '19 at 22:02
  • @marko I'd use rdtsc() but I'm comparing now() with a packet timestamp which is nanoseconds since midnight. – user997112 Nov 17 '19 at 22:03
  • @NicolBolas maximum of 60 cycles is needlessly expensive. I'm receiving hundreds of thousands of messages per second. – user997112 Nov 17 '19 at 22:04
  • @user997112: And the time spent dealing with each of those messages will likely dwarf an integer division. Why do you specifically need nanoseconds anyway? – Nicol Bolas Nov 17 '19 at 22:07
  • @NicolBolas Because that is the granularity which makes sense and the granularity of the other timestamp I am given – user997112 Nov 17 '19 at 22:09
  • @user997112: Also, is it important that it is the time since midnight? Because "midnight" can change whenever the user resets the system clock. Or changes time zones. And FYI: `gettimeofday` will *definitely* take longer than 60 cycles. – Nicol Bolas Nov 17 '19 at 22:11
  • @NicolBolas The other timestamp I am given is in nanoseconds since midnight. The user will not change timezones, nor reset the system clock. – user997112 Nov 17 '19 at 22:12
  • @user997112: My overall point is that you can more effectively optimize your algorithm by not trying to give each message the *exact* time it arrived, but merely the *approximate* time it arrived. You can still express it in "nanoseconds" if you like, but the actual resolution could be microseconds or even milliseconds. That way, the performance cost of getting the time is essentially irrelevant. Getting the time is not cheap, regardless of units, when that time is in any way related to the system time. – Nicol Bolas Nov 17 '19 at 22:17
  • @uneven_mark Were you referring to my pseudo code bullet points at the end? – user997112 Nov 17 '19 at 22:21
  • @NicolBolas Milliseconds is way way way too much. Microseconds might be acceptable but I don't see how this will reduce the latency of taking the timestamp? – user997112 Nov 17 '19 at 22:23
  • @user997112: Because you won't be asking for the system time for every message. You only do it if Y number of CPU ticks has passed since the last time you asked for the system time or something like that. – Nicol Bolas Nov 17 '19 at 22:25
  • `struct timespec` provides nanoseconds by default with `clock_gettime`. – David C. Rankin Nov 17 '19 at 22:45
  • What kernel version are you running? What is your target architecture/board, if any? Do you have HPET available on your target board? – KamilCuk Nov 17 '19 at 23:40
  • Why not use `rdtsc`? You'd have to periodically calibrate (possibly in another, less-critical thread), and beware of all the dangers of using `rdtsc` (e.g. power-save modes, etc.) – Darren Smith Nov 18 '19 at 00:10
  • Are you sure that the actual resolution is really precise to the nanosecond? Try asking for the time in a loop and see what result you get. And have you proven that the actual computation is really too slow to begin with? – Phil1970 Nov 18 '19 at 00:21
  • @Phil1970 the timestamp I receive is accurate to nanoseconds. I need to calculate the time I receive it (in nanoseconds) and subtract the two. The two sources are NTP/PTP synced. – user997112 Nov 18 '19 at 01:25

1 Answer


There isn't any hardware that provides a nanosecond counter; therefore hardware that provides something else (e.g. "CPU cycles") must be used and scaled by software somewhere.

The clock_gettime() function on Linux will scale to nanoseconds for you. More importantly (depending on security vs. performance compromises), this may be done purely in user space via the vDSO, avoiding the overhead of calling the kernel API (which is likely to be at least 10 times more expensive than a measly division).
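For example, a minimal sketch of the kind of thing I mean (the `% 86400` and the scaling are a handful of cycles, and the `CLOCK_REALTIME` read is normally served from the vDSO without entering the kernel):

#include <stdint.h>
#include <time.h>

/* Wall-clock nanoseconds since the most recent UTC midnight. */
uint64_t ns_since_midnight(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);   /* usually a vDSO call, no kernel entry */
    return (uint64_t)(ts.tv_sec % 86400) * 1000000000ULL + (uint64_t)ts.tv_nsec;
}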

However, at these scales you need to be extremely specific about what you actually want. For example: what is expected during leap seconds? Two computers can disagree simply because one is configured to smear leap seconds and the other isn't.

For another example, if you want to calculate latency (e.g. like "latency = current_time_at_receiver - time_packet_says_it_was_sent"), then the 2 computers can be out of sync (e.g. the sender's clock being a few seconds behind the receiver's, so the latency ends up negative). To deal with that you'll probably need a training phase (a bit like the NTP protocol) where you try to estimate the initial difference between the 2 computers' time sources, followed by monitoring/tracking (to try to compensate for any long-term drift).
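A rough sketch of that idea (`estimated_offset_ns` is a placeholder; the real work is the training/tracking that produces and maintains it):

#include <stdint.h>

/* Estimated (receiver clock - sender clock), in nanoseconds, learned during a
 * training phase and updated slowly afterwards to track drift. */
static int64_t estimated_offset_ns;

/* One-way latency estimate for a packet stamped with sender-side
 * nanoseconds-since-midnight. */
static int64_t packet_latency_ns(uint64_t recv_ns, uint64_t sent_ns)
{
    return ((int64_t)recv_ns - (int64_t)sent_ns) - estimated_offset_ns;
}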

Brendan
  • Both machines are NTP or PTP-synched. I have the number of nanoseconds since midnight in the packet and I need the number of nanoseconds on the receiver. – user997112 Nov 18 '19 at 01:19
  • "latency = current_time_at_receiver - time_packet_says_it_was_sent" this is exactly what I would like – user997112 Nov 18 '19 at 01:21
  • @user997112: In that case there's an unavoidable dilemma involved - the accuracy of any kind of synchronization (e.g. PTP) depends on networking latency; and you can't measure network latency with more accuracy than the network's latency allows. – Brendan Nov 18 '19 at 01:43
  • @user997112: Depending on your goals (e.g. for bench-marking purposes, especially for "request and response" protocols) "round trip time / 2" (where only one computer's time source is involved) might be a good alternative (possibly including server informing client of "processing time" between receiving request and sending reply so client can calculate "networking_latency_alone = (round_trip_time - processing_time) / 2", and where only server's time source is used to calculate processing time). – Brendan Nov 18 '19 at 01:43
  • You're assuming packet in -> packet out, which is not the case. I am literally just asking how I get the number of nanoseconds since midnight on the receiver. – user997112 Nov 18 '19 at 03:33