
I am writing some test code where I need nanosecond resolution. When I use `clock_gettime` with `CLOCK_MONOTONIC`, I get a value I expect: 3327.874384321. When I use `clock_gettime` with `CLOCK_MONOTONIC_RAW`, I get a value that I do not expect: 3327.875723000.

I've run this in a loop, and ALL of the values returned have the final nanosecond digits "truncated" to 000.
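
Here is a minimal sketch along the lines of my test loop (simplified, with error handling omitted):

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec mono, raw;
    int i;

    for (i = 0; i < 10; i++) {
        /* Sample both clocks back to back. */
        clock_gettime(CLOCK_MONOTONIC, &mono);
        clock_gettime(CLOCK_MONOTONIC_RAW, &raw);

        printf("MONOTONIC:     %lld.%09ld\n",
               (long long)mono.tv_sec, mono.tv_nsec);
        printf("MONOTONIC_RAW: %lld.%09ld\n",
               (long long)raw.tv_sec, raw.tv_nsec);
    }
    return 0;
}
```

(Compiled with `gcc test.c -o test -lrt`; older glibc needs `-lrt` for `clock_gettime`.)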

Output from `uname -a`: `Linux raspberrypi 3.12.22+ #691 PREEMPT Wed Jun 18 18:29:58 BST 2014 armv6l GNU/Linux`

Thoughts on what is happening, and how I might address it? I am currently considering disabling NTP so that I can use `CLOCK_MONOTONIC`.

Nick
  • What does NTP have to do with this? Why do you want the RAW clock? – R.. GitHub STOP HELPING ICE Aug 30 '14 at 19:43
  • Based on various readings, `CLOCK_MONOTONIC` can be adjusted by NTP, thus creating the possibility of any sample being contaminated by the clock's adjustment. Perhaps I've misunderstood... Here's a Stack Overflow ref: http://stackoverflow.com/questions/14270300/what-is-the-difference-between-clock-monotonic-clock-monotonic-raw – Nick Aug 30 '14 at 19:48
  • As the answers to that question stated, `CLOCK_MONOTONIC` **does not reflect discontinuities** from setting the time or from NTP adjustments. Rather, its rate of advance is just adjusted to correct for the imprecision of the hardware clock frequency with respect to real time. – R.. GitHub STOP HELPING ICE Aug 31 '14 at 00:49
  • @R..: Agreed that NTP changes the rate (or frequency) and can therefore skew results, or am I still misunderstanding? – Nick Aug 31 '14 at 18:53
  • If NTP changes the rate, it's just making it closer to correct, versus your hardware, which is skewed (clock running too fast or too slow). I don't see why you would prefer the less-correct values. – R.. GitHub STOP HELPING ICE Sep 01 '14 at 21:28

1 Answer


I think your conclusion that `CLOCK_MONOTONIC_RAW` is "truncated" is wrong. Rather, the resolution of the hardware clock source is probably just microseconds. The nonzero low digits you're seeing in `CLOCK_MONOTONIC` are because the timestamps from the hardware clock source are being scaled, per adjustments made via `adjtime`/NTP, to correct for imprecision in the hardware clock rate that would otherwise make it drift relative to real time.

To test this hypothesis, you should take a large number of timer samples with `CLOCK_MONOTONIC` and look for a pattern in the low digits. I suspect you'll find that all your timestamps differ by a multiple of some number of nanoseconds close to but not exactly 1000, e.g. maybe 995 or 1005 or so.
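
For example, a sketch of such a test (take all the samples first and only print afterwards, so the printing overhead doesn't perturb the sampling):

```c
#include <stdio.h>
#include <time.h>

#define SAMPLES 1000

int main(void)
{
    static struct timespec t[SAMPLES];
    int i;

    /* Take all the samples as quickly as possible first... */
    for (i = 0; i < SAMPLES; i++)
        clock_gettime(CLOCK_MONOTONIC, &t[i]);

    /* ...then print the consecutive deltas. If the hardware tick is
       ~1 us and the timestamps are being scaled, the deltas should
       cluster around multiples of a value near (but not exactly)
       1000 ns. */
    for (i = 1; i < SAMPLES; i++) {
        long long ns = (t[i].tv_sec - t[i-1].tv_sec) * 1000000000LL
                     + (t[i].tv_nsec - t[i-1].tv_nsec);
        printf("%lld\n", ns);
    }
    return 0;
}
```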

R.. GitHub STOP HELPING ICE
  • I see what you are saying, which is why I did this: `clock_getres( CLOCK_MONOTONIC, &res ); printf("Monotonic res: %lld.%.9ld\n", (long long)res.tv_sec, res.tv_nsec); clock_getres( CLOCK_MONOTONIC_RAW, &res ); printf("Monotonic res raw: %lld.%.9ld\n", (long long)res.tv_sec, res.tv_nsec);` The output is `Monotonic res: 0.000000001` and `Monotonic res raw: 0.000000001`. Both show the same value. Would Linux lie on `clock_getres`? (Sorry for the poor formatting.) – Nick Aug 31 '14 at 18:54
  • One other possibility, though it's not documented, is that `CLOCK_MONOTONIC_RAW` might be showing the un-adjusted values for `CLOCK_MONOTONIC_COARSE`, not for `CLOCK_MONOTONIC`. You'd probably need to read the fine source to find out for sure. – R.. GitHub STOP HELPING ICE Sep 01 '14 at 21:29
  • Is it accurate to say that in order to have true nanosecond resolution, the processor must run at >1 GHz? Can we say that since the Raspberry Pi only runs at about 700 MHz, it's impossible for it to have nanosecond resolution? – Nick Sep 02 '14 at 04:55
  • @Nick: No, that's not true. The clock could come from a source independent of the CPU clock, in which case it could have higher resolution. You might not be able to sample it often enough to see the full resolution, but at 700 MHz the kernel would still need to report 1 ns as the resolution: one 700 MHz cycle is about 1.43 ns, and there's no way to report 1.4 ns in a `struct timespec` (see the sketch below). – R.. GitHub STOP HELPING ICE Sep 02 '14 at 08:40
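
A sketch of that check, contrasting the resolution `clock_getres` reports with the smallest nonzero delta actually observable between back-to-back samples (the observed floor also includes syscall overhead, so it is only an upper bound on the true granularity):

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res, a, b;
    long long min_delta = -1;
    int i;

    /* What the kernel claims the resolution is. */
    clock_getres(CLOCK_MONOTONIC_RAW, &res);
    printf("reported resolution: %lld.%09ld\n",
           (long long)res.tv_sec, res.tv_nsec);

    /* Smallest nonzero gap between back-to-back samples. */
    for (i = 0; i < 100000; i++) {
        clock_gettime(CLOCK_MONOTONIC_RAW, &a);
        clock_gettime(CLOCK_MONOTONIC_RAW, &b);
        long long d = (b.tv_sec - a.tv_sec) * 1000000000LL
                    + (b.tv_nsec - a.tv_nsec);
        if (d > 0 && (min_delta < 0 || d < min_delta))
            min_delta = d;
    }
    printf("smallest observed delta: %lld ns\n", min_delta);
    return 0;
}
```

If the hypothesis in the answer is right, the reported resolution would stay 0.000000001 while the smallest observed nonzero `CLOCK_MONOTONIC_RAW` delta would come out as a multiple of 1000 ns.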