I wrote a simple program to determine whether I can get nanosecond precision on my system, which is a RHEL 5.5 VM (kernel 2.6.18-194).
// cc -g -Wall ntime.c -o ntime -lrt
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char* argv[]) {
    struct timespec spec;

    printf("CLOCK_REALTIME - \"Systemwide realtime clock.\":\n");
    clock_getres(CLOCK_REALTIME, &spec);
    printf("\tprecision: %ldns\n", spec.tv_nsec);
    clock_gettime(CLOCK_REALTIME, &spec);
    /* zero-pad tv_nsec to 9 digits so the fractional part reads correctly */
    printf("\tvalue : %010ld.%09ld\n", spec.tv_sec, spec.tv_nsec);

    printf("CLOCK_MONOTONIC - \"Represents monotonic time. Cannot be set.\":\n");
    clock_getres(CLOCK_MONOTONIC, &spec);
    printf("\tprecision: %ldns\n", spec.tv_nsec);
    clock_gettime(CLOCK_MONOTONIC, &spec);
    printf("\tvalue : %010ld.%09ld\n", spec.tv_sec, spec.tv_nsec);

    return 0;
}
A sample output:
CLOCK_REALTIME - "Systemwide realtime clock.":
precision: 999848ns
value : 1504781052.328111000
CLOCK_MONOTONIC - "Represents monotonic time. Cannot be set.":
precision: 999848ns
value : 0026159205.299686941
So REALTIME gives me the local time and MONOTONIC the system's uptime. Both clocks report a resolution of roughly a millisecond (999848ns ≅ 1ms), yet MONOTONIC's value appears to carry full nanosecond digits while REALTIME's ends in trailing zeros, which is confusing.
man clock_gettime states:

CLOCK_REALTIME_HR High resolution version of CLOCK_REALTIME.

However, grep -R CLOCK_REALTIME_HR /usr/include/ | wc -l returns 0, and trying to compile results in error: ‘CLOCK_REALTIME_HR’ undeclared (first use in this function).
I was trying to determine whether I could get the local time with nanosecond precision, but either my code has a bug or this feature isn't fully supported in 5.5 (or the VM's HPET is off, or something else).

Can I get local time in nanoseconds on this system? What am I doing wrong?
EDIT
Well, the answer seems to be no. While nanosecond precision can be achieved, the system doesn't guarantee nanosecond accuracy in this scenario (here's a clear answer on the difference rather than a rant). Typical COTS hardware doesn't really handle it (another answer in the right direction).
I'm still curious why the clocks report the same clock_getres resolution, yet MONOTONIC yields what look like nanosecond values while REALTIME yields microseconds.