
**********************Original edit**********************


I am using different kinds of clocks to get the time on Linux systems:

rdtsc, gettimeofday, clock_gettime

and have already read various questions like these:

But I am a little confused:


What is the difference between granularity, resolution, precision, and accuracy?


Granularity (or resolution or precision) and accuracy are not the same thing (if I am right...)

For example, while using clock_gettime, the precision is 10 ms, as I get with:

struct timespec res;
clock_getres(CLOCK_REALTIME, &res);

and the granularity (which is defined as ticks per second) is 100 Hz (or 10 ms), as I get when executing:

 long ticks_per_sec = sysconf(_SC_CLK_TCK);
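
For completeness, here is a minimal, self-contained sketch (my own, not from the original code) that combines the two calls above and prints both values; the numbers will of course vary from kernel to kernel:

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    struct timespec res;

    /* Resolution the kernel reports for CLOCK_REALTIME */
    if (clock_getres(CLOCK_REALTIME, &res) == 0)
        printf("CLOCK_REALTIME resolution: %ld s %ld ns\n",
               (long) res.tv_sec, res.tv_nsec);

    /* Scheduler tick rate in ticks per second */
    long ticks_per_sec = sysconf(_SC_CLK_TCK);
    printf("_SC_CLK_TCK: %ld ticks per second\n", ticks_per_sec);

    return 0;
}

(On older glibc versions you may need to link with -lrt for the clock_* functions.)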

Accuracy is in nanoseconds, as the code below suggests:

struct timespec gettime_now;

clock_gettime(CLOCK_REALTIME, &gettime_now);
time_difference = gettime_now.tv_nsec - start_time;
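
Note that subtracting only tv_nsec gives a wrong (possibly negative) result whenever tv_sec changes between the two readings; a safer elapsed-time sketch (variable names are my own) combines both fields:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_REALTIME, &start);
    /* ... code being timed ... */
    clock_gettime(CLOCK_REALTIME, &end);

    /* Fold seconds and nanoseconds together so tv_nsec wrap-around is handled */
    long long elapsed_ns = (end.tv_sec - start.tv_sec) * 1000000000LL
                         + (end.tv_nsec - start.tv_nsec);
    printf("elapsed: %lld ns\n", elapsed_ns);

    return 0;
}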

In the link below, I saw that this is the Linux global definition of granularity and it's better not to change it:

http://wwwagss.informatik.uni-kl.de/Projekte/Squirrel/da/node5.html#fig:clock:hw

So my question is whether the remarks above are right, and also:

a) Can we see the granularity of rdtsc and gettimeofday (with a command)? (A sketch of reading both clocks appears after these questions.)

b) Can we change them (in any way)?
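
For reference, a rough sketch (mine; GCC/Clang on x86 assumed for the __rdtsc intrinsic) of reading the two clocks from question a). Note that rdtsc only returns raw cycles, so converting it to time requires knowing the TSC frequency, which is not exposed by a single portable command:

#include <stdio.h>
#include <sys/time.h>
#include <x86intrin.h>   /* __rdtsc(); x86 with GCC/Clang only */

int main(void)
{
    /* gettimeofday: wall-clock time with microsecond fields */
    struct timeval tv;
    gettimeofday(&tv, NULL);
    printf("gettimeofday: %ld s %ld us\n", (long) tv.tv_sec, (long) tv.tv_usec);

    /* rdtsc: raw CPU cycle counter since reset */
    unsigned long long cycles = __rdtsc();
    printf("rdtsc: %llu cycles\n", cycles);

    return 0;
}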


**********************Edit number 2**********************

I have tested some new clocks and would like to share the information:

a) On the page below, David Terei wrote a fine program that compares various clocks and their performance:

https://github.com/dterei/Scraps/tree/master/c/time

b) I have also tested omp_get_wtime, as Raxman suggested, and I found nanosecond precision, but not really better than clock_gettime (as they did on this website):

http://msdn.microsoft.com/en-us/library/t3282fe5.aspx

I think it's a Windows-oriented time function.

Better results are given by clock_gettime using CLOCK_MONOTONIC than using CLOCK_REALTIME. That's normal, because the first measures PROCESSING time and the other REAL TIME, respectively.
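
As an illustration, a small sketch (the loop is just a placeholder workload, and I am assuming both clock IDs are supported) that times the same work with each clock ID:

#include <stdio.h>
#include <time.h>

/* Time a dummy workload with the given clock ID and return elapsed nanoseconds */
static long long time_workload(clockid_t clk)
{
    struct timespec start, end;
    volatile double sink = 0.0;

    clock_gettime(clk, &start);
    for (int i = 0; i < 1000000; ++i)   /* placeholder workload */
        sink += i * 0.5;
    clock_gettime(clk, &end);

    return (end.tv_sec - start.tv_sec) * 1000000000LL
         + (end.tv_nsec - start.tv_nsec);
}

int main(void)
{
    printf("CLOCK_REALTIME : %lld ns\n", time_workload(CLOCK_REALTIME));
    printf("CLOCK_MONOTONIC: %lld ns\n", time_workload(CLOCK_MONOTONIC));
    return 0;
}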

c) I have also found the Intel function ippGetCpuClocks, but I have not tested it because it's mandatory to register first:

http://software.intel.com/en-us/articles/ipp-downloads-registration-and-licensing/

... or you may use a trial version

    You forgot my favorite one: omp_get_wtime(). It's the simplest way to get computing time and it works on GCC, MSVC, MINGW, and ICC (at least all the versions I have installed) –  May 24 '13 at 17:21
  • You had asked "How is the microsecond time of Linux gettimeofday() obtained and what is its accuracy?". There's a counter that is initialized to 0 at system boot, so it represents the number of clock ticks since the last boot. The counter is a 64-bit variable called jiffies; every time a timer interrupt occurs, the internal counter is incremented. do_gettimeofday has near-microsecond resolution, and it asks the timing hardware what fraction of a jiffy has already elapsed. The precision varies from hardware to hardware, as it depends on the hardware mechanisms in use. – Santhosh Pai May 24 '13 at 17:34
  • Did you read [time(7)](http://man7.org/linux/man-pages/man7/time.7.html) man page? It provides interesting information. – Basile Starynkevitch May 24 '13 at 21:22
  • Hi @raxman, I have tested omp_get_wtime() and it's pretty good, but not better than gettimeofday or clock_gettime as far as precision is concerned; it is simple and practical though! – user2307229 May 28 '13 at 11:31
  • @Basile: it really clarifies a lot of things, thanks! – user2307229 May 28 '13 at 11:34
  • @Santhosh, thanks for all the info; I didn't know about "jiffies". – user2307229 May 28 '13 at 12:01
  • FWIW, On Linux with GCC, omp_get_wtime() calls clock_gettime(CLOCK_MONOTONIC, ...). – janneb May 28 '13 at 19:16

1 Answer

  • Precision is the amount of information, i.e. the number of significant digits you report. (E.g. I am 2 m, 1.8 m, 1.83 m, and 1.8322 m tall. All those measurements are accurate, but increasingly precise.)

  • Accuracy is the relation between the reported information and the truth. (E.g. "I'm 1.70 m tall" is more precise than "1.8 m", but not actually accurate.)

  • Granularity or resolution are about the smallest time interval that the timer can measure. For example, if you have 1 ms granularity, there's little point reporting the result with nanosecond precision, since it cannot possibly be accurate to that level of precision.

On Linux, the available timers with increasing granularity are (a sketch querying their reported resolutions follows this list):

  • clock() from <time.h> (20 ms or 10 ms resolution?)

  • gettimeofday() from Posix <sys/time.h> (microseconds)

  • clock_gettime() on Posix (nanoseconds?)
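
A small sketch (clock IDs chosen for illustration; the values clock_getres() reports depend on the kernel and hardware) to check what these resolve to on a given machine:

#include <stdio.h>
#include <time.h>

/* Print the resolution the kernel reports for one clock ID */
static void print_res(const char *name, clockid_t clk)
{
    struct timespec res;
    if (clock_getres(clk, &res) == 0)
        printf("%-26s %ld s %ld ns\n", name, (long) res.tv_sec, res.tv_nsec);
    else
        perror(name);
}

int main(void)
{
    print_res("CLOCK_REALTIME", CLOCK_REALTIME);
    print_res("CLOCK_MONOTONIC", CLOCK_MONOTONIC);
    print_res("CLOCK_PROCESS_CPUTIME_ID", CLOCK_PROCESS_CPUTIME_ID);
    return 0;
}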

In C++, the <chrono> header offers a certain amount of abstraction around this, and std::high_resolution_clock attempts to give you the best possible clock.
