6

I am porting my application from Unix to Windows and I have run into a wall. In my application I need to measure time in microseconds (the whole application heavily depends on this, since it is a high-precision application).

Previously I was using the timespec structure, but Windows contains no such thing. GetTickCount does not suffice because it returns time in milliseconds. I was also thinking of QueryPerformanceFrequency.

Would anyone happen to know something that is as close to timespec as possible?

In the future I might even require nanoseconds, which nothing I have found on Windows supports.

Quillion
  • What's wrong with QueryPerformanceCounter? – kichik Dec 20 '11 at 23:30
  • @kichik I heard that it acts iffy when used with a dual core cpu. Don't know if it is completely true or not – Quillion Dec 20 '11 at 23:41
  • The new runtime (bundled with VS2015, v14.0.xyz) includes a header which defines a timespec type (actually three different types: _timespec32, _timespec64, and timespec). Unfortunately, there is no accompanying macro (e.g., _TIMESPEC_DEFINED) to test for the presence of this type (argh). – C Tucker Jul 13 '16 at 00:10

2 Answers

10

See, for example, How to realise long-term high-resolution timing on windows using C++? and C++ Timer function to provide time in nano seconds.

I have done some testing with Cygwin under Windows XP: on my machine, the granularity of gettimeofday() is about 15 msecs (~1/64 sec), which is quite coarse. So is the granularity of:

* clock_t clock(void) (divisor CLOCKS_PER_SEC)
* clock_t times(struct tms *) (divisor sysconf(_SC_CLK_TCK))

Both divisors are 1000 on this system (POSIX specifies 1000000 for the first, CLOCKS_PER_SEC).

Also, clock_getres(CLOCK_REALTIME,...) returns 15 msecs, so clock_gettime() is unlikely to help. And CLOCK_MONOTONIC and CLOCK_PROCESS_CPUTIME_ID don't work.
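(Not part of the original answer: a minimal probe sketch, assuming a POSIX-ish environment such as Cygwin, that prints the two divisors and the resolution reported by clock_getres(), so you can check the granularity on your own machine.)

#include <stdio.h>
#include <time.h>      // clock_getres(), CLOCKS_PER_SEC
#include <unistd.h>    // sysconf()

int main(void) {
   struct timespec res;
   printf("CLOCKS_PER_SEC       = %ld\n", (long)CLOCKS_PER_SEC);
   printf("sysconf(_SC_CLK_TCK) = %ld\n", sysconf(_SC_CLK_TCK));
   if (clock_getres(CLOCK_REALTIME, &res) == 0)
      printf("CLOCK_REALTIME resolution = %ld.%09ld s\n",
             (long)res.tv_sec, res.tv_nsec);
   else
      perror("clock_getres");
   return 0;
}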

Other possibilities for Windows might be RDTSC (see the Wikipedia article) and HPET, which isn't available with Windows XP.
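A rough RDTSC sketch (my illustration, not from the original answer), using the __rdtsc() compiler intrinsic; the raw ticks still have to be calibrated against a known clock, and on older multi-core or frequency-scaling CPUs the counter may not stay in sync across cores:

#ifdef _MSC_VER
#include <intrin.h>      // __rdtsc() on MSVC
#else
#include <x86intrin.h>   // __rdtsc() on GCC/Clang
#endif
#include <stdio.h>

int main(void) {
   unsigned long long t0 = __rdtsc();
   // ... work to be timed ...
   unsigned long long t1 = __rdtsc();
   printf("elapsed: %llu ticks (divide by the TSC frequency for seconds)\n",
          t1 - t0);
   return 0;
}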

Also note that on Linux, clock() returns process CPU time, while on Windows it returns wall-clock time.

Here is some sample code, both for standard Unix and for Cygwin code running under Windows, which gives a granularity of about 50 microseconds (on my machine). The return value is in seconds and gives the number of seconds elapsed since the function was first called. (I belatedly realized this was in an answer I gave over a year ago.)

#ifndef __CYGWIN32__
#include <sys/time.h>                  // gettimeofday(), struct timeval
double RealElapsedTime(void) { // returns 0 seconds first time called
   static struct timeval t0;
   struct timeval tv;
   gettimeofday(&tv, 0);
   if (!t0.tv_sec)                     // one time initialization
      t0 = tv;
   return tv.tv_sec - t0.tv_sec + (tv.tv_usec - t0.tv_usec) / 1000000.;
}
#else
#include <windows.h>
// FatalError() is the caller's error handler (not shown here).
double RealElapsedTime(void) { // granularity about 50 microsecs on my machine
   static LARGE_INTEGER freq, start;
   LARGE_INTEGER count;
   if (!QueryPerformanceCounter(&count))
      FatalError("QueryPerformanceCounter");
   if (!freq.QuadPart) { // one time initialization
      if (!QueryPerformanceFrequency(&freq))
         FatalError("QueryPerformanceFrequency");
      start = count;
   }
   return (double)(count.QuadPart - start.QuadPart) / freq.QuadPart;
}
#endif
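A minimal usage sketch (mine, not part of the original answer): the first call establishes the zero point, a later call returns the elapsed seconds, and multiplying by 1000000 gives microseconds.

#include <stdio.h>

double RealElapsedTime(void);         // from the snippet above

int main(void) {
   RealElapsedTime();                 // first call returns 0 and sets the origin
   // ... work to be timed ...
   double secs = RealElapsedTime();   // seconds since the first call
   printf("elapsed: %.6f s (%.0f microseconds)\n", secs, secs * 1000000.);
   return 0;
}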
Joseph Quinsey
  • All in all it seems like a solid answer, but I can't seem to figure out why a double value will be returned or what it will represent. I understand that you find the difference and then divide by frequency. So if I assume that the double value is in seconds, then in order to find microseconds I have to multiply by 1 million (1 000 000). Is that right? – Quillion Dec 20 '11 at 23:51
  • The return value is in `seconds`, and gives the number of seconds elapsed since the function was first called. – Joseph Quinsey Dec 20 '11 at 23:57
  • The [GNU C Lib Manual](http://www.gnu.org/s/hello/manual/libc/Elapsed-Time.html) notes there are `"some peculiar operating systems where the tv_sec member has an unsigned type."` For these, the above code needs a cast, or the assumption that the clock never runs backwards. – Joseph Quinsey Dec 21 '11 at 01:43
  • And see [Micro Second resolution timestamps on windows](http://stackoverflow.com/q/2414359/318716), especially Andras Vass's [answer](http://stackoverflow.com/a/2457919/318716). – Joseph Quinsey Dec 21 '11 at 02:09
  • 1
    Oh this is excellent. Thank you very much for all your help! I really appreciate it! – Quillion Dec 21 '11 at 15:24
4

Portable between Windows, UNIX, Linux and anything vaguely modern: std::chrono::high_resolution_clock. Resolution may vary, but you can find out at compile time what it is. Nanosecond resolution is certainly possible on modern hardware.
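For illustration, a small sketch (mine, not from the original answer) that reports the clock's tick period, which is fixed at compile time, and measures an interval in microseconds:

#include <chrono>
#include <cstdio>

int main() {
   using Clock = std::chrono::high_resolution_clock;

   // The tick period is a compile-time constant: period::num / period::den seconds.
   std::printf("tick period: %lld/%lld seconds\n",
               (long long)Clock::period::num, (long long)Clock::period::den);

   auto t0 = Clock::now();
   // ... work to be timed ...
   auto t1 = Clock::now();

   auto us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0);
   std::printf("elapsed: %lld microseconds\n", (long long)us.count());
   return 0;
}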

Keep in mind that nanosecond precision really means sub-meter precision: a nanosecond at light speed is only about 30 centimeters. Moving your computer from the top of a rack to the bottom literally moves it by several nanoseconds.

MSalters