
I am printing microseconds continuously using gettimeofday(). As you can see from the program output below, the time is not updated at microsecond intervals; instead it repeats for a number of samples and then jumps, not by microseconds but by milliseconds.

while(1)
{
  gettimeofday(&capture_time, NULL);
  printf(".%ld\n", capture_time.tv_usec);
}

Program output:

.414719
.414719
.414719
.414719
.430344
.430344
.430344
.430344

etc.

I want the output to increment sequentially, like:

.414719
.414720
.414721
.414722
.414723

or

.414723, .414723+x, .414723+2x, .414723+3x, ..., .414723+nx

It seems that the microseconds are not refreshed when I read them from capture_time.tv_usec.

================================= //Full Program

#include <iostream>
#include <windows.h>
#include <conio.h>
#include <time.h>
#include <stdio.h>

#if defined(_MSC_VER) || defined(_MSC_EXTENSIONS)
  #define DELTA_EPOCH_IN_MICROSECS  11644473600000000Ui64
#else
  #define DELTA_EPOCH_IN_MICROSECS  11644473600000000ULL
#endif

struct timezone 
{
  int  tz_minuteswest; /* minutes W of Greenwich */
  int  tz_dsttime;     /* type of dst correction */
};

timeval capture_time;  // structure

int gettimeofday(struct timeval *tv, struct timezone *tz)
{
  FILETIME ft;
  unsigned __int64 tmpres = 0;
  static int tzflag;

  if (NULL != tv)
  {
    GetSystemTimeAsFileTime(&ft);

    tmpres |= ft.dwHighDateTime;
    tmpres <<= 32;
    tmpres |= ft.dwLowDateTime;

    /*converting file time to unix epoch*/
    tmpres -= DELTA_EPOCH_IN_MICROSECS; 
    tmpres /= 10;  /*convert into microseconds*/
    tv->tv_sec = (long)(tmpres / 1000000UL);
    tv->tv_usec = (long)(tmpres % 1000000UL);
  }

  if (NULL != tz)
  {
    if (!tzflag)
    {
      _tzset();
      tzflag++;
    }

    tz->tz_minuteswest = _timezone / 60;
    tz->tz_dsttime = _daylight;
  }

  return 0;
}

int main()
{
  while (1)
  {
    gettimeofday(&capture_time, NULL);
    printf(".%ld\n", capture_time.tv_usec);  // JUST PRINTING MICROSECONDS
  }
}
Osaid
  • I'm assuming that you have a modern computer that can execute a large number of instructions per second (indeed per millisecond - hint hint); maybe you should look at a higher-precision timer? – Caribou Nov 01 '12 at 11:02
  • Your question is a bit confusing because it's not _actually_ talking about the POSIX standard [`gettimeofday`](http://pubs.opengroup.org/onlinepubs/9699919799/functions/gettimeofday.html) function but a Win32 API call (http://msdn.microsoft.com/en-us/library/windows/desktop/ms724397(v=vs.85).aspx). Please edit your question & title to reflect which of those you're talking about. Also note that neither of those guarantee clock resolution for these. – Mat Nov 01 '12 at 11:02
  • Hey, why don't you try the hardware clock? Check the `QueryPerformanceFrequency` and `QueryPerformanceCounter` functions at MSDN. A link: http://msdn.microsoft.com/en-us/library/windows/desktop/ms644904%28v=vs.85%29.aspx – Deamonpog Nov 01 '12 at 11:33
  • It increments by 15.625 milliseconds, or 1/64 seconds, the common clock interrupt rate on Windows machines. You can never get steady incrementing values, printf() takes longer than a microsecond. – Hans Passant Nov 01 '12 at 11:45
  • @Mat: 1st: Your msdn link has no content. 2nd: Osaid built a private `gettimeofday`, weird though. But the Windows includes make it clear. – Arno Nov 01 '12 at 11:47
  • @Arno: comment formatting bug, remove the ending `)` from the link. Nothing is clear from the title or the tags. – Mat Nov 01 '12 at 11:53
  • @Mat: Modified title because it is not `gettimeofday` but a private function, unfortunately named so. – Arno Nov 01 '12 at 11:57

4 Answers


The change in time you observe is from 0.414719 s to 0.430344 s, a difference of 15.625 ms. The fact that the value is represented in microseconds does not mean that it advances in steps of 1 microsecond; 15.625 ms (1/64 s) is the system time increment on standard hardware. I've given a closer look here and here. This is called the granularity of the system time.
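
For illustration, here is a minimal sketch (loop count and output format are chosen just for demonstration) that reacts only when GetSystemTimeAsFileTime actually changes, which makes this granularity directly visible:

#include <windows.h>
#include <stdio.h>

int main()
{
    FILETIME ft;
    unsigned long long prev = 0;

    for (int changes = 0; changes < 10; )
    {
        GetSystemTimeAsFileTime(&ft);   // 100-ns units since Jan 1, 1601
        unsigned long long now =
            ((unsigned long long)ft.dwHighDateTime << 32) | ft.dwLowDateTime;

        if (now != prev)                // only react when the clock actually ticks
        {
            if (prev != 0)
                printf("step: %llu us\n", (now - prev) / 10);  // typically ~15625 us
            prev = now;
            ++changes;
        }
    }
    return 0;
}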

Windows:

However, there is a way to improve this, a way to reduce the granularity: the Multimedia Timers. In particular, Obtaining and Setting Timer Resolution describes a way to increase the system's interrupt frequency.

The code:

// requires <windows.h> / <mmsystem.h> and linking against winmm.lib

#define TARGET_PERIOD 1         // 1-millisecond target interrupt period

TIMECAPS tc;
UINT     wTimerRes;

// query the system's timer hardware capabilities;
// this fills wPeriodMin and wPeriodMax in the TIMECAPS structure
if (timeGetDevCaps(&tc, sizeof(TIMECAPS)) != TIMERR_NOERROR)
{
  // Error; application can't continue.
}

// find the minimum possible interrupt period ...
wTimerRes = min(max(tc.wPeriodMin, TARGET_PERIOD), tc.wPeriodMax);

// ... and set it:
timeBeginPeriod(wTimerRes);

This will force the system to run at its maximum interrupt frequency. As a consequence, the system time is also updated more often, and the granularity of the system time increment will be close to 1 millisecond on most systems.
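
Side note, not from the answer itself but from the multimedia timer documentation: every timeBeginPeriod call should eventually be matched by a timeEndPeriod call with the same value. A small RAII sketch of that (the class name is just illustrative; winmm.lib is assumed to be linked):

#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

// Scoped helper: raises the timer interrupt rate for its lifetime
// and gives the requested period back in the destructor.
struct ScopedTimerResolution
{
    UINT period;
    explicit ScopedTimerResolution(UINT ms) : period(ms) { timeBeginPeriod(period); }
    ~ScopedTimerResolution() { timeEndPeriod(period); }
};

int main()
{
    ScopedTimerResolution res(1);   // request a 1 ms period while 'res' is alive
    Sleep(2);                       // timed waits and the system-time update now work
                                    // with roughly 1 ms granularity
    return 0;                       // period is released when 'res' goes out of scope
}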

When you need resolution/granularity beyond this, you'd have to look into QueryPerformanceCounter. But it is to be used with care over longer periods of time. The frequency of this counter can be obtained by a call to QueryPerformanceFrequency. The OS considers this frequency a constant and will report the same value all the time. However, the frequency is produced by hardware, and the true frequency differs from the reported value: it has an offset and it shows thermal drift. Thus the error should be assumed to be in the range of several to many microseconds per second. More details about this can be found in the second "here" link above.
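
For illustration, a minimal sketch (loop count and formatting are arbitrary) of turning the counter into elapsed microseconds:

#include <windows.h>
#include <stdio.h>

int main()
{
    LARGE_INTEGER freq, start, now;
    QueryPerformanceFrequency(&freq);   // counts per second, reported as a constant
    QueryPerformanceCounter(&start);

    for (int i = 0; i < 5; ++i)
    {
        QueryPerformanceCounter(&now);
        // multiplying before dividing keeps microsecond precision,
        // but can overflow for very long intervals
        long long us = (now.QuadPart - start.QuadPart) * 1000000LL / freq.QuadPart;
        printf("elapsed: %lld us\n", us);   // advances on every call, unlike tv_usec above
    }
    return 0;
}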

Linux:

The situation looks somewhat different on Linux. See this to get an idea. Linux mixes information from the CMOS clock, using the function getnstimeofday (for seconds since the epoch), with information from a high-frequency counter (for the microseconds), using the function timekeeping_get_ns. This is not trivial and is questionable in terms of accuracy, since the two sources are backed by different hardware. They are not phase-locked, so it is possible to get more or less than one million microseconds per second.
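
For comparison, a minimal Linux sketch (assuming a POSIX system; older glibc may need -lrt for clock_gettime) that prints consecutive readings so the much finer granularity can be observed directly:

#include <stdio.h>
#include <sys/time.h>
#include <time.h>

int main()
{
    for (int i = 0; i < 5; ++i)
    {
        struct timeval  tv;
        struct timespec ts;

        gettimeofday(&tv, NULL);              // wall-clock time with a microsecond field
        clock_gettime(CLOCK_MONOTONIC, &ts);  // nanosecond field from the high-frequency source

        printf("%ld.%06ld  |  %ld.%09ld\n",
               (long)tv.tv_sec, (long)tv.tv_usec,
               (long)ts.tv_sec, (long)ts.tv_nsec);
    }
    return 0;
}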

Arno
  • I do understand that a representation in microseconds does not imply an increment in microseconds. I will look into the granularity aspect. – Osaid Nov 01 '12 at 13:01
  • @Osaid: Just use the code I provided [here](http://stackoverflow.com/a/11743614/1504523) to investigate the granularity of `GetSystemTimeAsFileTime`. – Arno Nov 01 '12 at 13:04
  • The spooky thing is that WinPcap (a packet capture library) successfully uses the epoch and gets time accurate to a microsecond for the captured packet despite the granularity. I have successfully used its (header->ts.tv_usec) to achieve my goal for time stamping and have verified it by running Wireshark in parallel. Both times match, with a constant gap of 2 microseconds. – Osaid Nov 01 '12 at 13:06
  • @Osaid: The timestamp, for example returned in the header argument of `pcap_next_ex(...)` is filled by the adapter. You can't run pcap without an adapter. **Thus:** If there is hardware which provides better resolution, you may get better resolution. Hint: There are a number of high resolution hardware clocks available for PCs. For example [GPS cards](http://www.eurotech.com/en/products/COM-1480) may offer 20ns accuracy. – Arno Nov 01 '12 at 13:23
  • I have programmed it to select the adapter and capture packets using WinPcap in VC2010. – Osaid Nov 01 '12 at 13:23
  • Thanks for the Microsecond Resolution Time Services for Windows – Osaid Nov 01 '12 at 13:40
  • What significance does 31249us hold, like 15625us (the granularity)? I am also coming up with this figure of 31249us. By the look of it, it is just 2 times 15625us, but it is strange that I get 15625us at one time and 31249us at another. – Osaid Nov 01 '12 at 16:06
  • @Osaid: It is often believed that the system time update happens at regular periods. This is FALSE. There are systems where the interrupt period does not match the time increment. In these cases a complex algorithm is applied to find beat frequencies and correct for those, sometimes by a different time increment, sometimes by doing two of them. Also: When a system time adjustment is happening, the increments vary. See [GetSystemTimeAdjustment](http://msdn.microsoft.com/en-us/library/windows/desktop/ms724394%28v=vs.85%29.aspx) for more details on system time adjustment. – Arno Nov 01 '12 at 16:24
  • **And:** The fact that you sometimes observe a double increment of 31249us rather than exactly 2 × 15625us = 31250us indicates that you have such a beat frequency on your system. **Hint:** Run the test for a longer time and see what the period of the `double increment` is. My guess: the time between two occurrences is the [least common multiple](http://en.wikipedia.org/wiki/Least_common_multiple) of the nominal and the actual increment, thus it may take some time to see it recur. – Arno Nov 01 '12 at 16:37
  • let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/18930/discussion-between-arno-and-osaid) – Arno Nov 01 '12 at 17:27
  • Linux `gettimeofday` on modern x86 uses `rdtsc` as a time-source, with a scale factor [exported to user-space by the kernel in a VDSO page](https://blog.packagecloud.io/eng/2016/04/05/the-definitive-guide-to-linux-system-calls/). (So it has extremely low overhead, not even entering the kernel, and granularity of the CPU reference frequency, e.g. 4GHz, period = 1/4 of a nanosecond.) This assumes that the CPU is new enough to use RDTSC as the timesource, i.e. it has the constant_tsc and invariant_tsc features so it counts ref cycles not actual core clock cycles, and doesn't stop on HLT. – Peter Cordes May 02 '18 at 06:13

The Windows system clock only ticks every few milliseconds -- in your case 64 times per second, so when it does tick it increases the system time by 15.625 ms.

The solution is to use a higher-resolution timer than the system time (QueryPerformanceCounter).

You still won't see .414723, .414723+x, .414723+2x, ..., .414723+nx, though, because your code will not run exactly once every x microseconds. It will run as fast as it can, but there's no particular reason that it should always run at a constant speed, or, if it does, that the interval is an integer number of microseconds.
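
To see this, here is a small illustrative sketch (not from the answer) that prints the gap between successive QueryPerformanceCounter readings inside such a loop; the deltas fluctuate rather than stepping by a fixed x:

#include <windows.h>
#include <stdio.h>

int main()
{
    LARGE_INTEGER freq, prev, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&prev);

    for (int i = 0; i < 10; ++i)
    {
        QueryPerformanceCounter(&now);
        double us = (now.QuadPart - prev.QuadPart) * 1e6 / freq.QuadPart;
        printf("delta: %.3f us\n", us);   // varies; printf itself dominates the cost
        prev = now;
    }
    return 0;
}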

Steve Jessop

I recommend looking at the C++11 <chrono> header.

high_resolution_clock (C++11): the clock with the shortest tick period available

The tick period referred to here is how often the clock is updated. If we look in more detail:

template<
     class Rep,
     class Period = std::ratio<1>
> class duration;

Class template std::chrono::duration represents a time interval.

It consists of a count of ticks of type Rep and a tick period, where the tick period is a compile-time rational constant representing the number of seconds from one tick to the next.

Previously, functions like gettimeofday would give you a time expressed in microseconds, but they would utterly fail to tell you the interval at which this time was refreshed.

In the C++11 Standard, this information is now in the clear, to make it obvious that there is no relation between the unit in which the time is expressed and the tick period. And that, therefore, you definitely need to take both into account.

The tick period is extremely important when you want to measure durations that are close to it. If the duration you wish to measure is shorter than the tick period, then you will measure it "discretely", as you observed: 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, ... I advise caution at this point.
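
A minimal C++11 sketch that shows both sides of this: timestamps from high_resolution_clock and the clock's compile-time tick period (the Rep/Period pair described above):

#include <chrono>
#include <iostream>

int main()
{
    using hr_clock = std::chrono::high_resolution_clock;

    // The tick period is a compile-time ratio: period::num / period::den seconds.
    std::cout << "tick period: "
              << hr_clock::period::num << "/" << hr_clock::period::den << " s\n";

    auto t0 = hr_clock::now();
    auto t1 = hr_clock::now();

    // Duration between two consecutive readings, converted to microseconds.
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0);
    std::cout << "two consecutive readings are " << us.count() << " us apart\n";
    return 0;
}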

Matthieu M.

This is because the process running your code isn't always scheduled to execute.

Whilst it does, it will bang round the loop quickly, printing multiple values for each microsecond - which is a comparatively long period of time on modern CPUs.

There are then periods where it is not scheduled to execute by the system, and therefore cannot print values.

If what you want to do is execute every microsecond, this may be possible with some real-time operating systems running on high performance hardware.

marko
  • Thanks for the prompt reply. I don't want to execute a process every microsecond; rather, I want a real-time TIME update from the system in microseconds so that I can use it as a timestamp. – Osaid Nov 01 '12 at 11:27
  • As we can see, Wireshark does display time up to nanoseconds (packet timestamps) running on a Windows PC. – Osaid Nov 01 '12 at 11:28
  • Sorry, but this answer does not explain the observed behavior. – Arno Nov 01 '12 at 11:52