5

I have the following code:

#include <windows.h>   // QueryPerformanceCounter / QueryPerformanceFrequency
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

// Raw tick count of the high-resolution performance counter
long long unsigned int GetCurrentTimestamp()
{
   LARGE_INTEGER res;
   QueryPerformanceCounter(&res);
   return res.QuadPart;
}


// Ticks per second of the performance counter
long long unsigned int initalizeFrequency()
{
   LARGE_INTEGER res;
   QueryPerformanceFrequency(&res);
   return res.QuadPart;
}


//start time stamp
boost::posix_time::ptime startTime = boost::posix_time::microsec_clock::local_time();
long long unsigned int start = GetCurrentTimestamp();


// ....
// execution that should be measured
// ....

long long unsigned int end = GetCurrentTimestamp();
boost::posix_time::ptime endTime = boost::posix_time::microsec_clock::local_time();
boost::posix_time::time_duration duration = endTime - startTime;
std::cout << "Duration by Boost posix: " << duration.total_microseconds() <<std::endl;
std::cout << "Processing time is " << ((end - start) * 1000000 / initalizeFrequency()) 
            << " microsec "<< std::endl;

Result of this code is

Duration by Boost posix: 0
Processing time is 24 microsec

Why is there such a big divergence? Boost is supposed to measure microseconds, but here it seems to be off by tens of microseconds???

Narek

2 Answers

4

Posix time: microsec_clock:

Get the UTC time using a sub second resolution clock. On Unix systems this is implemented using GetTimeOfDay. On most Win32 platforms it is implemented using ftime. Win32 systems often do not achieve microsecond resolution via this API. If higher resolution is critical to your application test your platform to see the achieved resolution.

ftime simply does not provide microsecond resolution. The name may contain the word microsecond, but the implementation does not provide any accuracy in that range. Its granularity is in the millisecond regime.

You'd get something different from ZERO when your operation needs more time, say at least 20 ms.
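
To test the achieved resolution on your own platform, as the quoted documentation suggests, a minimal sketch like the one below can poll microsec_clock until its value changes and print the size of the step (the main function and the polling loop are illustrative, not taken from the question):

#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main()
{
    using boost::posix_time::ptime;
    using boost::posix_time::microsec_clock;

    ptime t0 = microsec_clock::local_time();
    ptime t1 = t0;
    while (t1 == t0)                      // spin until the reported time advances
        t1 = microsec_clock::local_time();

    // On many Win32 systems this prints something near 15625 (i.e. 15.625 ms).
    std::cout << "microsec_clock advanced by "
              << (t1 - t0).total_microseconds()
              << " microseconds" << std::endl;
    return 0;
}

On a typical Win32 box the step is around 15625 microseconds, which is why a 24 microsecond operation is reported as 0.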

Edit: Note: In the long run the microsec_clock implementation for Windows should use the GetSystemTimePreciseAsFileTime function when possible (min. req. Windows 8 desktop, Windows Server 2012 desktop) to achieve microsecond resolution.
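
For illustration only, here is a hedged sketch of that API (the helper name PreciseSystemTime100ns is made up for this example; building it requires a Windows 8 / Server 2012 SDK or later, and it is not what the current microsec_clock implementation does):

#include <windows.h>
#include <iostream>

// Returns the system time in 100-ns intervals since 1601-01-01 (UTC),
// read through the high-resolution API mentioned above.
unsigned long long PreciseSystemTime100ns()
{
    FILETIME ft;
    GetSystemTimePreciseAsFileTime(&ft);   // min. req. Windows 8 / Server 2012
    ULARGE_INTEGER u;
    u.LowPart  = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;
    return u.QuadPart;
}

int main()
{
    unsigned long long a = PreciseSystemTime100ns();
    unsigned long long b = PreciseSystemTime100ns();
    std::cout << "Two consecutive reads differ by "
              << (b - a) / 10 << " microseconds" << std::endl;
    return 0;
}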

Arno
  • This simply means that boost::posix_time::microsec_clock is a lie :)) – Narek Oct 19 '12 at 11:27
  • You can't quite say it's a lie. It makes sense to let the values carry microseconds because the values do carry information in the microsecond range. But the granularity is not in that range. A typical scenario is that the clock advances by 156,250 100-ns units. This is 15.625 ms, in other words 15 ms and 625 microseconds. **But the clock advances by that much at once. It has such a granularity.** (A small probe of this is sketched below.) – Arno Oct 21 '12 at 12:36
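
A small probe of the granularity described in that comment, assuming a Windows build environment (the loop structure is illustrative, not taken from Boost):

#include <windows.h>
#include <iostream>

int main()
{
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);           // the clock microsec_clock is built on
    ULARGE_INTEGER prev;
    prev.LowPart  = ft.dwLowDateTime;
    prev.HighPart = ft.dwHighDateTime;

    for (int i = 0; i < 5; ++i)
    {
        ULARGE_INTEGER cur = prev;
        while (cur.QuadPart == prev.QuadPart)   // spin until the clock steps
        {
            GetSystemTimeAsFileTime(&ft);
            cur.LowPart  = ft.dwLowDateTime;
            cur.HighPart = ft.dwHighDateTime;
        }
        // Typically prints about 156250 (i.e. 15.625 ms) per step.
        std::cout << "step: " << (cur.QuadPart - prev.QuadPart)
                  << " x 100 ns" << std::endl;
        prev = cur;
    }
    return 0;
}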
2

Unfortunately, the current Boost implementation of boost::posix_time::microsec_clock doesn't use the QueryPerformanceCounter Win32 API; it uses GetSystemTimeAsFileTime instead, which in turn uses GetSystemTime. But system time resolution is milliseconds (or even worse).

Rost
  • It couldn't use QueryPerformanceCounter because there's no way to relate that to absolute real-world times, which is the purpose of the boost::date_time library. Use Boost.Chrono (or now std::chrono) if you just want to measure how long something took with high accuracy; see the sketch below. – Arthur Tacca Mar 06 '19 at 11:56
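
A minimal sketch of the std::chrono approach suggested in that comment (C++11; steady_clock is chosen here because it is monotonic, which is what you want for interval measurement):

#include <chrono>
#include <iostream>

int main()
{
    auto start = std::chrono::steady_clock::now();

    // ... execution that should be measured ...

    auto end = std::chrono::steady_clock::now();
    auto us  = std::chrono::duration_cast<std::chrono::microseconds>(end - start);
    std::cout << "Processing time is " << us.count() << " microsec" << std::endl;
    return 0;
}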