
I'm using time.h in C++ to measure the timing of a function.

#include <time.h>
#include <stdio.h>

clock_t t = clock();
someFunction();
printf("\nTime taken: %.4fs\n", (float)(clock() - t) / CLOCKS_PER_SEC);

However, I'm always getting the time taken as 0.0000. clock() and t, when printed separately, have the same value. I would like to know if there is a way to measure time precisely (maybe on the order of nanoseconds) in C++. I'm using VS2010.

NinjaDeveloper
Abhishek Thakur

You may be experiencing the [Microsoft Minute](http://www.userfriendly.org/cartoons/archives/99mar/19990318.html). – jww Apr 30 '18 at 00:58

4 Answers


C++11 introduced the chrono API, which you can use to get nanosecond resolution:

#include <chrono>
#include <iostream>

auto begin = std::chrono::high_resolution_clock::now();

// code to benchmark

auto end = std::chrono::high_resolution_clock::now();
std::cout << std::chrono::duration_cast<std::chrono::nanoseconds>(end - begin).count() << "ns" << std::endl;

For a more relevant value, it is good to run the function several times and compute the average:

// same includes as above; <cstdint> provides uint32_t
auto begin = std::chrono::high_resolution_clock::now();
uint32_t iterations = 10000;
for (uint32_t i = 0; i < iterations; ++i)
{
    // code to benchmark
}
auto end = std::chrono::high_resolution_clock::now();
auto duration = std::chrono::duration_cast<std::chrono::nanoseconds>(end - begin).count();
std::cout << duration << "ns total, average : " << duration / iterations << "ns." << std::endl;

But remember that the for loop itself and the assignments to begin and end use some CPU time too.
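
One way to account for that overhead (a sketch added here, not part of the original answer): time an empty loop of the same length and subtract the result from the measurement. The volatile qualifier discourages the compiler from removing the empty loop, though an aggressive optimizer may still interfere.

#include <chrono>
#include <cstdint>
#include <iostream>

int main()
{
    const uint32_t iterations = 10000;

    // Time an empty loop to estimate the loop's own overhead.
    auto begin = std::chrono::high_resolution_clock::now();
    for (volatile uint32_t i = 0; i < iterations; ++i) { }
    auto end = std::chrono::high_resolution_clock::now();

    auto overhead = std::chrono::duration_cast<std::chrono::nanoseconds>(end - begin).count();
    std::cout << "Loop overhead: ~" << overhead / iterations << "ns per iteration" << std::endl;
}

Subtracting this per-iteration overhead from the averaged result gives a slightly fairer estimate of the benchmarked code alone.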

Axel Guilmin
When you are targeting Windows only, `QueryPerformanceFrequency` is still a better choice. Indeed, `high_resolution_clock` under VS11 and VS12 is simply a typedef on a `system_clock` that provides mediocre resolution. Only in VS14 has this recently been fixed. https://connect.microsoft.com/VisualStudio/feedback/details/719443/c-chrono-headers-high-resolution-clock-does-not-have-high-resolution – P-Gn Jul 31 '15 at 11:12
The high_resolution_clock is **not** guaranteed to be monotonic or steady. It can jump back and forth when the system syncs its time (NTP), on leap seconds, or during incremental adjustments. If you want to measure elapsed time, use the steady_clock. – Alba Mendez Sep 01 '21 at 11:07
(if using VS17~22, MSVC) `high_resolution_clock` is just `steady_clock`. But @AlbaMendez is right, considering the language standard. And `steady_clock` just uses `_Query_perf_frequency` and `_Query_perf_counter`. – starriet Jun 25 '23 at 13:32
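
Following the comments above, a minimal variant of the answer's snippet using steady_clock, which the standard does guarantee to be monotonic:

#include <chrono>
#include <iostream>

int main()
{
    // steady_clock cannot jump backwards when the system clock
    // is adjusted (NTP syncs, leap seconds), unlike system_clock.
    auto begin = std::chrono::steady_clock::now();

    // code to benchmark

    auto end = std::chrono::steady_clock::now();
    std::cout << std::chrono::duration_cast<std::chrono::nanoseconds>(end - begin).count()
              << "ns" << std::endl;
}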

I usually use the QueryPerformanceCounter function.

Example:

#include <windows.h>
#include <stdio.h>

LARGE_INTEGER frequency;        // ticks per second
LARGE_INTEGER t1, t2;           // ticks
double elapsedTime;

// get ticks per second
QueryPerformanceFrequency(&frequency);

// start timer
QueryPerformanceCounter(&t1);

// do something
...

// stop timer
QueryPerformanceCounter(&t2);

// compute and print the elapsed time in milliseconds
elapsedTime = (t2.QuadPart - t1.QuadPart) * 1000.0 / frequency.QuadPart;
printf("%.3f ms elapsed\n", elapsedTime);
Constantinius
Note: the last line can be improved, e.g. the multiplication may overflow. The MSVC implementation of `std::chrono::steady_clock::now` solves this problem, so have a look. Also, recent MSVC in VS22 tries to optimize `steady_clock::now` further by using a common frequency value on modern PCs. FYI, `_Query_perf_counter` uses QueryPerformanceCounter (see crt/src/stl/xtime.cpp). – starriet Jun 26 '23 at 00:59
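
A sketch of the overflow-safe conversion the comment above describes: split the tick difference into whole seconds and a remainder before scaling, so the multiplication cannot overflow for realistic intervals. The helper name ElapsedNanoseconds is mine, not from the answer.

#include <windows.h>

// Convert a QueryPerformanceCounter tick difference to nanoseconds
// without overflowing: scale whole seconds and the sub-second
// remainder separately, then add them.
long long ElapsedNanoseconds(LARGE_INTEGER t1, LARGE_INTEGER t2)
{
    LARGE_INTEGER frequency;
    QueryPerformanceFrequency(&frequency);

    const long long diff  = t2.QuadPart - t1.QuadPart;
    const long long whole = (diff / frequency.QuadPart) * 1000000000LL;
    const long long part  = (diff % frequency.QuadPart) * 1000000000LL / frequency.QuadPart;
    return whole + part;
}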

The following text, which I completely agree with, is quoted from Optimizing software in C++ (good reading for any C++ programmer):

The time measurements may require a very high resolution if time intervals are short. In Windows, you can use the GetTickCount or QueryPerformanceCounter functions for millisecond resolution. A much higher resolution can be obtained with the time stamp counter in the CPU, which counts at the CPU clock frequency.

There is a problem, though, in that "the clock frequency may vary dynamically and that measurements are unstable due to interrupts and task switches."
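
For illustration, a minimal sketch of reading the time stamp counter with the __rdtsc intrinsic (available in MSVC via <intrin.h>, so it works on VS2010); the caveats quoted above about varying clock frequency and task switches still apply:

#include <intrin.h>
#include <stdio.h>

int main()
{
    unsigned long long start = __rdtsc();

    // code to benchmark

    unsigned long long stop = __rdtsc();

    // The difference is in CPU clock ticks, not seconds; dividing by
    // the (nominal) CPU frequency gives an approximate elapsed time.
    printf("Elapsed ticks: %llu\n", stop - start);
    return 0;
}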

SChepurin

In C or C++ I usually do it like below. If that still fails, you may consider using rdtsc.

#include <sys/time.h>

struct timeval time;
gettimeofday(&time, NULL); // start time

long totalTime = (time.tv_sec * 1000) + (time.tv_usec / 1000);

// ........ call your functions here

gettimeofday(&time, NULL); // end time

totalTime = (((time.tv_sec * 1000) + (time.tv_usec / 1000)) - totalTime);
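
Note that gettimeofday is POSIX (not available in VS2010) and only gives microsecond resolution. Since the question asks for nanoseconds, here is a sketch using clock_gettime with CLOCK_MONOTONIC, also POSIX:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, stop;

    clock_gettime(CLOCK_MONOTONIC, &start); // monotonic: immune to clock adjustments

    // ........ call your functions here

    clock_gettime(CLOCK_MONOTONIC, &stop);

    long long elapsedNs = (stop.tv_sec - start.tv_sec) * 1000000000LL
                        + (stop.tv_nsec - start.tv_nsec);
    printf("Elapsed: %lld ns\n", elapsedNs);
    return 0;
}

(On older glibc you may need to link with -lrt.)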
hmatar
x += y - x makes no sense, it's equivalent to x = y, which in your case would discard the old value – foolo Sep 27 '16 at 14:12

Yes, it should be `totalTime = (...) - totalTime;` to calculate the difference between the first and second calls to gettimeofday. It is the `+=` that is wrong. – Jesse Chisholm Sep 30 '16 at 19:42