140

How do I call clock() in C++?

For example, I want to test how much time a linear search takes to find a given element in an array.

Borislav Kostov
  • 68
  • 1
  • 10
  • 1
    Note that wall-clock time isn't always a good way to time microbenchmarks. To get consistent results, you have to work around CPU frequency-scaling (including Intel [turbo](https://en.wikipedia.org/wiki/Intel_Turbo_Boost) or the AMD equivalent, which lets your CPU clock higher when thermal/power limits allow). Profiling with performance counters can give you measurements in core clock cycles (and also details about whether a bottleneck is cache misses vs. instruction throughput vs. latency, by looking at counters other than just cycles). On Linux, `perf stat -d ./a.out` – Peter Cordes Sep 21 '17 at 07:59

7 Answers

220
#include <iostream>
#include <cstdio>
#include <ctime>

int main() {
    std::clock_t start;
    double duration;

    start = std::clock();

    /* Your algorithm here */

    duration = ( std::clock() - start ) / (double) CLOCKS_PER_SEC;

    std::cout << "duration: " << duration << '\n';
}
Dolph
  • 49,714
  • 13
  • 63
  • 88
  • 5
    From what I can see here http://www.cplusplus.com/reference/ctime/clock/, you don't need to use the "std::" notation. Just use "clock()" – birgersp Jan 28 '16 at 18:23
  • 4
    @Birger: Every project I have worked on so far has a code style that requires `std::` before every call into the standard library. – Th. Thielemann Jan 23 '17 at 08:31
  • 3
    Does this return the answer in seconds? – Arnav Borborah May 18 '17 at 12:54
  • 1
    @ArnavBorborah Yes, it does. – JoeVictor May 31 '17 at 21:38
  • 1
    @Th.Thielemann both `clock()` and `clock_t` are from the C Standard Library's header `time.h`, and therefore do not need `std` namespace prefixes after inclusion of that header. `<ctime>` wraps that value and function in the `std` namespace, but using the prefix is not required. Check here for implementation details: http://en.cppreference.com/w/cpp/header/ctime – kayleeFrye_onDeck Jun 19 '18 at 18:53
  • Is the `clock()` function affected by CPU throttling and other phenomena that can cause the CPU speed to drop? – Amit Kaushik Aug 08 '18 at 10:30
  • Is there a more efficient way? The computation for calculating the duration takes 0.007 seconds which is significant. Is there any way that automatically does this? – Gunner Stone Mar 07 '19 at 10:06
  • This is duration in milliseconds, not seconds – simplename Apr 03 '20 at 19:18
78

An alternative solution, which is portable and offers higher precision, available since C++11, is to use std::chrono.

Here is an example:

#include <iostream>
#include <chrono>
typedef std::chrono::high_resolution_clock Clock;

int main()
{
    auto t1 = Clock::now();
    auto t2 = Clock::now();
    std::cout << "Delta t2-t1: " 
              << std::chrono::duration_cast<std::chrono::nanoseconds>(t2 - t1).count()
              << " nanoseconds" << std::endl;
}

Running this on ideone.com gave me:

Delta t2-t1: 282 nanoseconds
Martin G
  • 17,357
  • 9
  • 82
  • 98
  • 11
    If you are suggesting to use C++11, you could just as well write `using Clock=std::chrono::high_resolution_clock;`. See [type alias](http://en.cppreference.com/w/cpp/language/type_alias). – JHBonarius Sep 21 '17 at 08:04
  • 1
    `std::chrono::high_resolution_clock` is not monotonic across all std lib implementations. From the cppreference - _Generally one should just use std::chrono::steady_clock or std::chrono::system_clock directly instead of std::chrono::high_resolution_clock: use steady_clock for duration measurements, and system_clock for wall-clock time._ – Kristianmitk Apr 09 '20 at 08:59
32

clock() returns the number of clock ticks since your program started. There is a related constant, CLOCKS_PER_SEC, which tells you how many clock ticks occur in one second. Thus, you can test any operation like this:

clock_t startTime = clock();
doSomeOperation();
clock_t endTime = clock();
clock_t clockTicksTaken = endTime - startTime;
double timeInSeconds = clockTicksTaken / (double) CLOCKS_PER_SEC;
Shirik
  • 3,631
  • 1
  • 23
  • 27
4

On Windows, at least, the only practically accurate measurement mechanism is QueryPerformanceCounter (QPC). std::chrono is implemented on top of it (since VS2015, if you use that), but it is not accurate to the same degree as using QueryPerformanceCounter directly. In particular, its claim to report at 1-nanosecond granularity is absolutely not correct. So if you're measuring something that takes a very short amount of time (and your case might just be such a case), then you should use QPC, or the equivalent for your OS. I came up against this when measuring cache latencies, and I jotted down some notes that you might find useful here: https://github.com/jarlostensen/notesandcomments/blob/master/stdchronovsqcp.md

SonarJetLens
  • 386
  • 1
  • 9
0
#include <iostream>
#include <ctime>
#include <cstdlib> //_sleep()  --- just a function that waits a certain amount of milliseconds

using namespace std;

int main()
{

    clock_t cl;     //initializing a clock type

    cl = clock();   //starting time of clock

    _sleep(5167);   //insert code here

    cl = clock() - cl;  //end point of clock

    _sleep(1000);   //testing to see if it actually stops at the end point

    cout << cl/(double)CLOCKS_PER_SEC << endl;  //prints the determined ticks per second (seconds passed)


    return 0;
}

//outputs "5.17"
Garrett
  • 17
  • 1
  • 1
    This does not add to the already answered question. Sleep after cl = clock() - cl is not needed. And the cout prints seconds not ticks per second. cl stores the clock ticks. – Ricardo González Nov 09 '17 at 17:53
0

You can measure how long your program works. The following functions help measure the CPU time since the start of the program:

  • C++ (double)clock() / CLOCKS_PER_SEC with ctime included.
  • Python time.clock() returns floating-point value in seconds.
  • Java System.nanoTime() returns long value in nanoseconds.

My reference: the Algorithms Toolbox week 1 course, part of the Data Structures and Algorithms specialization by the University of California San Diego & the National Research University Higher School of Economics

So you can add this line of code after your algorithm:

cout << (double)clock() / CLOCKS_PER_SEC;

Expected output: the CPU time consumed by the program so far, in seconds.

  • 1
    The question asks only about C++, so while it is nice that you reference other programming languages, it is off topic. – dboy May 29 '20 at 13:14
-1

You might be interested in a timer formatted like this: H : M : S . msec.

The code, for Linux:

#include <iostream>
#include <unistd.h>

using namespace std;
void newline(); 

int main() {
    int msec = 0;
    int sec = 0;
    int min = 0;
    int hr = 0;

    //cout << "Press any key to start:";
    //char start = _gtech();

    for (;;) {
        newline();
        if (msec == 1000) {
            ++sec;
            msec = 0;
        }
        if (sec == 60) {
            ++min;
            sec = 0;
        }
        if (min == 60) {
            ++hr;
            min = 0;
        }
        cout << hr << " : " << min << " : " << sec << " . " << msec << endl;
        ++msec;
        usleep(100000);
    }

    return 0;
}

void newline()
{
        cout << "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n";
}
Farid Alijani
  • 839
  • 1
  • 7
  • 25
  • You may want to check the first condition ... 10 msec = 1 sec? – Ricardo González Nov 09 '17 at 17:55
  • 2
    This will accumulate relative error in the time because you don't include the time it takes to print, and `usleep` won't always return after exactly the amount you ask for. Sometimes it will be longer. You should check the current time at the start, then check the current time and subtract to get the absolute time since you started every time through the loop. – Peter Cordes Dec 29 '17 at 19:41