398

I am trying to use time() to measure various points of my program.

What I don't understand is why the values in the before and after are the same. I understand this is not the best way to profile my program; I just want to see how long something takes.

printf("**MyProgram::before time= %ld\n", time(NULL));

doSomthing();
doSomthingLong();

printf("**MyProgram::after time= %ld\n", time(NULL));

I have tried:

struct timeval diff, startTV, endTV;

gettimeofday(&startTV, NULL); 

doSomething();
doSomethingLong();

gettimeofday(&endTV, NULL); 

timersub(&endTV, &startTV, &diff);

printf("**time taken = %ld %ld\n", diff.tv_sec, diff.tv_usec);

How do I read a result of **time taken = 0 26339? Does that mean 26,339 nanoseconds = 26.3 msec?

What about **time taken = 4 45025, does that mean 4 seconds and 25 msec?

hap497
  • 154,439
  • 43
  • 83
  • 99
  • 10
    I don't understand the question. Of course the values are different. Time passed in between, so `time()` returns a different value. – Thomas May 11 '10 at 06:07
  • 1
    What do you mean "I don't understand is why the values in the before and after are Different"? You're getting the current time (in seconds since Jan 1, 1970) using `time(NULL)` ... the second time you call it will be N seconds after the first and thus ... different (unless whatever it is you're doing doesn't take a second to complete ... in which case, it'll be the same as the first). – Brian Roach May 11 '10 at 06:12
  • 1
    Can you tell us what it prints, and how long it takes if you time it with a stopwatch or a wall clock (or a calendar)? – Matt Curtis May 11 '10 at 06:21
  • 4
    Sorry, I mean both values are the SAME. I mis-type my question. – hap497 May 11 '10 at 06:25
  • 2
    See this thread: http://stackoverflow.com/questions/275004/c-timer-function-to-provide-time-in-nano-seconds – default May 11 '10 at 06:31
  • @hap497 cool, but what does it print (and how long does it take if you time it by hand)? – Matt Curtis May 11 '10 at 07:18
  • Did you ever read this http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2661.htm – Aki la Jan 27 '15 at 17:13
Do you want to measure *wall clock time* (time point of process end - time point of process start), or do you want to measure *consumed CPU time*? (Wall clock time can be a fraction of CPU time with multithreading, and CPU time can be a fraction of wall clock time when your process waits on I/O.) You need to use different functions for these. For the latter, the easiest approach is to just run your process under control of the `time` utility (no source code modification necessary); this will also provide you with the time the kernel spent to handle the I/O of the process. – cmaster - reinstate monica Mar 01 '16 at 21:34
  • You may be experiencing the [Microsoft Minute](http://www.userfriendly.org/cartoons/archives/99mar/19990318.html). – jww Apr 30 '18 at 00:49
  • If _**easily**_ is what is most important: $ `time ./MyProgram` (linux only). Reference: https://levelup.gitconnected.com/8-ways-to-measure-execution-time-in-c-c-48634458d0f9#ac70. – Henke Nov 26 '20 at 18:38
  • `time(NULL)` has a result in seconds. If the program takes less than a second the start and end time will be the same. The result of timersub is a struct containing integer seconds and integer microseconds (see https://man7.org/linux/man-pages/man2/settimeofday.2.html) So you could print it with `printf("**time taken = %ld.%06ld\n", diff.tv_sec, diff.tv_usec);` – Jerry Jeremiah Apr 12 '21 at 07:28
  • Note that `tv_usec` is *microseconds* not nanoseconds. The `u` is often used instead of μ (lowercase mu) since it isn't available in ASCII (not-extended), hence "usec" rather than "μs". – neuralmer Feb 10 '22 at 20:05

26 Answers

548
//***C++11 Style:***
#include <chrono>

std::chrono::steady_clock::time_point begin = std::chrono::steady_clock::now();

// ... the code being measured goes here ...

std::chrono::steady_clock::time_point end = std::chrono::steady_clock::now();

std::cout << "Time difference = " << std::chrono::duration_cast<std::chrono::microseconds>(end - begin).count() << "[µs]" << std::endl;
std::cout << "Time difference = " << std::chrono::duration_cast<std::chrono::nanoseconds> (end - begin).count() << "[ns]" << std::endl;
majkel.mk
  • 428
  • 4
  • 13
user3762106
  • 5,787
  • 1
  • 13
  • 7
  • 32
To run this you have to add the `#include <chrono>` directive, and I would change the reported time to seconds as: `std::cout << "Time difference (sec) = " << (std::chrono::duration_cast<std::chrono::microseconds>(end - begin).count()) / 1000000.0 << std::endl;` – Antonello Nov 29 '16 at 16:04
  • 4
    @RestlessC0bra According to the docs on cppreference, "This clock is not related to wall clock time (for example, it can be time since last reboot), and is most suitable for measuring intervals." – cylus Jan 26 '17 at 17:30
  • @Antonello - You should look more closely at std::chrono::duration_cast, and in particular at std::chrono::seconds if you want the duration to be returned in terms of seconds. – Axel Rietschin Nov 22 '17 at 02:32
  • 2
    @AxelRietschin That would only return the time in seconds. Antonello's suggestion also provides fractional time (i.e. `2.15315` seconds as opposed to just `2` seconds). – rayryeng Apr 24 '18 at 20:53
  • 4
What data type is this? `std::chrono::duration_cast<std::chrono::microseconds>(end - begin).count()` – sqp_125 Nov 05 '18 at 10:07
  • One can find the types from [chrono duration docs](https://en.cppreference.com/w/cpp/chrono/duration) – kelalaka Oct 22 '19 at 15:07
  • @sqp_125 It seems to be std::chrono::microseconds::rep – zanbri Jul 19 '20 at 16:43
this answer uses full types instead of "auto", which makes it easier to understand – SubMachine Oct 26 '20 at 21:13
  • @sqp_125 long int – otto Mar 13 '21 at 22:32
  • This is a C++ specific answer, and, despite the C++ tag misapplied by someone, this is clearly a C question. – Chris Dodd Jul 11 '22 at 19:00
333

0 - Delta

Use a delta function to compute time differences:

auto start = std::chrono::steady_clock::now();
std::cout << "Elapsed(ms)=" << since(start).count() << std::endl;

since accepts any timepoint and produces any duration (milliseconds is the default). It is defined as:

template <
    class result_t   = std::chrono::milliseconds,
    class clock_t    = std::chrono::steady_clock,
    class duration_t = std::chrono::milliseconds
>
auto since(std::chrono::time_point<clock_t, duration_t> const& start)
{
    return std::chrono::duration_cast<result_t>(clock_t::now() - start);
}

Demo

1 - Timer

Use a timer based on std::chrono:

Timer clock; // Timer<milliseconds, steady_clock>

clock.tick();
/* code you want to measure */
clock.tock();

cout << "Run time = " << clock.duration().count() << " ms\n";

Demo

Timer is defined as:

template <class DT = std::chrono::milliseconds,
          class ClockT = std::chrono::steady_clock>
class Timer
{
    using timep_t = typename ClockT::time_point;
    timep_t _start = ClockT::now(), _end = {};

public:
    void tick() { 
        _end = timep_t{}; 
        _start = ClockT::now(); 
    }
    
    void tock() { _end = ClockT::now(); }
    
    template <class T = DT> 
    auto duration() const { 
        gsl_Expects(_end != timep_t{} && "toc before reporting"); 
        return std::chrono::duration_cast<T>(_end - _start); 
    }
};

As Howard Hinnant pointed out, we use a duration so that we stay inside the chrono type-system and can perform operations like averaging or comparisons (here that means using std::chrono::milliseconds). Only when we do I/O do we use the count(), i.e. the ticks, of a duration (here the number of milliseconds).

2 - Instrumentation

Any callable (function, function object, lambda, etc.) can be instrumented for benchmarking. Say you have a function F invokable with arguments arg1, arg2; with this technique, timing it looks like:

cout << "F runtime=" << measure<>::duration(F, arg1, arg2).count() << "ms";

Demo

measure is defined as:

#include <chrono>
#include <functional> // std::invoke (C++17)
#include <utility>    // std::forward

template <class TimeT  = std::chrono::milliseconds,
          class ClockT = std::chrono::steady_clock>
struct measure
{
    template<class F, class ...Args>
    static auto duration(F&& func, Args&&... args)
    {
        auto start = ClockT::now();
        std::invoke(std::forward<F>(func), std::forward<Args>(args)...);
        return std::chrono::duration_cast<TimeT>(ClockT::now()-start);
    }
};

As mentioned in (1), using the duration w/o .count() is most useful for clients that want to post-process a bunch of durations prior to I/O, e.g. average:

auto avg = (measure<>::duration(func) + measure<>::duration(func)) / 2;
std::cout << "Average run time " << avg.count() << " ms\n";

+ This is also why the function call is forwarded.

+The complete code can be found here

+My attempt to build a benchmarking framework based on chrono is recorded here

+Old demo

Nikos Athanasiou
  • 29,616
  • 15
  • 87
  • 153
  • 2
    Nice; I have something similar in my code, but use a different interface to the class: I have a class (`code_timer`) that takes the start time (`std::chrono::system_clock::now();`) in the constructor, a method `code_timer::ellapsed` that measures the difference between a new `now()` call and the one in the constructor, and a `code_timer::reset` method that resets the start time to a new `now()` result. To measure the execution of a functor in my code, I use a free function, outside the class. This allows for measuring time from the construction of an object, to the finish of an asynch call. – utnapistim May 07 '14 at 08:19
  • @utnapistim Thnx, I've asked a review on this code, you can check it [here](http://codereview.stackexchange.com/a/48880/36839). If you have the time, there you could also elaborate a bit more on your design, it sounds great. – Nikos Athanasiou May 08 '14 at 05:01
  • Watch out for compiler optimization! I've tried something like this before and the compiler optimized the call to `func` away since it did not have side effects and the return value was not used. – Baum mit Augen Jun 28 '14 at 17:50
  • @BaummitAugen You misread the comments on that. It's when using inlined timer machinery (as in Edward's answer) that non observable behaviour gets optimized away. The function call placed as is won't be moved by the compiler. – Nikos Athanasiou Jun 28 '14 at 17:56
  • 9
Don't escape out of the `chrono` system until you have to (avoid use of `.count()`). Let the client call `.count()` when forced to (say for I/O, which is indeed unfortunate). The client may want to post-process a bunch of durations prior to I/O (e.g. average) and that is best done within the `chrono` system. – Howard Hinnant Jan 05 '15 at 23:06
  • Thanks, it's cool! But: code from Update2 does not compile on VS2013 with `auto` return type of `duration(..)`. I get error "C3551: expected a trailing return type". It works only if I replace `auto` with `typename TimeT`. – user1234567 Jan 27 '15 at 16:18
  • also Update2 code doesn't compile when a function has default values of parameters, in case when such arguments are omitted in a call. Can anyone explain, why? – user1234567 Jan 27 '15 at 20:15
  • 1
    @user3241228 1. VS2013 does not support auto return types (just trailing return types - it's a c++14 feature not available yet). 2. I believe [this](http://stackoverflow.com/a/14788106/2567683) is the reason but I asked a q just to be sure – Nikos Athanasiou Jan 28 '15 at 16:29
  • 1
    @user3241228 Sorry, scratch point 2; [this](http://stackoverflow.com/a/28197503/2567683) is the reason – Nikos Athanasiou Jan 28 '15 at 16:55
  • why you didn't use `high_resolution_clock` instead of `system_clock`? I understand they can be the same on many systems, but I'd expect `high_resolution_clock` can be more precise on systems where they are not the same thing – Andriy Tylychko Dec 08 '15 at 17:24
  • When measuring performance we run the test multiple times to get rid of noise. Wouldn't it be more appropriate to use `min` in your example for `duration` function usage? not a big deal of course – Andriy Tylychko Dec 08 '15 at 17:34
  • @AndyT Nice points. As you know `high_resolution_clock` may be a typedef to **either** `system_clock` or `steady_clock`. In the linked blog post, the timers "let" the user choose the clock type and use `steady_clock` as a default clock since it ticks at a uniform rate. To me this should be the default but a case should be made for "freedom of choice". Also therein I develop a "lab" environment for multiple experiment executions, so (to me) the noise removal should be done that way (in a more "in the box" fashion) ... alas that was too much to elaborate on in SO – Nikos Athanasiou Dec 08 '15 at 20:09
  • 2
    Why not `std::forward(func)`? – oliora Feb 15 '16 at 11:36
  • 3
    @oliora It's the same thing. I prefer `std::forward(func)` because it can apply to arguments of generic lambdas (`auto&& func`) where `F` is not syntactically there and it's easy to abstract in a utility macro `#define fw(arg) std::forward(arg)` which I do in my benchmark library (so it's a syntactic left over on which I don't elaborate much in the answer) – Nikos Athanasiou Feb 15 '16 at 12:56
  • I would suggest to use a steady clock only. In my code I do it with `using perf_clock = std::conditional< std::chrono::high_resolution_clock::is_steady, std::chrono::high_resolution_clock, std::chrono::steady_clock >::type;` (you can see this much better formatted in my answer below) – oliora Feb 18 '16 at 08:03
  • @oliora Yes, it's a typical type switch, I've seen it around [**a lot**](http://stackoverflow.com/a/26141694/2567683) (I'll comment on your answer about this typeswitch). In my benchmarking library I give the user the freedom to choose which clock type she prefers. I believe this is the correct way to go (even though even there I default to `steady_clock`, [check](https://github.com/picanumber/CODEine/blob/master/benchmark.h) the `benchmark` class ... I'll edit this answer, just to write that) – Nikos Athanasiou Feb 25 '16 at 23:57
  • So bad I cannot use as it's not approved by google standards. http://stackoverflow.com/questions/33653326/google-style-guide-chrono-is-an-unapproved-c11-header – SajithP Nov 23 '16 at 05:51
  • @NikosAthanasiou If possible, with your permission, I would like to use some of your ideas for my project. Unfortunately, I could not find any licensing information in the github repository and/or your blog. –  Jul 02 '17 at 17:27
  • @user1391279 As with all Stack Overflow code I believe it falls under [creativecommons](https://creativecommons.org/licenses/by-sa/2.5/) license – Nikos Athanasiou Jul 03 '17 at 18:01
  • @NikosAthanasiou Thank you for your comment. However, I was referring to the code that I found through the links in your answer. Would it be acceptable for me to use it for non-commercial purposes? Was the code released under the terms of any particular license? –  Jul 03 '17 at 19:29
  • @user1391279 A simple mention (even as a comment in the code) would do. Stay tuned though cause I'll be posting updates – Nikos Athanasiou Jul 05 '17 at 09:15
  • @NikosAthanasiou Thank you for letting me know. It would be useful if you could provide licensing information for your repository/repositories. I believe that the default license is not open source ([link](https://stackoverflow.com/questions/4007674/whats-the-default-license-of-code-published-at-github)). –  Jul 05 '17 at 09:39
  • With the second function (`duration()`) you can simplify the first function and avoid some code duplication: `return duration(std::forward(func), std::forward(args)...).count(); ` – einpoklum Apr 29 '18 at 18:09
  • How can this work with a function that has templates as arguments ? – Stavros Avramidis Feb 21 '19 at 20:01
  • Never mind I got it you need to specify the non-type argument, either with a typedef or a cast when passing it – Stavros Avramidis Feb 21 '19 at 20:26
  • @Nikos Athanasiou I have a class member function with no arguments and return type is void. Can somebody let me know how to use this mechanism? – RobinAtTech Dec 12 '19 at 23:45
  • How is `std::invoke` related here? – If_You_Say_So Apr 07 '21 at 20:28
285
#include <ctime>

void f() {
  using namespace std;
  clock_t begin = clock();

  code_to_time();

  clock_t end = clock();
  double elapsed_secs = double(end - begin) / CLOCKS_PER_SEC;
}

The time() function is only accurate to within a second, whereas clock() counts in units of CLOCKS_PER_SEC "clocks" per second. This is an easy, portable measurement, even though it's over-simplified.
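
As the comments below point out, on most systems clock() counts CPU time rather than wall-clock time. A quick sketch that shows the difference (assuming a POSIX sleep() is available):

#include <ctime>
#include <cstdio>
#include <unistd.h> // sleep() (POSIX)

int main()
{
    clock_t begin = clock();
    sleep(2);                 // consumes ~0 CPU time
    clock_t end = clock();
    printf("CPU time: %f s\n", double(end - begin) / CLOCKS_PER_SEC); // prints ~0, not ~2
}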

  • 142
    Be aware that `clock()` measures CPU time, not actual time elapsed (which may be much greater). – jlstrecker Jan 16 '13 at 20:50
  • 14
    When programming parallel code for clusters, this method doesn't reflect the real-world time... – Nicholas Hamilton Oct 01 '13 at 06:57
  • 3
    This seems the easiest of the ways. Would you care to update or address the comment made be @jlstrecker ? – Lorah Attkins Jan 25 '15 at 20:20
  • 6
    The solution posted above is not a good solution for many reasons. This is the correct answer - http://stackoverflow.com/questions/2962785/c-using-clock-to-measure-time-in-multi-threaded-programs/2962914#2962914 – Xofo May 25 '16 at 19:01
  • 1
    I tried this solution, and as the comments suggested, my timer ran much faster then the real world time. – RTbecard Apr 19 '17 at 16:08
  • 1
    @Xofo The answer you have linked is only correct for POSIX compliant compilers, of which for instance MSVC is not. – ajeh Apr 03 '18 at 16:32
  • I also thought this C+11 solution to measure wall time was helpful: https://stackoverflow.com/a/11062846/841052 – intagli Apr 23 '18 at 22:08
  • I have tried this but didn't worked fine for short times. This instead did worked as expected: https://stackoverflow.com/a/23616678/8849755 – user171780 May 28 '19 at 19:48
58

From your question it looks like you want to know the elapsed time after executing some piece of code, and I guess you would like to see the result in seconds. If so, try the difftime() function as shown below. Hope this solves your problem.

#include <time.h>
#include <stdio.h>

time_t start,end;
time (&start);
.
.
.
<your code>
.
.
.
time (&end);
double dif = difftime (end,start);
printf ("Elasped time is %.2lf seconds.", dif );
Lightsout
  • 3,454
  • 2
  • 36
  • 65
Akaanthan Ccoder
  • 2,159
  • 5
  • 21
  • 37
31

Windows only: (The Linux tag was added after I posted this answer)

You can use GetTickCount() to get the number of milliseconds that have elapsed since the system was started.

long int before = GetTickCount();

// Perform time-consuming operation

long int after = GetTickCount();
RvdK
  • 19,580
  • 4
  • 64
  • 107
  • 7
    I am using it on linux. So I can't use the GetTickCount() function. – hap497 May 11 '10 at 06:26
  • 1
    already never mind ;) Thanks for updating the tag of your post – RvdK May 11 '10 at 07:42
  • It works and gives the real time, not the CPU time. I tested it by placing `SleepEx(5000,0)` in place of //Perform time-consuming operation and difference of `after` and `before` was almost 5 sec. – Ruchir Jul 31 '15 at 07:04
  • In SDL there is a portable function doing the same thing: `SDL_GetTicks()`. It measures time since SDL initialization and not system startup. – val - disappointed in SE Aug 01 '20 at 19:45
18
struct profiler
{
    std::string name;
    std::chrono::high_resolution_clock::time_point p;
    profiler(std::string const &n) :
        name(n), p(std::chrono::high_resolution_clock::now()) { }
    ~profiler()
    {
        using dura = std::chrono::duration<double>;
        auto d = std::chrono::high_resolution_clock::now() - p;
        std::cout << name << ": "
            << std::chrono::duration_cast<dura>(d).count()
            << std::endl;
    }
};

#define PROFILE_BLOCK(pbn) profiler _pfinstance(pbn)

Usage is below:

{
    PROFILE_BLOCK("Some time");
    // your code or function
}

This is similar to RAII scoping.

Note: this is not mine, but I thought it was relevant here.

15

time(NULL) returns the number of seconds elapsed since 01/01/1970 at 00:00 (the Epoch). So the difference between the two values is the number of seconds your processing took.

int t0 = time(NULL);
doSomthing();
doSomthingLong();
int t1 = time(NULL);

printf ("time = %d secs\n", t1 - t0);

You can get finer results with gettimeofday(), which returns the current time in seconds, as time() does, and also in microseconds.
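
A sketch of that (POSIX; doSomthing()/doSomthingLong() again stand for your own functions):

#include <sys/time.h>
#include <stdio.h>

struct timeval t0, t1;
gettimeofday(&t0, NULL);

doSomthing();
doSomthingLong();

gettimeofday(&t1, NULL);

double elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
printf("time = %.6f secs\n", elapsed);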

philant
  • 34,748
  • 11
  • 69
  • 112
13

The time(NULL) function returns the number of seconds elapsed since 01/01/1970 at 00:00 (the Epoch). Because the function is called at different times in your program, the two values it returns will usually be different.

vodkhang
  • 18,639
  • 11
  • 76
  • 110
  • I don't know why someone downvoted, but your answer isn't entirely correct. For starters it doesn't return the date time, and it won't always be different. – Matt Joiner Jun 02 '10 at 13:07
9
#include <time.h>   // for clock
#include <math.h>   // for fmod
#include <cstdlib>  // for system
#include <iostream> // for cout
// note: delay() used below is non-standard (Turbo C/Borland); on POSIX you could use usleep() from <unistd.h>

using namespace std;

int main()
{


   clock_t t1,t2;

   t1=clock(); // first time capture

   // Now your time spanning loop or code goes here
   // i am first trying to display time elapsed every time loop runs

   int ddays=0; // d prefix is just to say that this variable will be used for display
   int dhh=0;
   int dmm=0;
   int dss=0;

   int loopcount = 1000 ; // just for demo your loop will be different of course

   for(float count=1;count<loopcount;count++)
   {

     t2=clock(); // we get the time now

     float difference= (((float)t2)-((float)t1)); // elapsed clock ticks since t1

    // now get the time elapsed in seconds

    float seconds = difference/CLOCKS_PER_SEC; // float value of seconds (CLOCKS_PER_SEC clock ticks per second)
    if (seconds<(60*60*24)) // a day is not over
    {
        dss = fmod(seconds,60); // the remainder is seconds to be displayed
        float minutes= seconds/60;  // the total minutes in float
        dmm= fmod(minutes,60);  // the remainder are minutes to be displayed
        float hours= minutes/60; // the total hours in float
        dhh= hours;  // the hours to be displayed
        ddays=0;
    }
    else // we have reached the counting of days
    {
        float days = seconds/(24*60*60);
        ddays = (int)(days);
        float minutes= seconds/60;  // the total minutes in float
        dmm= fmod(minutes,60);  // the rmainder are minutes to be displayed
        float hours= minutes/60; // the total hours in float
        dhh= fmod (hours,24);  // the hours to be displayed

    }

    cout<<"Count Is : "<<count<<"Time Elapsed : "<<ddays<<" Days "<<dhh<<" hrs "<<dmm<<" mins "<<dss<<" secs";


    // the actual working code here, I have just put a delay function
    delay(1000);    // non-standard (Turbo C/Borland); e.g. usleep(1000 * 1000) on POSIX
    system("cls");  // Windows-only; use "clear" on Linux

 } // end for loop

}// end of main 
Eskay
  • 91
  • 1
  • 1
8

The values printed by your second program are seconds, and microseconds.

0 26339 = 0.026'339 s =   26339 µs
4 45025 = 4.045'025 s = 4045025 µs
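
So, to print the timersub result from the question as a single decimal number of seconds, a minimal sketch (reusing the diff variable from the question) is:

printf("**time taken = %ld.%06ld s\n", (long)diff.tv_sec, (long)diff.tv_usec);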
Bryan
  • 11,398
  • 3
  • 53
  • 78
Didier Trosset
  • 36,376
  • 13
  • 83
  • 122
8
#include <ctime>
#include <cstdio>
#include <iostream>
#include <chrono>
#include <sys/time.h>
using namespace std;
using namespace std::chrono;

void f1()
{
  high_resolution_clock::time_point t1 = high_resolution_clock::now();
  high_resolution_clock::time_point t2 = high_resolution_clock::now();
  double dif = duration_cast<nanoseconds>( t2 - t1 ).count();
  printf ("Elasped time is %lf nanoseconds.\n", dif );
}

void f2()
{
  timespec ts1,ts2;
  clock_gettime(CLOCK_REALTIME, &ts1);
  clock_gettime(CLOCK_REALTIME, &ts2);
  double dif = double( ts2.tv_nsec - ts1.tv_nsec );
  printf ("Elasped time is %lf nanoseconds.\n", dif );
}

void f3()
{
  struct timeval t1,t0;
  gettimeofday(&t0, 0);
  gettimeofday(&t1, 0);
  double dif = double( (t1.tv_usec-t0.tv_usec)*1000);
  printf ("Elasped time is %lf nanoseconds.\n", dif );
}
void f4()
{
  high_resolution_clock::time_point t1 , t2;
  double diff = 0;
  t1 = high_resolution_clock::now() ;
  for(int i = 1; i <= 10 ; i++)
  {
    t2 = high_resolution_clock::now() ;
    diff+= duration_cast<nanoseconds>( t2 - t1 ).count();
    t1 = t2;
  }
  printf ("high_resolution_clock:: Elasped time is %lf nanoseconds.\n", diff/10 );
}

void f5()
{
  timespec ts1,ts2;
  double diff = 0;
  clock_gettime(CLOCK_REALTIME, &ts1);
  for(int i = 1; i <= 10 ; i++)
  {
    clock_gettime(CLOCK_REALTIME, &ts2);
    diff+= double( ts2.tv_nsec - ts1.tv_nsec );
    ts1 = ts2;
  }
  printf ("clock_gettime:: Elasped time is %lf nanoseconds.\n", diff/10 );
}

void f6()
{
  struct timeval t1,t2;
  double diff = 0;
  gettimeofday(&t1, 0);
  for(int i = 1; i <= 10 ; i++)
  {
    gettimeofday(&t2, 0);
    diff+= double( (t2.tv_usec-t1.tv_usec)*1000);
    t1 = t2;
  }
  printf ("gettimeofday:: Elasped time is %lf nanoseconds.\n", diff/10 );
}

int main()
{
  //  f1();
  //  f2();
  //  f3();
  f6();
  f4();
  f5();
  return 0;
}
Akanksha Gupta
  • 229
  • 3
  • 5
5

C++ std::chrono has a clear benefit of being cross-platform. However, it also introduces a significant overhead compared to POSIX clock_gettime(). On my Linux box all std::chrono::xxx_clock::now() flavors perform roughly the same:

std::chrono::system_clock::now()
std::chrono::steady_clock::now()
std::chrono::high_resolution_clock::now()

POSIX clock_gettime(CLOCK_MONOTONIC, &time) should be equivalent to steady_clock::now(), yet it is more than 3x faster!

Here is my test, for completeness.

#include <stdio.h>
#include <chrono>
#include <ctime>

void print_timediff(const char* prefix, const struct timespec& start,
                    const struct timespec& end)
{
    double milliseconds = end.tv_nsec >= start.tv_nsec
                        ? (end.tv_nsec - start.tv_nsec) / 1e6 + (end.tv_sec - start.tv_sec) * 1e3
                        : (start.tv_nsec - end.tv_nsec) / 1e6 + (end.tv_sec - start.tv_sec - 1) * 1e3;
    printf("%s: %lf milliseconds\n", prefix, milliseconds);
}

int main()
{
    int i, n = 1000000;
    struct timespec start, end;

    // Test stopwatch
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (i = 0; i < n; ++i) {
        struct timespec dummy;
        clock_gettime(CLOCK_MONOTONIC, &dummy);
    }
    clock_gettime(CLOCK_MONOTONIC, &end);
    print_timediff("clock_gettime", start, end);

    // Test chrono system_clock
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (i = 0; i < n; ++i)
        auto dummy = std::chrono::system_clock::now();
    clock_gettime(CLOCK_MONOTONIC, &end);
    print_timediff("chrono::system_clock::now", start, end);

    // Test chrono steady_clock
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (i = 0; i < n; ++i)
        auto dummy = std::chrono::steady_clock::now();
    clock_gettime(CLOCK_MONOTONIC, &end);
    print_timediff("chrono::steady_clock::now", start, end);

    // Test chrono high_resolution_clock
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (i = 0; i < n; ++i)
        auto dummy = std::chrono::high_resolution_clock::now();
    clock_gettime(CLOCK_MONOTONIC, &end);
    print_timediff("chrono::high_resolution_clock::now", start, end);

    return 0;
}

And this is the output I get when compiled with gcc7.2 -O3:

clock_gettime: 24.484926 milliseconds
chrono::system_clock::now: 85.142108 milliseconds
chrono::steady_clock::now: 87.295347 milliseconds
chrono::high_resolution_clock::now: 84.437838 milliseconds
Gluttton
  • 5,739
  • 3
  • 31
  • 58
Alexey Polonsky
  • 1,141
  • 11
  • 12
3

The time(NULL) function call will return the number of seconds elapsed since the Epoch: January 1, 1970. Perhaps what you mean to do is take the difference between two timestamps:

time_t start = time(NULL);
doSomthing();
doSomthingLong();

printf ("**MyProgram::time elapsed= %lds\n", time(NULL) - start);
wilhelmtell
  • 57,473
  • 20
  • 96
  • 131
3

On Linux, clock_gettime() is one of the good choices. You must link against the real-time library (-lrt).

#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <time.h>

#define BILLION  1E9  // floating-point constant, so the division below keeps the fractional part

int main( int argc, char **argv )
  {
    struct timespec start, stop;
    double accum;

    if( clock_gettime( CLOCK_REALTIME, &start) == -1 ) {
      perror( "clock gettime" );
      exit( EXIT_FAILURE );
    }

    system( argv[1] );

    if( clock_gettime( CLOCK_REALTIME, &stop) == -1 ) {
      perror( "clock gettime" );
      exit( EXIT_FAILURE );
    }

    accum = ( stop.tv_sec - start.tv_sec )
          + ( stop.tv_nsec - start.tv_nsec )
            / BILLION;
    printf( "%lf\n", accum );
    return( EXIT_SUCCESS );
  }
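
A build-and-run sketch (file and program names are placeholders; on recent glibc the -lrt flag is no longer required):

gcc timer.c -o timer -lrt
./timer "sleep 2"    # times whatever command is passed as argv[1]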
cloudrain21
  • 649
  • 5
  • 17
3

As others have already noted, the time() function in the C standard library does not have a resolution better than one second. The only fully portable C function that may provide better resolution appears to be clock(), but that measures processor time rather than wallclock time. If one is content to limit oneself to POSIX platforms (e.g. Linux), then the clock_gettime() function is a good choice.

Since C++11, there are much better timing facilities available that offer better resolution in a form that should be very portable across different compilers and operating systems. Similarly, the boost::datetime library provides good high-resolution timing classes that should be highly portable.

One challenge in using any of these facilities is the time-delay introduced by querying the system clock. From experimenting with clock_gettime(), boost::datetime and std::chrono, this delay can easily be a matter of microseconds. So, when measuring the duration of any part of your code, you need to allow for there being a measurement error of around this size, or try to correct for that zero-error in some way. Ideally, you may well want to gather multiple measurements of the time taken by your function, and compute the average, or maximum/minimum time taken across many runs.
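
A rough sketch of that repeat-and-summarize idea with plain std::chrono (workload() and the run count are arbitrary placeholders, not part of the library described next):

#include <algorithm>
#include <chrono>
#include <iostream>
#include <vector>

void workload() {                       // stand-in for the code being measured
    volatile double x = 0;
    for (int i = 0; i < 100000; ++i) x += i;
}

int main()
{
    std::vector<double> samples;
    for (int run = 0; run < 100; ++run) {
        auto t0 = std::chrono::steady_clock::now();
        workload();
        auto t1 = std::chrono::steady_clock::now();
        samples.push_back(std::chrono::duration<double, std::micro>(t1 - t0).count());
    }
    double sum = 0;
    for (double s : samples) sum += s;
    std::cout << "min = " << *std::min_element(samples.begin(), samples.end())
              << " us, mean = " << sum / samples.size() << " us (n=" << samples.size() << ")\n";
}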

To help with all these portability and statistics-gathering issues, I've been developing the cxx-rtimers library available on Github which tries to provide a simple API for timing blocks of C++ code, computing zero errors, and reporting stats from multiple timers embedded in your code. If you have a C++11 compiler, you simply #include <rtimers/cxx11.hpp>, and use something like:

void expensiveFunction() {
    static rtimers::cxx11::DefaultTimer timer("expensiveFunc");
    auto scopedStartStop = timer.scopedStart();
    // Do something costly...
}

On program exit, you'll get a summary of timing stats written to std::cerr such as:

Timer(expensiveFunc): <t> = 6.65289us, std = 3.91685us, 3.842us <= t <= 63.257us (n=731)

which shows the mean time, its standard-deviation, the upper and lower limits, and the number of times this function was called.

If you want to use Linux-specific timing functions, you can #include <rtimers/posix.hpp>, or if you have the Boost libraries but an older C++ compiler, you can #include <rtimers/boost.hpp>. There are also versions of these timer classes that can gather statistical timing information from across multiple threads. There are also methods that allow you to estimate the zero-error associated with two immediately consecutive queries of the system clock.

rwp
  • 1,786
  • 2
  • 17
  • 28
2

Internally the function will access the system's clock, which is why it returns different values each time you call it. In general, with non-purely-functional languages there can be many side effects and hidden state in functions that you can't see just by looking at the function's name and arguments.

Mike Weller
  • 45,401
  • 15
  • 131
  • 151
2

From what I see, tv_sec stores the elapsed seconds while tv_usec stores the elapsed microseconds separately; they aren't conversions of each other. Hence, they must be converted to the proper unit and added to get the total time elapsed.

struct timeval startTV, endTV;

gettimeofday(&startTV, NULL); 

doSomething();
doSomethingLong();

gettimeofday(&endTV, NULL); 

printf("**time taken in microseconds = %ld\n",
    (endTV.tv_sec * 1e6 + endTV.tv_usec - (startTV.tv_sec * 1e6 + startTV.tv_usec))
    );
2

I needed to measure the execution time of individual functions within a library. I didn't want to wrap every call of every function with a time-measuring function, because it's ugly and deepens the call stack. I also didn't want to put timer code at the top and bottom of every function, because it makes a mess when the function can exit early or throw exceptions, for example. So what I ended up doing was making a timer that uses its own lifetime to measure time.

In this way I can measure the wall time a block of code took just by instantiating one of these objects at the beginning of the code block in question (function or any scope, really) and then letting the instance's destructor measure the time elapsed since construction when the instance goes out of scope. You can find the full example here, but the struct is extremely simple:

#include <chrono>
#include <functional>

template <typename clock_t = std::chrono::steady_clock>
struct scoped_timer {
  using duration_t = typename clock_t::duration;
  const std::function<void(const duration_t&)> callback;
  const std::chrono::time_point<clock_t> start;

  scoped_timer(const std::function<void(const duration_t&)>& finished_callback) :
      callback(finished_callback), start(clock_t::now()) { }
  scoped_timer(std::function<void(const duration_t&)>&& finished_callback) :
      callback(finished_callback), start(clock_t::now()) { }
  ~scoped_timer() { callback(clock_t::now() - start); }
};

The struct will call you back on the provided functor when it goes out of scope, so you can do something with the timing information (print it, store it, or whatever). If you need to do something even more complex, you could even use std::bind with std::placeholders to call back functions with more arguments.
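
For instance, a sketch of the std::bind variant (log_time is a hypothetical sink function, not part of the original code):

#include <functional>
#include <iostream>
#include <string>

void log_time(const std::string& label, const scoped_timer<>::duration_t& elapsed)
{
    std::cout << label << " took "
              << std::chrono::duration_cast<std::chrono::milliseconds>(elapsed).count()
              << "ms" << std::endl;
}

// inside the code being measured:
scoped_timer<> t(std::bind(log_time, "my block", std::placeholders::_1));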

Here's a quick example of using it:

void test(bool should_throw) {
  scoped_timer<> t([](const scoped_timer<>::duration_t& elapsed) {
    auto e = std::chrono::duration_cast<std::chrono::duration<double, std::milli>>(elapsed).count();
    std::cout << "took " << e << "ms" << std::endl;
  });

  std::this_thread::sleep_for(std::chrono::seconds(1));

  if (should_throw)
    throw nullptr;

  std::this_thread::sleep_for(std::chrono::seconds(1));
}

If you want to be more deliberate, you can also use new and delete to explicitly start and stop the timer without relying on scoping to do it for you.
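
A sketch of what that looks like (same scoped_timer as above):

auto* timer = new scoped_timer<>([](const scoped_timer<>::duration_t& elapsed) {
    std::cout << "took "
              << std::chrono::duration_cast<std::chrono::milliseconds>(elapsed).count()
              << "ms" << std::endl;
});
// ... code to measure ...
delete timer;   // the destructor runs here and reports the elapsed time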

Kevin Kreiser
  • 594
  • 5
  • 11
1

They are the same because your doSomething function completes faster than the granularity of the timer. Try:

printf ("**MyProgram::before time= %ld\n", time(NULL));

for (int i = 0; i < 1000; ++i) {
    doSomthing();
    doSomthingLong();
}

printf ("**MyProgram::after time= %ld\n", time(NULL));
kibibu
  • 6,115
  • 1
  • 35
  • 41
1

The reason both values are the same is that your long procedure doesn't take that long - less than one second. You can try just adding a long loop (for (int i = 0; i < 100000000; i++) ; ) at the end of the function to make sure this is the issue, then we can go from there...

In case the above turns out to be true, you will need to find a different system function (I understand you work on Linux, so I can't help you with the function name) to measure time more accurately. I am sure there is a function similar to GetTickCount() on Linux; you just need to find it.
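
For reference, a GetTickCount()-style millisecond counter can be sketched on Linux with the POSIX clock_gettime() (get_tick_count_ms is a hypothetical helper name):

#include <ctime>

long long get_tick_count_ms() // milliseconds since some arbitrary fixed point (e.g. boot)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000LL + ts.tv_nsec / 1000000;
}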

David Božjak
  • 16,887
  • 18
  • 67
  • 98
1

I usually use the following:

#include <chrono>
#include <type_traits>

using perf_clock = std::conditional<
    std::chrono::high_resolution_clock::is_steady,
    std::chrono::high_resolution_clock,
    std::chrono::steady_clock
>::type;

using floating_seconds = std::chrono::duration<double>;

template<class Func, class... Args>
floating_seconds run_test(Func&& func, Args&&... args)
{
   const auto t0 = perf_clock::now();
   std::forward<Func>(func)(std::forward<Args>(args)...);
   return floating_seconds(perf_clock::now() - t0);
} 

It's the same as what @nikos-athanasiou proposed, except that I avoid using a non-steady clock and use a floating-point number of seconds as the duration.
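
A usage sketch (the lambda and its argument are just placeholders):

#include <iostream>

int main()
{
    auto elapsed = run_test([](int n) {
        volatile long sum = 0;
        for (int i = 0; i < n; ++i) sum += i;
    }, 10000000);
    std::cout << elapsed.count() << " s\n";
}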

oliora
  • 839
  • 5
  • 12
  • 2
    On this [typeswitch](http://stackoverflow.com/a/26141694/2567683) : Typically `high_resolution_clock` is a typedef for either `system_clock` or `steady_clock`. So to trace that `std::conditional` if the `is_steady` part is true, then you pick the `high_resolution_clock` which is (a typedef to) the `steady_clock`. If it's false then you pick the `steady_clock` again. Just use `steady_clock` from the beginning ... – Nikos Athanasiou Feb 26 '16 at 00:02
  • @nikos-athanasiou I completely agree with the comment from 5gon12eder that "typical" case is not required by the standard so some STL may be implemented in a different way. I prefer to have my code to be more generic and not related on implementation details. – oliora Feb 28 '16 at 10:28
  • It's not required but explicitly stated in __20.12.7.3__ : `high_resolution_clock may be a synonym for system_clock or steady_clock`. The reason is this : `high_resolution_clock` represents clocks with the shortest tick period, so whatever the implementation, it has two choices, being steady or not. Whatever choice we make, saying that the implementation will differ from the other two clocks is like saying we have a better implementation for a steady (or not) clock that we choose not to use (for steady or not clocks). Knowing how is good, knowing why is better – Nikos Athanasiou Feb 28 '16 at 11:55
@nikos-athanasiou I would prefer to be 100% safe, especially when this costs me no runtime overhead and undetectable compile time overhead. You may rely on "may" and assumptions if you want. – oliora Feb 29 '16 at 17:50
  • au contraire my friend, it's you that relies on "may", but suit yourself. If you want to be 100% sure and keep writing this then, you should also find a way, for you and users of your code, to avoid non-portably mixing time points of different clocks (if ever this type switch acquires a meaning, it'll behave differently on different platforms). Have fun! – Nikos Athanasiou Feb 29 '16 at 20:43
  • @NikosAthanasiou as you can see, no time_point escapes the function. The return type is a floating seconds duration which is clock independent. So no user code will depend on actual clock that chosen by the condition. To be honest I don't see any point to continue this discussion. – oliora Feb 29 '16 at 22:06
1

Matlab flavored!

tic starts a stopwatch timer to measure performance. The function records the internal time at execution of the tic command. Display the elapsed time with the toc function.

#include <iostream>
#include <ctime>
#include <chrono> // for the 2s duration literal
#include <thread>
using namespace std;

clock_t START_TIMER;

clock_t tic()
{
    return START_TIMER = clock();
}

void toc(clock_t start = START_TIMER)
{
    cout
        << "Elapsed time: "
        << (clock() - start) / (double)CLOCKS_PER_SEC << "s"
        << endl;
}

int main()
{
    tic();
    this_thread::sleep_for(2s); // note: clock() measures CPU time on POSIX systems, so a sleep may report close to 0s there
    toc();

    return 0;
}
Yas
  • 4,957
  • 2
  • 41
  • 24
  • This is not a good approach. Because it uses a global variable by default, it’s not reentrant. An inner function using the same interface could screw up the timing of the outer interface. – Sneftel Apr 13 '21 at 18:24
1

Here's a simple class that prints the time spent between when it entered and left scope, in the specified duration unit:

#include <chrono>
#include <iostream>
#include <string>

template <typename T>
class Benchmark
{
   public:
    Benchmark(std::string name) : start(std::chrono::steady_clock::now()), name(name) {}
    ~Benchmark()
    {
        auto end = std::chrono::steady_clock::now();
        T duration = std::chrono::duration_cast<T>(end - start);
        std::cout << "Bench \"" << name << "\" took: " << duration.count() << " units" << std::endl;
    }

   private:
    std::string name;
    std::chrono::time_point<std::chrono::steady_clock> start;

};
int main()
{
  Benchmark<std::chrono::nanoseconds> bench("for loop");
  for(int i = 0; i < 1001000; i++){}
}

Example usage:

int main()
{
  Benchmark<std::chrono::nanoseconds> bench("for loop");
  for(int i = 0; i < 100000; i++){}
}

Outputs:

Bench "for loop" took: 230656 units
SickGuy
  • 34
  • 1
  • 4
0

In answer to OP's three specific questions.

"What I don't understand is why the values in the before and after are the same?"

The first question and sample code show that time() has a resolution of 1 second, so the answer has to be that the two functions execute in less than 1 second. But occasionally it will (apparently illogically) report 1 second if the two timer marks straddle a one-second boundary.

The next example uses gettimeofday() which fills this struct

struct timeval {
    time_t      tv_sec;     /* seconds */
    suseconds_t tv_usec;    /* microseconds */
};

and the second question asks: "How do I read a result of **time taken = 0 26339? Does that mean 26,339 nanoseconds = 26.3 msec?"

My second answer is the time taken is 0 seconds and 26339 microseconds, that is 0.026339 seconds, which bears out the first example executing in less than 1 second.

The third question asks: "What about **time taken = 4 45025, does that mean 4 seconds and 25 msec?"

My third answer is the time taken is 4 seconds and 45025 microseconds, that is 4.045025 seconds, which shows that OP has altered the tasks performed by the two functions which he previously timed.

Weather Vane
  • 33,872
  • 7
  • 36
  • 56
-1
#include <cstdio>
#include <ctime>
#include <functional>

using namespace std;

void f() {
  clock_t begin = clock();

  // ...code to measure time...

  clock_t end = clock();

  function<double(clock_t, clock_t)> convtime = [](clock_t begin, clock_t end)
  {
     return double(end - begin) / CLOCKS_PER_SEC;
  };

  printf("Elapsed time: %.2g sec\n", convtime(begin, end));

}

Similar to the example available here, only with an additional conversion function and a printout.

TarmoPikaro
  • 4,723
  • 2
  • 50
  • 62
-1

I have created a class to automatically measure elapsed time. Please check the code (C++11) at this link: https://github.com/sonnt174/Common/blob/master/time_measure.h

Example of how to use class TimeMeasure:

void test_time_measure(std::vector<int> arr) {
  TimeMeasure<chrono::microseconds> time_mea;  // create time measure obj
  std::sort(begin(arr), end(arr));
}