2

I am trying to learn how to use `clock()`. Here is a piece of code that I have:

#include <iostream>   // cout, cin, endl
#include <cstdlib>    // srand, rand
#include <ctime>      // time, clock, clock_t, CLOCKS_PER_SEC

using namespace std;

int main()
{
    srand(time(NULL));
    clock_t t;
    int num[100000];
    int total=0;
    t=clock();
    cout<<"tick:"<<t<<endl;
    for (int i=0;i<100000;i++)
    {
        num[i]=rand();
        //cout<<num[i]<<endl;
    }
    for(int j=0;j<100000;j++)
    {
        total+=num[j];
    }
    t=clock();
    cout<<"total:"<<total<<endl;
    cout<<"ticks after loop:"<<t<<endl;
    //std::cout<<"The number of ticks for the loop to calculate total:"<<t<<"\t time in seconds:"<<((float)t)/CLOCKS_PER_SEC<<endl;
    cin.get();
}

The result that I get is in the image below. I don't understand why the tick counts are the same even though there are two big loops in between.

(screenshot of the program output: both printed tick counts are identical)

Gautam
  • On which operating system and which machine (processor, motherboard)? Time has limited accuracy... And on Linux `clock` has been slightly improved in very recent `libc`. – Basile Starynkevitch Apr 27 '14 at 06:55
  • @BasileStarynkevitch I am using Windows 7. The processor is an Intel T4300 dual core. I do not know about the motherboard. Even if time is not accurate, I was expecting to see a difference in the count. – Gautam Apr 27 '14 at 06:58
  • @BasileStarynkevitch I will try with a larger loop and get back. Thanks for the answer. – Gautam Apr 27 '14 at 07:00
  • @Gautam - try a more precise timer; for example, Qt's `QElapsedTimer` has nanosecond resolution. – dtech Apr 27 '14 at 07:10
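
dtech's comment refers to Qt's `QElapsedTimer`. A minimal sketch of that approach, assuming a project that links against QtCore (this is Qt-specific, not standard C++):

// Qt-only sketch: requires QtCore (QElapsedTimer, qDebug).
#include <QElapsedTimer>
#include <QDebug>

void timeSomething()
{
    QElapsedTimer timer;
    timer.start();

    // ... work being timed goes here ...

    // nsecsElapsed() reports the elapsed time in nanoseconds.
    qDebug() << "elapsed:" << timer.nsecsElapsed() << "ns";
}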

3 Answers

3

The `clock()` function has finite resolution. On VC2013 it ticks once per millisecond (your system may vary). If you call `clock()` twice within the same tick, you get the same value.

In `<ctime>` there is a constant, `CLOCKS_PER_SEC`, which tells you how many ticks there are per second. For VC2012 that is 1000.
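
For reference, a minimal sketch (standard `<ctime>` only) of the usual conversion: subtract the two tick counts and divide by `CLOCKS_PER_SEC`. If the work finishes within a single tick, the difference is simply 0, which is what happens in the question.

#include <ctime>
#include <iostream>

int main()
{
    std::clock_t tStart = std::clock();

    // ... work being timed goes here ...

    std::clock_t tEnd = std::clock();

    // The difference is in ticks; CLOCKS_PER_SEC converts ticks to seconds.
    // If the work completes inside one tick, tEnd == tStart and this prints 0.
    double seconds = double(tEnd - tStart) / CLOCKS_PER_SEC;
    std::cout << "elapsed: " << seconds * 1000.0 << " ms\n";
    return 0;
}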

**Update 1**

You said you're on Windows. Here's some Windows-specific code that gets higher-resolution time. If I get time I'll try to do something portable.

#include <iostream>
#include <vector>
#include <ctime>
#include <Windows.h>

int main() 
{
    ::srand(::time(NULL));

    FILETIME ftStart, ftEnd;
    const int nMax = 1000*1000;
    std::vector<unsigned> vBuff(nMax);
    unsigned long long nTotal=0;   // wide enough that summing a million rand() values cannot overflow

    ::GetSystemTimeAsFileTime(&ftStart);
    for (int i=0;i<nMax;i++)
    {
        vBuff[i]=rand();
    }
    for(int j=0;j<nMax;j++)
    {
        nTotal+=vBuff[j];
    }
    ::GetSystemTimeAsFileTime(&ftEnd);

    // Combine the high and low 32-bit halves before subtracting; the low
    // halves alone can wrap around during the measurement.
    ULARGE_INTEGER uStart, uEnd;
    uStart.LowPart = ftStart.dwLowDateTime;  uStart.HighPart = ftStart.dwHighDateTime;
    uEnd.LowPart   = ftEnd.dwLowDateTime;    uEnd.HighPart   = ftEnd.dwHighDateTime;
    double dElapsed = (uEnd.QuadPart - uStart.QuadPart) / 10000.0;   // 100-ns units -> ms
    std::cout << "Elapsed time = " << dElapsed << " millisec\n";

    return 0;
}

**Update 2**

OK, here's the portable (C++11) version.

#include <iostream>
#include <vector>
#include <ctime>
#include <chrono>

// abbreviations to avoid long lines
typedef std::chrono::high_resolution_clock Clock_t;
typedef std::chrono::time_point<Clock_t> TimePoint_t;
typedef std::chrono::microseconds usec;

uint64_t ToUsec(Clock_t::duration t)
{
    return std::chrono::duration_cast<usec>(t).count();
}

int main() 
{
    ::srand(static_cast<unsigned>(::time(nullptr)));

    const int nMax = 1000*1000;
    std::vector<unsigned> vBuff(nMax);
    unsigned long long nTotal=0;   // wide enough that summing a million rand() values cannot overflow

    TimePoint_t tStart(Clock_t::now());
    for (int i=0;i<nMax;i++)
    {
        vBuff[i]=rand();
    }
    for(int j=0;j<nMax;j++)
    {
        nTotal+=vBuff[j];
    }
    TimePoint_t tEnd(Clock_t::now());
    uint64_t nMicroSec = ToUsec(tEnd - tStart);

    std::cout << "Elapsed time = " 
              << nMicroSec / 1000.0
              << " millisec\n";

    return 0;
}
Michael J
  • While `CLOCKS_PER_SEC` does indeed tell you the number of ticks per second, it does NOT tell you how often the result is updated - for example, `CLOCKS_PER_SEC` may be 1000000, but each time the clock is updated, it advances by 100, 10000 or 4981 "ticks". And this is the key in this case: the clock is obviously not being updated in the time that it takes to do 100k calls to rand and 100k adds. – Mats Petersson Apr 27 '14 at 08:39
  • @Michael J Thanks for the code. I was able to test the first code you gave me. It works. The second one I cannot run right now, as I do not have a C++11 compiler. – Gautam Apr 27 '14 at 11:48
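
To see Mats Petersson's point directly, here is a small illustrative sketch (not part of the answer above) that busy-waits until `clock()` advances and reports how large one update step actually is:

#include <ctime>
#include <iostream>

int main()
{
    // Spin until clock() advances once, then once more, and report how many
    // ticks one update step covers on this system.
    std::clock_t a = std::clock();
    std::clock_t b;
    while ((b = std::clock()) == a) { /* spin */ }
    std::clock_t c;
    while ((c = std::clock()) == b) { /* spin */ }

    std::cout << "clock() advances by " << (c - b) << " tick(s) per update; "
              << "CLOCKS_PER_SEC = " << CLOCKS_PER_SEC << "\n";
    return 0;
}
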
2

Strong suggestion:

Run the same benchmark, but try multiple alternative methods. For example:

Etc.

The problem with (POSIX-compliant) `clock()` is that it isn't necessarily accurate enough for meaningful benchmarks, depending on your compiler library/platform.
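
One concrete alternative on POSIX systems is `clock_gettime` with `CLOCK_MONOTONIC` (just one possibility among many; it is not available with MSVC on Windows). A minimal sketch:

// POSIX-only sketch: clock_gettime / CLOCK_MONOTONIC come from <time.h>,
// not from standard C++ (very old glibc versions also need -lrt).
#include <time.h>
#include <iostream>

int main()
{
    timespec tsStart, tsEnd;
    clock_gettime(CLOCK_MONOTONIC, &tsStart);

    // ... work being timed goes here ...

    clock_gettime(CLOCK_MONOTONIC, &tsEnd);

    double ms = (tsEnd.tv_sec  - tsStart.tv_sec)  * 1000.0
              + (tsEnd.tv_nsec - tsStart.tv_nsec) / 1.0e6;
    std::cout << "elapsed: " << ms << " ms\n";
    return 0;
}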

FoggyDay
0

Time has limited accuracy (perhaps only several milliseconds)... And on Linux `clock` has been slightly improved in very recent `libc`. Lastly, your loop is too small (a typical elementary C instruction runs in less than a few nanoseconds). Make it bigger, e.g. do it a billion times. But then you should declare `static int num[1000000000];` to avoid eating too much stack space.
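
A sketch of that suggestion. To keep memory use modest it uses 100 million elements (~400 MB) rather than the billion suggested above, but the idea is the same: a static buffer instead of a stack array, and a workload large enough for `clock()` to advance.

#include <cstdlib>
#include <ctime>
#include <iostream>

// Large buffer declared static so it lives in the data segment rather than
// on the stack (a billion ints, as suggested, needs ~4 GB; 100 million is
// used here to keep the example runnable on ordinary machines).
static int num[100000000];

int main()
{
    std::srand(static_cast<unsigned>(std::time(NULL)));

    const int n = 100000000;
    long long total = 0;          // wide accumulator so the sum cannot overflow

    std::clock_t t0 = std::clock();
    for (int i = 0; i < n; i++)
        num[i] = std::rand();
    for (int j = 0; j < n; j++)
        total += num[j];
    std::clock_t t1 = std::clock();

    std::cout << "total: " << total << "\n";
    std::cout << "elapsed: " << double(t1 - t0) / CLOCKS_PER_SEC << " s\n";
    return 0;
}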

Basile Starynkevitch