I am trying to measure the run time of my code, which is my attempt at Project Euler Problem 5. When I try to output the run time, it prints 0 ns.

#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <iostream>

#define MAX_DIVISOR 20

bool isDivisible(long, int);   // declared but not used below

int main() {

    auto begin = std::chrono::high_resolution_clock::now();

    int d = 2;
    long inc = 1;
    long i = 1;
    while (d < (MAX_DIVISOR + 1)) {
        if ((i % d) == 0) {
            inc = i;
            i = inc;
            d++;
        }
        else {
            i += inc;
        }
    }
    auto end = std::chrono::high_resolution_clock::now();

    printf("Run time: %lld ns\n", std::chrono::duration_cast<std::chrono::nanoseconds>(end - begin).count()); // Gives 0 here.
    std::cout << "ANS: " << i << std::endl;

    system("pause");
    return 0;
}

MichaelMitchell

2 Answers

The timing resolution of std::chrono::high_resolution_clock::now() is system-dependent.
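
For reference, a quick way to see what the clock claims as its tick period (which is often finer than what it actually delivers) is to query std::chrono::high_resolution_clock::period; this small check is a sketch of the standard API, not a measurement:

#include <chrono>
#include <iostream>

int main() {
    using clock = std::chrono::high_resolution_clock;
    // Advertised tick period in seconds; may be finer than the real resolution.
    std::cout << "Advertised period: "
              << (double)clock::period::num / clock::period::den
              << " s, steady: " << std::boolalpha << clock::is_steady << '\n';
    return 0;
}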

You can find out an order of magnitude of the actual resolution with the small piece of code below (edit: this is a more accurate version):

#include <chrono>
#include <cmath>
#include <cstdlib>
#include <iostream>
using namespace std;

volatile long k;   // global volatile so the optimizer cannot discard the busy loop

int main() {
    chrono::nanoseconds mn(1000000000);  // assuming the resolution is finer than 1 s
    for (int i = 0; i < 5; i++) {
        using namespace std::chrono;
        nanoseconds dt;
        long d = 1000 * (long)pow(10, i);
        for (long e = 0; e < 10; e++) {
            long j = d + e * (long)pow(10, i) * 100;
            cout << j << " ";
            auto begin = high_resolution_clock::now();
            while (j > 0)
                k = ((j-- << 2) + 1) % (rand() + 100);
            auto end = high_resolution_clock::now();
            dt = duration_cast<nanoseconds>(end - begin);
            cout << dt.count() << " ns = "
                 << duration_cast<milliseconds>(dt).count() << " ms" << endl;
            if (dt > nanoseconds(0) && dt < mn)
                mn = dt;
        }
    }
    cout << "Minimum resolution observed: " << mn.count() << " ns\n";
    return 0;
}

Note that k is declared as a global volatile long so that the optimizer cannot elide the busy loop.

Under Windows, I obtain about 15 ms here. Beyond that, there are platform-specific alternatives. For Windows, there is a high-performance clock that enables you to measure time below the 10 µs range (see http://msdn.microsoft.com/en-us/library/windows/desktop/dn553408%28v=vs.85%29.aspx), but still not in the nanosecond range.
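
A minimal sketch of how that Windows counter is typically used (QueryPerformanceFrequency / QueryPerformanceCounter from <windows.h>), assuming the usual tick-to-microseconds conversion:

#include <windows.h>
#include <cstdio>

int main() {
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);   // counter ticks per second
    QueryPerformanceCounter(&t0);
    // ... code to be timed ...
    QueryPerformanceCounter(&t1);
    // convert the tick delta to microseconds
    double us = (t1.QuadPart - t0.QuadPart) * 1e6 / (double)freq.QuadPart;
    printf("Elapsed: %.3f us\n", us);
    return 0;
}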

If you want to time your code very accurately, you can re-execute it in a big loop and divide the total time by the number of iterations.

Christophe

The estimation you are going to do is not precise; a better approach is to measure the CPU time consumed by your program (other processes run concurrently with yours, so the wall-clock time you are trying to measure can be greatly affected if CPU-intensive tasks run in parallel with your process).
So my advice: use existing profilers if you want to estimate your code's performance.
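
As a lightweight illustration of measuring CPU time rather than wall-clock time, a minimal sketch using std::clock from <ctime> (a simple stand-in, not a replacement for a real profiler):

#include <cstdio>
#include <ctime>

int main() {
    std::clock_t c0 = std::clock();   // CPU time consumed by this process
    // ... code to be timed ...
    std::clock_t c1 = std::clock();
    double ms = 1000.0 * (c1 - c0) / CLOCKS_PER_SEC;
    printf("CPU time: %.3f ms\n", ms);
    return 0;
}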

Considering your task: if the OS does not provide the needed precision, you need to increase the total time you are trying to estimate. The easiest way is to run the program n times and calculate the average; averaging has the additional advantage of evening out errors caused by CPU-intensive tasks running concurrently with your process.

Here is a code snippet of how I see the possible implementation:

#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <iostream>
using namespace std;

#define MAX_DIVISOR 20

// Definition supplied so the snippet links; the original code only declared it.
bool isDivisible(long i, int d) { return i % d == 0; }

long doRoutine()
{
    int d = 2;
    long inc = 1;
    long i = 1;
    while (d < (MAX_DIVISOR + 1))
    {
        if (isDivisible(i, d))
        {
            inc = i;
            i = inc;
            d++;
        }
        else
        {
            i += inc;
        }
    }
    return i;
}

int main() {

    auto begin = std::chrono::high_resolution_clock::now();
    const int nOfTrials = 1000000;

    long ans = 0;
    for (int i = 0; i < nOfTrials; ++i)
        ans = doRoutine();

    auto end = std::chrono::high_resolution_clock::now();

    // Total time divided by the number of trials: average per iteration.
    printf("Run time: %lld ns\n", std::chrono::duration_cast<std::chrono::nanoseconds>(end - begin).count() / nOfTrials);
    std::cout << "ANS: " << ans << std::endl;

    system("pause");
    return 0;
}
spin_eight
  • I tried this, doing 1 million iterations in my loop, divided the time by 1 million, and got 504 ns. Is this a reasonable time? – MichaelMitchell Jul 06 '14 at 11:15
  • @MichaelMitchell I don't know, because you need some standard to compare with. If you want to estimate whether the algorithm you have implemented is efficient, you should change your approach and count the number of operations in relation to input size (see big-O notation). Time varies with the computer you run it on, but the number of operations is computer-independent. – spin_eight Jul 06 '14 at 11:20
  • 1
    @MichaelMitchell yes these are reasonable figures nowadays (I assume 504 nanosecond for 1 interation not for the million). – Christophe Jul 06 '14 at 12:23
  • @Christophe Yes, 500 ns for a single iteration. It just seems way too fast. – MichaelMitchell Jul 06 '14 at 18:33
  • 1
    @MichaelMitchell The code generated on my compiler is 17 basic machine instructions for the loop, and it loops 93 times. That gives approx 3 nanoseconds in average per basic CPU instruction, which seems fair for CPUS working in the giga-instruction-per-second range. – Christophe Jul 06 '14 at 18:52
  • @Christophe Thanks. Do you think this is the most efficient way of doing this problem? – MichaelMitchell Jul 06 '14 at 19:38
  • 1
    @MichaelMitchell part from removing `i = inc;` (which the optimizer did), it's dificult to do it better. This is very elegent and uses only additions and modulo. Alternatives would require prime factorisation with tables, indexes, and lot more multiplications/divisions. – Christophe Jul 06 '14 at 21:35
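
For comparison, a minimal sketch of the kind of alternative the last comment alludes to, computing the answer directly as lcm(1, ..., 20); the use of std::lcm from C++17's <numeric> is an assumption for illustration, not code from the thread:

#include <cstdio>
#include <numeric>  // std::lcm (C++17)

int main() {
    long long ans = 1;
    // The smallest number divisible by all of 1..20 is lcm(1, ..., 20).
    for (int d = 2; d <= 20; ++d)
        ans = std::lcm(ans, (long long)d);
    printf("ANS: %lld\n", ans);
    return 0;
}

This removes the search loop entirely, at the cost of the gcd computations hidden inside std::lcm.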