
Hi, so I'm trying to write a program that sums 20 consecutive numbers and measures the time it took to do so. The problem is that when I run the program, the time is always 0. Any ideas?

This is what I have so far... thanks!

#include <iostream>
#include <cstdlib>   // for system()
#include <time.h>
using namespace std;

int main()
{
    int finish = 20;
    int start = 1;
    int result = 0;

    clock_t init, end;

    init = clock();   // timestamp before the loop

    for (int i = start; i <= finish; i++)
    {
        result += i;
    }

    end = clock();    // timestamp after the loop

    // elapsed time in milliseconds
    cout << ((float)(end - init)) * 1000 / CLOCKS_PER_SEC;

    system("PAUSE");

    return 0;
}
  • Just my two cents: the resolution of the `clock()` call is low enough that adding 20 numbers takes less than one "unit" of `clock()`'s own granularity. C++11 has high-precision clock APIs, which you can try to use. Or maybe just add a million numbers instead of 20 and see if that changes things. Also, are you compiling debug or release? In a release build it's entirely possible the entire for loop is being discarded by the optimizer, as its result isn't used anywhere else. If that is the case, try `return result;` instead of `return 0;` – Enrico Granata Apr 04 '14 at 20:42
  • 2
    Check the value of CLOCKS_PER_SEC to find out what is the resolution of clock() –  Apr 04 '14 at 20:44
  • 2
    As @EnricoGranata said, this can be very very fast; especially depending on the optimizations by the compiler. The compiler can notice that result has local scope and isnt used. Consequently it can optimize the whole thing out. Even without that, it can unroll the loop and do the addition at compilation time. Lastly, even without optimizations, addition is usually the fastest operation that a CPU can perform, often in a single cycle. If that is the case, with 20 (lets even say 50) cycles, that translates to 50 ns which no clock api will measure properly. – chacham15 Apr 04 '14 at 20:47
  • 1
    `CLOCKS_PER_SEC` usually has a granularity of 1 to 10ms (100Hz to 1000Hz), which isn't even close to enough resolution to detect an operation that takes less than 1us on a modern processor. – nneonneo Apr 04 '14 at 21:33
  • 1
    Even if you `return result;` as @Enrico suggests, a clever optimizing compiler could still precompute the sum and replacing the entire loop with a single load instruction. This is why profiling with overly simplistic programs can be completely a bitch. – Nicu Stiurca Apr 04 '14 at 21:55
  • @SchighSchagh indeed it could; still worth a shot, though. If one wanted to, one could `fprintf` 20 characters to /dev/null and use that call's return value instead of the literal 20. – Enrico Granata Apr 04 '14 at 22:33
  • I see now; I tried it with a bigger number and it actually returned something. @EnricoGranata – denise1633 Apr 04 '14 at 23:35
  • Thank you all for the responses! Very helpful =) – denise1633 Apr 04 '14 at 23:36
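
Putting the comments' suggestions together, here is a minimal sketch of a version whose runtime is actually measurable. The iteration count of 10,000,000 and the printing of `result` (to keep the optimizer from discarding the loop) are illustrative choices drawn from the comments, not part of the original program:

#include <iostream>
#include <time.h>
using namespace std;

int main()
{
    // Do enough work that it spans several clock() ticks.
    const long long finish = 10000000;  // illustrative; the original used 20
    long long result = 0;               // long long: this sum overflows an int

    clock_t init = clock();

    for (long long i = 1; i <= finish; i++)
    {
        result += i;
    }

    clock_t end = clock();

    // Using result in the output prevents the compiler from
    // optimizing the whole loop away.
    cout << "sum = " << result << "\n";
    cout << ((double)(end - init)) * 1000 / CLOCKS_PER_SEC << " ms\n";

    return 0;
}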

2 Answers


No matter what technique you use for timing, it has finite precision. This code simply executes so fast that your timer isn't registering any time as having passed.

Aside #1: Use `high_resolution_clock`; maybe that will register something non-zero, but probably not.

Aside #2: Don't name your variable `null`; in C++ that implies 0 or a null pointer.
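
As the comments suggest, you can at least check the nominal tick length of the clocks available to you. A minimal sketch (not part of the original answer), assuming C++11 for the `<chrono>` part:

#include <iostream>
#include <ctime>
#include <chrono>

int main()
{
    // Nominal tick length of clock(), in seconds.
    std::cout << "clock() tick: " << 1.0 / CLOCKS_PER_SEC << " s\n";

    // Nominal tick length of high_resolution_clock (implementation-defined;
    // see high_resolution_clock::period, as mentioned in the comments below).
    using period = std::chrono::high_resolution_clock::period;
    std::cout << "high_resolution_clock tick: "
              << (double)period::num / period::den << " s\n";

    return 0;
}

Note that the nominal tick length is only an upper bound on quality: as the comments under the next answer point out, a clock can report nanosecond units while only updating every few milliseconds.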

David

You can try this, but you will need C++11.

This can get down to 0.000001 seconds (one microsecond).

#include <iostream>
#include <ctime>
#include <ratio>
#include <chrono>
#include <cstdlib>   // for system()

int main()
{
    using namespace std::chrono;
    high_resolution_clock::time_point t1 = high_resolution_clock::now();

    int finish = 20;
    int start = 1;
    int result = 0;

    for (int i = start; i <= finish; i++)
    {
        result += i;
    }

    high_resolution_clock::time_point t2 = high_resolution_clock::now();
    duration<double> time_span = duration_cast<duration<double>>(t2 - t1);
    std::cout << time_span.count() << " seconds" << std::endl;

    system("PAUSE");

    return 0;
}
user3437460
  • It only has that resolution on linux. On VC++ it may have an absurdly useless resolution. – Mooing Duck Apr 04 '14 at 21:47
  • @MooingDuck I'm on Windows 7 and I actually have nanosecond resolution, which is even better than the microsecond resolution this answer suggests. The `high_resolution_clock` is implementation-defined and to know what the precision is on your specific system/compiler combination one could check `high_resolution_clock::period::den`. – ParvusM Apr 05 '14 at 20:57
  • 1
    @ParvusM: Does it have nanosecond _units_, or does it actually have nanosecond _resolution_? In VC++ 2012, the data has nanosecond units, but is only updated every 15ms or so. Call it in a tight loop and see how much it changes. See this bug report: https://connect.microsoft.com/VisualStudio/feedback/details/719443/ – Mooing Duck Apr 05 '14 at 21:11
  • @MooingDuck Resolution, but I'm using GCC 4.8.2, not VC++. It could very well be that the current VC++ on Windows does not have that resolution with the C++11 chrono clocks. In that case one could fall back on the Windows `QueryPerformanceCounter`, which does have at least that resolution. However, I simply wanted to say that this resolution is not 'only on Linux'. – ParvusM Apr 05 '14 at 22:53
  • @ParvusM: Yeah, GCC has good resolution. That's why I never mentioned any operating system, and only made claims about VC++. The problem isn't the OS, the problem was an oversight in VC++'s standard library headers. – Mooing Duck Apr 05 '14 at 23:34
  • @MooingDuck Well... your first comment here did start with "It only has that resolution on linux." =P – ParvusM Apr 06 '14 at 00:10
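
A minimal sketch of the tight-loop test Mooing Duck describes, for measuring a clock's observable resolution rather than its nominal units (the code itself is illustrative, not from the thread; assumes C++11):

#include <iostream>
#include <ratio>
#include <chrono>

int main()
{
    using clk = std::chrono::high_resolution_clock;

    // Spin until now() returns a new value; the difference is the
    // smallest time step the clock can actually observe.
    clk::time_point t1 = clk::now();
    clk::time_point t2 = t1;
    while (t2 == t1)
    {
        t2 = clk::now();
    }

    std::chrono::duration<double, std::nano> step = t2 - t1;
    std::cout << "observable resolution: " << step.count() << " ns\n";

    return 0;
}

On an implementation with the 15 ms update problem, this prints a value in the millions of nanoseconds despite the clock's nanosecond units.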