
I was given the following homework assignment:

Write a program to test on your computer how long it takes to do n log n, n^2, n^5, 2^n, and n! additions for n = 5, 10, 15, 20.

I have written a piece of code, but I always get an execution time of 0. Can anyone help me out with it? Thanks

#include <iostream>
#include <cmath>
#include <ctime>
using namespace std;

int main()
{
    float n = 20;
    time_t start, end, diff;
    start = time(NULL);
    cout << (n * log(n)) * (n * n) * (pow(n, 5)) * (pow(2, n)) << endl;
    end = time(NULL);
    diff = difftime(end, start);
    cout << diff << endl;
    return 0;
}
  • It seems you're actually supposed to perform an addition operation that many times, although it might be hard to measure the time for 2432902008176640000 additions. The point of the exercise seems to be to get an idea of different algorithmic complexities, not to time an arbitrary operation. – UncleBens Aug 07 '11 at 18:49
  • I see that you have absolutely no idea what C is. Let me give you a hint: It doesn't have ``. – Puppy Aug 07 '11 at 18:52
  • Yes, it is meant to explain big O. –  Aug 07 '11 at 18:52
  • What a diabolical assignment! Why write a computer program to perform n, n^2, n! operations and time them, for some values of n? Why not just evaluate n, n^2, n! etc. on a calculator. Or, if you have to write a program, write a program that prints the values of n, n^2, n! Somebody please sack the professor! – David Heffernan Aug 07 '11 at 19:23
  • `difftime()` returns a double, not a time_t. – Keith Thompson Aug 07 '11 at 20:18
  • @David: What would be the point of evaluating those expressions on a calculator *in a programming class*? Class assignments aren't always intended to produce useful code, they're intended to get the student to understand the concept. In this case, the students get to understand how to time the execution of code without the clutter (and opportunity for error) of writing complex code to be timed. (In practice, it probably makes more sense to use a profiling tool, but using profilers is system-specific knowledge.) – Keith Thompson Aug 07 '11 at 20:21
  • @Keith The assignment is about understanding big O, apparently. No need to write programs to understand that. All one needs to do is to be able to count. – David Heffernan Aug 07 '11 at 21:48
  • @David: But I think that actually *seeing* how long the program takes will drive the point home much more memorably than just computing the numbers. – Keith Thompson Aug 07 '11 at 21:57
  • @keith Really, maths is important, and if people can't understand linear, exponential or factorial growth by plotting the graphs then there's little point continuing with theoretical computer science. – David Heffernan Aug 07 '11 at 22:00
  • @keith This question is a perfect illustration. It's hard to time 20 additions. Plotting a graph of these functions would get the message across much more simply. And if one needs a graphic example of what happens when you get your algo wrong, just try naive recursive fib vs iterative fib (see the sketch below). – David Heffernan Aug 07 '11 at 22:06
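
A minimal sketch of that fib comparison (an illustrative program; timing it can use any of the techniques in the answers below):

#include <iostream>

// Naive recursive Fibonacci: an exponential number of calls.
unsigned long long fib_rec(unsigned n) {
    return n < 2 ? n : fib_rec(n - 1) + fib_rec(n - 2);
}

// Iterative Fibonacci: a linear number of additions.
unsigned long long fib_iter(unsigned n) {
    unsigned long long a = 0, b = 1;
    for (unsigned i = 0; i < n; ++i) {
        unsigned long long next = a + b;
        a = b;
        b = next;
    }
    return a;
}

int main() {
    // Around n = 40 the recursive version slows to a crawl,
    // while the iterative one stays effectively instant.
    std::cout << fib_rec(30) << " " << fib_iter(30) << std::endl;
    return 0;
}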

5 Answers


Better than time() with its one-second precision is to use millisecond precision. A portable way is, e.g.:

#include <cstdio>
#include <ctime>

int main() {
    clock_t start, end;
    double msecs;

    start = clock();
    /* any stuff here ... */
    end = clock();

    /* clock() returns ticks; scale by CLOCKS_PER_SEC to get milliseconds */
    msecs = ((double) (end - start)) * 1000 / CLOCKS_PER_SEC;
    printf("%f ms\n", msecs);
    return 0;
}
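
Note that clock() measures the CPU time used by your process rather than wall-clock time, and its real granularity is implementation-defined (often coarser than a millisecond), so very short workloads should still be repeated in a loop.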
user411313

Execute each calculation thousands of times, in a loop, so that you can overcome the low resolution of the timer and obtain meaningful results. Remember to divide by the number of iterations when reporting results.

This is not particularly accurate but that probably does not matter for this assignment.
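
A minimal sketch of that idea, assuming clock() from <ctime> as the timer (the repeat count and the volatile accumulator are illustrative choices):

#include <cstdio>
#include <ctime>

int main() {
    const long ITERATIONS = 100000;  /* illustrative repeat count */
    volatile double sink = 0;        /* volatile: keep the work from being optimized away */

    clock_t start = clock();
    for (long i = 0; i < ITERATIONS; ++i) {
        sink = sink + 1;             /* the "addition" being timed */
    }
    clock_t end = clock();

    /* average cost of one iteration, in milliseconds */
    double per_iter = ((double) (end - start)) * 1000 / CLOCKS_PER_SEC / ITERATIONS;
    printf("%g ms per iteration\n", per_iter);
    return 0;
}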

David Heffernan

At least on Unix-like systems, time() only gives you 1-second granularity, so it's not useful for timing things that take a very short amount of time (unless you execute them many times in a loop). Take a look at the gettimeofday() function, which gives you the current time with microsecond resolution. Or consider using clock(), which measures CPU time rather than wall-clock time.
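
A minimal sketch using gettimeofday() (POSIX-specific; the timed section is a placeholder):

#include <cstdio>
#include <sys/time.h>  /* POSIX: gettimeofday() */

int main() {
    struct timeval t0, t1;

    gettimeofday(&t0, NULL);
    /* ... code to be timed ... */
    gettimeofday(&t1, NULL);

    /* elapsed wall-clock time in microseconds */
    long usecs = (t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_usec - t0.tv_usec);
    printf("%ld us\n", usecs);
    return 0;
}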

Keith Thompson

Your code executes too quickly to be detected by the time() function, which returns the number of seconds elapsed since 00:00 hours, Jan 1, 1970 UTC.

Try to use this piece of code:

#include <sys/timeb.h>  /* ftime() and the timeb structure */

/* Current wall-clock time in milliseconds since the epoch. */
inline long getCurrentTime() {
    timeb timebstr;
    ftime( &timebstr );
    return (long)(timebstr.time) * 1000 + timebstr.millitm;
}

The ftime() function and the timeb structure are declared in sys/timeb.h, included above.

In practice, the better approach is to repeat your calculations in a loop to get more precise results.
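
For example, a minimal sketch that combines the helper above with such a loop (assumes getCurrentTime() from above is in scope; the iteration count and loop body are illustrative):

#include <iostream>
// assumes the getCurrentTime() helper above is in scope

int main() {
    long start = getCurrentTime();
    volatile long sum = 0;       // volatile discourages the optimizer from removing the loop
    for (long i = 0; i < 10000000L; ++i)
        sum = sum + 1;           // the additions being timed
    long end = getCurrentTime();
    std::cout << (end - start) << " ms for 10000000 additions" << std::endl;
    return 0;
}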

eugene_che

You will probably have to find a more precise platform-specific timer such as the Windows High Performance Timer. You may also (very likely) find that your compiler optimizes or removes almost all of your code.
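
On Windows the High Performance Timer is exposed through QueryPerformanceCounter(); a minimal sketch (Windows-only; the volatile accumulator is one way to keep the optimizer from deleting the measured work):

#include <windows.h>
#include <cstdio>

int main() {
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);   /* ticks per second */

    volatile long long sum = 0;         /* volatile: keep the loop from being optimized out */
    QueryPerformanceCounter(&t0);
    for (long long i = 0; i < 1000000; ++i)
        sum = sum + 1;                  /* additions being timed */
    QueryPerformanceCounter(&t1);

    double msecs = (double)(t1.QuadPart - t0.QuadPart) * 1000.0 / (double)freq.QuadPart;
    printf("%f ms\n", msecs);
    return 0;
}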

Puppy