
http://img708.imageshack.us/img708/5089/itfib.png It's stuck at 0.000000000 on all results. I posted this before and people were able to get a number on it, but every time I try, it always gives me those 0s.

#include<time.h>
#include<sys/file.h>
#include<stdio.h>

int main ( )
{
  int j=1, fib, n, i=1, k=0;
  int choice;
  float x,y,z;

  printf("input the fib number you want: ");
  scanf("%d", &n);

  x = clock(); //start clock

  while (k <=n)
  {
    fib = i + j;
    i = j;
    j = fib;
    ++k;

    printf( "The fib number is %d\n ", fib);
  }

  y = clock(); // end clock
  z = (y - x) / CLOCKS_PER_SEC;

  printf("\n\nThe execution time was: %.15f", z);
  return 0;
} 
General Grievance
  • And how fast is this running? `clock()` only measures CPU time taken by your process (not wall clock time), and if `clock()`'s resolution sucks on your system, you'll get about 0 seconds running time... – Cornstalks Mar 01 '13 at 04:50
  • Three people have already overlooked this and made wrong comments. So for anyone who reads this question: **`x` and `y` are already `float`s**. *There is no integer division happening here*. – Cornstalks Mar 01 '13 at 04:55
  • Have you tried making `n` really large? I made `n` about `10000` and `z` was about `1.135000000000000` – Anish Ramaswamy Mar 01 '13 at 04:55
  • lol 10000 made the program loop non-stop with random numbers. –  Mar 01 '13 at 05:02
  • @naminate They are not random - they are the product of integer overflow. You will note it is the same sequence of 'random' numbers every time. If you use a different data type you will get different results. – paddy Mar 01 '13 at 05:05
  • I used the same clock function in another program to print a word 5 times and I was able to get a number.. but why doesn't this work?? >:[ –  Mar 01 '13 at 05:06
  • I was not able to get it on the recursive fibonacci either. –  Mar 01 '13 at 05:08
  • @Naminate, Just to make sure, make your while loop as follows, `while(k < n){ k++; }` and input `n` as `100000`. Then tell us what your output is. – Anish Ramaswamy Mar 01 '13 at 05:14
  • @AnishRam 100000 gave me 0.150 < the rest are 0s. Yay! So now I know the clock is working.. So it's just my CPU that's being too good? >o< –  Mar 01 '13 at 05:28
  • I guess your CPU takes very few clock cycles to perform these tasks. I would say if your `n` is always going to be low (~100), then use a clock which has higher granularity like the one suggested in [this SO answer](http://stackoverflow.com/a/6749766/1383051). The clock you use measures time based on the amount of CPU cycles it took to perform the task. – Anish Ramaswamy Mar 01 '13 at 06:18

3 Answers

1

With n=30, your program does such a tiny amount of work that it will show up as zero time with a coarse clock granularity. On some systems, `clock()` ticks once every 10 ms. Assuming you have reasonably fast console I/O, you are probably in the tens or hundreds of microseconds range, and are probably spending 99.99% of the time in printf.

Try inputting a larger number like 1000000. Then you should get something nonzero.

nneonneo
  • Like Anish Ram said. I tried to put 10000 in and I got a nonstop loop. –  Mar 01 '13 at 05:14
  • @Naminate: It's not *nonstop*, it's just long-running. (You are also likely to see a lot of garbage values pop out because of integer overflow). – nneonneo Mar 01 '13 at 05:16
  • @Naminate, This answer exactly. First just make sure your timing works. Then debug the integer overflows. Of course, if you have a constraint that `n` is always a low integer value, then you needn't worry about integer overflows. – Anish Ramaswamy Mar 01 '13 at 05:19
  • Well, you can't really "debug" the integer overflows; there's no way to avoid them without a big integer implementation. :) – nneonneo Mar 01 '13 at 05:20
  • Haha by debug I meant solve that problem. Bad usage of the word "debug" I admit – Anish Ramaswamy Mar 01 '13 at 05:21
  • Yea. My bad. I decided to let it run and it came to an end 2 mins later lol. –  Mar 01 '13 at 05:30
  • But why is it that I was able to get a time on a program that only prints a word 5 times rather than this program?? Doesn't this program take longer? –  Mar 01 '13 at 05:33
  • @Naminate: and when it came to an end, was the clock zero? – nneonneo Mar 01 '13 at 05:35
  • Scroll up a little. I posted the results that Anish asked on top. But no. I was able to get a number. –  Mar 01 '13 at 05:39
  • "@AnishRam 100000 gave me 0.150 < the rest are 0s. Yay! So now I know the clock is working.. So its just my CPU thats being too good? >o<" –  Mar 01 '13 at 05:40
  • @Naminate, Did you test that word printing program on the same computer as the one you're using to test this fibonacci program? – Anish Ramaswamy Mar 01 '13 at 05:43
  • @AnishRam Yes. I am using Linux for both programs. Using the same computer as well. –  Mar 01 '13 at 05:45
  • When I did the word program, I got the result as like 0.000200000 somewhere like that. –  Mar 01 '13 at 05:47
0

Try this:

#include <sys/time.h>   /* for gettimeofday() */

struct timeval start, end;
long mtime, secs, usecs;

gettimeofday(&start, NULL);
// your processing.
gettimeofday(&end, NULL);

secs  = end.tv_sec  - start.tv_sec;
usecs = end.tv_usec - start.tv_usec;
mtime = secs * 1000 + usecs / 1000.0 + 0.5;  /* elapsed milliseconds, rounded */
printf("Elapsed time: %ld millisecs\n", mtime);
Travis G
  • Thanks for your input. But I'd rather not change the clock code, since my professor is the one who gave out the code to use in this program. –  Mar 01 '13 at 05:17
-1

The problem is that you are doing integer division and then converting the result to a float, so your result is rounded down to the nearest second. You need to cast `(y-x)` to a float before dividing:

z = (float)(y-x) / CLOCKS_PER_SEC;

If your program runs faster than the clock granularity, it will come out at zero no matter what you do (because y and x will be the same). But at least with floating point division, you can get the fractional seconds otherwise.

[edit] My mistake, actually. For some reason I thought x and y were integers (because you were using them to store the return value of clock). So in this case, you probably are running faster than the clock cycle.

When you want to benchmark a fast operation, you need to do one or both of the following:

  • repeat the operation many times (a thousand, a million...)
  • use a higher resolution timer
paddy
  • He's not doing integer division. `x` and `y` are already `float`s. – Cornstalks Mar 01 '13 at 04:54
  • *"If your program runs faster than the clock granularity, it will come out at zero no matter what you do (because y and x will be the same). **But at least with floating point division, you can get the fractional seconds otherwise.**"* - If the program runs faster than the clock resolution then of course `x - y` will be 0 as well. You can't measure increments smaller than the resolution, integer division or not... – Ed S. Mar 01 '13 at 04:55
  • Please note the use of the term "otherwise". That was a reference to my initial thought that it was integer division. I corrected this with my edit. Thank you for the down vote in any case. – paddy Mar 01 '13 at 04:58
  • Hehe, ahh well, so much for correcting one's answer. =) I'll wear it for not reading carefully enough initially. Fibonacci sequence has a similar growth curve to a power series. *ie* it's pretty much exponential. As such, it will overflow a 32-bit (or even 64-bit) integer in a very small number of iterations. You cannot measure performance unless you repeat the calculation thousands of times (and then you need to make sure that you derive some final output from those calculations to stop the compiler from optimizing them away). As others have said, the `printf` is the significant factor. – paddy Mar 01 '13 at 05:11
  • I downvoted. Nothing personal, it was just a wrong answer... that's what votes are for. – Ed S. Mar 01 '13 at 05:43