When I run this C code in Linux, it never prints the elapsed time; the result is always 0. The code is as follows:

#include <sys/time.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
void main(int argc,char* argv[]){
  int n;
  if(argc == 2){
    n = atoi(argv[1]);
  }
  struct timeval start, end;
  gettimeofday(&start, 0);
  int r = fib(n);
  gettimeofday(&end, 0);
  long mtime, s,us;
  s = end.tv_sec  - start.tv_sec;
  us = end.tv_usec - start.tv_usec;
  printf("s=%f,us=%f  \n", s, us);
  mtime = (s*1000 + us/1000.0)+0.5;
  printf("Fib result for %d is: %d;elapsing %f \n", n, r, mtime); 

}

int fib(int n){
  if(n == 0) return 0;
  if(n == 1) return 1;
  return fib(n-1)+fib(n-2);
}
Eric_Chen

5 Answers

Don't overlook your compiler warnings; you're trying to print three long variables (mtime, s, and us) as if they were doubles:

fib.c: In function ‘main’:
fib.c:17:3: warning: format ‘%f’ expects type ‘double’, but argument 2 has type ‘long int’
fib.c:17:3: warning: format ‘%f’ expects type ‘double’, but argument 3 has type ‘long int’
fib.c:19:3: warning: format ‘%f’ expects type ‘double’, but argument 4 has type ‘long int’

Change the formats for s and us to %ld, and make mtime a double (it is computed with a fractional part and printed with %f), and the program compiles (and runs) without fault.
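
For example, the timing part could look roughly like this (only the relevant lines; a sketch assuming the rest of the question's main is unchanged):

struct timeval start, end;
gettimeofday(&start, 0);
int r = fib(n);
gettimeofday(&end, 0);

long s  = end.tv_sec  - start.tv_sec;    /* seconds difference, a long */
long us = end.tv_usec - start.tv_usec;   /* microseconds difference, a long */
printf("s=%ld, us=%ld\n", s, us);        /* %ld matches long */

double mtime = s * 1000.0 + us / 1000.0; /* fractional milliseconds, so a double */
printf("Fib result for %d is: %d; elapsed %f ms\n", n, r, mtime);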

sarnold

All the suggestions do in fact work, but the granularity of the time measurement is coarse (typically 10 to 100 milliseconds), so it only measures something meaningful for a computation that lasts, e.g., half a second. On current processors (running at 2 to 3 GHz, executing about 3-5 instructions per cycle), that means something like a billion machine instructions (an "elementary step" in our C program - with an admittedly ill-defined notion of step - is usually a dozen machine instructions). So your test is too small; you really should compute Fibonacci(10) a million times.

To be more specific, the program below (where some of the results are printed, so that the computations cannot all be optimized away) runs in about 2 seconds (a million computations of the Fibonacci of numbers less than 16).

#include <stdio.h>
#include <unistd.h>
#include <time.h>

long fib(int n){
  if(n == 0) return 0;
  if(n == 1) return 1;
  return fib(n-1)+fib(n-2);
}

int main ()
{
  int i = 0;
  int p = (int) getpid();    /* run-dependent modulus: the compiler cannot predict which iterations print */
  clock_t cstart = clock();  /* CPU time at the start */
  clock_t cend = 0;
  for (i = 0; i < 1000000; i++) {
    long f = fib(i % 16);
    if (i % p == 0) printf("i=%d, f=%ld\n", i, f);  /* print a few results so the work is not optimized away */
  }
  cend = clock();            /* CPU time at the end */
  /* clock() ticks in CLOCKS_PER_SEC units (1,000,000 on POSIX systems) */
  printf("%.3f cpu sec\n", ((double)cend - (double)cstart) / CLOCKS_PER_SEC);
  return 0;
}

The last few lines of output from time ./fib (compiled with gcc -O2 -Wall fib.c -o fib) are:

i=936079, f=610
i=948902, f=8
i=961725, f=233
i=974548, f=3
i=987371, f=89
2.140 cpu sec
./fib  2.15s user 0.00s system 99% cpu 2.152 total

Benchmarking a run shorter than about a second is not very meaningful (and you can use the time command to measure such a run).

See also time(7) and clock_gettime(2).

Basile Starynkevitch

It might be easier to use the clock function:

clock_t start = clock();   /* clock() measures CPU time, in CLOCKS_PER_SEC ticks */
int r = fib(n);
clock_t end = clock();
printf("Elapsed time: %.2f seconds\n", (double)(end - start) / CLOCKS_PER_SEC);
Some programmer dude

The resolution of the real time clock is probably not very small (perhaps 10 or 25 milliseconds), and your computation is too short to be significant. You could put your computation inside a loop (e.g. repeating it several thousand times).
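
For instance, a minimal sketch of that loop idea (the fib below, the iteration count, and the fib(15) argument are just placeholders; dividing the total by the number of repetitions gives a per-call estimate):

#include <stdio.h>
#include <sys/time.h>

/* placeholder: the same doubly-recursive fib as in the question */
long fib(int n) {
  if (n < 2) return n;
  return fib(n-1) + fib(n-2);
}

int main()
{
  int i;
  const int iterations = 100000;  /* repeat enough times to exceed the clock granularity */
  volatile long sink = 0;         /* volatile, so the work cannot be optimized away */
  struct timeval start, end;
  double total_us;

  gettimeofday(&start, 0);
  for (i = 0; i < iterations; i++)
    sink += fib(15);
  gettimeofday(&end, 0);

  total_us = (end.tv_sec - start.tv_sec) * 1e6
           + (end.tv_usec - start.tv_usec);
  printf("total %.0f us, about %.3f us per call (sink=%ld)\n",
         total_us, total_us / iterations, sink);
  return 0;
}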

You could also consider measuring the CPU time, using the clock function.

You could also use the clock_gettime function to get perhaps better results.
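
A minimal sketch with clock_gettime could look like this (assuming a POSIX system; on older glibc you may need to link with -lrt):

#include <stdio.h>
#include <time.h>

int main()
{
  struct timespec start, end;
  double elapsed;

  clock_gettime(CLOCK_MONOTONIC, &start);  /* monotonic clock, nanosecond resolution */
  /* ... the computation being measured goes here ... */
  clock_gettime(CLOCK_MONOTONIC, &end);

  elapsed = (end.tv_sec - start.tv_sec)
          + (end.tv_nsec - start.tv_nsec) / 1e9;
  printf("elapsed %.9f s\n", elapsed);
  return 0;
}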

And as other people told you, please ask for all warnings with gcc -Wall and take them into account. If you care about performance (but remember that premature optimization is evil, so get your program right first!), consider enabling optimizations (e.g. gcc -Wall -O2) during compilation.

Basile Starynkevitch

This should give you elapsed time:

#include <iostream>
#include <sys/time.h> /* gettimeofday */
#include <unistd.h>   /* usleep */

int main() {
    /* get begin time */
    timeval begin;
    ::gettimeofday(&begin, 0);
    /* do something... */
    ::usleep(153);
    /* get end time */
    ::timeval current;
    ::gettimeofday(&current, (struct timezone*) 0);
    /* calculate difference */
    double elapsed = (current.tv_sec - begin.tv_sec) + ((current.tv_usec
            - begin.tv_usec) / 1000000.0F);
    /* print it */
    std::cout << elapsed << std::endl;
    return 0;
}
Alessandro Pezzato
I tried the method in C, and it really worked, but when I used it like the code I posted, it turned out not to work – Eric_Chen Nov 23 '11 at 13:24