
I was using `getrusage` to print out how much user and system time it took to process a command, with something along the lines of:

           //DO STUFF HERE
           printf(" TOTAL TIMES: ");
           tusage.ru_utime.tv_sec  = rusage.ru_utime.tv_sec + rusage.ru_stime.tv_sec;
NoNameY0
  • What's the problem with using `getrusage` in linux? – Art Mar 14 '13 at 08:02
  • use `gettimeofday`, see [this](http://stackoverflow.com/questions/2150291/how-do-i-measure-a-time-interval-in-c) – Fredrik Pihl Mar 14 '13 at 08:07
  • @Art the same code prints 0.00 on Linux, but prints an actual value on Mac OS. Why is that? – NoNameY0 Mar 14 '13 at 08:07
  • @RichardMckenna, because linux is fast ;D – perreal Mar 14 '13 at 08:08
  • @RichardMckenna because your program didn't take any measurable time to run? Your exact example works for me (even if I question why you'd use a struct rusage for storage for the sum instead of just a struct timeval). – Art Mar 14 '13 at 08:09
  • @FredrikPihl I need the amount of time it took the system and the user to process it, and that's generally given by rusage – NoNameY0 Mar 14 '13 at 08:09
  • add a big loop in your code and see if you can get a non-zero value – perreal Mar 14 '13 at 08:10
  • @perreal I just did and still got 0 – NoNameY0 Mar 14 '13 at 08:11
  • @RichardMckenna Then the loop wasn't long enough or the compiler optimized it away. – Art Mar 14 '13 at 08:11
  • @Art I just ran a disk usage `du -sh /usr` and it still printed 0 – NoNameY0 Mar 14 '13 at 08:12
  • @RichardMckenna Ran it how? – Art Mar 14 '13 at 08:13
  • If the same code works on a mac it should also work on linux right? – NoNameY0 Mar 14 '13 at 08:13
  • @Art look at my updated code; that is what works on Mac but prints 0 on Linux/Ubuntu – NoNameY0 Mar 14 '13 at 08:14
  • @RichardMckenna, try the one below, does it print 0? – perreal Mar 14 '13 at 08:15
  • @perreal I updated my code please look at it. – NoNameY0 Mar 14 '13 at 08:18
  • If you ran something by either `fork`/`exec` yourself or with `system`, it will not take any time in your process. That's what `SELF` in `RUSAGE_SELF` means. If you want to measure how long your children processes took, use `RUSAGE_CHILDREN`. At least that's what I decode from your "du -sh /usr" comment (see the sketch after these comments). Your updated code just removed the call to `getrusage`. – Art Mar 14 '13 at 08:18
  • where would I put the RUSAGE_CHILDREN? in the beginning? – NoNameY0 Mar 14 '13 at 08:25
  • replace the `RUSAGE_SELF` with `RUSAGE_CHILDREN` – perreal Mar 14 '13 at 08:27
  • @Art you are right, I am execing and forking in my large program, but on a Mac I get the behavior I want using NetBeans. Why can't I get the same fricking output on Linux when compiled with the same gcc? I don't even feel like changing my logic because IT WORKS EXACTLY AS EXPECTED on a Mac using NetBeans – NoNameY0 Mar 14 '13 at 08:28
  • It doesn't. MacOS has the exact same behavior of the `getrusage` call. It's POSIX and it has been a standard in Unix since the middle of the 80s. Possibly MacOS has more precise timers. Possibly Linux has more optimized fork/exec so that their clock quantum can't measure it. But `getrusage` does the same thing on both systems. – Art Mar 14 '13 at 08:34
  • @Art If my program DOES have some execs, but ideally I want to calculate the TIME it took to run a command, what would I be measuring, child or self? I feel like self would include child, and if I do just child then I am not including the work the self did to get to the child, etc., etc. – NoNameY0 Mar 14 '13 at 08:36
  • Read the man page for `getrusage` it explains exactly what it measures. If you want the total sum of resource usage for the children of your process, `getrusage` is enough. If you want the resource usage for specific children of your process, read the man page for `wait4`. If you want the time to run a command, instead of rolling your own you can just use "time " in the shell. – Art Mar 14 '13 at 08:40
  • ok so my rusage approach is good. If a command takes 0.0030 seconds on a Mac on average, is it okay for it to be 0.000 on Linux? – NoNameY0 Mar 14 '13 at 08:41
  • Please take any discussion to [chat] and edit any useful information into the question. – ChrisF Mar 14 '13 at 10:16
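
A minimal sketch of what Art and perreal suggest above (not the asker's actual program; the child command `du -sh /usr` is just borrowed from the comments, and `wait4` is not in POSIX but exists on both Linux and macOS): fork and exec the command, then read that particular child's usage with `wait4` and the running total for all waited-for children with `getrusage(RUSAGE_CHILDREN)`.

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(void) {
  pid_t pid = fork();
  if (pid < 0) { perror("fork"); return 1; }

  if (pid == 0) {
    /* Child: run the command to be timed (example command from the comments). */
    execlp("du", "du", "-sh", "/usr", (char *)NULL);
    perror("execlp");
    _exit(127);
  }

  /* Parent: wait4 reports the resource usage of this specific child. */
  int status;
  struct rusage child;
  wait4(pid, &status, 0, &child);
  printf("this child:   %ld.%06ld user, %ld.%06ld sys\n",
         (long)child.ru_utime.tv_sec, (long)child.ru_utime.tv_usec,
         (long)child.ru_stime.tv_sec, (long)child.ru_stime.tv_usec);

  /* RUSAGE_CHILDREN is the aggregate for all children that have been waited for;
     RUSAGE_SELF would only cover the parent's own work. */
  struct rusage all;
  getrusage(RUSAGE_CHILDREN, &all);
  printf("all children: %ld.%06ld user, %ld.%06ld sys\n",
         (long)all.ru_utime.tv_sec, (long)all.ru_utime.tv_usec,
         (long)all.ru_stime.tv_sec, (long)all.ru_stime.tv_usec);
  return 0;
}

Until a child has been waited for, its time is not charged to `RUSAGE_CHILDREN` at all, and `RUSAGE_SELF` never includes it, which is why the original numbers stayed at 0.00.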

1 Answer


Try this:

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
  struct rusage rusage;
  struct rusage tusage;
  int i, j, r = 0;

  /* Busy loop so the process accumulates a measurable amount of CPU time. */
  for (i = 0; i < 10000; i++) {
    for (j = 1; j < 100000; j++) {
      r = i % j + i / j;
    }
  }

  getrusage(RUSAGE_SELF, &rusage);
  printf("TOTAL TIME \n");

  /* Sum user + system time, then carry any microsecond overflow into seconds. */
  tusage.ru_utime.tv_sec  = rusage.ru_utime.tv_sec + rusage.ru_stime.tv_sec;
  tusage.ru_utime.tv_usec = rusage.ru_utime.tv_usec + rusage.ru_stime.tv_usec;
  tusage.ru_utime.tv_sec += tusage.ru_utime.tv_usec / 1000000;
  tusage.ru_utime.tv_usec = tusage.ru_utime.tv_usec % 1000000;
  printf("%ld.%06ld\n", (long)tusage.ru_utime.tv_sec, (long)tusage.ru_utime.tv_usec);

  /* Return r so the compiler cannot discard the loop as dead code. */
  return r;
}
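
If this still prints 0.000000, the busy loop was probably too short or was optimized away (see Art's comment above); compiling without optimization, e.g. `gcc -O0 busy.c` (the file name here is just an example), and enlarging the loop bounds should give a non-zero value.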
perreal
  • no it did not. But why does something take a measurable time on a Mac, yet all my realistic commands show up as 0.00? – NoNameY0 Mar 14 '13 at 08:17
  • I don't really know, just a call to getrusage returns non-zero time for me (`0.001000`). – perreal Mar 14 '13 at 08:23