
As I wrote in my previous topic, Benchmarking code - am I doing it right?, I need a way to compute benchmark statistics, like the average (mean), standard deviation, etc. How can I do this using the methods I posted there? Note that I benchmark code over a time interval, not by calling a function many times. Any ideas?

I came up with just one approach; I don't know if it's correct (pseudocode):

buffsize = 1024;
buffer[buffsize];
totalcycles = 0;

// arrays
walltimeresults = []
cputimeresults = []

// benchmarking
for i in range(0, iterations):
   start = walltime();
   fun2measure(args, buffer);
   end = walltime();
   walltimeresults[i] = end - start;

   start = cputime();
   fun2measure(args, buffer);
   end = cputime();
   cputimeresults[i] = end - start;

   c1 = cyclecount();
   fun2measure(args, buffer);
   c2 = cyclecount();

   cyclesperbyte = (c2 - c1) / buffsize;
   totalcycles += cyclesperbyte;

sum = 0;
for i in range(0, iterations): sum += walltimeresults[i];
avg_wall_time = sum / iterations;

sum = 0;

for i in range(0, iterations): sum += cputimeresults[i];
avg_cpu_time = sum / iterations;

avg_cycles = totalcycles / iterations;

Is this correct? And how do I compute the mean, standard deviation, etc.?

nullpointer

1 Answer


Your average looks OK.

Mean (i.e. average) is

mean = 1/N * sum( x[i] )

Standard deviation is square root of variance:

sigma = sqrt( 1/N * sum( (x[i]-mean)^2 ) )
Mike Dunlavey
  • thanks! Any other advice about the code? What can I improve or change here? – nullpointer Jul 30 '13 at 20:34
  • 1
    @nullpointer: I would pop up a level and ask what is your overall purpose? If it's got anything to do with finding the fastest algorithm, I would be less concerned with measurement. I would be more concerned with getting penetrating insight into how to make the programs faster. If that's the goal, you [*might find this helpful*](http://stackoverflow.com/a/1779343/23771). You'd be amazed how much fat can be trimmed from supposedly optimal programs. – Mike Dunlavey Jul 30 '13 at 20:51
  • thank you once again. I will read this topic, but my purpose isn't to find the fastest algorithm, it's to measure its performance, that's all. I know that a single test (one function call) is meaningless in benchmarks. I'm just wondering if I'm doing this right, and whether my measurements give 'usable' results. – nullpointer Jul 30 '13 at 21:15
  • Also, I would like to ask: what should I do when a function call is so 'short' that a single call shows 0.00000 seconds in each iteration? Can I do this: http://pastie.org/private/r2oxfjfwohupg1ypcjxgnw, is it correct? – nullpointer Jul 31 '13 at 17:24
  • 1
    @nullpointer: I like stack-sampling, but even `gprof` should tell you the inclusive time. Essentially, if an overall program takes, say, 100 seconds, and in that time function `foo` appears on, say, 10% of stack samples, then the total inclusive time it takes is 10% of 100, or 10 seconds. If you divide that by the number of times it is called, that gives you the time per call. If it runs for too short a time to get samples, just wrap a loop of like 1000 iterations around it. Various profilers can take stack samples automatically, like oprofile and [*Zoom*](http://www.rotateright.com/). – Mike Dunlavey Jul 31 '13 at 17:54