time_start();

int op = 0;
for(int i = 1; i <= n; i++)
    op += arr[i]*pow(x, i);

time_stop();

This is the part of the code I want to measure. The time_start() and time_stop() functions just save the output of clock() somewhere, and execution_time() then returns the difference, i.e. the execution time.
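For reference, a minimal sketch of what those helpers presumably look like (the actual implementation is not shown here, so the details are assumed):

#include <time.h>

static clock_t start_tick, stop_tick;

void time_start(void) { start_tick = clock(); }
void time_stop(void)  { stop_tick  = clock(); }

/* elapsed CPU time in milliseconds */
double execution_time(void)
{
    return 1000.0 * (double)(stop_tick - start_tick) / CLOCKS_PER_SEC;
}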

The problem is that for n < 50000 the output is just 0 ms or 1 ms. Is that output correct? Can the measurement be made more precise?

user2251921
  • On which hardware architecture and OS are you performing the measurement? On most hardware and operating systems the clock will have a resolution of about 10 ms. You will need to use "performance timers" to get below that, and this will depend on the hardware and OS. – woodleg.as Apr 06 '13 at 14:58
  • See this post for Windows: http://stackoverflow.com/questions/15720542/measure-execution-time-in-c-on-windows?rq=1 – woodleg.as Apr 06 '13 at 14:59
  • Please see this thread http://stackoverflow.com/questions/3557221/how-do-i-measure-time-in-c#comment3728541_3557274 – Shimon Tolts Apr 06 '13 at 15:00
  • There is a [`gettimeofday`](http://linux.die.net/man/2/gettimeofday) function in Linux/UNIX with microsecond resolution – osgx Apr 06 '13 at 15:00
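A minimal sketch of the gettimeofday approach mentioned in the last comment (Linux/UNIX only; error handling omitted, and the code being timed is just a placeholder):

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval t0, t1;

    gettimeofday(&t0, NULL);
    /* ... code to measure ... */
    gettimeofday(&t1, NULL);

    double elapsed_us = (t1.tv_sec - t0.tv_sec) * 1e6
                      + (t1.tv_usec - t0.tv_usec);
    printf("elapsed: %.0f us\n", elapsed_us);
    return 0;
}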

1 Answer


You need enough "scale" between the runtime of the code under test and the resolution of your timer to get accurate, measurable results. The simplest solution is to iterate hundreds (or thousands) of times over the small piece of code being tested and divide the total elapsed time by the number of iterations, as in the sketch below.
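A minimal sketch of that idea, reusing the arr, x and n from the question (the REPS count and the measure() wrapper are assumptions for illustration):

#include <math.h>
#include <time.h>

#define REPS 10000   /* arbitrary repetition count; pick enough for a measurable total */

/* Average time of one pass over the loop, in milliseconds.
   arr is assumed to hold at least n+1 elements, as in the question. */
double measure(const int *arr, int n, int x)
{
    volatile double op = 0;   /* volatile discourages the compiler from
                                 optimizing the whole loop away */
    clock_t start = clock();

    for (int r = 0; r < REPS; r++)
        for (int i = 1; i <= n; i++)
            op += arr[i] * pow(x, i);

    clock_t stop = clock();

    return 1000.0 * (double)(stop - start) / CLOCKS_PER_SEC / REPS;
}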

Any such approach has possible cache effects, though, so be sure that you're measuring what you actually think you are.

Randy Howard