A simple C program run on Windows 10 (Visual Studio 2013):
#include <stdio.h>
#include <time.h>
#include <tchar.h>

void hello() {
    printf("hello world");
}

int _tmain(int argc, _TCHAR* argv[])
{
    clock_t t;
    for (int i = 0; i < 50; i++) {
        t = clock();
        hello();
        t = clock() - t;
        double time_taken = ((double)t) / CLOCKS_PER_SEC; // in seconds
        printf("hello() took %f ms to execute \n", time_taken * 1000);
    }
    getchar();
    return 0;
}
Output:
hello worldhello() took 0.000000 ms to execute   (35 times)
hello worldhello() took 17.000000 ms to execute
hello worldhello() took 3.000000 ms to execute
hello worldhello() took 2.000000 ms to execute
hello worldhello() took 0.000000 ms to execute   (5 times)
hello worldhello() took 15.000000 ms to execute
hello worldhello() took 0.000000 ms to execute   (4 times)
hello worldhello() took 16.000000 ms to execute
hello worldhello() took 0.000000 ms to execute
Some lines show 0.000000 ms and others show 15.000000-17.000000 ms.
The exact values differ from run to run, but every run (second, third, ...) contains some lines with 0.000000 ms and some with 15.000000-17.000000 ms.
Why does the measured time jump between 0 ms and about 16 ms (is this related to process CPU time)? Would you please explain the actual reason?
If I want to avoid this kind of variation and get uniform output such as 0-1 ms, how can I change my code? (The loop runs 50 times here, but if I run it 100 or 1000 times the effect is even easier to see.)
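For reference, here is a minimal sketch of what I am considering trying instead, based on my (possibly wrong) understanding that the Windows QueryPerformanceCounter API has a much finer resolution than clock(); the structure mirrors my program above:

#include <stdio.h>
#include <windows.h>

void hello() {
    printf("hello world");
}

int main(void)
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);   // counter ticks per second

    for (int i = 0; i < 50; i++) {
        QueryPerformanceCounter(&start);
        hello();
        QueryPerformanceCounter(&end);

        // convert elapsed ticks to milliseconds
        double time_taken_ms =
            (double)(end.QuadPart - start.QuadPart) * 1000.0 / (double)freq.QuadPart;
        printf("hello() took %f ms to execute\n", time_taken_ms);
    }

    getchar();
    return 0;
}

If the coarse tick of clock() really is the cause, I would expect this version to report small but non-zero times on every iteration. Is this the right approach, or is there a better way?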