I'm trying to measure the execution time of a loop that adds two matrices. Here's my code:
// m and n are read from the user before this point.
long start, end, time;
int[][] a = new int[m][n];
int[][] b = new int[m][n];
int[][] c = new int[m][n];

start = getUserTime();
for (int i = 0; i < m; i++) {
    for (int j = 0; j < n; j++) {
        c[i][j] = a[i][j] + b[i][j];
    }
}
end = getUserTime();
time = end - start; // elapsed user CPU time, in nanoseconds
/** Get user CPU time of the current thread in nanoseconds, or 0L if unsupported. */
// requires: import java.lang.management.ManagementFactory;
//           import java.lang.management.ThreadMXBean;
public long getUserTime() {
    ThreadMXBean bean = ManagementFactory.getThreadMXBean();
    return bean.isCurrentThreadCpuTimeSupported()
            ? bean.getCurrentThreadUserTime()
            : 0L;
}
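To see how fine-grained getCurrentThreadUserTime actually is on my machine, I also ran this little probe (my own test harness, not part of the program above): it spins until the reported user time changes and prints the step size.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ClockProbe {
    public static void main(String[] args) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        if (!bean.isCurrentThreadCpuTimeSupported()) {
            System.out.println("thread CPU time not supported");
            return;
        }
        long first = bean.getCurrentThreadUserTime();
        long next;
        // busy-wait until the reported user time actually advances;
        // the spinning itself accrues user time, so this terminates
        do {
            next = bean.getCurrentThreadUserTime();
        } while (next == first);
        System.out.println("smallest observed step: " + (next - first) + " ns");
    }
}
```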
The problem is that it sometimes returns 0. For example, with m = n = 1000 (so two 1000x1000 matrices being added), the result alternates between 0 and 15 ms. I don't know whether to believe the 15 ms or the 0, and the difference between them is large. I know the resolution is OS dependent and not truly nanosecond-accurate, but 15 milliseconds is far too much to be just a resolution problem.
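One workaround I'm considering is repeating the whole addition many times inside a single timed region and dividing, so the total is well above whatever the clock's granularity is. A rough sketch of that idea (reps is an arbitrary count I picked; the helper is the same getUserTime as above, just made static so the snippet is self-contained):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class RepeatSketch {
    /** Same helper as above, static so this snippet compiles on its own. */
    static long getUserTime() {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        return bean.isCurrentThreadCpuTimeSupported()
                ? bean.getCurrentThreadUserTime()
                : 0L;
    }

    public static void main(String[] args) {
        int m = 1000, n = 1000;
        int reps = 100; // arbitrary: enough repeats to swamp the clock granularity
        int[][] a = new int[m][n];
        int[][] b = new int[m][n];
        int[][] c = new int[m][n];

        long start = getUserTime();
        for (int r = 0; r < reps; r++) {
            for (int i = 0; i < m; i++) {
                for (int j = 0; j < n; j++) {
                    c[i][j] = a[i][j] + b[i][j];
                }
            }
        }
        long total = getUserTime() - start;
        System.out.println("average per pass: " + (total / reps) + " ns");
    }
}
```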
EDIT: the whole goal of this code is to measure CPU performance of the loop itself, so if possible I want the effects of compiler optimization, OS context switching, etc. to be minimal.
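By keeping the compiler's effect minimal I mean something like the following sketch (the WARMUP count and the checksum are my own additions): run the loop untimed a few times first so the JIT has presumably already compiled it, and consume the result so the addition can't be eliminated as dead code.

```java
public class WarmupSketch {
    /** Adds a and b into c; returns a checksum so the work cannot be dead code. */
    static long addAll(int[][] a, int[][] b, int[][] c) {
        long checksum = 0;
        for (int i = 0; i < a.length; i++) {
            for (int j = 0; j < a[i].length; j++) {
                c[i][j] = a[i][j] + b[i][j];
                checksum += c[i][j];
            }
        }
        return checksum;
    }

    public static void main(String[] args) {
        int m = 1000, n = 1000;
        int[][] a = new int[m][n];
        int[][] b = new int[m][n];
        int[][] c = new int[m][n];
        // non-trivial data so the addition isn't constant-foldable
        for (int i = 0; i < m; i++) {
            for (int j = 0; j < n; j++) {
                a[i][j] = i + j;
                b[i][j] = i - j;
            }
        }

        final int WARMUP = 10; // arbitrary: untimed runs so the JIT compiles the loop first
        for (int r = 0; r < WARMUP; r++) {
            addAll(a, b, c);
        }

        long start = System.nanoTime();
        long checksum = addAll(a, b, c);
        long elapsed = System.nanoTime() - start;
        System.out.println(elapsed + " ns (checksum " + checksum + ")");
    }
}
```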
Many thanks.