I have the following pieces of code:
long start = System.currentTimeMillis();
for (int i = 0; i < keys.length; ++i) {
    obj.getElement(keys[i]);
}
long total = System.currentTimeMillis() - start;
System.out.println(total / 1000d + " seconds");
And the following:
long start = System.currentTimeMillis();
for (int i = 0; i < keys.length; ++i) {
    obj.hasElement(keys[i]);
}
long total = System.currentTimeMillis() - start;
System.out.println(total / 1000d + " seconds");
The implementations of these methods are:
public T getElement(int key) {
    int idx = findIndexOfElement(key);
    return idx >= 0 ? ITEMS[idx] : null;
}

public boolean hasElement(int key) {
    return findIndexOfElement(key) >= 0;
}
Pretty straightforward. The only difference between the two methods is the conditional access to the ITEMS array.
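To isolate that array read, one idea would be a third variant that keeps the same branch but replaces the indexed load with a fixed field read (FIRST is a hypothetical field of type T, not part of my real code):

// Hypothetical diagnostic: identical branch to getElement, but the
// data-dependent load ITEMS[idx] is replaced by a fixed field read,
// so any remaining gap versus hasElement cannot come from the array access.
public T getElementNoIndexedRead(int key) {
    int idx = findIndexOfElement(key);
    return idx >= 0 ? FIRST : null;
}

If this variant times like hasElement, the cost would be in the memory access rather than in the branch itself.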
Problem: when I actually measure the performance of these snippets, getElement takes roughly three times as long as hasElement: over a series of tests I get ~2.5 seconds for the first loop (getElement) and ~0.8 seconds for the second loop (hasElement).
How is it possible to have such a big difference? I understand that the conditional statement compiles to a branch and a jump, but the gap still seems too big to me. Is there a way to improve this?
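One possibility I can think of (an assumption on my part, not something I have verified): since both loops discard the return values, the JIT may be eliminating the two calls to different degrees. A minimal sketch of the loops with the results consumed through sink variables, so neither call can be treated as dead code:

// Sink variables keep the results observably used, so the JIT
// cannot discard the calls as dead code.
Object lastSeen = null;
int hits = 0;

long start = System.currentTimeMillis();
for (int i = 0; i < keys.length; ++i) {
    lastSeen = obj.getElement(keys[i]);  // result consumed, not discarded
}
long total = System.currentTimeMillis() - start;
System.out.println(total / 1000d + " seconds, lastSeen=" + lastSeen);

start = System.currentTimeMillis();
for (int i = 0; i < keys.length; ++i) {
    if (obj.hasElement(keys[i])) {       // result consumed, not discarded
        ++hits;
    }
}
total = System.currentTimeMillis() - start;
System.out.println(total / 1000d + " seconds, hits=" + hits);

If the gap shrinks with the sinks in place, the original numbers were partly measuring dead-code elimination rather than the lookups themselves.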
Update:
The way I measure is:
long min = Long.MAX_VALUE;
long max = Long.MIN_VALUE;
long run = 0;
for (int r = 0; r < 10; ++r) {  // renamed from i: it shadowed the inner loop variable
    long start = System.currentTimeMillis();
    for (int i = 0; i < keys.length; ++i) {
        obj.getElement(keys[i]);
    }
    long total = System.currentTimeMillis() - start;
    System.out.println(total / 1000d + " seconds");
    if (total < min) {
        min = total;
    }
    if (total > max) {
        max = total;
    }
    run += total;
    // give the JVM a quiet period between runs
    for (int i = 0; i < 50; ++i) {
        System.gc();
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
System.out.println("min=" + min + " max=" + max);
System.out.println("avg = " + (double) run / 1000 / keys.length);