I'm using JUnit to run a primitive benchmark like this:
@Test
public void testTime() throws Exception {
    LoadoutBase<?> loadout = new LoadoutStandard("AS7-D-DC");
    final int iterations = 1000000;

    long start = System.nanoTime();
    int sum = 0;
    for (int i = 0; i < iterations; ++i) {
        Iterator<Item> it = loadout.iterator();
        while (it.hasNext()) {
            sum += it.next().getNumCriticalSlots();
        }
    }
    long end = System.nanoTime();
    long time_it = end - start;

    start = System.nanoTime();
    sum = 0;
    for (int i = 0; i < iterations; ++i) {
        for (Item item : loadout.getAllItems()) {
            sum += item.getNumCriticalSlots();
        }
    }
    end = System.nanoTime();
    long time_arrays = end - start;

    System.out.println("it: " + time_it + " array: " + time_arrays + " diff: " + (double) time_it / time_arrays);
}
If I set iterations = 1000000, then I get
it: 792771504 array: 1109215387 diff: 0.7147137637029551
very consistently, but if I set iterations = 10000, then I get
it: 32365742 array: 28902811 diff: 1.1198129482976587
with very wild fluctuations; the diff value is anywhere from 0.7 to 1.2.
Could anyone shed some light on what might be happening here? Which method should I choose?
Edit:
What I'm really benchmarking is the behind-the-scenes work. getAllItems creates a new List<Item> and populates it by gathering all the items from 16 sublists using addAll. The Iterator approach doesn't construct this temporary list; instead it keeps track of which of the 16 sublists it is currently iterating over and has some logic to make the 16 sublists look like one continuous list.
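To make the comparison concrete, here is a minimal, self-contained sketch of the two access patterns (the class, field names, and use of Integer instead of Item are all hypothetical stand-ins, not the actual code):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

// Hypothetical sketch: getAllItems() flattens the sublists into a fresh
// list via addAll, while iterator() walks them in place without copying.
public class SublistAccess {
    private final List<List<Integer>> sublists = new ArrayList<>();

    public SublistAccess(int parts, int perPart) {
        for (int p = 0; p < parts; ++p) {
            List<Integer> part = new ArrayList<>();
            for (int i = 0; i < perPart; ++i) {
                part.add(p * perPart + i);
            }
            sublists.add(part);
        }
    }

    // getAllItems-style: allocate a temporary list and copy everything.
    public List<Integer> getAllItems() {
        List<Integer> all = new ArrayList<>();
        for (List<Integer> part : sublists) {
            all.addAll(part);
        }
        return all;
    }

    // Iterator-style: remember which sublist we are in and delegate to
    // its iterator, so the sublists appear as one continuous sequence.
    public Iterator<Integer> iterator() {
        return new Iterator<Integer>() {
            private int listIndex = 0;
            private Iterator<Integer> current =
                sublists.isEmpty() ? null : sublists.get(0).iterator();

            @Override
            public boolean hasNext() {
                // Skip past exhausted sublists.
                while (current != null && !current.hasNext()) {
                    ++listIndex;
                    current = listIndex < sublists.size()
                        ? sublists.get(listIndex).iterator() : null;
                }
                return current != null;
            }

            @Override
            public Integer next() {
                if (!hasNext()) throw new NoSuchElementException();
                return current.next();
            }
        };
    }

    public static void main(String[] args) {
        SublistAccess sa = new SublistAccess(16, 4);
        int sumCopy = 0;
        for (int v : sa.getAllItems()) sumCopy += v;
        int sumIter = 0;
        Iterator<Integer> it = sa.iterator();
        while (it.hasNext()) sumIter += it.next();
        System.out.println(sumCopy + " " + sumIter); // both sum 0..63 = 2016
    }
}
```

Both paths visit the same elements; the difference is purely the extra allocation and copying that the getAllItems-style flattening does per call.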