I am currently testing in Java whether using a custom class or an int array as the data type is faster, and I ran the speed test below for that purpose.
int repetitions = 1000000000;

// warm-up block (not timed; results unused)
// note: 10L avoids the int overflow that 'repetitions * 10' would cause
for (long i = 0; i < 10L * repetitions; i++) {
    Move move3 = new Move(2, 3);
    int sum3 = move3.x + move3.y;
}

// object block
long start = System.nanoTime();
for (int i = 0; i < repetitions; i++) {
    Move move2 = new Move(2, 3);
    int sum2 = move2.x + move2.y;
}
System.out.println("Object took " + (System.nanoTime() - start) + "ns");

// array block
long start2 = System.nanoTime();
for (int i = 0; i < repetitions; i++) {
    int[] move = new int[]{2, 3};
    int sum = move[0] + move[1];
}
System.out.println("Array took " + (System.nanoTime() - start2) + "ns");
Whichever timed block comes first essentially decides which version looks faster: swapping the two timing blocks swaps the result. Originally the difference was about a factor of 4, but running the meaningless warm-up block beforehand reduced it to a factor of about 1.1. Why is this? Is there a "warm-up period" for the JVM? And how can I get rid of the issue completely?
(I know that the fact that swapping changes the advantage already tells me everything performance-related I need, but I am curious about what is actually happening here.)
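
In case it helps frame the question: my understanding is that a benchmark harness such as JMH takes care of warm-up iterations and dead-code elimination for me. Below is a minimal sketch of how I imagine the same comparison would look; the nested Move class is just a stand-in for my own class, and the only assumption is that the JMH dependencies (jmh-core and the annotation processor) are on the classpath.

import org.openjdk.jmh.annotations.Benchmark;

public class MoveBenchmark {

    // stand-in for the custom class used above (public x/y fields, two-arg constructor)
    static final class Move {
        final int x;
        final int y;
        Move(int x, int y) { this.x = x; this.y = y; }
    }

    @Benchmark
    public int objectSum() {
        // returning the result hands it to JMH, so the JIT
        // cannot simply eliminate the allocation and the addition
        Move move = new Move(2, 3);
        return move.x + move.y;
    }

    @Benchmark
    public int arraySum() {
        int[] move = {2, 3};
        return move[0] + move[1];
    }
}

Returning the computed sum from each @Benchmark method lets JMH consume the value, which (as far as I understand) prevents the JIT from optimizing the loop body away, and JMH runs its own warm-up iterations before measuring.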