Here is how I tested. The first few laps are slower while the JVM warms up, but after that the times converge.
public class RandomAccessBench {
    public static void main(String[] args) {
        final int N = 10_000_000;  // array length
        final int M = 10_000_000;  // random accesses per lap

        // create the array and fill it with random values
        int[] arr = new int[N];
        for (int i = 0; i < N; i++) {
            arr[i] = (int) (Math.random() * 10000);
        }

        // time 10 laps of M random accesses each
        long sum = 0;  // long, so 10 * M additions cannot overflow
        long lapStart = System.nanoTime();
        for (int i = 0; i < 10; i++) {
            for (int j = 0; j < M; j++) {
                // access a random element
                sum += arr[(int) (Math.random() * N)];
            }
            long now = System.nanoTime();
            System.out.println("lap: " + (now - lapStart) / 1_000_000 + " ms");
            lapStart = now;
        }
        System.out.println(sum); // keep sum live so the JIT can't drop the loop
    }
}
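As a cross-check, the same access pattern could be measured under a harness like JMH, which handles warm-up and dead-code elimination for you. This is only a rough sketch assuming JMH is on the classpath; the class and method names are mine, and JMH reports time per access rather than per lap (the 1-billion case would also need a heap larger than 4 GB):

import java.util.concurrent.ThreadLocalRandom;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class RandomAccessBenchmark {
    // array sizes to compare (hypothetical parameter values)
    @Param({"100000000", "300000000", "1000000000"})
    int n;

    int[] arr;

    @Setup
    public void setup() {
        arr = new int[n];
        for (int i = 0; i < n; i++) {
            arr[i] = ThreadLocalRandom.current().nextInt(10000);
        }
    }

    @Benchmark
    public int randomAccess() {
        // one random access per invocation; returning the value stops JMH from eliminating it
        return arr[ThreadLocalRandom.current().nextInt(n)];
    }
}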
With an array of 1 billion elements, each lap takes about 150 ms. With 300 million elements it drops to about 120 ms, and with 100 million elements to about 110 ms.
With arrays this large, I don't think the cache could make any meaningful difference. For example, 100M ints is 400 MB (100,000,000 × 4 bytes per int), while the L3 cache on my computer is only 9 MB.
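The footprint arithmetic in code form, for reference (a Java int is 4 bytes, i.e. Integer.BYTES):

// array footprint = element count * 4 bytes per int
System.out.println(  100_000_000L * Integer.BYTES); // 400000000  -> ~400 MB
System.out.println(1_000_000_000L * Integer.BYTES); // 4000000000 -> ~4 GB, both far beyond a 9 MB L3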
What is causing the speed difference? I thought the size of an array had no effect on the cost of a single element access.