I ran a short benchmark on a long array in Java, with quite strange results: sequential reads combined with random writes seem to be faster - about half the time - than random reads combined with sequential writes. Does anyone have a clue why?
Here are two methods over an array of longs (run with -Xmx2G or so): one reads the array sequentially and writes to random positions, the other reads from random positions and writes sequentially:
import java.util.Random;

public class Scratch {
    static Random random = new Random();
    static long[] arr = new long[100000000];

    // read sequentially, write to a random index
    static void seqReadRandWrite() {
        for (int i = 0; i < arr.length; i++) {
            int at = random.nextInt(arr.length);
            arr[at] = arr[i];
        }
    }

    // read from a random index, write sequentially
    static void seqWriteRandRead() {
        for (int i = 0; i < arr.length; i++) {
            int at = random.nextInt(arr.length);
            arr[i] = arr[at];
        }
    }

    public static void main(String[] args) throws Exception {
        seqWriteRandRead(); // warm up

        long nanos = System.nanoTime();
        seqReadRandWrite();
        System.out.println("Time: " + (System.nanoTime() - nanos) + "ns");

        nanos = System.nanoTime();
        seqWriteRandRead();
        System.out.println("Time: " + (System.nanoTime() - nanos) + "ns");
    }
}
The results on my notebook are:
Time: 2774662168ns
Time: 6059499068ns
Which would mean that writing to random positions is about twice as fast as reading from random positions.. or? Is my notebook broken?
P.S.: This does not claim to be a proper benchmark, although most of the points in the linked advice about benchmarking are covered. Even when I run the 200,000,000 operations multiple times, the results stay quite constant. It seems (seems!) that moving memory from random positions into sequential blocks is slower than moving memory from sequential positions into random blocks, at least with memory of this size and the above way of doing it, and I wonder why.
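The repeated runs were done with something along these lines (a minimal sketch, not the exact harness; the iteration count of 5 is arbitrary, and it assumes the Scratch class above is compiled in the same package):

public class ScratchRepeat {
    public static void main(String[] args) {
        // Alternate both methods a few times; each run prints its own timing.
        for (int run = 0; run < 5; run++) {
            long nanos = System.nanoTime();
            Scratch.seqReadRandWrite();
            System.out.println("seqReadRandWrite: " + (System.nanoTime() - nanos) + "ns");

            nanos = System.nanoTime();
            Scratch.seqWriteRandRead();
            System.out.println("seqWriteRandRead: " + (System.nanoTime() - nanos) + "ns");
        }
    }
}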