I stumbled across this question long ago
Why is processing a sorted array faster than processing an unsorted array?
and wanted to try it for myself. I had some surprising results when trying different array sizes.
Here's the test code. (C# with the Microsoft compiler; I haven't tested other languages.)
static void Main(string[] args)
{
    var rand = new Random();
    Time(rand, true, 1000, 100000);
    Time(rand, false, 1000, 100000);
    Time(rand, true, 1000000, 100);
    Time(rand, false, 1000000, 100);
    Console.ReadKey(true);
}

static void Time(Random rand, bool sort, int arraysize, int loopsize)
{
    var stopwatch = new Stopwatch();

    // Fill the array with random non-negative ints, optionally sorted.
    var array = new int[arraysize];
    array = Array.ConvertAll(array, _ => (int)(rand.NextDouble() * int.MaxValue));
    if (sort)
        Array.Sort(array);

    stopwatch.Start();
    var count = 0;
    for (var j = 0; j < loopsize; j++)
        for (var i = 0; i < array.Length; i++)
            if (array[i] > int.MaxValue / 2)
                count++;
    stopwatch.Stop();

    Console.WriteLine("{0} ticks, {1}ms, sort {2}, arraysize {3}, loopsize {4}",
        stopwatch.ElapsedTicks, stopwatch.ElapsedMilliseconds, sort, arraysize, loopsize);
}
Here's the output.
614149 ticks, 262ms, sort True, arraysize 1000, loopsize 100000
630096 ticks, 269ms, sort False, arraysize 1000, loopsize 100000
634562 ticks, 271ms, sort True, arraysize 1000000, loopsize 100
1840846 ticks, 787ms, sort False, arraysize 1000000, loopsize 100
The relative timings are roughly the same in release and debug mode (with debug being about twice as slow overall).
The last two results make sense to me: they follow the logic of the question I linked above, and the sorted array runs faster due to what I assume is branch prediction. The first two results, however, confuse me. Why are they almost the same time? And why are they not significantly faster than the last two? I would think that the smaller array would fit in a closer cache level and therefore the whole thing would run faster. (Array size × loop size is the same for the first two and the last two, so the number of inner iterations is identical.)
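One way to check whether branch prediction really is the variable here would be to remove the branch entirely. The sketch below is my own hypothetical variant of the inner loop (the method name `CountBranchless` is made up, not from the original code): it replaces the `if` with sign-bit arithmetic, so sorted and unsorted inputs should take the same time if branch prediction was the only difference. This relies on the array values being non-negative, which holds for the `rand.NextDouble() * int.MaxValue` fill above.

```csharp
// Hypothetical branchless version of the counting loop: instead of a
// data-dependent branch, extract the sign bit of (threshold - array[i]).
// That difference is negative exactly when array[i] > threshold (no overflow,
// since both operands are non-negative ints), so shifting the sign bit down
// yields 1 or 0 without any branch for the predictor to guess.
static int CountBranchless(int[] array, int loopsize)
{
    var threshold = int.MaxValue / 2;
    var count = 0;
    for (var j = 0; j < loopsize; j++)
        for (var i = 0; i < array.Length; i++)
            count += (int)((uint)(threshold - array[i]) >> 31);
    return count;
}
```

If this version times the same for sorted and unsorted small arrays but the branchy version doesn't, that would point at the predictor rather than the cache.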