I was attempting to find out which is faster, OrderByDescending() or Sort() followed by Reverse(). To find out, I wrote a simple test in LINQPad. Obviously I can just look at my numbers to figure out which is faster, but if possible, I'd like to know the why. What is happening under the hood to make OrderByDescending() slower?
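For context, here is the comparison in its simplest form, runnable in the same LINQPad script. The comments are my rough understanding of what each call does, which is exactly the part I'm not sure about:

var foo = new List<Version> { new Version("2.0.0.0"), new Version("1.0.0.0") };

// LINQ: OrderByDescending() is deferred - it builds an ordered-enumerable
// wrapper, and only when ToList() enumerates it does it buffer the source,
// evaluate the f => f key selector, and run a stable sort over that buffer.
var viaLinq = foo.OrderByDescending(f => f).ToList();

// List<T>: Sort() sorts the list's backing array in place (via Array.Sort),
// and Reverse() just swaps elements in place - no new list is allocated.
foo.Sort();
foo.Reverse();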
I can guess, but how do I find out for sure? Also, as a follow-up, are my results accurate? I get a wide disparity when running. OrderByDescending() is currently showing (it changes slightly with every run):

Avg: 2.927742 | Min: 2 | Max: 7934

and Sort() then Reverse() shows:

Avg: 0.870451 | Min: 0 | Max: 526
In order for the average to be so close to the minimum, the bulk of the million operations has to be close to it. If I take the top 100 of each, though, I see plenty of samples towards the high side. Is there some sort of optimization going on that I'm missing?
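One way I can think of to rule that out (a sketch only, not what produced the numbers above) is to warm up both code paths and force a garbage collection before the timed loops, so the first-call JIT cost and any pending GC work don't land inside the samples:

// Sketch: pay the first-call JIT cost up front and clear pending garbage,
// then run the timed loops and compare the median to the average - a handful
// of huge outliers barely moves the median over a million samples.
var warmup = new List<Version> { new Version("2.0.0.0"), new Version("1.0.0.0") };
warmup.OrderByDescending(f => f).ToList();
warmup.Sort();
warmup.Reverse();

GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();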
Overall, how does one find out the why of an efficiency difference (and settle the irregularities in the data), versus just profiling to figure out which of the two is faster?
Here is the full LINQPad test:

var sw = new Stopwatch();
var times = new List<long>();

// Time OrderByDescending() over a million freshly built three-element lists.
for (var i = 0; i < 1000000; i++)
{
    var foo = new List<Version>
    {
        new Version("2.0.0.0"),
        new Version("1.0.0.0"),
        new Version("2.0.0.0")
    };

    sw.Start();
    var bar = foo.OrderByDescending(f => f).ToList();
    sw.Stop();

    times.Add(sw.ElapsedTicks);
    sw.Reset();
    foo = null;
    bar = null;
}

"OrderByDescending times:".Dump();
times.Average().Dump();
times.Min().Dump();
times.Max().Dump();

// Same input, but timing Sort() followed by Reverse() in place.
var sw1 = new Stopwatch();
var times1 = new List<long>();

for (var i = 0; i < 1000000; i++)
{
    var foo = new List<Version>
    {
        new Version("2.0.0.0"),
        new Version("1.0.0.0"),
        new Version("2.0.0.0")
    };

    sw1.Start();
    foo.Sort();
    foo.Reverse();
    sw1.Stop();

    times1.Add(sw1.ElapsedTicks);
    sw1.Reset();
    foo = null;
}

"Sort then Reverse times:".Dump();
times1.Average().Dump();
times1.Min().Dump();
times1.Max().Dump();

// The 100 slowest samples from each run.
times.OrderByDescending(t => t).Take(100).Dump();
times1.OrderByDescending(t => t).Take(100).Dump();
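As a sanity check on the outliers, I assume something like this (a sketch; sorting a copy is just a quick way to grab a median) would show whether the slow samples are a tiny fraction of the million:

// Sketch: count how many samples sit far above the median; if it is only a
// few hundred out of a million, the outliers look like noise (GC, JIT,
// scheduler) rather than a property of the sort itself.
var median = times.OrderBy(t => t).ElementAt(times.Count / 2);
times.Count(t => t > median * 10).Dump("OrderByDescending samples more than 10x the median");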