
I was attempting to find out which is faster: OrderByDescending(), or Sort() followed by Reverse(). To find out, I wrote a simple test in LINQPad. Obviously I can just look at my numbers to figure out which is faster, but if possible, I'd like to know the why. What is happening under the hood to make OrderByDescending() slower?

I can guess, but how do I find out for sure? Also, as a follow-up, are my results accurate? I get a wide disparity between runs. OrderByDescending() is currently showing (it changes slightly with every run):

Avg: 2.927742 | Min: 2 | Max: 7934

and Sort() then Reverse() is showing:

Avg: 0.870451 | Min: 0 | Max: 526

For the average to be so close to the minimum, the bulk of the million operations has to be close to it. If I take the top 100 times from each, though, I see plenty towards the high side. Is there some sort of optimization going on that I'm missing?

Overall, how does one find out the why behind a difference in efficiency (and settle the irregularities in the data), versus just profiling to figure out which of the two is faster?

var sw = new Stopwatch();
var times = new List<long>();
for (var i = 0; i < 1000000; i++)
{
    var foo = new List<Version>
    {
        new Version("2.0.0.0"),
        new Version("1.0.0.0"),
        new Version("2.0.0.0")
    };
    sw.Start();
    var bar = foo.OrderByDescending(f => f).ToList();
    sw.Stop();
    times.Add(sw.ElapsedTicks);
    sw.Reset();
    foo = null;
    bar = null;
}
"OrderByDescending times:".Dump();
times.Average().Dump();
times.Min().Dump();
times.Max().Dump();

var sw1 = new Stopwatch();
var times1 = new List<long>();
for (var i = 0; i < 1000000; i++)
{
    var foo = new List<Version>
    {
        new Version("2.0.0.0"),
        new Version("1.0.0.0"),
        new Version("2.0.0.0")
    };
    sw1.Start();
    foo.Sort();
    foo.Reverse();
    sw1.Stop();
    times1.Add(sw1.ElapsedTicks);
    sw1.Reset();
    foo = null;
}
"Sort then Reverse times:".Dump();
times1.Average().Dump();
times1.Min().Dump();
times1.Max().Dump();

times.OrderByDescending(t => t).Take(100).Dump();
times1.OrderByDescending(t => t).Take(100).Dump();
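
To check whether the bulk of the million samples really does sit near the minimum, a few percentiles say more than Avg/Min/Max alone. The snippet below is not part of the original test; it's a minimal sketch (the `Percentile` helper is made up for the example) that assumes it is appended to the same LINQPad query, so `times` and `times1` are still in scope:

long Percentile(List<long> data, double p)
{
    // Copy and sort so the original timing list is left untouched.
    var sorted = data.OrderBy(t => t).ToList();
    var index = (int)Math.Ceiling(p * sorted.Count) - 1;
    return sorted[Math.Max(0, index)];
}

"OrderByDescending percentiles (50th / 99th / 99.9th):".Dump();
Percentile(times, 0.50).Dump();
Percentile(times, 0.99).Dump();
Percentile(times, 0.999).Dump();

"Sort then Reverse percentiles (50th / 99th / 99.9th):".Dump();
Percentile(times1, 0.50).Dump();
Percentile(times1, 0.99).Dump();
Percentile(times1, 0.999).Dump();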
gilliduck
  • Linq by itself does not implement `OrderByDescending` - it's just an engine that represents syntax trees to consumers - so your question is kinda meaningless. – Dai Mar 05 '21 at 03:03
  • `List.Sort` will always be the fastest and use the least memory because it's an in-place implementation of quicksort, whereas Linq's `OrderBy` (assuming Linq-to-Objects) performs a buffer copy. – Dai Mar 05 '21 at 03:05
  • `List.Sort` can move items around by reference. `.OrderByDescending` caches intermediate results in memory, and `.ToList` then must create an entirely new list. – Jeremy Lakeman Mar 05 '21 at 03:05
  • Can one (or both) of you expand your comments into a proper answer? In particular, could you attach a link to some additional reading/docs that expands on your answer? What you're saying makes sense; I'm just hoping for a way to tell vs. "just knowing". – gilliduck Mar 05 '21 at 03:09
  • This question is not a duplicate as stated above. A profiler is certainly useful, but it will be hard to see why `List.Sort` outperforms the `LINQ` variant. Anyway, the answer to the question is that `List.Sort` calls `Array.Sort`, which in turn calls a highly optimized sorting method - [TrySZSort](https://referencesource.microsoft.com/#mscorlib/system/array.cs,1775) - in native code. – l33t Mar 11 '21 at 15:24
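
Those comments boil down to an allocation difference: Sort() and Reverse() rearrange the list's existing backing array, while OrderByDescending() buffers the source, sorts a copy, and ToList() then materializes yet another list. The sketch below is not the real framework code - just a rough illustration of the shape of the work each approach does, with made-up method names:

// In-place: Sort and Reverse both mutate the list's own backing array,
// so no new collection is allocated.
void SortThenReverse(List<Version> list)
{
    list.Sort();
    list.Reverse();
}

// Roughly what the LINQ pipeline has to do: copy the source into a buffer,
// extract the sort keys, sort, and let ToList copy the result into a
// brand-new list. (Illustrative only - not the actual Enumerable source.)
List<Version> OrderByDescendingToList(List<Version> source)
{
    Version[] buffer = source.ToArray();            // buffer copy of the source
    Version[] keys = (Version[])buffer.Clone();     // stand-in for the extracted keys
    Array.Sort(keys, buffer);                       // sort items by their keys
    Array.Reverse(buffer);                          // descending order
    return new List<Version>(buffer);               // the extra copy ToList makes
}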

0 Answers