What is the fastest known sort algorithm for the absolute worst case? I don't care about the best case, and I am assuming a gigantic data set, if that even matters.
-
Tell us more about your specific scenario so people can suggest the pros and cons of common sort algorithms. Otherwise I don't think there is any definitive answer. – Brian Ensink Apr 21 '09 at 15:45
-
I suggest you make it clear whether you care only about Big O notation or whether the constants involved in the O(N log N) implementations matter. Radix sorts and the like add some confusion, since they are very data dependent (and your question is too short). – ShuggyCoUk Apr 21 '09 at 15:49
-
I mean that if the worst possible case for one algorithm is n^2 and for another it is n log n, the latter would win for a large data set, even if the scenario that produces the n^2 behaviour is very, very rare. – GBa Apr 21 '09 at 15:50
-
@Greg: O(n log n) is the theoretical best possible complexity for any comparison-based sort. There are a number of algorithms that have O(n log n) worst-case complexity (see the table at http://en.wikipedia.org/wiki/Sorting_algorithm). – Michael Myers Apr 21 '09 at 15:53
-
Please update the question title to reflect the specific question, I recommend: "What sort algorithm provides the best worst-case performance?" – Mark Renouf Apr 21 '09 at 15:56
-
+1 to tweakt, but I would suggest "Algorithm*s*" since there are multiple candidates (with pluses and minuses that are only relevant with more context) – ShuggyCoUk Apr 21 '09 at 16:00
-
That is not true, Greg. The only thing dataset size does is make other factors small enough to fall away. It doesn't at all change the fact that some sorts are way quicker if your data is already almost sorted, and some are way slower in that case. – T.E.D. Apr 21 '09 at 16:03
-
@ted I suspect Greg would then add the rider 'with the worst possible initial inputs' as well – ShuggyCoUk Apr 21 '09 at 16:14
-
In main memory or out of main memory? Once you start hitting the disk (whether as files or as virtual memory), performance can change dramatically. – David Thornley Apr 21 '09 at 17:04
16 Answers
Make sure you have seen this: visualizing sort algorithms. It helped me decide which sort algorithm to use.

-
Visualizing sort algorithms is a wonderful way to experience different algorithms, but it's also good to note something like http://www.hatfulofhollow.com/posts/code/visualisingsorting/index.html – nevets1219 Apr 21 '09 at 15:50
It depends on the data. For example, for integers (or anything that can be expressed as an integer), the fastest is radix sort, which for fixed-length values has a worst-case complexity of O(n). The best general comparison-based sort algorithms have a complexity of O(n log n).
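For illustration, here is a minimal LSD radix sort sketch for fixed-width, non-negative 32-bit integers (the key width and byte-at-a-time digits are assumptions made for the example):

    # Sketch: LSD radix sort for fixed-width non-negative integers (< 2^32),
    # bucketing on one byte per pass, so the worst case is O(k*n) with k = 4 passes.
    def radix_sort_u32(values):
        for shift in (0, 8, 16, 24):              # 4 passes over 8-bit digits
            buckets = [[] for _ in range(256)]
            for v in values:
                buckets[(v >> shift) & 0xFF].append(v)
            values = [v for bucket in buckets for v in bucket]
        return values

    print(radix_sort_u32([170, 45, 75, 90, 802, 24, 2, 66]))   # [2, 24, 45, 66, 75, 90, 170, 802]

Each pass is stable, which is what lets later (more significant) passes preserve the order established by earlier ones.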

-
+1 I just realized that this is the fastest if N>10, Best and Worst Case: O(n) – TStamper Apr 21 '09 at 16:21
If you are using binary comparisons, the best possible sort algorithm takes O(N log N) comparisons to complete. If you're looking for something with good worst-case performance, I'd look at MergeSort and HeapSort, since they are O(N log N) algorithms in all cases.
HeapSort is nice if all your data fits in memory, while MergeSort allows you to do on-disk sorts better (but takes more space overall).
There are other less well-known algorithms mentioned on the Wikipedia sorting algorithm page that all have O(n log n) worst-case performance. (Based on a comment from mmyers.)
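To make the HeapSort suggestion concrete, here is a minimal in-place heapsort sketch; it performs O(N log N) comparisons in every case and needs only O(1) extra memory:

    # Sketch: in-place heapsort. Build a max-heap, then repeatedly swap the
    # maximum to the end of the unsorted region and restore the heap property.
    def heapsort(a):
        def sift_down(root, end):
            while 2 * root + 1 <= end:
                child = 2 * root + 1
                if child + 1 <= end and a[child] < a[child + 1]:
                    child += 1                     # pick the larger child
                if a[root] < a[child]:
                    a[root], a[child] = a[child], a[root]
                    root = child
                else:
                    return

        n = len(a)
        for start in range(n // 2 - 1, -1, -1):    # heapify
            sift_down(start, n - 1)
        for end in range(n - 1, 0, -1):            # extract the max, one at a time
            a[0], a[end] = a[end], a[0]
            sift_down(0, end - 1)

    data = [5, 3, 8, 1, 9, 2]
    heapsort(data)
    print(data)   # [1, 2, 3, 5, 8, 9]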

For the man with limitless budget
Facetious but correct: Sorting networks trade space (in real hardware terms) for better than O(n log n) sorting!
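As a small software illustration of the idea (the 4-input network below is just an example, executed sequentially; the hardware win comes from running each layer's comparators in parallel):

    # Sketch: the classic 5-comparator sorting network for exactly 4 inputs.
    # The comparator pairs come in 3 layers; within a layer they are independent.
    NETWORK_4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

    def sort4(values):
        a = list(values)
        for i, j in NETWORK_4:                 # compare-exchange
            if a[i] > a[j]:
                a[i], a[j] = a[j], a[i]
        return a

    print(sort4([9, 1, 7, 3]))   # [1, 3, 7, 9]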
Without resorting to such hardware (which is unlikely to be available), you have a lower bound of O(n log n) for the best comparison sorts.
O(n log n) worst case performance (no particular order)
- Merge sort
- Heapsort
- several others listed on the Wikipedia sorting algorithm page
Beating the n log n
If your data is amenable to it, you can beat the n log n restriction, but then the cost depends on the number of bits in the input data as well.
Radix sort and bucket sort are probably the best-known examples of this. Without more information about your particular requirements, it is not fruitful to consider these in more depth.

Quicksort is usually the fastest, but if you want good worst-case time, try Heapsort or Mergesort. These both have O(n log n) worst-case performance.

-
thank you for providing a straightforward answer rather than making things even more complex. – schlingel Mar 22 '18 at 11:31
If you have a gigantic data set (i.e. much larger than available memory), you likely have your data on disk/tape/something-with-expensive-random-access, so you need an external sort.
Merge sort works well in that case; unlike most other sorts it doesn't involve random reads/writes.
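A rough sketch of that approach (the chunk size, line-based records, and file handling below are assumptions made for the example):

    # Sketch of an external merge sort: sort fixed-size chunks in memory,
    # spill each sorted chunk to a temporary file, then do one streaming
    # k-way merge. Assumes every input line ends with a newline.
    import heapq, tempfile

    def external_sort(lines, output_path, chunk_size=100_000):
        chunk_files = []

        def spill(chunk):
            f = tempfile.TemporaryFile(mode="w+")
            f.writelines(sorted(chunk))
            f.seek(0)
            chunk_files.append(f)

        chunk = []
        for line in lines:
            chunk.append(line)
            if len(chunk) >= chunk_size:
                spill(chunk)
                chunk = []
        if chunk:
            spill(chunk)

        with open(output_path, "w") as out:
            # heapq.merge reads the already-sorted runs sequentially -- no random access.
            out.writelines(heapq.merge(*chunk_files))
        for f in chunk_files:
            f.close()

    # e.g. external_sort(open("huge_input.txt"), "sorted_output.txt")  # hypothetical file names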

It is largely related to the size of your dataset and whether or not the set is already ordered (or what order it is currently in).
Entire books are written on search/sort algorithms. You aren't going to find an "absolute fastest" assuming a worst case scenario because different sorts have different worst-case situations.

It depends on the size of the data set, which is the n in the Big O notation.
Here is a list of sorting algorithms' best and worst cases for you to compare. My preference is the 2-way MergeSort.
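For reference, a minimal bottom-up (iterative) 2-way merge sort sketch, which is O(n log n) in both the best and the worst case:

    # Sketch: bottom-up 2-way merge sort. Merge sorted runs of width 1, 2, 4, ...
    # until a single run covers the whole list.
    def merge(left, right):
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:            # <= keeps the sort stable
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        return out + left[i:] + right[j:]

    def merge_sort_2way(a):
        width = 1
        while width < len(a):
            runs = [merge(a[i:i + width], a[i + width:i + 2 * width])
                    for i in range(0, len(a), 2 * width)]
            a = [x for run in runs for x in run]   # flatten the merged runs
            width *= 2
        return a

    print(merge_sort_2way([5, 1, 4, 2, 8, 0, 2]))   # [0, 1, 2, 2, 4, 5, 8]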

-
According to http://en.wikipedia.org/wiki/Sorting_algorithm, there are at least six comparison-sorting algorithms with O(n lg n) worst case (which is the theoretical minimum). – Michael Myers Apr 21 '09 at 15:51
-
True, which the link shows... there is a tie, but merge sort is the implementation I prefer – TStamper Apr 21 '09 at 15:55
-
My favorite is the merge sort too. It's stable, has guaranteed worst case performance of O(n log n), is easy to understand and write, and is amenable to large data sets that don't fit into memory. – Mark Ransom Apr 22 '09 at 02:00
If you have a sufficiently huge data set, you're probably looking at sorting individual bins of data, then using merge-sort to merge those bins. But at this point, we're talking data sets huge enough to be VASTLY larger than main memory.
I guess the most correct answer would be "it depends".

It depends both on the type of data and the type of resources. For example, there are parallel algorithms that beat Quicksort, but given how you asked the question it's unlikely you have access to them. There are times when the "worst case" for one algorithm is the "best case" for another (nearly sorted data is problematic with Quick and Merge, but fast with much simpler techniques).

Assuming randomly sorted data, quicksort.
O(n log n) average case, O(n^2) in the worst case, but the worst case requires highly non-random data.
You might want to describe your data set characteristics.
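For context, a toy quicksort with a randomly chosen pivot (a sketch, not a production implementation); randomizing the pivot keeps the expected time at O(n log n) and makes the O(n^2) case a matter of bad luck rather than of any particular input ordering:

    # Sketch: quicksort with a random pivot, partitioning into three lists.
    import random

    def quicksort(a):
        if len(a) <= 1:
            return a
        pivot = random.choice(a)
        less    = [x for x in a if x < pivot]
        equal   = [x for x in a if x == pivot]
        greater = [x for x in a if x > pivot]
        return quicksort(less) + equal + quicksort(greater)

    print(quicksort([3, 7, 1, 7, 0, 2]))   # [0, 1, 2, 3, 7, 7]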
-
But the OP asked for the absolute worst case -- in which case quicksort is N^2. – Rick Copeland Apr 21 '09 at 15:46
-
The original question asked for the fastest known sort algorithm in the worst case. O(n^2) would not qualify. – T.E.D. Apr 21 '09 at 15:53
-
@Steve, maybe you're not getting what people are saying here: the question is which algorithm is fastest in the worst-case scenario, not what each sorting algorithm's worst case is... quicksort does not fall in this category – TStamper Apr 21 '09 at 16:06
-
I do get it, and I get that this answer is not addressing that point. I'm just saying -1 seems a bit harsh, because the answer is clear and acknowledges that quicksort is indeed O(n^2) in the worst case. – Steve Apr 21 '09 at 16:20
-
Why would you presume to know what the author wants? Shouldn't the author be the authority on that? He seems to have spelled it out fairly clearly to me. – mqp Apr 21 '09 at 16:54
-
@mquander - because for a gigantic data set, the absolute worst case is incredibly unlikely to occur. It would be surprising for such a situation to come up commonly, so optimizing against it is bad practice. I think it much more likely that the author should actually be concerned about mean behaviour, but that he doesn't realise it. – Apr 21 '09 at 18:16
See Quick Sort Vs Merge Sort for a comparison of Quicksort and Mergesort, which are two of the better algorithms in most cases.

It all depends on the data you're trying to sort. Different algorithms have different speeds for different data. An O(n) algorithm may be slower than an O(n^2) algorithm, depending on what kind of data you're working with.

I've always preferred merge sort, as it's stable (meaning that if two elements are equal from a sorting perspective, then their relative order is explicitly preserved), but quicksort is good as well.
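A quick illustration of what stability buys you (an assumed example using Python's built-in sorted(), which is stable because it is based on a merge sort variant, Timsort):

    # Stability: records that compare equal on the sort key keep their
    # original relative order -- "Bob" stays before "Cid" (both grade "B").
    records = [("Bob", "B"), ("Ann", "A"), ("Cid", "B"), ("Dee", "A")]
    by_grade = sorted(records, key=lambda r: r[1])
    print(by_grade)   # [('Ann', 'A'), ('Dee', 'A'), ('Bob', 'B'), ('Cid', 'B')]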

-
Quicksort is O(n^2) in the worst case; modern implementations are usually designed to make that worst case exceedingly unlikely. Heapsort has O(n ln n) worst-case behavior, and requires O(1) additional memory. – David Thornley Apr 21 '09 at 17:03
The lowest upper bound on Turing machines is achieved by merge sort, that is, O(n log n), though quicksort might be better on some datasets.
You can't go lower than O(n log n) unless you're using special hardware (e.g. a hardware-supported bead sort or other non-comparison sorts).

-
Since the data for a Turing machine is on tape, quicksort is going to be very slow compared to merge sort on almost all nontrivial datasets. – Captain Segfault Apr 21 '09 at 17:42
On the importance of specifying your problem: radix sort might be the fastest, but it's only usable when your data has fixed-length keys that can be broken down into independent small pieces. That limits its usefulness in the general case, and explains why more people haven't heard of it.
http://en.wikipedia.org/wiki/Radix_sort
P.S. This is an O(k*n) algorithm, where k is the size of the key.
