Is there any example of an algorithm where we would prefer O(n^2) time complexity over O(n log n)? I have seen this question somewhere but did not find an answer.

- Possible duplicate of [Are there any cases where you would prefer a higher big-O time complexity algorithm over the lower one?](http://stackoverflow.com/questions/34179968/are-there-any-cases-where-you-would-prefer-a-higher-big-o-time-complexity-algori) – Andrew Elkouri Feb 05 '16 at 15:49
- Is this about general reasons to pick a "slower" algorithm (in this case: yes, duplicate), or about an actual, real-world example? – tobias_k Feb 05 '16 at 15:54
4 Answers
For a large problem, O(n log n) will always beat O(n^2). For a small problem, though, the constant factors hidden by big-O notation may make you prefer the O(n^2) algorithm. For instance, the O(n log n) quicksort is faster than the O(n^2) insertion sort, but some quicksort implementations switch to insertion sort when the partitions get small (fewer than about ten elements).
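A minimal sketch of that hybrid strategy in Python; the cutoff of ten and the Hoare-style partition are illustrative assumptions, not any particular library's implementation:

```python
# Sketch: quicksort that hands small partitions to insertion sort.
# CUTOFF = 10 is illustrative; real libraries tune it per platform.
CUTOFF = 10

def insertion_sort(a, lo, hi):
    """Sort a[lo..hi] in place: O(n^2), but tiny constant factors."""
    for i in range(lo + 1, hi + 1):
        key, j = a[i], i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def hybrid_quicksort(a, lo=0, hi=None):
    """Quicksort that switches to insertion sort on small partitions."""
    if hi is None:
        hi = len(a) - 1
    if hi - lo + 1 <= CUTOFF:
        insertion_sort(a, lo, hi)
        return
    pivot = a[(lo + hi) // 2]
    i, j = lo, hi
    while i <= j:  # Hoare-style partition
        while a[i] < pivot:
            i += 1
        while a[j] > pivot:
            j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i, j = i + 1, j - 1
    hybrid_quicksort(a, lo, j)
    hybrid_quicksort(a, i, hi)
```

Calling `hybrid_quicksort(data)` sorts `data` in place; the only design decision of interest here is the cutoff check before the partition.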

- Very true; people tend to forget the lead-in sentence of all big-O claims: "There is an `n0` such that for all `n > n0`...". Also, this `n0` can be surprisingly large. – Ulrich Eckhardt Feb 05 '16 at 16:06
- Yes, a similar question came up recently: http://stackoverflow.com/questions/35218322/are-selection-or-insertion-sort-useful-outside-of-academic-environments/35218436#35218436 – Aravind Feb 05 '16 at 16:18
- Quicksort is O(n^2) in the worst case (although it's fairly simple to avoid worst-case inputs); but, to use a different example, quicksort often performs better in practice than algorithms that actually are O(n lg n), like heapsort and mergesort. – chepner Feb 05 '16 at 20:35
There are several reasons to choose an algorithm with a higher time complexity:
- Speed: The asymptotic complexity only applies to values of n greater than some n_0. It also assumes an idealized machine model that only partially matches real machines, with their multiple levels of cache and constrained memory (see the benchmark sketch after this list).
- Space: Some algorithms require more space than others and may therefore be infeasible to implement. Memory layout can also influence speed on a real machine: locality of reference affects cache hits and misses, which is one reason Quicksort often outperforms Mergesort in practice.
- Implementation complexity: In some cases the loss in performance is simply negligible, but the extra development time isn't.
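To make the speed point concrete, here is a rough, unscientific benchmark sketch; the pure-Python implementations, input sizes, and repeat count are all assumptions for illustration, and the crossover point will vary by machine:

```python
import random
import timeit

def insertion_sort(a):
    """O(n^2), but very little per-element overhead."""
    a = list(a)
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def merge_sort(a):
    """O(n log n), but with allocation and merging overhead."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]

for n in (8, 64, 256):
    data = [random.random() for _ in range(n)]
    t_ins = timeit.timeit(lambda: insertion_sort(data), number=200)
    t_mrg = timeit.timeit(lambda: merge_sort(data), number=200)
    print(f"n={n:3d}  insertion={t_ins:.4f}s  merge={t_mrg:.4f}s")
```

On a typical machine you should see the O(n^2) sort win at the smallest size and lose as n grows, which is exactly the n_0 effect described above.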

Many naive O(n^2) algorithms are faster on small inputs than their more complicated O(n log n) brethren.
For example, the GNU MP Bignum library has a very highly optimized multiplication implementation. But for numbers made up of a couple dozen machine words, it just uses schoolbook multiplication (the best threshold depends heavily on the machine). In fact, GMP transitions through a whole sequence of fastest-around-size-X algorithms.
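GMP's internals are far more elaborate than this, but the switch-at-a-threshold pattern can be sketched as follows; the `THRESHOLD` value is made up, the code assumes non-negative integers, and Python's built-in multiply stands in for schoolbook multiplication (Python itself already uses fancier algorithms for huge ints):

```python
THRESHOLD = 64  # bits; purely illustrative -- GMP tunes its cutoffs per CPU

def karatsuba(x, y):
    """Multiply non-negative ints, falling back to the built-in
    multiply (our stand-in for schoolbook) below the threshold."""
    if x < (1 << THRESHOLD) or y < (1 << THRESHOLD):
        return x * y  # small operands: the "naive" path wins
    n = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> n, x & ((1 << n) - 1)  # split x into high/low halves
    yh, yl = y >> n, y & ((1 << n) - 1)
    a = karatsuba(xh, yh)
    b = karatsuba(xl, yl)
    c = karatsuba(xh + xl, yh + yl) - a - b  # cross terms via one product
    return (a << (2 * n)) + (c << n) + b
```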

One possibility: the O(n log n) algorithm is recursive, but you can program the O(n^2) one iteratively, and the programming language you must use does not support recursion.
"Preferred" is relative here, BTW. If the data set were large enough, you *could* emulate recursion by maintaining your own stack variable in an iteratively implemented version of the "recursive" algorithm (we had to do that exercise in Guy Steele's Comparative Programming class at CMU back in the day).

- Unless I'm very much mistaken, all recursive algorithms can be rewritten as iterative algorithms; they just aren't always as simple and nice-looking – Kevin Feb 05 '16 at 16:45