
With many different choices of sorting algorithms, is it ever appropriate to use a higher-complexity algorithm?

The only reason I can think of is if there was a very short array, an array that was very close to being sorted, or one that contained only a few different numbers.

Dave
  • Is 2^n a typo for n^2? I don't think there are any sensible O(2^n) sorting algorithms. – Paul Hankin Jun 28 '16 at 04:21
  • I agree, but funnily enough I got this exact question in an exam paper recently using 2^n – Dave Jun 28 '16 at 04:51
  • It is unusual. Still, perhaps O(n^2) could make sense if the n elements are arranged in some multidimensional ordering with complicated requirements, or arguably you could deem something like arranging a set of numbers into a solution for a particular sudoku to be "sorting" them into a solution ordering.... – Tony Delroy Jun 28 '16 at 04:55
  • One reason: when the implementation is much simpler than that of faster algorithms, and you want to use it in your unit testing to verify the more complex, faster implementations. – samgak Jun 28 '16 at 04:58

1 Answer


With many different choices of sorting algorithms, is it ever appropriate to use a higher-complexity algorithm?

It may be, if the big-O complexity of concern is a worst case you're certain you won't hit, or if n is known to be small, as you say (e.g. n is the number of white pawns left on a chess board), and constant factors are more important.
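
As a rough illustration of the "n is known small" case, here is a minimal Python sketch of the common hybrid trick: fall back to an O(n^2) insertion sort for tiny inputs, where its low constant factors tend to beat fancier O(n log n) algorithms. (The cutoff of 16 is an arbitrary illustrative threshold, not a measured one.)

    def insertion_sort(a):
        # O(n^2) worst case, but very low overhead - often fastest for tiny lists.
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a

    def hybrid_sort(a, cutoff=16):
        # Below the (made-up) cutoff, the "worse" algorithm wins on overhead;
        # above it, delegate to the built-in O(n log n) sort.
        if len(a) <= cutoff:
            return insertion_sort(a)
        a.sort()
        return a

Many production sorts do something similar, switching to insertion sort for small sub-arrays or runs.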

O(2^n) is extreme, though. You also have to consider how stable your reasons for using it are: could someone (including you, in the future) accidentally modify the code in a way that invalidates the suitability of the O(2^n) algorithm, leaving the app to lock up whenever it's invoked with a larger n than originally anticipated, or with less "friendly" data?

For most things, it's worth putting the time in up front to create a reasonably efficient and hopefully reusable algorithm, and then just not have to worry. But sometimes that is massively more complex and error-prone, and the CPU or memory benefits just don't justify it. Ultimately, it's the impact - short- and long-term - on your actual project that matters.

Quite often when creating algorithmic code, there's a dead-easy, obvious way to solve it and a complex but efficient way to solve it, and it's a good idea to quickly write the former so you can use it to test the latter. The former may be called an "oracle" implementation, because it's trusted to know the truth: the right answer. If it also happens to be fast enough, and you have the limits on n or the friendly data scenarios discussed above, you may not need to progress to the complex implementation at all.
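
A minimal sketch of that oracle pattern in Python - the selection sort below is the "dead-easy" trusted implementation, and fast_sort is a placeholder for whatever complex implementation you're testing:

    import random

    def oracle_sort(a):
        # Dead-easy selection sort: O(n^2), but simple enough to trust on sight.
        a = list(a)
        for i in range(len(a)):
            m = min(range(i, len(a)), key=a.__getitem__)
            a[i], a[m] = a[m], a[i]
        return a

    def check_against_oracle(fast_sort, trials=1000):
        # fast_sort is a placeholder for your complex, efficient implementation.
        for _ in range(trials):
            data = [random.randint(-100, 100) for _ in range(random.randint(0, 30))]
            assert fast_sort(list(data)) == oracle_sort(data), "mismatch on %r" % data

    check_against_oracle(sorted)  # e.g. sanity-check the built-in against the oracle

Random inputs of varying sizes tend to shake out the edge cases (empty lists, duplicates, already-sorted data) that hand-picked test cases miss.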

Tony Delroy
  • Expanding on the "dead-easy, obvious way to solve it": you might have to do a one-time job that will take the n^2 algorithm six hours to solve after you put a few minutes into writing it. The efficient algorithm can solve the problem in a few minutes, but will take you an hour or two of development effort. You're probably better off with the "inefficient" algorithm, because you can run it in the background while you work on other tasks. I find myself in this type of situation pretty regularly, and have found that the "inefficient" algorithms often end up saving me time. – Jim Mischel Jun 28 '16 at 04:02