
I have been learning Big O efficiency at school as the go-to method for describing algorithm runtimes as better or worse than others. What I want to know is: will the algorithm with the better efficiency always outperform the worst of the lot, like bubble sort, in every single situation? Are there any situations where a bubble sort, or an O(n²) algorithm, will be better for a task than another algorithm with a lower O() runtime?

  • Looking at a [table](http://en.wikipedia.org/wiki/Sorting_algorithm#Comparison_of_algorithms) of sorting algorithm performance, you can see the best, average, and worst case scenarios for each sorting algorithm. – Grice Oct 14 '14 at 18:36
  • It's possible for some scenarios, because `O(...)` does not take constant multiples into account. For example, `100*O(n)` is the same as `O(n)`. As `n` grows, that matters less and less, but it can matter if `n` is small enough, even to the extent of having, say, `O(n^2)` outperform `O(n)` for some computation. – Spencer Wieczorek Oct 14 '14 at 19:40

2 Answers


Generally, O() notation gives the asymptotic growth of a particular algorithm. That is, the category an algorithm is placed into in terms of asymptotic growth indicates how long the algorithm will take to run as n grows (for some number of items n).

For example, we say that if a given algorithm is O(n), then it "grows linearly", meaning that as n increases, its running time increases roughly in proportion to n.

That doesn't mean it takes exactly as long as any other algorithm that grows as O(n), because we disregard constant factors. For example, if one algorithm's runtime is exactly 12n + 65 ms and another's is 8n + 44 ms, then for n = 1000, algorithm 1 will take 12065 ms to run and algorithm 2 will take 8044 ms. Algorithm 2 clearly requires less time to run, yet both are O(n).
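
The point about constant factors can be checked directly. This is a sketch using the hypothetical runtimes from the example above (the coefficients are illustrative, not measurements of real algorithms):

```python
# Hypothetical cost models for two O(n) algorithms with different
# constant factors, as in the example above.
def runtime_alg1(n):
    return 12 * n + 65  # milliseconds

def runtime_alg2(n):
    return 8 * n + 44   # milliseconds

n = 1000
print(runtime_alg1(n))  # 12065
print(runtime_alg2(n))  # 8044
```

Both functions grow linearly, so Big-O treats them as identical even though one is consistently faster.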

There are also situations where, for small values of n, an algorithm that is O(n²) might outperform another algorithm that's O(n), due to constants in the runtime that aren't being considered in the analysis.
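
To make that concrete, here is a sketch with made-up cost functions: a quadratic algorithm with a small constant against a linear algorithm with a large constant. The crossover point depends entirely on the (assumed) constants:

```python
# Illustrative cost models only; the constants are hypothetical.
def cost_quad(n):
    return 2 * n * n   # O(n^2) with a small constant factor

def cost_lin(n):
    return 100 * n     # O(n) with a large constant factor

# For small n, the quadratic algorithm is cheaper:
assert cost_quad(10) < cost_lin(10)      # 200 < 1000
# Past the crossover (here n = 50), the linear algorithm wins:
assert cost_lin(1000) < cost_quad(1000)  # 100000 < 2000000
```

This is why real libraries often switch to a simple quadratic sort (such as insertion sort) for small subarrays inside an otherwise O(n log n) sort.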

Basically, Big-O notation gives you an estimate of the complexity of the algorithm, and can be used to compare different algorithms. In terms of application, though, you may need to dig deeper to find out which algorithm is best suited for a given project/program.

jwir3
  • You might also want to take a look at: http://stackoverflow.com/a/12338937/281460 which does an excellent job at describing the differences between asymptotic analysis techniques. – jwir3 Oct 14 '14 at 18:42
  • 1
    I would add, as the OP evoked Bubble Sort (which is average O(n²) time complexity), that in some cases, Bubble Sort will perform better than for example QuickSort or Bucket Sort (which have better average case performance : O(n log n) for Quicksort and O(n) for Bucket Sort) would : for instance, on a short list of F.P., Bubble Sort will perform better. Also, if your lists are close-to-sorted (for example, you re-sort your list everytime you are adding an element) then, Bubble Sort might work faster than another sort algorithm. – servabat Oct 14 '14 at 18:47
  • Ok, thanks for the feedback; I just needed a little clarification. I also found a nice animation that shows the small-values condition you mentioned (or in this case, values that are nearly sorted), where bubble and insertion sort outperform merge, quick, and heap sort: http://www.sorting-algorithms.com/nearly-sorted-initial-order – Doug_Brown Oct 14 '14 at 20:06
  • @Doug_Brown: If your question was answered, could you please accept my answer? On SO, it's good practice to accept an answer to your questions. This helps you generate rep over time, and also helps the person who answered your question generate rep. – jwir3 Oct 22 '14 at 16:53
  • its been almost a year later jwir3 sry for not accepting :) – Doug_Brown Oct 20 '15 at 15:32
  • @Doug_Brown No worries. Thanks for coming back and accepting it! – jwir3 Oct 20 '15 at 16:35

Big O gives you the worst-case scenario. That means it assumes the input is in the worst possible order. It also ignores coefficients. If you use selection sort on an array that is reverse sorted, it will run in n² time. If you use selection sort on an already sorted array, it will run in n time. Therefore selection sort would run faster than many other sorting algorithms on an already sorted list, and slower than most (reasonable) algorithms on a reverse-sorted list.

Edit: sorry, I meant insertion sort, not selection sort. Selection sort is always n².
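
The best-case/worst-case gap for insertion sort can be demonstrated by counting comparisons. This is a sketch (a straightforward insertion sort instrumented with a counter, not any particular library's implementation):

```python
# Insertion sort that counts element comparisons, to show the
# best case (already sorted) vs. worst case (reverse sorted).
def insertion_sort(items):
    items = list(items)
    comparisons = 0
    for i in range(1, len(items)):
        j = i
        while j > 0:
            comparisons += 1
            if items[j - 1] <= items[j]:
                break  # element already in place
            items[j - 1], items[j] = items[j], items[j - 1]
            j -= 1
    return items, comparisons

_, best = insertion_sort(range(100))          # already sorted
_, worst = insertion_sort(range(100, 0, -1))  # reverse sorted
print(best, worst)  # 99 4950
```

On sorted input it does n − 1 comparisons (linear); on reverse-sorted input it does n(n − 1)/2 (quadratic), matching the behavior described above.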

Pat
  • 2
    You’re wrong. Big O just describes the upper boundary. But you can do this for any case, be it worst best or best case. – Gumbo Oct 14 '14 at 18:40