
I really want to know the real definition. I have tried to read a book, but couldn't understand it.

O: Big-O notation (worst case).
Θ: Theta notation (average case).
Ω: Omega notation (best case).

Why does Wikipedia describe the speed of algorithms only in Big-O, even when talking about their average, best and worst cases? Why didn't they use those formal notations instead?

Bernhard Barker
Yoon Lee
    Mostly people just care about the O since it is the worst case that often matters the most. – Cine Jan 10 '11 at 10:47
  • [Plain English explanation of Big O](http://stackoverflow.com/questions/487258/plain-english-explanation-of-big-o) – royhowie Dec 05 '14 at 04:15

2 Answers


O, Θ and Ω do not represent worst, average and best case, although the ideas are related.

Big-O notation f(n) = O(g(n)) means that f grows no faster than g, up to a constant factor, for large values of n (the "n > n0" in the formal definition is what "for large values of n" means in this context). This does not mean that g is the worst case: g could be worse than the worst case (quicksort is also O(n!), for instance). For the more complicated algorithms, there is ongoing research to determine the smallest Big-O for their actual complexity: the original author usually only finds a Big-O upper bound.
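Since the question asks for the real definition, here is the standard textbook one (not something specific to this answer), with c and n0 as the witness constants:

```latex
f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 \ge 0 \ \text{such that}\ 0 \le f(n) \le c \cdot g(n) \ \text{for all}\ n > n_0
```

For example, f(n) = 3n² + 5n is O(n²): take c = 4 and n₀ = 5, since 3n² + 5n ≤ 4n² whenever n ≥ 5.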

Ω notation means the reverse: f grows at least as fast as g, up to a constant factor, which means g could be below even the best case (all algorithms are Ω(1), for instance).
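The matching formal definition, for completeness:

```latex
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 \ge 0 \ \text{such that}\ f(n) \ge c \cdot g(n) \ \text{for all}\ n > n_0
```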

There are many algorithms for which there is no single function g such that the complexity is both O(g) and Ω(g). For instance, insertion sort has a Big-O lower bound of O(n²) (meaning you can't find anything smaller than n²) and an Ω upper bound of Ω(n).

Other algorithms do: merge sort is both O(n log n) and Ω(n log n). When the two bounds coincide like this, the complexity is written Θ(n log n). It follows that not all algorithms have a Θ-notation complexity; notably, algorithms whose best and worst cases grow at different rates do not have one.
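Formally, Θ is just both bounds at once, i.e. f is sandwiched between two constant multiples of g:

```latex
f(n) = \Theta(g(n)) \iff \exists\, c_1, c_2 > 0,\ n_0 \ge 0 \ \text{such that}\ c_1 \cdot g(n) \le f(n) \le c_2 \cdot g(n) \ \text{for all}\ n > n_0
```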

To get rid of worst cases that carry a very low probability, it's fairly common to examine average-case complexity instead: the function f on the left-hand side is replaced by a different function f_avg that only takes the most probable outcomes into account. So, for quicksort, f = O(n²) is the best you can get, but f_avg = O(n log n).
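To make the quicksort claim concrete, here is a minimal sketch (my illustration, not code from the answer; the function name and counter are made up for the demo): quicksort with a first-element pivot degrades to quadratically many comparisons on already-sorted input, while a shuffled input shows the ~n log n average behaviour.

```python
import random

def quicksort(xs, counter):
    """Quicksort with a first-element pivot; counter[0] accumulates comparisons."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    counter[0] += len(rest)  # each remaining element is compared to the pivot once
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left, counter) + [pivot] + quicksort(right, counter)

n = 300
worst = [0]
quicksort(list(range(n)), worst)                 # sorted input: worst case for this pivot
average = [0]
quicksort(random.sample(range(n), n), average)   # shuffled input: the typical case
print(worst[0], average[0])  # n(n-1)/2 = 44850 vs roughly 2n·ln(n) ≈ 3400
```

The already-sorted input is exactly the kind of low-probability worst case meant above: f hits n² there, but f_avg over random inputs stays around n log n.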

Victor Nicollet
    This answer is incorrect. Best, average and worst cases are functions and O, Θ and Ω are bounds on these functions. You can represent the best case in terms of O, Θ or Ω. This is why Wikipedia lists big-O for all cases instead of using different notation for each. See the answer I posted below. – Bernhard Barker Sep 09 '17 at 13:13
  • *"For instance, insertion sort has a Big-O lower bound of O(n²) (meaning you can't find anything smaller than n²) and an Ω upper bound of Ω(n)."* This is incorrect, and it contradicts your first (correct) sentence: *"O, Θ and Ω do not represent worst, average and best case"*. Insertion sort has a time complexity of Θ(n) in the best case, and Θ(n^2) in the average and worst cases. Also, every function f is trivially O(f), Θ(f) and Ω(f), so this also contradicts your claim that *"There are many algorithms for which there is no single function g such that the complexity is both O(g) and Ω(g)."* – kaya3 Oct 17 '21 at 13:53

I'm not sure where you found those definitions, but I'd consider them to be incorrect.

The best, average and worst cases are all functions, typically of the input size.

Big-O, Theta and Omega indicate, respectively, the upper, tight and lower bounds of any given function.

That is to say, the best case has big-O, Theta and Omega bounds, and the same goes for the average and worst cases.

See also: How do O and Ω relate to worst and best case?

Note: big-O is commonly (arguably incorrectly) used to indicate a tight bound (instead of Theta).


Let's consider the example of insertion sort.

The best case is when it's already sorted, in which case it'll take linear time, that is, f(n) = k₁n for some constant k₁.

k₁n is O(n), Θ(n), Ω(n). According to the definitions, we can also say it's O(n²), O(n³), ... or Ω(1), Ω(log n), Ω(log log n), ..., but it's commonly expected to present the tightest bound.

The worst and average cases are g(n) = k₂n² and h(n) = k₃n², which are both O(n²), Θ(n²), Ω(n²).
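A quick way to see all three functions empirically (a sketch of my own, not part of the original answer; insertion_sort_steps is a made-up helper) is to count the comparisons insertion sort performs on sorted, reverse-sorted and shuffled inputs:

```python
import random

def insertion_sort_steps(xs):
    """Sort a copy of xs with insertion sort, returning the comparison count."""
    xs = list(xs)
    steps = 0
    for i in range(1, len(xs)):
        j = i
        # shift xs[j] left while it is smaller than its predecessor
        while j > 0:
            steps += 1
            if xs[j - 1] <= xs[j]:
                break
            xs[j - 1], xs[j] = xs[j], xs[j - 1]
            j -= 1
    return steps

n = 1000
print(insertion_sort_steps(range(n)))                    # best case: n - 1 = 999, i.e. k₁n
print(insertion_sort_steps(range(n, 0, -1)))             # worst case: n(n-1)/2 = 499500, i.e. k₂n²
print(insertion_sort_steps(random.sample(range(n), n)))  # average case: ~n²/4, i.e. k₃n²
```

Each of the three printed numbers is one sample of the corresponding function f, g or h from the paragraph above.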


Now you might be saying: that isn't very useful, why do we need three bounds if they're always the same? Why don't we just use Theta everywhere?

In general, you'd be absolutely right - algorithms are often described in terms of just one of the bounds (typically the tight bound).

However, for some algorithms it's difficult to figure out exactly what the tight bound is, while it's easy to get an upper and/or lower bound. The unoptimised algorithm to calculate Fibonacci numbers is one such example - it's not too hard to find an upper bound of O(2ⁿ) and a lower bound of Ω(1.5ⁿ), but the tight bound of ~Θ(1.6ⁿ) is a lot more difficult to calculate.
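For reference, this is the unoptimised algorithm in question (a standard sketch; my code, not the answer's). The number of recursive calls satisfies the same recurrence as the Fibonacci numbers themselves, which is where the awkward tight bound comes from (more precisely it is Θ(φⁿ) with φ = (1 + √5)/2 ≈ 1.618):

```python
def fib(n):
    """Naive doubly-recursive Fibonacci: T(n) = T(n-1) + T(n-2) + O(1)."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

# Easy upper bound: each call spawns at most two more, so O(2^n) calls.
# Lower bound: T(n) >= 2*T(n-2), so T(n) >= 2^(n/2) ≈ 1.41^n; a slightly
# finer argument gives the Ω(1.5^n) quoted above.
# Tight bound: the call count grows like Θ(φ^n), φ ≈ 1.618.
print(fib(25))  # 75025
```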

Bernhard Barker