I'm not sure where you found those definitions, but I'd consider them to be incorrect.
The best, average and worst cases are all functions, generally of the input size.
Big-O, Theta and Omega indicate, respectively, the upper, tight and lower bounds of any given function.
That is to say, the best case has big-O, Theta and Omega bounds, and the same goes for the average and worst cases.
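For reference, the standard definitions of those three bounds look like this (c and n₀ are constants you get to choose; none of this is specific to any one algorithm):

```
f(n) is O(g(n))  ⟺  f(n) ≤ c·g(n) for all n ≥ n₀, for some c > 0 and n₀   (upper bound)
f(n) is Ω(g(n))  ⟺  f(n) ≥ c·g(n) for all n ≥ n₀, for some c > 0 and n₀   (lower bound)
f(n) is Θ(g(n))  ⟺  f(n) is both O(g(n)) and Ω(g(n))                      (tight bound)
```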
See also: How do O and Ω relate to worst and best case?
Note: big-O is commonly (arguably incorrectly) used to indicate a tight bound (instead of Theta).
Let's consider the example of insertion sort.
The best case is when the input is already sorted, in which case it takes linear time, that is, f(n) = k₁n time for some constant k₁.
k₁n is O(n), Θ(n) and Ω(n). According to the definitions, we can also say it's O(n²), O(n³), ... or Ω(1), Ω(log n), Ω(log log n), ..., but it's commonly expected to present the tightest bound.
The worst and average cases are g(n) = k₂n² and h(n) = k₃n², which are both O(n²), Θ(n²) and Ω(n²).
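To make those cases concrete, here's a minimal insertion sort sketch in Python; the comparison counter is my own illustration, not part of the standard algorithm:

```python
def insertion_sort(a):
    """Sorts the list in place; returns the number of comparisons made."""
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements right until key's position is found.
        while j >= 0:
            comparisons += 1
            if a[j] <= key:
                break
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return comparisons

print(insertion_sort(list(range(10))))         # already sorted: best case
print(insertion_sort(list(range(10, 0, -1))))  # reversed: worst case
```

On the sorted input the counter comes out at n-1 (linear); on the reversed input it's n(n-1)/2 (quadratic), matching the f, g and h above.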
Now you might be saying: that isn't very useful, why do we need three bounds if they're always the same? Why don't we just use Theta everywhere?
In general, you'd be absolutely right - algorithms are often described in terms of just one of the bounds (typically the tight bound).
However, for some algorithms it's difficult to figure out exactly what the tight bound is, while it's easy to get an upper and/or lower bound. The unoptimised recursive algorithm to calculate Fibonacci numbers is one such example: it's not too hard to find an upper bound of O(2ⁿ) and a lower bound of Ω(1.5ⁿ), but the tight bound of Θ(φⁿ) (where φ ≈ 1.618 is the golden ratio) is a lot more difficult to calculate.
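Here's a sketch of that unoptimised Fibonacci algorithm in Python, with a call counter bolted on purely to make the exponential growth visible:

```python
def fib(n, counter):
    """Naive recursive Fibonacci; the number of calls grows as Θ(φⁿ)."""
    counter[0] += 1
    if n < 2:
        return n
    return fib(n - 1, counter) + fib(n - 2, counter)

for n in (10, 20, 25):
    calls = [0]
    fib(n, calls)
    # For large n the call count sits between 1.5ⁿ (the easy lower
    # bound) and 2ⁿ (the easy upper bound).
    print(n, calls[0])
```

The O(2ⁿ) bound falls out of each call spawning at most two more; the Ω(1.5ⁿ) bound takes a short induction on the recurrence.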