-1

Can someone briefly explain why an algorithm would be O(f(n)) and not Θ(f(n))? I get that to be Θ(f(n)) it must be O(f(n)) and Ω(f(n)), but how do you know whether a particular algorithm is Θ(f(n)) or just O(f(n))? I think it's hard for me not to see big O as a worst-case running time. I know it's just a bound, but how is the bound determined? For example, I can see a search in a binary search tree running in constant time if the element is in the root, but I think this has nothing to do with big O.

Edgar Smith
  • 163
  • 1
  • 10
  • 2
    Possible duplicate of [Big O, how do you calculate/approximate it?](https://stackoverflow.com/questions/3255/big-o-how-do-you-calculate-approximate-it) – Mitch Mar 21 '19 at 23:10

2 Answers

2

I think it is very important to distinguish bounds from cases.

Big O, Big Ω, and Big Θ all concern themselves with bounds. They are ways of defining behavior of an algorithm, and they give information on the growth of the number of operations as n (number of inputs) grows increasingly large.

In class, the Big O of the worst case is often discussed, and I think that can be confusing because it conflates the idea of asymptotic behavior with a singular worst case. Big O is concerned with behavior as n approaches infinity, not with a single case. O(f(x)) is an upper bound: it is a guarantee that, regardless of input, the algorithm will have a running time no worse than some positive constant multiplied by f(x).

As you mentioned, Θ(f(x)) exists only if the algorithm is both O(f(x)) and Ω(f(x)). In the question title you asked how to determine whether an algorithm is Big O or Big Θ. The answer is that the algorithm can be both Θ(f(x)) and O(f(x)). Where Big Θ exists, the lower bound is some positive constant A multiplied by f(x) and the upper bound is some positive constant C multiplied by f(x). This means that when Θ(f(x)) exists, the algorithm can perform no worse than C·f(x) and no better than A·f(x), regardless of input: you are given a guarantee of how the algorithm will behave no matter what input you feed it.

When Θ(f(x)) exists, so does O(f(x)). In these cases it is valid to state either that the running time of the algorithm is O(f(x)) or that it is Θ(f(x)); both statements are true. Giving the Big Θ notation is just more informative, since it provides information on both the upper and lower bound, whereas Big O only provides information on the upper bound.

When Big O and Big Ω have different functions for their bounds (i.e., when the algorithm is O(g(x)) and Ω(h(x)) where g(x) ≠ h(x)), Big Θ does not exist. In these cases, if you want to provide a guarantee of the upper bound on the algorithm, you must use Big O notation.

Above all, it is imperative that you differentiate between bounds and cases. Bounds give guarantees on the behavior of the algorithm as n becomes very large. Cases are more on an individual basis.
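As a concrete illustration of the bounds-versus-cases distinction (my own example, not from the answer above), here is a plain linear search with a comparison counter. Over all inputs of size n, the algorithm is O(n) and Ω(1), and those two bounds use different functions, so there is no single Θ bound for the algorithm as a whole:

```python
def linear_search(items, target):
    """Return (index, comparisons) for target, or (-1, comparisons) if absent."""
    steps = 0
    for i, x in enumerate(items):
        steps += 1
        if x == target:
            return i, steps
    return -1, steps

data = list(range(1, 101))  # [1, 2, ..., 100]

# Best case: target at the front -> 1 comparison (matches the Ω(1) lower bound).
_, best = linear_search(data, 1)

# Worst case: target at the back -> n comparisons (matches the O(n) upper bound).
_, worst = linear_search(data, 100)

print(best, worst)  # 1 vs. 100 comparisons on the same n = 100 input size
```

Each individual case (best, worst, average) still has its own tight Θ bound; it is only the algorithm over all inputs that lacks one here.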

-1

Let's look at an example. Imagine an algorithm, BestSort, that sorts a list of numbers by first checking whether it is already sorted and, if it is not, sorting it with MergeSort. BestSort has a best-case complexity of Ω(n), since it may discover a sorted list, and a worst-case complexity of O(n log(n)), which it inherits from MergeSort. Thus this algorithm has no Θ complexity. Compare this to pure MergeSort, which is always Θ(n log(n)), even if the list is already sorted.

I hope this helps you a bit.

EDIT

Since there has been some confusion, I will provide some pseudocode:

BestSort(A) {
  if (A is sorted)        // this check has a complexity of O(n)
     return A;
  else
     return MergeSort(A); // MergeSort has a complexity of Θ(n log(n))
}
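For readers who want to run it, the pseudocode above can be sketched in Python like this (the helper names `is_sorted` and `merge_sort` are my own, not part of the original answer):

```python
def is_sorted(a):
    # One linear pass over the list: O(n) comparisons.
    return all(a[i] <= a[i + 1] for i in range(len(a) - 1))

def merge_sort(a):
    # Standard top-down mergesort: Θ(n log n) regardless of input order.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

def best_sort(a):
    # Already sorted: only the linear check ran, so this call cost Θ(n).
    if is_sorted(a):
        return a
    # Otherwise pay the full Θ(n log n) for mergesort (plus the O(n) check).
    return merge_sort(a)
```

On a sorted input only the O(n) check runs; on an unsorted input the mergesort cost dominates, which is exactly why the two bounds differ.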
Mr.Yellow
  • 692
  • 5
  • 15
  • The best case for merge sort is still `n*log(n)`. Bubble sort may be a better example to use here – Mitch Mar 21 '19 at 23:18
  • `sort it by using Merge Sort. This algorithm has a best case complexity of Ω(n)` – Mitch Mar 21 '19 at 23:21
  • Yes the algorithm which checks a given input for being sorted. This check takes n operations and is the best case thus giving the algorithm a best case complexity of O(n). If it is already sorted the algorithm will NOT use Mergesort. – Mr.Yellow Mar 21 '19 at 23:22
  • [Omega is a lower bound, not a best-case, and big-O is an upper bound, not a worst-case](https://cs.stackexchange.com/questions/23068/how-do-o-and-%CE%A9-relate-to-worst-and-best-case). – Bernhard Barker Mar 21 '19 at 23:44
  • Why would the best case complexity not be Θ(n)? So for this same algorithm, what would the average and worst case time complexities be? – Edgar Smith Mar 21 '19 at 23:52