
I am a little confused about a specific case involving Big O notation and the asymptotic behavior of algorithms. While reading the blog http://discrete.gr/complexity/, which describes these notations very well, I came across this statement to be judged true or false:

An O(n) algorithm is Θ(1)

The answer says that this may or may not be true depending on the algorithm; in the general case it's false. If an algorithm is Θ(1), then it certainly is O(n). But if it's O(n), then it may not be Θ(1). For example, a Θ(n) algorithm is O(n) but not Θ(1).

I am finding it a little hard to comprehend this answer. I understand that Big O means a program can asymptotically do no worse than the stated bound. So I read the statement above as saying that O(n) is worse than Θ(1), and therefore that the statement is true.

Can someone explain with an example?

Venkatesh
    1 ∈ O(1), 1 ∈ O(n), 1 ∈ Θ(1), n ∈ O(n), n ∈ Θ(n), n ∉ Θ(1). Very informally (disregarding constant factors), Theta is equal, Big-O is less-than-or-equal - something that's <= n is not necessarily 1. – Bernhard Barker Jan 04 '18 at 20:44
  • @Dukeling: Good comment. Note that the answer for the linked duplicate is wrong. Even if it has more than 500 upvotes :-/ – Eric Duminil Jan 05 '18 at 15:07

3 Answers

  • If you know that a task requires exactly one week (= Θ(1)), you can surely say that you can do it in at most a year (= O(n)).

  • If you know that a task requires at most a year (= O(n)), you cannot be sure that you'll do it in a week (= Θ(1)).

  • If you know that a task requires exactly one year (= Θ(n)), you could also say that it requires at most a year (= O(n)), and you're sure it cannot be done in a week (≠ Θ(1)).

Eric Duminil

Consider the optimized bubble sort, which has an asymptotically tight upper bound of O(n²) and an asymptotically tight lower bound of Ω(n) (when the array is already sorted). The arrangement of items determines how many operations the algorithm will have to perform.
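Here is a minimal sketch of such an optimized bubble sort in Python (my own illustration, not code from the answer); the `swapped` flag is what produces the Ω(n) best case:

```python
def bubble_sort(items):
    """Bubble sort with early exit: Ω(n) best case, O(n²) worst case."""
    n = len(items)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:
            # Already sorted: a single pass of ~n comparisons suffices (Ω(n)).
            break
    # A reverse-sorted input forces ~n²/2 comparisons (O(n²)).
```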

Contrast that to summing a list of integers. In this case, you always have to look at each item in the list exactly once. The upper bound is O(n), and the lower bound is Ω(n). There is no arrangement of items that will change the complexity of the algorithm. In this case, when the tight upper and lower bounds are the same, we say that the algorithmic complexity is Θ(n).
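As a sketch (again my own illustration), the sum has no early exit, so the best and worst cases coincide:

```python
def sum_list(items):
    total = 0
    for x in items:
        # Every element is visited exactly once, regardless of its value
        # or position, so the cost is both Ω(n) and O(n), i.e. Θ(n).
        total += x
    return total
```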

If an algorithm is Θ(n), then its complexity will never exceed O(n) and will never be better than Ω(n). So it can't possibly be O(1) or Θ(1).

But if an algorithm is described as O(n), it could be Ω(1). For example, finding the first non-zero value in a list of integers. If the first item in the list is non-zero, then you don't have to look at any other numbers. But if the list is all zeroes, you end up looking at them all. So the upper bound is O(n) and the lower bound is Ω(1).
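A minimal sketch of that search (the helper name is mine, just for illustration):

```python
def first_nonzero(items):
    """Return the index of the first non-zero value, or -1 if there is none."""
    for i, x in enumerate(items):
        if x != 0:
            return i  # Best case: non-zero first item, one comparison (Ω(1)).
    return -1         # Worst case: all zeroes, every item examined (O(n)).
```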

Jim Mischel
  • Thanks for providing these examples. So from what I understand, if an algorithm is described in Theta notation, then the upper bound (Big-O) and the lower bound (Omega) are going to be the same, which means the input to the algorithm will not affect its running time; in contrast, if an algorithm is described in Big-O notation, then its running time may vary depending on the input. Is my understanding correct? – Venkatesh Jan 05 '18 at 07:35
  • If theta notation is used then *by definition*, the upper and lower bound are the same. Not "most likely." If an algorithm is described as O(n), with no omega, then all you know is the upper bound, and the actual complexity might vary with input. – Jim Mischel Jan 05 '18 at 14:48

The example basically tries to cover two ideas:

  1. If an algorithm is Θ(f(n)), it means it is both Ω(f(n)) and O(f(n)): it is asymptotically neither better nor worse than f(n); it has the same asymptotic behavior.
  2. Functions that are O(1) can be seen as a subset of the functions that are O(n). This can be generalized, but there's no need to get too formal here. It means that if an algorithm can never do worse than a constant, then it can certainly never do worse than a linear function.

So the idea is to break Θ up into the O and Ω notations, and then identify which set is a subset of which.
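For reference, the standard textbook definitions make this subset view precise (a LaTeX rendering of the usual formulation; c and n₀ are constants):

```latex
f(n) \in O(g(n))      \iff \exists\, c > 0,\ n_0 : f(n) \le c \cdot g(n) \text{ for all } n \ge n_0 \\
f(n) \in \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : f(n) \ge c \cdot g(n) \text{ for all } n \ge n_0 \\
f(n) \in \Theta(g(n)) \iff f(n) \in O(g(n)) \text{ and } f(n) \in \Omega(g(n))
```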

Also, it's nice to notice that ANY algorithm (with at least some nonzero running time) is Ω(1): an algorithm can never do better than a constant.

Example: "Mammals are humans."

The answer: no, not in general. All humans are mammals, though; humans are a subset of mammals.

I tried to think of another example, but everything was either too technical or not clear enough. So I'll leave this not-so-well-drawn but rather clear graph here, which I found by googling "o omega theta" and looking at the image results. There are a few other good images too.

[graph of common complexity functions plotted against n]
(source: daveperrett.com)

Basically, for each function f in this graph: anything plotted above it is Ω(f(n)), because it never does better than f (it never drops below f as n increases); anything plotted below it is O(f(n)), because it never does worse than f (it never rises above f as n increases).

The graph doesn't illustrate the asymptotic insignificance of constant factors very well; there are other graphs that show that better. I just put it here because it shows many functions at once.

Glorfindel