
I was doing some reading on logarithms and the rate of growth of the running time of algorithms.

I have, however, a problem understanding the Big-Ω (Big-Omega) notation.

I know that we use it for 'asymptotic lower bounds', and that we can express the idea that an algorithm takes at least a certain amount of time.

Consider this example:

var a = [1,2,3,4,5,6,7,8,9,10];

Somebody chooses a number. I write a program that tries to guess this number using a linear search (1, 2, 3, 4... until it guesses the number).
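The guessing procedure described above can be sketched as a simple linear search (the function name `linearSearch` and the guess-count return value are my own illustration, not part of the question):

```javascript
// Linear search: try each element in order until the target is found.
// Returns the number of guesses taken, or -1 if the target is absent.
function linearSearch(arr, target) {
  for (var i = 0; i < arr.length; i++) {
    if (arr[i] === target) {
      return i + 1; // number of guesses made so far
    }
  }
  return -1;
}

var a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
console.log(linearSearch(a, 1));  // best case: 1 guess
console.log(linearSearch(a, 10)); // worst case: 10 guesses
```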

I can say that the running time of the algorithm would be a function of the size of its input, so these are true (n is the number of elements in the array):

  1. Θ(n) (Big-Theta notation - asymptotically tight bound)
  2. O(n) (Big-O notation - upper bound)

When it comes to Big-Ω, in my understanding, the algorithm's running time would be Ω(1) since it is the least amount of guesses needed to find the chosen number (if, for example, a player chose 1 (the first item in the array)).

I reckon this because this is the definition I found on Khan Academy:

Sometimes, we want to say that an algorithm takes at least a certain amount of time, without providing an upper bound. We use big-Ω notation; that's the Greek letter "omega."

Am I right to say that this algorithm's running time is Ω(1)? Is it also true that it is Ω(n), and if yes, why?

psmears
luqo33
  • possible duplicate of [Big O, how do you calculate/approximate it?](http://stackoverflow.com/questions/3255/big-o-how-do-you-calculate-approximate-it) – Ken White Jun 22 '15 at 15:09
  • You can say that it is in Omega(1), Omega(n^1/2), Omega(logn), Omega(n). You can't say that it is in Omega(n^2), Omega(n^n). – ROMANIA_engineer Jun 22 '15 at 15:16
  • You can specify the upper and lower bound for any case, be it best case (Ω(1) and O(1), thus Θ(1)), worst case (Ω(n) and O(n), thus Θ(n)), or average case (Ω(n) and O(n), thus Θ(n)). But in general, you are only interested in the upper bound of worst case (and sometimes average case). – Gumbo Jun 22 '15 at 16:28

1 Answer


Big-O, Big-Theta, and Big-Omega notation all refer to how a solution scales asymptotically as the size of the problem tends to infinity; however, they should really be prefaced with what you are measuring.

Usually when one talks about O(n) one means that the worst-case complexity is O(n); however, the notation is sometimes used for typical running times, particularly for heuristics or for algorithms which have an element of randomness or are not strictly guaranteed to converge at all.

So here, we are presumably talking about the worst-case complexity, which is Theta(n). Since it is Theta(n), it is also O(n) and Omega(n).

One way to prove a lower bound when it is unknown is to say that X is the easiest case for this algorithm. Here the best case is O(1), so we can say that the algorithm takes at least Omega(1) and at most O(n), with Theta unknown. That is correct usage, but the aim is to find the highest bound for Omega which is still true, and the lowest bound for O which is still true. Here Omega(n) is obvious, so it is better to say Omega(n) than Omega(1), just as it is better to say O(n) rather than O(n^2).

phil_20686
  • Thanks for your answer phil. "its better to say Omega(n) than Omega(1), just as its better to say O(n) rather than O(n^2)" - isn't O(n^2) incorrect in this case? The running time of this algorithm would not grow exponentially. – luqo33 Jun 22 '15 at 15:34
  • No, O() refers to an upper bound, therefore if an algorithm is O(n) it is also O(n^2) (polynomial) and O(2^n) (exponential) etc. Any function larger than n also bounds the algorithm correctly. O(f(n)) means that the algorithm has a running time which scales no faster than f(n). So if g(n) > f(n), O(f(n)) implies O(g(n)). – phil_20686 Jun 22 '15 at 15:38
  • Yes, now it's clear for the upper bound. However, coming back to the lower bound: if we say that Ω(n) is the highest possible bound for Omega, is this really valuable information for us? Wouldn't it be better to describe this algorithm as Θ(n) (Theta) and Ω(1)? I seem to get more precise information this way - I know that the running time depends on the size of the input, but the fewest guesses that can win is 1. – luqo33 Jun 22 '15 at 16:12
  • To say Theta(n) implies O(n) and Omega(n). If you say that the algorithm is O(n) and Omega(1), that means you only know that the worst-case performance is between 1 and n. That is clearly less useful than knowing that it is always n, which is what O(n) and Omega(n) together would mean. What you are trying to express is that the worst case is Theta(n) and the best case is Theta(1). – phil_20686 Jun 22 '15 at 17:32
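To make the best-case/worst-case distinction in the comments above concrete, here is a small sketch (my own illustration, not from the answer) that counts how many comparisons a linear search performs on the two extreme inputs:

```javascript
// Count how many comparisons a linear search makes before it stops.
function countComparisons(arr, target) {
  var comparisons = 0;
  for (var i = 0; i < arr.length; i++) {
    comparisons++;
    if (arr[i] === target) {
      break;
    }
  }
  return comparisons;
}

var a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
// Best case: target is the first element -> 1 comparison, i.e. Theta(1).
console.log(countComparisons(a, 1));
// Worst case: target is the last element -> n comparisons, i.e. Theta(n).
console.log(countComparisons(a, 10));
```

Each case has its own tight bound; the Ω(1)/O(n) pair only brackets the range between them.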