
I am taking a Data Structures course this semester and I cannot understand the definition of big O notation.

The definition says f(n) = O(g(n)) if there exist positive constants C and n0 such that
f(n) <= C*g(n) for all n > n0. I understand why n > n0: it means that from the point n0 onwards, f(n) is always smaller than C*g(n). But I cannot understand why f(n) is compared to C*g(n) but not just g(n).
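For concreteness, here is one made-up instance of the definition as I read it (f, g, C and n0 below are just illustrative choices, not from the course), with a small Java check of the inequality:

    public class BigODefinitionCheck {
        // Illustrative example only: f(n) = 3n^2 + 5n, g(n) = n^2.
        static long f(long n) { return 3 * n * n + 5 * n; }
        static long g(long n) { return n * n; }

        public static void main(String[] args) {
            long C = 4, n0 = 5;
            // Check f(n) <= C * g(n) for all n > n0, sampled up to a bound.
            for (long n = n0 + 1; n <= 1_000_000; n++) {
                if (f(n) > C * g(n)) {
                    System.out.println("Counterexample at n = " + n);
                    return;
                }
            }
            System.out.println("f(n) <= " + C + "*g(n) held for every tested n > " + n0);
        }
    }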

Can anyone explain please?

zodiac
Thank you for your help. But why do we have to influence the trend with C? Why not simply compare f(n) and g(n)? – zodiac Jan 18 '16 at 13:12

3 Answers


I cannot understand why f(n) is compared to C*g(n) but not just g(n)

Because it represents the order of execution time. The constant C is more or less meaningless when comparing algorithms in this respect.

Consider two sorting algorithms. Bubble sort is O(n^2), whereas Quick sort is O(n log(n)). Both of these take time that is roughly proportional to their order, but there is some constant you multiply by to get a reasonable approximation of the running time (and that constant may be different for each of the two algorithms).

However, for any values of C for the two algorithms, there will be some point (some value of n) beyond which Quick sort is always faster. This is what "big O notation" is all about. The constant C doesn't matter when you look at the bigger picture, and that's why it's disregarded.
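As a rough sketch of that crossover (the constants 5 and 100 below are invented for illustration, not measured from real implementations), here is a small Java program comparing the hypothetical cost models 5*n^2 for Bubble sort and 100*n*log2(n) for Quick sort; despite the much larger constant, the n log n model becomes cheaper past some value of n:

    public class CrossoverDemo {
        public static void main(String[] args) {
            // Hypothetical cost models; the constants are made up for illustration.
            // bubble(n) =   5 * n^2           (small constant, worse order)
            // quick(n)  = 100 * n * log2(n)   (large constant, better order)
            for (int n = 2; n <= (1 << 20); n *= 2) {
                double bubble = 5.0 * n * n;
                double quick = 100.0 * n * (Math.log(n) / Math.log(2));
                System.out.printf("n = %7d   bubble = %14.0f   quick = %14.0f   %s%n",
                        n, bubble, quick, quick < bubble ? "quick is cheaper" : "bubble is cheaper");
            }
        }
    }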

davmac

Because it allows for a much more succinct and useful comparison.

Using C makes the definition such that it says

f grows at roughly (asymptotically) the same speed as g (or slower)

Without the C, we'd lose the "roughly (asymptotically)" part. Without the C, we couldn't just say f = O(n^2), we'd have to say something like f = O(1.7 n^2) (unless the factor happened to be 1, of course). That's not particularly useful.

Big O is generally used to talk about algorithm classes, and about ratios. f = O(n^2) says: "When you scale n twice, the computation scales 4 times." This would still be true for O(4 n^2) or O(1.7 n^2) or even O(0.01 n^2).

The entire point of the Big O notation is to express asymptotic complexity: what the trend is when n gets large. We don't care whether it takes 4 times as much or half as much time at the same n; we care how it scales when n scales, and such scaling is invariant to a multiplicative constant.
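Here is a tiny Java sketch of that scaling claim (the constants 1, 4, 1.7 and 0.01 are just the ones mentioned above): for any f(n) = C*n^2, the ratio f(2n)/f(n) is 4 no matter what C is:

    public class ScalingDemo {
        public static void main(String[] args) {
            double[] constants = {1.0, 4.0, 1.7, 0.01};  // the constants mentioned above
            long n = 1000;
            for (double c : constants) {
                double ratio = (c * (2 * n) * (2 * n)) / (c * n * n);
                // The ratio is 4 regardless of the multiplicative constant c.
                System.out.printf("C = %5.2f  ->  f(2n)/f(n) = %.1f%n", c, ratio);
            }
        }
    }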

Not to mention the fact that fixing the exact constant would be really difficult in specific cases. It's generally easy to show that e.g. an algorithm performs roughly n operations for each bit of input, and so has n^2 complexity. But it would be quite painful to analyse whether it performs 3 n operations for n / 2 input elements and 2 n operations for the other n / 2 elements, or something else.
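A hedged sketch of that last point (the 3n / 2n split is just the hypothetical one from the paragraph above): the loop below does 3n units of work for the first n/2 elements and 2n units for the rest, so the total is 2.5*n^2, which is still O(n^2) even though pinning down the 2.5 takes extra bookkeeping:

    public class ConstantCountingDemo {
        public static void main(String[] args) {
            int n = 1000;
            long operations = 0;
            for (int i = 0; i < n; i++) {
                // First half of the elements: about 3n operations each;
                // second half: about 2n operations each.
                int perElement = (i < n / 2) ? 3 * n : 2 * n;
                for (int j = 0; j < perElement; j++) {
                    operations++;  // stand-in for one unit of work
                }
            }
            System.out.println("n = " + n + ", total operations = " + operations
                    + " (2.5 * n^2 = " + (long) (2.5 * n * n) + ")");
        }
    }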

Angew is no longer proud of SO
Big O = worst,
Big Omega = normal,
Big Theta = good.

They just represent the scenario; for example, Big O means the worst-case scenario. Consider a loop:

    for (int i = 0; i < 10000; i++) {
        if (someCondition(i))  // placeholder for "break on some case"
            break;
    }


The worst case (Big O) will be 10000 iterations if the break condition is never met.