

I was studying the Asymptotic Notations topic. I reckon the formula is quite simple, yet it tells me little, and there are a couple of things I don't understand.
When we say

f(n) <= c·g(n) where n >= n₀

We don't know the values of c and n₀ at first, but by dividing by f(n) or g(n) we get the value of c (this is where my confusion lies).
First question: how do we decide which side of the equation gets divided by, f(n) or g(n)?

Suppose we have to prove:

2n² = O(n³)

Here f(n) = 2n² and g(n) = n³, which gives:

2n² <= c·n³

Now, in the notes I have read, they divide by 2n² to get the value of c, and by doing that we get c = 1.
But if we instead divide by n³ [which I don't know whether we can do or not] we get c = 2.
How do we know which value we have to divide by?
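For what it's worth, here is a quick numeric check I tried while experimenting (the helper name and the sampling limit are just my own choices): it tests whether 2n² <= c·n³ holds for every sampled n >= n₀, for both candidate values of c.

```python
# Check whether f(n) <= c * g(n) for all sampled n in [n0, n_max).
def holds(f, g, c, n0, n_max=10_000):
    return all(f(n) <= c * g(n) for n in range(n0, n_max))

f = lambda n: 2 * n**2   # f(n) = 2n^2
g = lambda n: n**3       # g(n) = n^3

print(holds(f, g, c=1, n0=2))  # True  (candidate from dividing by 2n^2)
print(holds(f, g, c=2, n0=1))  # True  (candidate from dividing by n^3)
print(holds(f, g, c=1, n0=1))  # False (at n=1: 2*1 > 1*1)
```

So both pairs seem to work, just with different starting points n₀.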

Second problem: where does n₀ come from, and what is its task?
From the formula we know n >= n₀, which means whatever n we take should be greater than or equal to n₀.
But I am confused: where do we use n₀? Why is it needed?
By just finding c and n, can't we conclude whether n² = O(n³) or not? Would anyone like to address this? Many thanks in advance.
Please don't snub me if I ask anything stupid or downvote me. Any useful link which covers all this would be enough as well :3
I have gone through the following links before posting this question; this is what I understand so far. Here are the links: http://openclassroom.stanford.edu/MainFolder/VideoPage.php?course=IntroToAlgorithms&video=CS161L2P8&speed=
http://faculty.cse.tamu.edu/djimenez/ut/utsa/cs3343/lecture3.html
https://sites.google.com/sites/algorithmss15

Habib Rehman
  • Where did you see the `n(not)` notation? Are you perhaps hearing “n [naught](https://en.wiktionary.org/wiki/naught#Noun)” (a way to read *n₀* out loud)? – Lynn Aug 04 '16 at 08:01
  • @Lynn just changed it, i meant n₀ – Habib Rehman Aug 04 '16 at 08:06
  • It's not so easy to understand where you're stuck. Perhaps this question helps you? http://stackoverflow.com/questions/25657108/constants-in-the-formal-definition-of-big-o?rq=1 – Paul Hankin Aug 04 '16 at 08:34
  • Thanks for providing the link @PaulHankin. I understand what he said in the question; mainly I got it by sketching the graph of the upper bound c·g(n), so there is no way, even making c very large, that it drops below f(n), as the formula says. The thing is that constants can vary, and after varying they may still affect the upper bound. And the n₀ thing is above my head. – Habib Rehman Aug 04 '16 at 08:45
  • The *n* >= n₀ means that whatever you say in the lemma can be false for some first *k* (countably many) parameters *n'* (*n'* < n₀), but from some n₀ on the lemma is true for all the rest of the (bigger) integers. It basically says that you don't care about the first few integers ("first few" can be as large as 1e400, or 1e400000, ...etc... from the theory point of view), and you only care about the bigger (big enough) n values. – Ped7g Aug 04 '16 at 17:10
  • Why do you think that both sides always need to get divided by n? Just because they did that in one example doesn't mean that's always what you're going to do. What self-study have you done? Have you tried reading a different textbook to see if that helps explain it to you more clearly? There are *lots* of resources out there that explain this material. It looks to me like spending a little more time reading another explanation of the material from a different perspective might help you understand what's going on. – D.W. Aug 04 '16 at 17:22
  • Why n₀ may be needed: let's say f(n) = n, g(n) = -9 + n^2. Now let's prove that f(n) < g(n) ... for n=100: 100 < 9991, *yay*, works. But for n=2: 2 < -5, problem. So to deal with it, changing the lemma wording to f(n) < g(n) when n >= n₀ will fix that, for example for n₀ = 4 (let's prove that one by "obvious" for the sake of example :) )... If you continue with such lemmas in a similar way, proving some main problem with them, you will end with a proof which holds for n >= max(n₀ of each lemma used), QED. ;) – Ped7g Aug 04 '16 at 17:22
  • Thanks for your valuable comments @D.W. and Ped7g, they really helped me understand asymptotic notation. I may need some practice with questions now. Thanks again – Habib Rehman Aug 04 '16 at 18:37

1 Answer


From the second url in your question:

Let's define big-Oh more formally:

O(g(n)) = { the set of all f such that there exist positive constants c and n0 satisfying 0 <= f(n) <= cg(n) for all n >= n0 }.

This means that for f(n) = 4*n*n + 135*n*log(n) + 1e8*n the big-O is O(n*n), because for large enough c and n0 this is true:
4*n*n + 135*n*log(n) + 1e8*n = f(n) <= c*n*n

In this particular case the pair [c,n0] can be for example [6, 1e8], because (this is of course not a valid mathematical proof, but I hope it's "obvious" from it):
f(1e8) = 4*1e16 + 135*8*1e8 + 1e16 = 5*1e16 + 1080*1e8 <= 6*1e16 = 6*1e8*1e8 = c*n*n. There are of course many more possible [c,n0] pairs for which f(n) <= c*n*n holds true, but you need to find only one such pair to prove that f(n) is O(n*n).

As you can see, for n=1 you would need quite a huge c (like 1e9), so at first glance f(n) may look much bigger than n*n, but in asymptotic notation you don't care about the first few initial values, as long as the behaviour from some boundary on is as desired. That boundary is some pair [c, n0]. If you can find such a pair (here [6, 1e8]), then QED: "f(n) is O(n*n)".
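This can also be sanity-checked numerically. The sketch below is my own addition: it samples a few n >= n0 (it can't test all of them, and it assumes log is base 10, as the f(1e8) arithmetic above does):

```python
import math

# f(n) from the answer; log taken as base 10 (the answer uses log(1e8) = 8)
def f(n):
    return 4 * n * n + 135 * n * math.log10(n) + 1e8 * n

c, n0 = 6, 10**8  # the candidate pair [c, n0] from the answer

# We can't test every n >= n0, so sample a wide range as evidence
# that f(n) <= c * n * n holds from n0 on.
samples = [n0, 2 * n0, 10 * n0, 10**10, 10**12, 10**15]
print(all(f(n) <= c * n * n for n in samples))  # True

# Below n0 the bound with c = 6 can fail, e.g. at n = 1:
print(f(1) <= c * 1 * 1)  # False (f(1) is about 1e8)
```

The failure at small n is exactly why the pair needs the n0 component, not just c.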


The n >= n₀ means that whatever you say in the lemma can be false for some first k (countably many) parameters n' : n' < n₀, but from some n₀ on the lemma is true for all the rest of the (bigger) integers.

It says that you don't care about first few integers ("first few" can be as "little" as 1e400, or 1e400000, ...etc... from the theory point of view), and you only care about the bigger (big enough, bigger than n₀) n values.
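A minimal sketch of the same idea, reusing the f(n) = n, g(n) = n² - 9 example from the comments above (the sampling limit is my own choice):

```python
# f(n) = n, g(n) = n^2 - 9 (the example from the comments)
f = lambda n: n
g = lambda n: n * n - 9

# f(n) < g(n) fails for the first few n ...
print([n for n in range(1, 10) if not f(n) < g(n)])  # [1, 2, 3]

# ... but holds for every n >= n0 = 4 (sampled here up to 10_000)
print(all(f(n) < g(n) for n in range(4, 10_000)))    # True
```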


Ultimately it means that in big-O notation you usually write the simplest, slowest-growing function having the same asymptotic behaviour as the examined f(n).

For example, for any f(n) of polynomial type, f(n) = ∑ a_i·n^i for i = 0..k, we have O(f(n)) = O(n^k).
So I threw away all the lower powers n^0..n^(k-1), as they stand no chance against n^k in the long run (for large n). And the a_k loses to some bigger c.

In case you are lost in that i, k, ...:
f(n) = 34·n^4 + 23920392·n^2 has O(n^4).

As for large enough n that n^4 will "eclipse" any value created from n^2. And 34·n^4 is only 34 times bigger than n^4 => 34 is a constant (it relates to c) and can be omitted from the big-O notation too.
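This "eclipsing" can be checked numerically too. The sketch below is my own, with c = 35 chosen as one more than the leading coefficient; it finds the smallest n0 from which f(n) <= c·n^4 holds:

```python
# f(n) = 34*n^4 + 23920392*n^2; claim: f(n) <= c*n^4 once n is big enough.
f = lambda n: 34 * n**4 + 23920392 * n**2

c = 35  # one more than the leading coefficient 34 is enough here

# Find the smallest n0 from which f(n) <= c * n^4 holds; algebraically:
# 34n^4 + 23920392n^2 <= 35n^4  <=>  n^2 >= 23920392
n0 = next(n for n in range(1, 10**6) if f(n) <= c * n**4)
print(n0)  # 4891  (4891**2 = 23921881 >= 23920392)

# The condition is monotone in n, so it keeps holding; sample to confirm:
print(all(f(n) <= c * n**4 for n in range(n0, n0 + 10_000)))  # True
```

Note how absorbing the lower-order term costs only a slightly larger c plus a suitable n0, which is the whole point of the [c, n0] pair.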

Ped7g