
I am studying big O notation from this book.

The definition of big O notation is:

We say that f(x) is O(g(x)) if there are constants C and k such that |f(x)| ≤ C|g(x)| whenever x > k.

Now here is the first example:

EXAMPLE 1 Show that f(x) = x^2 + 2x + 1 is O(x^2).
Solution: We observe that we can readily estimate the size of f(x) when x > 1 because x < x^2 and 1 < x^2 when x > 1. It follows that
0 ≤ x^2 + 2x + 1 ≤ x^2 + 2x^2 + x^2 = 4x^2
whenever x > 1. Consequently, we can take C = 4 and k = 1 as witnesses to show that f(x) is O(x^2). That is, f(x) = x^2 + 2x + 1 < 4x^2 whenever x > 1. (Note that it is not necessary to use absolute values here because all functions in these equalities are positive when x is positive.)
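
Just to convince myself that C = 4 and k = 1 really work, here is a quick numeric spot check I put together (my own sketch in Python, not from the book; it only samples a few points, so it is not a proof):

```python
# Spot check (not a proof): x^2 + 2x + 1 <= 4*x^2 should hold for every sampled x > 1.
def f(x):
    return x**2 + 2*x + 1

C, k = 4, 1
for x in [1.001, 2, 5, 10, 100, 10**6]:
    assert x > k and f(x) <= C * x**2, f"bound fails at x={x}"
print("f(x) <= 4*x^2 held for all sampled x > 1")
```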

I honestly don't know how they got C = 4; it looks like they jump straight to the equation manipulation, and my algebra skills are pretty weak. However, I found another way through [the accepted answer to this question](http://stackoverflow.com/questions/1369366/what-is-an-easy-way-for-finding-c-and-n-when-proving-the-big-oh-of-an-algorithm), which says to add all the coefficients to find C when k = 1. So for x^2 + 2x + 1, that gives 1 + 2 + 1 = 4.

Now for k = 2, I'm completely lost:

Alternatively, we can estimate the size of f(x) when x > 2. When x > 2, we have 2x ≤ x^2 and 1 ≤ x^2. Consequently, if x > 2, we have
0 ≤ x^2 + 2x + 1 ≤ x^2 + x^2 + x^2 = 3x^2.
It follows that C = 3 and k = 2 are also witnesses to the relation f(x) is O(x^2).
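
The same kind of spot check (again my own sketch, not from the book) for C = 3 and k = 2:

```python
# Spot check (not a proof): x^2 + 2x + 1 <= 3*x^2 should hold for every sampled x > 2.
def f(x):
    return x**2 + 2*x + 1

C, k = 3, 2
for x in [2.001, 3, 5, 10, 100, 10**6]:
    assert x > k and f(x) <= C * x**2, f"bound fails at x={x}"
print("f(x) <= 3*x^2 held for all sampled x > 2")
```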

Can anyone explain what is happening? What method are they using?

lightning_missile
  • Possible duplicate of [What is an easy way for finding C and N when proving the Big-Oh of an Algorithm?](http://stackoverflow.com/questions/1369366/what-is-an-easy-way-for-finding-c-and-n-when-proving-the-big-oh-of-an-algorithm) – axiac Dec 15 '15 at 16:32
  • @axiac I referenced that link to my question. – lightning_missile Dec 15 '15 at 16:37
  • FYI, there are separate sites for [theoretical computer science](http://cstheory.stackexchange.com/) and [computer science](http://cs.stackexchange.com/), the latter of which is still in beta/early phases. – ashes999 Dec 15 '15 at 17:24
  • @morbidCode If the answer below answers your question, please mark your question as answered. Otherwise, please give feedback or more details so we can address any remaining questions for you. – dfrib Dec 18 '15 at 20:06

1 Answer


First alternative:

C=4?

The C=4 comes from the inequalities

0 ≤ x^2 + 2x + 1 ≤ x^2 + 2x^2 + x^2 = 4x^2 = C*x^2, with C=4   (+)

The second inequality in (+) is true for all x greater than 1, since, term by term

2x < 2x^2, given x>1
1 < x^2, given x>1

k = 1? From above, we've shown that (+) holds as long as x is larger than 1, i.e.

(+) is true given x > k, with k=1

Second alternative:

k=2?

By the statement, we want to study f(x) for x larger than 2, i.e.

Study f(x) for x > k, k=2

Given x > 2, it's apparent that

0 ≤ x^2 + 2x + 1 ≤ x^2 + x^2 + x^2 = 3x^2 = C*x^2, with C=3 (++)

since, for x>2, we have

2x = x^2 given x=2 ==> 2x < x^2 given x>2
for x=2, 1 < x^2 = 4, so 1 < x^2 for all x>2

Both examples show that f(x) is O(x^2). Using your constants C and k, recall that the big-O statement for f(x) can then be summarized along the lines of:

... we can say that f(x) is O(g(x)) if we can find a constant C such that |f(x)| is less than C|g(x)| for all x larger than k, i.e., for all x > k. (*)

This by no means implies that we need to find a unique set (C, k) to prove that some f(x) is O(g(x)); we just need some set (C, k) such that (*) above holds.
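
To make that concrete, here is a small numeric sketch of my own (a spot check, not a proof; the algebraic reasoning above is the actual proof) showing that several different (C, k) pairs all serve as witnesses:

```python
# Spot-check several (C, k) witness pairs for f(x) = x^2 + 2x + 1 against g(x) = x^2.
def holds(C, k, samples):
    f = lambda x: x**2 + 2*x + 1
    g = lambda x: x**2
    return all(abs(f(x)) <= C * abs(g(x)) for x in samples if x > k)

samples = [1.5, 2.5, 3, 10, 100, 10**6]
for C, k in [(4, 1), (3, 2), (100, 1)]:
    print(f"C={C}, k={k}:", "holds on all samples" if holds(C, k, samples) else "fails on some sample")
```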

See e.g. the following link for some reference on how to specify the asymptotic behaviour of a function: https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/big-o-notation

dfrib
  • So how did they compute x^2 + 2x^2 + x^2 from x^2 + 2x + 1? And how did they get x^2 + x^2 + x^2 from x^2 + 2x + 1? – lightning_missile Dec 15 '15 at 17:05
  • For your first question, please see the equations following the text "... since, term by term" in my answer above. For your second question, please see the equations following the text "since, for x>2, we have" in my answer above. For computing inequality relations, you can usually look at the initial expression term by term (as has been done above); if two terms, say A and B, in some expression are always smaller than two other terms, say C and D, then exchanging them will yield an inequality of the form "... + A + B < ... + C + D", where the reasoning has been performed term by term. – dfrib Dec 15 '15 at 17:09
  • I think I sort of understand it. Does this mean there is no sure method to find C and k? – lightning_missile Dec 15 '15 at 18:08
  • A better way to describe it is that there is no unique choice of C and k; the set of different "solutions", in this context, is degenerate, in the sense that there exists more than one (C, k) that "solves" the problem (of showing that f(x) is in O(g(x))). – dfrib Dec 15 '15 at 18:29