
I know this isn't strictly a programming question, but it is a computer science question so I'm hoping someone can help me.

I've been working on my Algorithms homework, figuring out the Big-Oh, Big-Omega, Big-Theta, etc., of several algorithms. I'm proving them by finding their c and n0 values, and all is going well.

However, I've come across the last two problems in the set, and I'm struggling to figure out how to do them (and Google isn't helping much).

I haven't had to figure out the Big-Oh/Omega of summations before.

My last two problems are:

  • Show that Σ (i=1 to n) of i^2 is O(n^3)

and

  • Show that Σ (i=1 to n) of ⌊log_2 i⌋ is Ω(n log n)

My question is: how do I show that?

For example, in the first one, intuitively I can't see how that summation of i^2 is O(n^3). The second one confuses me even more. Can someone explain how to show the Big-Oh and Big-Omega of these summations?
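Not a proof, of course, but here is a quick numeric sanity check (a throwaway Python snippet of mine) that makes both claims plausible before hunting for constants:

    import math

    # Ratio sum(i^2)/n^3 should stay bounded (consistent with O(n^3));
    # ratio sum(floor(log2 i))/(n*log2 n) should stay away from 0
    # (consistent with Omega(n log n)).
    for n in (10, 100, 1000, 10000):
        s1 = sum(i * i for i in range(1, n + 1))
        s2 = sum(math.floor(math.log2(i)) for i in range(1, n + 1))
        print(n, s1 / n**3, s2 / (n * math.log2(n)))

The first ratio settles near 1/3, and the second stays bounded away from zero (it is already above 0.8 by n = 10000).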

Bill the Lizard

5 Answers


http://en.wikipedia.org/wiki/Big_O_notation

N repetitions of a quantity g(m) = O(f(m)) add up to O(N*f(m)).

The sum over i=1..N of g(i) is O(N*f(N)) if g(n) = O(f(n)) and f is monotonic (non-decreasing).

Definition: g(n) = O(f(n)) if some c, m exist such that for all n > m, g(n) <= c*f(n).

The sum is over i=1..N of g(i).

If f is monotonic (non-decreasing), every term g(i) <= c*f(i) <= c*f(N), so the sum is at most N*c*f(N). Hence the sum is O(N*f(N)) (witnessed by the same c, m that make g(n) = O(f(n))).

Of course, log_2(x) and x^2 are both monotonic.
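Here is a minimal sketch checking that bound for both of the summands in the question (illustrative Python of my own, taking c = 1 and f equal to the summand itself):

    import math

    # With f monotonic non-decreasing, each term f(i) <= f(n),
    # so sum(i=1..n) f(i) <= n*f(n).
    def check(f, n):
        total = sum(f(i) for i in range(1, n + 1))
        assert total <= n * f(n)
        return total, n * f(n)

    for n in (10, 1000):
        print(check(lambda x: x * x, n))        # sum i^2 vs n*n^2 = n^3
        print(check(lambda x: math.log2(x), n)) # sum log2 i vs n*log2 n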

Jonathan Graehl
  • What I wrote does apply to the question, if you take the fact that g(x) = x^2 is indeed O(x^2) and g(x) = log_2(x) is indeed O(log x). – Jonathan Graehl Sep 04 '09 at 05:17

My guess is that what the question statement means is that you're summing the results of some calculation whose running time is proportional to i^2 in the first case, and proportional to log_2 i in the second case. In both cases, the running time of the overall summation is "dominated" by the larger values of i within the summation, and thus the overall big-O evaluation of both will be N*O(f(N)), where f is the function you're summing.
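A tiny demonstration of that domination (an illustrative snippet, not part of the original answer):

    # For f(i) = i^2, the top half of the terms (i = n/2..n) already
    # contributes the bulk of the total, so the large-i terms dominate.
    n = 1000
    total = sum(i * i for i in range(1, n + 1))
    top_half = sum(i * i for i in range(n // 2, n + 1))
    print(top_half / total)  # roughly 7/8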

Amber
  • O(f) is just a notation for functions; you don't have to bring in running times. :-) – ShreevatsaR Sep 05 '09 at 03:07
  • O(f) is a notation for *how the running times of functions scale* - I'm not sure what you mean about "bringing in running times", as soon as you're talking about O-notation you're already talking about running times in some sense. – Amber Sep 05 '09 at 08:00
  • No, O(f) is a notation for the growth of a function. E.g., the function f(n) = n^2+3*n+5 is O(n^2). This has nothing to do with running times: saying that "f is O(n^2)" is just a statement about the asymptotic behaviour of f(n) as n->∞; there is no mention of computation or algorithms or their running time. – ShreevatsaR Sep 08 '09 at 08:31
  • Okay, I stand corrected in that regard. In fact, I'm not really sure why I wrote what I did in that comment - I blame lazy Saturdays. ;) – Amber Sep 08 '09 at 08:43

The simplest approach that jumps out to me is a proof by induction.

For the first one, essentially you need to show that

sum (i=1 to n) i^2 < k*n^3, for some k >= 2 and all n > 0

Using the generalized principle of induction, take the base case n=1 with k=2: we get 1 < 2*1.

Now assume the inductive hypothesis, so we know that

sum(i=1 to n) i^2 < k*n^3; with a bit of fun math we get to

sum(i=1 to n) i^2 + (n+1)^2 < k*n^3 + (n+1)^2.

  • Now show k*n^3 + (n+1)^2 < k*(n+1)^3

  • k*n^3 + n^2 + 2n + 1 < k*n^3 + k*(3n^2 + 3n + 1)

  • k*n^3 < k*n^3 + (3k-1)*n^2 + (3k-2)*n + (k-1)

Since 3k-1, 3k-2, and k-1 are all positive for k >= 2, the last inequality holds. Therefore, our original result is correct.
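A quick mechanical check of the inductive-step inequality (an illustrative snippet; the coefficients match the expansion above):

    # Inductive step: k*n^3 + (n+1)^2 < k*(n+1)^3 for k = 2.
    # The difference expands to (3k-1)*n^2 + (3k-2)*n + (k-1),
    # which is positive for k >= 2 and n >= 1.
    k = 2
    for n in range(1, 1000):
        assert k * n**3 + (n + 1)**2 < k * (n + 1)**3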

For the second proof you need to show that sum(i=1 to n) log_2(i) >= k*n*log(n) for some k > 0 and all sufficiently large n, which I'll leave as an exercise for the reader ;).

The main step, though, is sum(i=1 to n) log_2(i) + log_2(n+1) >= k*n*log(n) + k*log(n+1) for some constant k. Note that k = 1 is too big (already at n = 2 the left side is 1 while 1*2*log_2(2) = 2), but k = 1/2 works, since log_2(n!) >= (n/2)*log_2(n).
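Again, not a proof, but a numeric check of that bound with the corrected constant k = 1/2 (an illustrative snippet):

    import math

    # Check sum(i=1..n) log2(i) >= k*n*log2(n) for k = 1/2.
    k = 0.5
    for n in range(1, 2000):
        lhs = sum(math.log2(i) for i in range(1, n + 1))
        assert lhs >= k * n * math.log2(n)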

JSchlather
  • Σ (i=1 to n) i^2 = n(n+1)(2n+1)/6, which is O(n^3).

  • Note that (n!)^2 = (1*n) * (2*(n-1)) * (3*(n-2)) * ... * ((n-1)*2) * (n*1)
    = Π (i=1 to n) i*(n+1-i)
    >= Π (i=1 to n) n
        [E.g., because for each i=1 to n, i*(n+1-i) - n = (i-1)*(n-i) >= 0. Compare with Graham/Knuth/Patashnik, section 4.4]
    = n^n.
    Thus, n! >= n^(n/2), and therefore
    Σ (i=1 to n) log i = log Π (i=1 to n) i = log(n!) >= log(n^(n/2)) = (n/2) log n, which is Ω(n log n).
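Both facts are easy to spot-check numerically (an illustrative snippet, not part of the original answer):

    import math

    for n in (1, 5, 50, 500):
        # Closed form for the sum of squares.
        assert sum(i * i for i in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
        # n! >= n^(n/2), checked in log form: log(n!) >= (n/2)*log(n).
        assert math.lgamma(n + 1) >= (n / 2) * math.log(n)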

Federico A. Ramponi

Probably, your CPU will multiply 32-bit integers in constant time. But big-Oh doesn't care about "less than four billion", so maybe you have to look at multiplication algorithms?

According to Wolfram, the "traditional" multiplication algorithm is O(n^2). Note that n in this case is the number of digits, which is really log(n) in terms of the actual number. So I should be able to square the integers 1..n in time O(n log n). Summing is O(log(n^2)), which is obviously O(log(n)), for an overall complexity of O(n log n).

So I can quite understand your confusion.

John Fouhy
  • You are thinking of actual multiplication, which is too literal. I think what the question in the textbook is implying is that i^2 is actually the complexity of some operation. If that is the case, then O(n^3) actually makes sense. – mmcdole Sep 04 '09 at 05:20
  • The answer is fine but it's answering the wrong question. Hint: it's not a question about running time. – Jonathan Graehl Sep 04 '09 at 05:21
  • The question says nothing (nor asks) about running time. It's just a statement about functions: in the former case, the sum(i=1 to n)(i^2) is the function n(n+1)/2, which is O(n^2). Similarly in the second the sum is lg(n!), which is O(n lg n). – ShreevatsaR Sep 05 '09 at 03:09
  • The sum(i=1 to n)(i) is the function n(n+1)/2; the sum(i=1 to n)(i^2) has a more complicated closed form, but Wolfram Alpha makes everything simpler ;). I mean, I could solve the generating function, but I digress; the sum(i=1 to n)(i^2) is the function 1/6*(2n^3 + 3n^2 + n). – JSchlather Sep 05 '09 at 15:40