
The function:

for (int i = 0; i < n; i++) {
    for (int j = i; j > 0; j /= 2) {
        std::cout << j << std::endl;
    }
}

I've just been introduced to this stuff and this problem is tripping me up. Since the inner for loop is connected to i, it appears it would run log(n!) times. That is, since log(a) + log(b) = log(a*b). And the outer loop runs n times. I'm still messing up my answer though and am not sure how exactly to connect everything / how else I could go about this. Any help?

  • Note that [O(log n!) is just O(n log n)](https://stackoverflow.com/questions/8118221/what-is-ologn-and-on-and-stirling-approximation), which should make your answer make more sense, since the outer loop runs `n` times and the inner loop runs `log` times (not log of n, but still). – Bernhard Barker Aug 28 '18 at 21:13

2 Answers


To calculate how many times the inner loop runs, start with this: the loop stops when j = 0, j starts at i, and j halves itself every iteration, so at iteration k you have i/2^k. When i/2^k = 1 the loop has reached its last pass, so take the log base 2 and you get k = log2(i), plus one because it takes one more iteration to go from 1 to 0. So the inner loop takes ⌊log2(i)⌋ + 1 iterations. Now you can express the total number of times the two loops run as: Sum for i = 1 to i = N of (Sum for j = 1 to j = ⌊log2(i)⌋ + 1 of 1).

However, if you are only interested in T(n), simply multiply the number of iterations of the outer loop by the maximum number of iterations of the inner loop. The maximum for the inner loop is log2(n), so you get: T(n) = O(n log2(n)).

Where ⌊log2(i)⌋ denotes the floor of log2(i), i.e. rounded down to the nearest integer.

Andrea Bellizzi

Sure, your approach seems fine.

There is an alternative way of looking at the question.

for (int i = 0; i < n; i++) { // loop 1
    for (int j = i; j > 0; j /= 2){ // loop 2
        std::cout << j << std::endl;
    }
}

Analysis of Loop 1

Nothing to see here: straightforward O(n).

Analysis of Loop 2

A little more interesting: given a number j (= i), it is a race to see how many times you can divide by 2 before you get to 0. That count is of course nothing but the log base 2 of the number.

Combining both analyses: for each of the n values of i you perform about log(i, 2) work -> log(number, base),

i.e. O(n * log(n, 2)).

By something called Stirling's approximation you can prove that O(n log n) is O(log(n!)).

Now, something that needs to be pointed out: you have asked for T(n), which is slightly different from O(n).

T(n) is notation you use to mathematically express the cost of the current step of the algorithm relative to its next processing steps.

For something like merge sort:

T(n) = 2 * T(n/2) (divide) + O(n) (conquer).

In our case I would write T(n) as

T(n) = T(n + 1)(next step) + O(logn)(race to zero)   , 0 <= n < N
     = 0                                             , n >= N

(I'm aware T(n) is typically denoted in terms of previous terms and not future terms, but I am just outlining a point; it isn't a rigorous mathematical definition.)

Srini