
The code:

int temp = 0;
for (int h = 1; h <= n; h = h * 2)
{
   for (int i = 1; i <= n; i++)
   {
      for (int j = 1; j <= n; j++)
      {
         for (int k = 1; k < i; k++)
         {
            temp++;
         }
      }
   }
}

According to my calculations, big-O is lg n * n^4. Kindly guide.

Edit: Thank you all for the replies. I understood where I was making the mistake. I really appreciate everyone who participated.

John Hascall
Faraz Mazhar

4 Answers

for (int h = 1; h <= n; h = h * 2) // h doubles each pass, hence executed about log2(n) times
{
   for (int i = 1; i <= n; i++) // increment is one. Executed n times
   {
      for (int j = 1; j <= n; j++) // increment is one. Executed n times
      {
         // executed i-1 times for each i. See below
         for (int k = 1; k < i; k++)
         {
            temp++;
         }
      }
   }
}

The k loop is executed i-1 times. The maximum value of i is n, so you can treat it as being executed n times on every pass (an over-approximation).

complexity = log(n) * n * n * n = log(n) * n^3

Another way to look at it: let's exchange the i loop and the j loop.

for (int h = 1; h <= n; h = h * 2) // h doubles each pass, hence executed about log2(n) times
{
   for (int j = 1; j <= n; j++) // increment is one. Executed n times
   {
      // see below
      for (int i = 1; i <= n; i++)
      {
         for (int k = 1; k < i; k++)
         {
            temp++;
         }
      }
   }
}

Now look at it: for each j iteration, the total number of executions of the (i, k) pair is about n(n+1)/2. So now what is the complexity?

h * j * (i,k) = log(n) * n * (n(n+1)/2) = O(log(n) * n^3)

knightrider
  • But last loop isn't running till max value of i everytime. It's running for 1 + 2 + 3 + 4 + ... + n. Which can be replaced by summation i.e. n(n + 1)/2 – Faraz Mazhar Mar 15 '16 at 14:17
  • @FarazMazhar the easiest way to think about it is as I explained. Look at the maximum value of k: it is n. It can never go beyond n. That's why it is correct. I will explain what is wrong with using n(n+1)/2 in the answer – knightrider Mar 15 '16 at 14:18
  • Yes, but the first `n` in `n(n + 1)/2` is the `n` times that the `j` loop runs. – John Hascall Mar 15 '16 at 14:19
  • So, we are ignoring (n + 1)/2 part? – Faraz Mazhar Mar 15 '16 at 14:21
  • @JohnHascall the first `n` is the `n` times `i` runs, not `j`. I was trying to think how to explain it in a simple manner. You did an awesome job :) – knightrider Mar 15 '16 at 14:21
  • @FarazMazhar O((n+1)/2) == O(n) – knightrider Mar 15 '16 at 14:22
  • @FarazMazhar one more way to think about it is... n(n+1)/2 is the *total* number of executions of k loop for all i's. If you are familiar with loop interchange I can give you another way to look at it. But if it is already clear, please accept the answer – knightrider Mar 15 '16 at 14:25
  • The way you explained it, i.e. "O((n+1)/2) == O(n)": the summation from 1 to n is n(n+1)/2, and n(n+1)/2 == (n^2 + n)/2. Hence, O(n(n+1)/2) == O(n^2). – Faraz Mazhar Mar 15 '16 at 14:26
  • @JohnHascall Thanks for your answer, sir. I am just trying to understand so I won't get confused in the future. – Faraz Mazhar Mar 15 '16 at 14:27
  • Lets assume that the outermost loop is `log(n)` and the 3rd loop is `n`. Now lets consider just the 2nd and 4th loops. It should be clear that when `i` is 1, the inner loop runs 0 times; when it is 2, it runs once. Adding all these up is 0+1+2+...+n-2, which as you point out is (about) `n(n-1)/2` -- BUT this value counts BOTH loops. So the complete total is: log(n) * n * `n*n` (we can ignore the -1 and the /2) – John Hascall Mar 15 '16 at 14:34

Just for kicks, I ran this code with various values of n:

               n                  temp               O?
     -----------        --------------        ---------
               1                     0
              10                  1800        1.8 x n^3
             100               3465000        3.5 x n^3
            1000            4995000000        5.0 x n^3
           10000         6999300000000        7.0 x n^3  (and took a LONG time!)

conclusion: log(n) * n^3

John Hascall

Knightrider is basically right, but if you want correct numbers, you can't just say "I take the biggest value of i".

Yes, it is correct in terms of f(n) = O(...), but you could also write f(n) = O(n^15) and that would be correct too.

The last loop is executed n times, then n-1 times, then n-2 times, etc., which gives n + (n-1) + (n-2) + ... + 2 + 1 = n(n+1)/2. Expanding, n(n+1)/2 = n^2/2 + n/2. Constants can be ignored in asymptotic analysis, so n^2/2 + n/2 = Theta(n^2 + n), which simplifies to Theta(n^2).

So in the end the result is not changed, BUT now you can be sure of it.

The final result is n^3 * log_2(n), as described by knightrider.

libik
  • Thank you for your reply, sir. – Faraz Mazhar Mar 15 '16 at 14:33
  • Just a note regarding the notation of the membership/inclusion of some arbitrary function `f` in some arbitrary set of functions `O(...)`: it isn't fully correct to state that `f(n) = O(...)`, since `O(...)` is the _set_ of all functions that satisfy the Big-O conditions for `...` (generally, there exist infinitely many such functions for each such condition). Hence, we don't say that `f(n)` equals `O(...)`, but rather that `f` _is in_ `O(...)`, i.e., `f ∈ O(...)`. Naturally, any function `f` that's a member of, say, `O(n^2)`, is a member also of `O(n^3)`, much as you point out in your answer above. – dfrib Mar 15 '16 at 17:24

Analysing the algorithm's complexity using Sigma notation

For the sake of completeness: when analyzing the time complexity of nested and co-dependent loops such as in your algorithm, Sigma notation can be a good tool:

    ∑_{h=1}^{⌈log2(n)⌉} ∑_{i=1}^{n} ∑_{j=1}^{n} ∑_{k=1}^{i-1} 1
      = ⌈log2(n)⌉ · n · ∑_{i=1}^{n} (i-1)
      = ⌈log2(n)⌉ · n · (n^2 - n)/2
      ≤ ⌈log2(n)⌉ · n^3

where ⌈x⌉ is the ceiling function.

From the above, it's apparent that an upper bound on the asymptotic behaviour of the algorithm is O(n^3 log_2(n)).


Using Sigma notation to estimate the actual number of iterations

Sigma notation analysis is, in addition to being a rigorous tool for Big-O(-Ω, -Θ) analysis, also useful if we're interested in counting or estimating the actual number of iterations of our algorithm.

We compare the estimated number of iterations (using the formula prior to the ≤ symbol above) with the actual number of iterations as presented in @JohnHascall's answer.

// our formula (pseudo-code / matlab-compliant)
numIt(n) = n*ceil(log2(n))*(n^2-n)/2;

// comparison with actual count:
--------------------------------------------------------- 
           n     actual # of iter.   estimated # of iter. 
               (from @JohnHascall)   (from formula above)
------------   -------------------   --------------------
           1                     0                      0
          10                 1 800                  1 800
         100             3 465 000              3 465 000
        1000         4 995 000 000          4 995 000 000
       10000     6 999 300 000 000      6 999 300 000 000
--------------------------------------------------------- 

We see that the count from the formula matches the actual count exactly; in this case, the estimate is in fact an exact count.

dfrib