Let function f() be:

void f(int n)
{
    for (int i=1; i<=n; i++)
        for (int j=1; j<=n*n/i; j+=i)
            printf("*");
}

According to my calculations, the run time in big-O terms should be O(n^2 log n).
The answer is O(n^2). Why is that?
I owe you an apology. I misread your code the first time around, so the initial answer I gave was incorrect. Here's a corrected answer, along with a comparison with the original answer that explains where my analysis went wrong. I hope you find this interesting - I think there's some really cool math that arises from this!
The code you've posted is shown here:
for (int i=1; i<=n; i++)
    for (int j=1; j<=n*n/i; j+=i)
        printf("*");
To determine the runtime of this code, let's look at how much work the inner loop does across all iterations. When i = 1, the loop counts up to n^2 by ones, so it does n^2 work. When i = 2, the loop counts up to n^2 / 2 by twos, so it does n^2 / 4 work. When i = 3, the loop counts up to n^2 / 3 by threes, so it does n^2 / 9 work. More generally, the kth iteration does n^2 / k^2 work, since it counts up to n^2 / k with steps of size k.
If we sum up the work done here for i ranging from 1 to n, inclusive, we see that the runtime is
n^2 + n^2/4 + n^2/9 + n^2/16 + ... + n^2/n^2
= n^2 (1 + 1/4 + 1/9 + 1/16 + 1/25 + ... + 1/n^2).
The summation here (1 + 1/4 + 1/9 + 1/16 + ...) has the (surprising!) property that, in the limit, it converges to exactly π^2 / 6. In other words, the runtime of your code asymptotically approaches π^2 n^2 / 6, so the runtime is O(n^2). You can see this by writing a program that compares the number of actual steps against π^2 n^2 / 6 and looking at the results.
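For instance, here's a minimal sketch of such a check (the helper name count_steps and the particular values of n are my own choices for illustration, not anything from your code):

#include <stdio.h>

/* Counts how many times the inner loop body executes for a given n. */
long long count_steps(int n)
{
    long long steps = 0;
    for (int i = 1; i <= n; i++)
        for (long long j = 1; j <= (long long)n * n / i; j += i)
            steps++;
    return steps;
}

int main(void)
{
    const double PI = 3.141592653589793;
    for (int n = 100; n <= 3200; n *= 2) {
        long long actual = count_steps(n);
        double predicted = PI * PI * (double)n * n / 6.0;
        /* The ratio should get closer and closer to 1 as n grows. */
        printf("n = %4d  actual = %9lld  pi^2*n^2/6 = %11.0f  ratio = %.4f\n",
               n, actual, predicted, actual / predicted);
    }
    return 0;
}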
I got this wrong the first time around because I misread your code as though it were written as
for (int i=1; i<=n; i++)
    for (int j=1; j<=n*n/i; j+=1)
        printf("*");
In other words, I thought that the inner loop took steps of size one on each iteration rather than steps of size i. In that case, the work done by the kth iteration of the loop is n^2 / k, rather than n^2 / k^2, which gives a runtime of
n^2 + n^2/2 + n^2/3 + n^2/4 + ... + n^2/n
= n^2 (1 + 1/2 + 1/3 + 1/4 + ... + 1/n)
Here, we can use the fact that 1 + 1/2 + 1/3 + ... + 1/n is a well-known summation. The nth harmonic number is defined as H_n = 1 + 1/2 + 1/3 + ... + 1/n, and it's known that the harmonic numbers obey H_n = Θ(log n), so this version of the code runs in time O(n^2 log n). It's interesting how this change so dramatically changes the runtime of the code!
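The same kind of empirical check works for this version, too. Here's a sketch under the same assumptions (the name count_steps_v2 and the n values are mine), comparing the step count against n^2 * H_n:

#include <stdio.h>

/* Step count for the misread version (j += 1 instead of j += i). */
long long count_steps_v2(int n)
{
    long long steps = 0;
    for (int i = 1; i <= n; i++)
        for (long long j = 1; j <= (long long)n * n / i; j += 1)
            steps++;
    return steps;
}

int main(void)
{
    for (int n = 100; n <= 800; n *= 2) {
        double harmonic = 0.0;           /* H_n = 1 + 1/2 + ... + 1/n */
        for (int i = 1; i <= n; i++)
            harmonic += 1.0 / i;
        long long actual = count_steps_v2(n);
        double predicted = (double)n * n * harmonic;
        printf("n = %3d  actual = %12lld  n^2*H_n = %14.0f  ratio = %.4f\n",
               n, actual, predicted, actual / predicted);
    }
    return 0;
}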
As an interesting generalization, let's suppose that you change the inner loop so that the step size is i^ε for some ε > 0 (and assuming you round up). In that case, the number of iterations on the kth time through the inner loop will be n^2 / k^(1+ε), since the upper bound on the loop is n^2 / k and you're taking steps of size k^ε. Via a similar analysis to what we've seen before, the runtime will be
n^2 + n^2/2^(1+ε) + n^2/3^(1+ε) + n^2/4^(1+ε) + ... + n^2/n^(1+ε)
= n^2 (1 + 1/2^(1+ε) + 1/3^(1+ε) + 1/4^(1+ε) + ... + 1/n^(1+ε))
If you've taken a calculus course, you might recognize that the series

1 + 1/2^(1+ε) + 1/3^(1+ε) + 1/4^(1+ε) + ...

converges to some fixed limit for any ε > 0, meaning that if the step size is any positive power of i, the overall runtime will be O(n^2). This means that all of the following pieces of code have runtime O(n^2):
for (int i=1; i<=n; i++)
    for (int j=1; j<=n*n/i; j+=i)
        printf("*");

for (int i=1; i<=n; i++)
    for (int j=1; j<=n*n/i; j+=i*i)
        printf("*");

for (int i=1; i<=n; i++)
    for (int j=1; j<=n*n/i; j+=i*(sqrt(i) + 1))
        printf("*");
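If you'd like to check one of these empirically, here's a quick sketch (the helper name and n values are my own choices). For step size i*i, the ratio steps/n^2 should level off near the constant 1 + 1/2^3 + 1/3^3 + ... ≈ 1.2021 (Apery's constant):

#include <stdio.h>

/* Step count when the inner loop advances by i*i. */
long long count_steps_sq(int n)
{
    long long steps = 0;
    for (int i = 1; i <= n; i++)
        for (long long j = 1; j <= (long long)n * n / i; j += (long long)i * i)
            steps++;
    return steps;
}

int main(void)
{
    /* steps/n^2 should settle near ~1.2021 rather than grow with n. */
    for (int n = 100; n <= 3200; n *= 2)
        printf("n = %4d  steps/n^2 = %.4f\n",
               n, count_steps_sq(n) / ((double)n * n));
    return 0;
}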
The outer loop runs n times, and for each i the inner loop runs (n/i)^2 times (not n^2/i times), because we have j+=i (not j++). So the total time is as follows:

∑{i=1 to n}(n/i)^2 = n^2 ∑{i=1 to n}(1/i)^2 < 2*n^2

(the sum of 1/i^2 converges to π^2/6 < 2). So the time complexity is O(n^2).
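For completeness, one way to see the final inequality without knowing the exact limit π^2/6: for i >= 2 we have 1/i^2 < 1/(i(i-1)) = 1/(i-1) - 1/i, so the sum telescopes:

∑{i=1 to n}(1/i)^2 < 1 + ∑{i=2 to n}(1/(i-1) - 1/i) = 1 + 1 - 1/n < 2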
From what I've learned in theory, the step size of i is what makes the difference here: because the inner loop advances by i rather than by 1, its iteration count shrinks quadratically in i, and the log n factor never appears. Therefore, the complexity is only O(n^2) instead of the expected O(n^2 log n).
Recall that when we use big-O notation, we drop constants and low-order terms. This is because when the problem size gets sufficiently large, those terms don't matter. However, this means that two algorithms can have the same big-O time complexity, even though one is always faster than the other. For example, suppose algorithm 1 requires N^2 time, and algorithm 2 requires 10*N^2 + N time. For both algorithms, the time is O(N^2), but algorithm 1 will always be faster than algorithm 2. In this case, the constants and low-order terms do matter in terms of which algorithm is actually faster.
However, it is important to note that constants do not matter in terms of the question of how an algorithm "scales" (i.e., how does the algorithm's time change when the problem size doubles). Although an algorithm that requires N^2 time will always be faster than an algorithm that requires 10*N^2 time, for both algorithms, if the problem size doubles, the actual time will quadruple (since (2N)^2 = 4*N^2).
When two algorithms have different big-O time complexity, the constants and low-order terms only matter when the problem size is small. For example, even if there are large constants involved, a linear-time algorithm will always eventually be faster than a quadratic-time algorithm. This is illustrated in the following table, which shows the value of 100*N (a time that is linear in N) and the value of N^2/100 (a time that is quadratic in N) for some values of N. For values of N less than 10^4, the quadratic time is smaller than the linear time. However, for all values of N greater than 10^4, the linear time is smaller.

N        100*N     N^2/100
10^2     10^4      10^2
10^3     10^5      10^4
10^4     10^6      10^6
10^5     10^7      10^8
10^6     10^8      10^10
Take a look at this article for more details.