
I am trying to figure out the time complexity of the algorithm below.

import math

def f(n):
    ans = 0
    for i in range(1, n + 1):                     # outer loop: n iterations
        for j in range(1, int(math.log(i)) + 1):  # inner loop: about log(i) iterations
            ans += 1
    print(ans)

The link to the problem is provided below: https://discuss.codechef.com/t/multiple-choice-questions-related-to-testing-knowledge-about-time-and-space-complexity-of-a-program/17976 (Problem no.4)

2 Answers


The answer is O(nlogn).

The outer loop has a complexity of O(n), while the inner one has a complexity of O(logn).
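A quick empirical check supports this (a sketch, assuming the natural logarithm and an integer truncation of the inner bound): count the inner-loop iterations directly and divide by n*log(n). The ratio stays bounded by a constant as n grows, which is what O(nlogn) growth looks like in practice.

```python
import math

def count_ops(n):
    # Total inner-loop iterations: sum of floor(log(i)) for i = 1..n
    ans = 0
    for i in range(1, n + 1):
        ans += int(math.log(i))
    return ans

for n in (10**3, 10**4, 10**5):
    print(n, count_ops(n) / (n * math.log(n)))
```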

Ran Turner
  • But the inner loop is linked to the outer one with the condition j <= log10(i). Now if we count the number of times the loop is executed, say for n = 10, the ans will be 1, which points to log(n). – Sumant Kalra Dec 12 '21 at 07:52
  • That's not correct, my friend. The outer loop runs from 1 to 10, so it's O(n) where n is 10. The inner loop runs to the log of the current index of the outer loop, and it does this n times, which means it runs to log(n). Together it sums to O(n*logn). – Ran Turner Dec 12 '21 at 07:54

Adding to the other answer, you can frame the total run-time of the algorithm, T(n), as the number of times the inner loop runs for each i value from 1 to n. When i = 1 the inner loop runs log(1) times, when i = 2 the inner loop runs log(2) times, and so on. The total run-time is then represented by the sum:

T(n) = log(1) + log(2) + log(3) + ... + log(n)

A sum of logs is a log of a product:

T(n) =  log(1*2*3*...*n)
     <= log(n^n)         (n! <= n^n)
     =  nlog(n)          (log(a^b) = b*log(a))
T(n) <= nlogn

So the run-time is no worse than nlog(n): it's O(nlogn).
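The upper-bound step log(n!) <= n*log(n) can be sanity-checked numerically without computing huge factorials, using the standard-library fact that math.lgamma(n + 1) equals log(n!) in natural log:

```python
import math

# Check log(n!) <= n*log(n) for a few values of n.
# math.lgamma(n + 1) returns log(n!) without overflow.
for n in (10, 100, 10**4):
    assert math.lgamma(n + 1) <= n * math.log(n)
    print(n, math.lgamma(n + 1), n * math.log(n))
```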

You could do a little more work to show that the O(nlogn) bound is tight, but this normally isn't necessary. (If you show that T(n) is upper bounded by nlogn, and that it's lower bounded by nlogn, you have shown a tight bound. That is, you've shown the exact complexity class, typically represented by "Big Theta" = Θ notation).

T(n) =  log(1*2*3*...*n)
     >= log((n/2)*(n/2 + 1)*(n/2 + 2)*...*n)   (throwing away the first half of the product)
     >= log((n/2)^(n/2))                       (reducing all remaining terms to n/2)
     =  (n/2)log(n/2)
     =  (n/2)(logn - 1)                        (assuming base 2)
T(n) >= (n/2)(logn - 1) = Θ(nlogn)

So, as the algorithm's run-time is bounded both above and below by Θ(nlogn) functions, its complexity class is exactly Θ(nlogn).

(n/2)(logn - 1) <= T(n) <= nlogn   implies   T(n) = Θ(nlogn)
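Both bounds can be verified numerically (a sketch using the natural logarithm throughout, so the lower bound is written as (n/2)*(log(n) - log(2)), i.e. (n/2)*log(n/2); the "- 1" above comes from assuming base 2):

```python
import math

def T(n):
    # T(n) = log(1) + log(2) + ... + log(n)
    return sum(math.log(i) for i in range(1, n + 1))

for n in (10, 100, 1000):
    lower = (n / 2) * (math.log(n) - math.log(2))  # (n/2) * log(n/2)
    upper = n * math.log(n)
    assert lower <= T(n) <= upper
    print(n, lower, T(n), upper)
```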
inordirection
    Thanks a lot for such a concise explanation. I was ignoring all the "constant" terms log(1), log(2), log(3), ... from the equation T(n) = log(1) + log(2) + log(3) + ... + log(n), leaving only log(n). That was a mistake, as I now realize that the count of those terms depends on the input n, so they can't be ignored. – Sumant Kalra Dec 14 '21 at 06:35
  • @SumantKalra Yep, and besides the number of terms depending on n, the terms' values also aren't totally constant with respect to n. For instance, the sequence in reverse is log(n) + log(n-1) + log(n-2) + ... + log(1), which gives off less of an impression that the terms are constant (the ith term is log(n-i)). This algorithm is nlogn for basically the same reasons that a classic double for-loop, with its 1 + 2 + 3 + ... + n structure, is n^2 rather than just n. – inordirection Dec 14 '21 at 07:00
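The analogy in that last comment can be sketched directly: the same dependent-inner-loop pattern with bound i instead of log(i) executes 1 + 2 + ... + n = n(n+1)/2 times, which is Θ(n^2).

```python
def double_loop_count(n):
    # Classic dependent double loop: inner bound is i, not log(i)
    count = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            count += 1
    return count

# Matches the closed form 1 + 2 + ... + n = n*(n+1)/2, i.e. Θ(n^2)
assert double_loop_count(100) == 100 * 101 // 2
```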