
I have this algorithm:

function f(arr, i, n) {
  if (i == n - 1) {
    return arr[i];
  }
  if (i == 0) {
    return (arr[i] + f(arr, i + 1, n)) / n;
  } else {
    return arr[i] + f(arr, i + 1, n);
  }
}

It is a recursive algorithm, but it only makes one recursive call per invocation.

I thought its complexity might be O(1^n), but if that's the case,

1^1 = 1, 1^2 = 1, 1^3 = 1

and so on.

So wouldn't that just be O(1), a constant Big O notation?

ggorlen
2 Answers


Before even thinking about time complexity, ask yourself: what does this algorithm even do? This question helps you build intuition for evaluating your analysis. In this case, we're taking the average of an array of numbers.

Secondly, what is O(1)? It means that no matter how large the array is, the algorithm takes the same amount of time to run. So your analysis is basically arguing that averaging an array with 15 elements takes the same amount of time as averaging one with 20 million elements. Something is wrong there.

This is O(n) because the algorithm visits each element once; the recurrence is T(n) = T(n - 1) + 1. There is at most one recursive call per frame, each call uses i + 1 to advance to the next element (stepping through the array much like a loop would), and the work done per call frame is constant.
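One way to see this concretely is to instrument the function with a call counter (the `calls` variable is added here for illustration; it isn't in the original code) and confirm that the number of calls equals the array's length:

```javascript
// The question's average function, instrumented with a call counter.
let calls = 0;

function f(arr, i, n) {
  calls++; // count every call frame
  if (i === n - 1) {
    return arr[i];
  }
  if (i === 0) {
    return (arr[i] + f(arr, i + 1, n)) / n;
  } else {
    return arr[i] + f(arr, i + 1, n);
  }
}

const arr = [2, 4, 6, 8];
const avg = f(arr, 0, arr.length);
console.log(avg);   // 5
console.log(calls); // 4 -- one call per element, so O(n)
```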

As an aside, this is a rather silly algorithm to write recursively because the problem space isn't broken down much per call. You can easily blow the stack if the array is large enough, and there's a lot of overhead incurred from function calls (at least in language implementations without tail call optimization, which effectively turns recursion with no computation after the recursive call into a loop). Not to mention, the code isn't any easier to understand than a loop. Prefer recursion for problems with logarithmic stack depth rather than linear, unless the problem is very small or the recursion tree is wide (exponential, for example), in which case the work in the tree dominates the stack depth.
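For comparison, here's a loop-based sketch of the same task (my version, not the question's code): same O(n) time, but O(1) stack space and no overflow risk.

```javascript
// Iterative average: sum the elements in a loop, then divide once.
function average(arr) {
  let sum = 0;
  for (const x of arr) {
    sum += x;
  }
  return sum / arr.length;
}

console.log(average([2, 4, 6, 8])); // 5
```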

ggorlen
  • Agreed: the code returns (arr[0] + the sum of arr[1] through arr[n-1]) / n, so the recursion accesses the array n times, hence O(n). But as the answer above notes, this is a recursive solution to a linear-complexity task, so recursion is overkill when a simple loop over the array works just as well, if not better. – wasiqam Feb 20 '21 at 01:53

Recursive functions typically make just one recursive call per step, but that doesn't mean they are O(1). Take this example:

function recursive(n) {
  return recursive(n + 1) // no base case, so this never returns
}

This would run forever (in practice, until the stack overflows). You could call that O(∞); a complexity of O(1) simply wouldn't make sense because it wouldn't tell you anything useful.

For recursive functions, you have to ask yourself how long it takes to terminate, just as with ordinary functions.

Another thing to consider is that after going one step down into the recursion, the calling function may not be able to return yet, because it is waiting for the called function to produce a result. That only happens once the called function reaches the termination condition; until then, all the calling functions stay on the stack. So another way to think about complexity in recursion is to ask how many calls need to be made before the recursion resolves.
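To illustrate, here is a sketch of a bounded variant of the example above (the `limit` parameter, base case, and `depth` counter are all added here), counting how many frames stack up before the recursion resolves:

```javascript
// Counts how deep the recursion goes before the base case is hit.
function recursive(n, limit, depth = 0) {
  if (n >= limit) { // base case added so the recursion terminates
    return depth;
  }
  return recursive(n + 1, limit, depth + 1);
}

console.log(recursive(0, 5));   // 5 frames before resolving
console.log(recursive(0, 100)); // 100 -- depth grows linearly with the input
```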

  • Welcome to SO! The time complexity of infinite loops that don't depend on `n` is somewhat debatable and abstract--I'd suggest adding a base case to keep it in bounds of finite territory typically discussed in big-O. Secondly, "the calling function may not be able to return yet..." isn't a reliable way to reason about complexity. As my post points out, you can have a wide call tree that only keeps a few functions on the stack but does exponential work. [TCO](https://stackoverflow.com/questions/310974) can run an O(n) algo with 1 call ever. This remark speaks more to space complexity. – ggorlen Feb 20 '21 at 15:41