
I am reading Cracking the Coding Interview, 6th Edition, and have a question about something on page 45.

There is an example algorithm like this:

int f(int n){
    if (n <= 1)
        return 1;
    return f(n - 1) + f(n - 1);
}

For the algorithm it gives the following comment:

The space complexity of this algorithm will be O(N). Although we have O(2^N) nodes in the tree total, only O(N) exist at any given time. Therefore, we would only need to have O(N) memory available.

I don't really understand why only O(N) exist at any given time. Shouldn't all of them be in the call stack? Can somebody explain this?

Kou Chibin
  • @TimBiegeleisen In several programming languages `**` is exponentiation because `^` already got used for `xor`. So that is `O(2^N)` in notation that you may be more familiar with. – btilly Oct 31 '18 at 21:36
  • Related to, or a duplicate of, https://stackoverflow.com/q/33590205/6064933 – jdhao Jun 19 '22 at 14:15

2 Answers


It looks like the space complexity should be exponential, O(2^n), because in order to complete f() we need to make two recursive calls:

#4: f(4)
#3: (f(3) + f(3))
#2: (f(2) + f(2)) + (f(2) + f(2))
#1: ((f(1) + f(1)) + (f(1) + f(1))) + ((f(1) + f(1)) + (f(1) + f(1)))

As we can see, the number of recursions grows exponentially, so the space complexity looks like O(2^n).

On the other hand, we will not call both functions at the same time. In fact, we will complete the first recursive call, get the value and then complete the second recursive call:

#4: f(4)
#3: f(3)
#2: f(2)
#1: f(1)

#4: f(4)
#3: f(3)
#2: f(2)
#1: (1 + f(1))

#4: f(4)
#3: f(3)
#2: f(2)
#1: (1 + 1) = 2

#4: f(4)
#3: f(3)
#2: (2 + f(2))
...

So at any given time, the call stack holds at most O(n) frames, plus O(n) space for the temporary values.

Hence, this function has O(n) space complexity and O(2^n) time complexity, i.e. O(2^n) recursive calls in total.

I guess that's what the author meant.
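
To make this concrete, here is a minimal sketch (my own code, not from the book; totalCalls and maxDepth are just illustrative counters) that instruments the recursion to compare the total number of calls with the deepest nesting reached:

public class SpaceDemo {
    static long totalCalls = 0;  // every node of the call tree, grows like O(2^n)
    static int maxDepth = 0;     // deepest simultaneous nesting, grows like O(n)

    static int f(int n, int depth) {
        totalCalls++;
        maxDepth = Math.max(maxDepth, depth);
        if (n <= 1)
            return 1;
        // The first recursive call fully returns (its frame is popped)
        // before the second one is ever pushed onto the stack.
        return f(n - 1, depth + 1) + f(n - 1, depth + 1);
    }

    public static void main(String[] args) {
        f(20, 1);
        System.out.println("total calls: " + totalCalls);  // 1048575 = 2^20 - 1
        System.out.println("max depth:   " + maxDepth);    // 20
    }
}

Exponentially many calls happen over the whole run, but only n frames are ever alive at the same time, which is exactly the O(n) vs. O(2^n) distinction above.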

Andriy Berestovskyy

A better way to understand this might be to draw the call tree, instead of the call stack.

The call tree represents all of the calls that get made over the lifetime of the function; below each call are the calls that it makes in turn.

Below a call to f(n) there are two calls to calculate f(n-1). Below each of those there are two more to calculate f(n-2). And so on.

If each individual call needs a fixed amount of memory and does a fixed amount of work (with more time and work spent in subcalls), then the size of this tree represents the total amount of work that you have to do to run the program. That is going to be 1 + 2 + 4 + ... + 2**n = (1 - 2**(n+1))/(1 - 2) = 2**(n+1) - 1 = O(2**n).

However, the largest amount of memory that you can need at any one time is proportional to the depth of the tree, because as soon as you return from a call, you're done with it and can throw away the memory it required. The maximum depth of the tree is n, and is reached every time you get down to a call that calculates f(1). So you allocate memory, calculate something, throw it away, and then the memory is available when you need to allocate it again. Over and over again.

Try drawing the picture for n=3 and then walking through the computation and you'll see the point. As you progress you are both allocating and freeing memory. As a result you can reuse the same memory over and over again rather than having to use a very large amount of memory.
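
Here is a rough sketch of that walkthrough in code (my own illustration, not from the book; the indent parameter is just a hypothetical helper for pretty-printing). It prints an enter/exit line for every call, indented by the current stack depth:

public class TreeTrace {
    // Same recursion as in the question, instrumented to show the call tree.
    static int f(int n, String indent) {
        System.out.println(indent + "enter f(" + n + ")");
        int result = (n <= 1)
                ? 1
                : f(n - 1, indent + "  ") + f(n - 1, indent + "  ");
        System.out.println(indent + "exit  f(" + n + ") = " + result);
        return result;
    }

    public static void main(String[] args) {
        f(3, "");  // draw and walk the tree for n = 3
    }
}

For n = 3 you will see 7 enter lines in total (the whole tree), but at no moment are more than 3 of them still open, i.e. entered but not yet exited. Those open lines are exactly the frames on the call stack.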

btilly