
Related question:
Dynamic programming and memoization: top-down vs bottom-up approaches

I have gone through a lot of articles on this but can't seem to make sense of them. At times recursion and dynamic programming look the same, and at other times memoization and dynamic programming look alike. Can someone explain to me what the difference is?

P.S. It would also be helpful if you could point me to some code that uses the three approaches on the same problem. (e.g. the Fibonacci series problem; I think every article I read used recursion but referred to it as dynamic programming.)

Catskul
Sakibul Alam
  • What _isn't_ the difference? :) – cheeken Aug 26 '12 at 20:45
  • Regarding recursion, check [this](http://stackoverflow.com/questions/12133754/whats-the-difference-between-recursion-memoization-dynamic-programming) question. – Anne Aug 26 '12 at 20:48
  • First try to understand what recursion is. After a while you'll understand dynamic programming too. – Rsh Aug 26 '12 at 21:59
  • what's the duplicate question? can someone link to it? that should come with the "marked as duplicate" flag. – OKGimmeMoney May 09 '15 at 04:16

5 Answers


Consider calculating the Fibonacci sequence:

Pure recursion:

int fib(int x)
{
    if (x < 2)
        return 1;

    return fib(x-1) + fib(x-2);
}

results in an exponential number of calls.

Recursion with memoization/DP:

int fib(int x)
{
    static vector<int> cache(N, -1);  // N: a compile-time upper bound on x, known in advance

    int& result = cache[x];

    if (result == -1)
    {
        if (x < 2)
            result = 1;
        else
            result = fib(x-1) + fib(x-2);
    }

    return result;
}

Now we have a linear number of calls the first time, and constant time thereafter.

The above method is called "lazy". We calculate the earlier terms the first time they are asked for.

The following would also be considered DP, but without recursion:

int fibresult[N];

void setup_fib()
{
    fibresult[0] = 1;
    fibresult[1] = 1;
    for (int i = 2; i < N; i++)
       fibresult[i] = fibresult[i-1] + fibresult[i-2];
}

int fib(int x) { return fibresult[x]; }

This way may be described as "eager", "precaching" or "iterative". It's faster overall, but we have to manually figure out the order in which the subproblems need to be calculated. This is easy for Fibonacci, but for more complex DP problems it gets harder, so we fall back to the lazy recursive method if it is fast enough.

Also the following is neither recursion nor DP:

int fib(int x)
{
    int a = 1;
    int b = 1;
    for (int i = 2; i <= x; i++)
    {
        a = a + b;
        swap(a,b);
    }
    return b;
}

It uses constant space and linear time.

Also, for the sake of completeness, I will mention that there is a closed form for Fibonacci that uses neither recursion nor DP, and that allows us to calculate the Fibonacci term in constant time using a mathematical formula based on the golden ratio:

http://www.dreamincode.net/forums/topic/115550-fibonacci-closed-form/
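The linked closed form (Binet's formula) can be sketched as follows; the rounding trick works because the formula's second term shrinks towards zero. `fib_closed` is a hypothetical name, the convention matches the code above (fib(0) == fib(1) == 1, i.e. fib(x) is the standard F(x+1)), and the "constant time" claim assumes `pow` is O(1); double precision only gives exact answers for roughly x < 70:

```cpp
#include <cmath>

// Closed-form Fibonacci via the golden ratio (Binet's formula), rounded
// to the nearest integer. Accurate only while the result fits a double.
long long fib_closed(int x)
{
    const double sqrt5 = std::sqrt(5.0);
    const double phi = (1.0 + sqrt5) / 2.0;  // golden ratio
    return std::llround(std::pow(phi, x + 1) / sqrt5);
}
```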

Andrew Tomazos
  • Your last example **is** DP, you just reduce the memory. The algorithm is the same as in your previous two examples. – IVlad Aug 26 '12 at 23:04
  • thanks, for the examples. Are pure recursion not DP until use memoization? And the last example is memoization approach, right? – Sakibul Alam Aug 27 '12 at 04:13
  • Most people would not consider the last coded answer DP, they would call it a simple iterative solution. Notably it doesn't keep track of the mapping between the term number and the answer for that term. In the end there isn't a definitive test that can say yes or no something is DP or not. A pure recursion that doesn't cache the answers (which is all that memoization means) is generally not considered DP. – Andrew Tomazos Aug 27 '12 at 14:13
  • @AndrewTomazos could you please explain why the second example is DP? I get that it is recursion, but not getting why it's DP. – user13107 Mar 04 '14 at 06:30
  • @user13107: It memoizes the answers in a cache so if two calls are made `f(3)` and then later `f(3)` again, only the first does the computation, the second call gets the cached result from the first. This is generally considered a form of DP. – Andrew Tomazos Mar 04 '14 at 19:29
  • Just because there's a closed form solution does not mean it takes constant time. – Vlad the Impala Mar 26 '15 at 19:46
  • In Python: https://medium.com/@mimoralea/recursion-memoization-and-dynamic-programming-2083a1e1308e#.ro8z9e9kx – mimoralea Feb 02 '17 at 23:57
  • The last example is still DP, but only stores the current 'wavefront' (which coincidentally is just a single memory cell). Which is just a fancy way of detecting that former cells will not be referenced anymore and can thus be 'garbage-collected' or reused. – Sebastian Graf Nov 26 '17 at 22:44
  • That's equivalent to [Hirschberg's Algorithm](https://en.wikipedia.org/wiki/Hirschberg%27s_algorithm) computing the [Levenshtein Distance](https://en.wikipedia.org/wiki/Levenshtein_distance) in linear space, although the full DP matrix has `O(nm)` entries. – Sebastian Graf Nov 26 '17 at 22:47
  • Nit-picking. Your code under *Recursion with memoization/DP* should be *int fib()* instead of *void fib()* – Seshadri R Sep 11 '18 at 06:02
  • great example. Nice – Adil Saju Sep 18 '18 at 07:49
  • It's not a constant time, because power is not a constant time really. – Nathan B Mar 20 '22 at 20:58
  • @NathanB: It's complicated, precise definitions of asymptotic behaviour don't work well with non-ideal fixed precision numeric types that appear in real finite computers. The closed form algorithm for fibonacci is generally casually refered to as constant time. It's certainly the fastest know solution in any case. – Andrew Tomazos Mar 21 '22 at 04:13

Well, recursion + memoization is precisely a specific "flavor" of dynamic programming: dynamic programming in accordance with the top-down approach.

More precisely, there's no requirement to use recursion specifically. Any divide & conquer solution combined with memoization is top-down dynamic programming. (Recursion is a LIFO flavor of divide & conquer, while you can also use FIFO divide & conquer or any other kind of divide & conquer.)

So it is more correct to say that

divide & conquer + memoization == top-down dynamic programming

Also, from a very formal point of view, if you implement a divide & conquer solution for a problem that does not generate repetitive partial solutions (meaning that there's no benefit in memoization), then you can claim that this divide & conquer solution is a degenerate example of "dynamic programming".

However, dynamic programming is a more general concept. Dynamic programming can use bottom-up approach, which is different from divide & conquer+memoization.
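As a sketch of the top-down flavor on a problem other than Fibonacci (a hypothetical illustration, not from the answer above): binomial coefficients via the recurrence C(n, k) = C(n-1, k-1) + C(n-1, k). The recursion is the divide & conquer part and the table filled on demand is the memoization part; filling the same table row by row in a loop would instead be the bottom-up flavor:

```cpp
#include <vector>

// Top-down DP: divide & conquer (the recursion) plus memoization (the table).
long long binom_impl(int n, int k, std::vector<std::vector<long long>>& memo)
{
    if (k == 0 || k == n)
        return 1;
    long long& m = memo[n][k];
    if (m != -1)                 // subproblem already solved: reuse it
        return m;
    return m = binom_impl(n - 1, k - 1, memo) + binom_impl(n - 1, k, memo);
}

long long binom(int n, int k)
{
    // -1 marks "not computed yet"; entries are filled only when demanded.
    std::vector<std::vector<long long>> memo(n + 1,
        std::vector<long long>(n + 1, -1));
    return binom_impl(n, k, memo);
}
```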

AnT stands with Russia
  • The bottom-up approach computes *the whole* matrix, whether the results are actually needed or not, whereas the top-down approach is more like lazy evaluation: Results are computed only when demanded, but most of the time the bookkeeping associated with this kind of caching is outperformed by the access patterns of and the ability to properly parallelize array-based code. – Sebastian Graf Nov 26 '17 at 22:53

I'm sure you can find detailed definitions on the internet. Here is my attempt to simplify things.

Recursion is a function calling itself.

Dynamic Programming is a way to solve problems that exhibit a specific structure (optimal substructure), where a problem can be broken down into subproblems that are similar to the original problem. Clearly one can invoke recursion to solve a DP. But it is not necessary. One can solve a DP without recursion.

Memoization is a way to optimize DP algorithms that rely on recursion. The point is not to solve again a subproblem that has already been solved. You can view it as a cache of solutions to subproblems.

Ankush
  • So, what i understand is recursion and memoization are used to solve DP problems but they are totally separate things. And divide and conquer problems differ from DP in that the sub-problems do not overlap. – Sakibul Alam Aug 27 '12 at 04:09
  • @Shuvo Yes. DP is a type of Divide and Conquer. You are right that in DP we end up having overlapping sub problems. We exploit this fact and save time by storing sub solutions in tables. – Ankush Aug 27 '12 at 07:31

They are different concepts. They overlap pretty often, but they are different.

Recursion happens whenever a function calls itself, directly or indirectly. This is all.

Example:

a -> call a
a -> call b, b -> call c, c -> call a

Dynamic programming is when you use solutions to smaller subproblems in order to solve a larger problem. This is easiest to implement recursively because you usually think of such solutions in terms of a recursive function. An iterative implementation is usually preferred though, because it takes less time and memory.

Memoization is used to prevent a recursive DP implementation from taking much more time than needed. Most of the time, a DP algorithm will use the same subproblem in solving multiple larger problems. In a recursive implementation, this means we will recompute the same thing multiple times. Memoization means saving the results of these subproblems in a table. When entering a recursive call, we check if its result exists in the table: if yes, we return it; if not, we compute it, save it in the table, and then return it.
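The check/compute/save/return pattern described here can be sketched on a toy subproblem (a hypothetical example, not from the answer): counting the ways to climb n stairs taking 1 or 2 steps at a time, where ways(n) = ways(n-1) + ways(n-2):

```cpp
#include <unordered_map>

// Memoized recursion: look in the table first, compute and store on a miss.
long long ways(int n)
{
    static std::unordered_map<int, long long> table;  // subproblem -> result
    if (n <= 1)
        return 1;                 // base cases need no table entry
    auto it = table.find(n);
    if (it != table.end())        // result exists in the table: return it
        return it->second;
    long long r = ways(n - 1) + ways(n - 2);  // otherwise compute it...
    table[n] = r;                             // ...save it in the table...
    return r;                                 // ...and return it
}
```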

IVlad

Recursion has absolutely nothing to do with memoization and dynamic programming; it is a completely separate concept.

Otherwise, this is a duplicate question of: Dynamic programming and memoization: bottom-up vs top-down approaches

Community
ninjagecko
  • DP (almost?) always involves recursion and memoization, so saying they have *nothing* to do with each other is incorrect. – BlueRaja - Danny Pflughoeft Aug 26 '12 at 21:10
  • @BlueRaja-DannyPflughoeft: You're misinterpreting what I say: this is why I said "..they are completely separate concepts." as clarification. You can always turn a recursive algorithm into an iterative algorithm. DP and memoization take advantage of [optimal substructure](http://en.wikipedia.org/wiki/Optimal_substructure); that doesn't make them per se recursive; recursion is just one way to view the exploitation of optimal substructure. The fact that pigeons tend to perch on buildings does not make pigeons a related concept to buildings, unless you're arguing that, in which case that's fine. – ninjagecko Aug 26 '12 at 21:21