
I was asked to use dynamic programming to solve a problem. I have mixed notes on what constitutes dynamic programming. I believe it requires a "bottom-up" approach, where the smallest problems are solved first.

One thing I have contradictory information on is whether something can be dynamic programming if the same subproblems are solved more than once, as is often the case in recursion.

For instance, for Fibonacci I can have a recursive algorithm:

RecursiveFibonacci(n)
    if (n=1 or n=2)
        return 1
    else
        return RecursiveFibonacci(n-1) + RecursiveFibonacci(n-2)

In this situation, the same sub-problems may be solved over and over again. Does this mean it is not dynamic programming? That is, if I wanted dynamic programming, would I have to avoid re-solving subproblems, for example by using an array of length n and storing the solution to each subproblem (the first entries of the array are 1, 1, 2, 3, 5, 8, 13, 21)?

Fibonacci(n)
    F[1] = 1
    F[2] = 1
    for i = 3 to n
        F[i] = F[i-1] + F[i-2]
    return F[n]

2 Answers


Dynamic programs can usually be succinctly described with recursive formulas.

But if you implement them as simple recursive computer programs, these are often inefficient for exactly the reason you raise: the same computation is repeated. Fibonacci is an example of such repeated computation, though it is not a dynamic program.

There are two approaches to avoiding the repetition.

  1. Memoization. The idea here is to cache the answer computed for each set of arguments to the recursive function and return the cached value when it exists.

  2. Bottom-up table. Here you "unwind" the recursion so that results at levels less than i are combined to produce the result at level i. This is usually depicted as filling in a table, where the levels are rows (both approaches are sketched below).
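For illustration, here is a rough Python sketch of both approaches applied to your Fibonacci example (the function names are just illustrative):

    # 1. Memoization (top-down): cache each result so each n is computed only once.
    def fib_memo(n, cache=None):
        if cache is None:
            cache = {}
        if n in cache:
            return cache[n]
        cache[n] = 1 if n <= 2 else fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
        return cache[n]

    # 2. Bottom-up table: fill entries 1..n so each is built from earlier entries.
    def fib_table(n):
        f = [0] * (n + 1)
        f[1] = 1
        if n >= 2:
            f[2] = 1
        for i in range(3, n + 1):
            f[i] = f[i - 1] + f[i - 2]
        return f[n]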

One of these methods is implied for any DP algorithm. If computations are repeated, the algorithm isn't a DP. So the answer to your question is "yes."

So, an example. Let's try the problem of making change for c cents, given coins with values v_1, v_2, ..., v_n, using the minimum number of coins.

Let N(c) be the minimum number of coins needed to make c cents. Then one recursive formulation is

N(c) = 1 + min_{i = 1..n} N(c - v_i) 

The base cases are N(0)=0 and N(k)=inf for k<0.

To memoize this requires just a hash table mapping c to N(c).
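A rough Python sketch of that memoization, using a dict as the hash table (the names here are mine, not part of the recurrence):

    def min_coins(c, values, memo=None):
        # N(c): minimum number of coins that make c cents (inf if impossible)
        if memo is None:
            memo = {}
        if c == 0:
            return 0                     # base case N(0) = 0
        if c < 0:
            return float('inf')          # base case N(k) = inf for k < 0
        if c not in memo:                # solve each subproblem only once
            memo[c] = 1 + min(min_coins(c - v, values, memo) for v in values)
        return memo[c]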

In this case the "table" has only one dimension, which is easy to fill in. Say we have coins with values 1, 3, 5; then the N table starts with

  • N(0) = 0, the initial condition.
  • N(1) = 1 + min(N(1-1), N(1-3), N(1-5)) = 1 + min(0, inf, inf) = 1
  • N(2) = 1 + min(N(2-1), N(2-3), N(2-5)) = 1 + min(1, inf, inf) = 2
  • N(3) = 1 + min(N(3-1), N(3-3), N(3-5)) = 1 + min(2, 0, inf) = 1

You get the idea. You can always compute N(c) from N(d), d < c in this manner.
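Filled in bottom-up, the same table might be computed like this (again just a sketch, with my own names):

    def min_coins_table(c, values):
        # N[d] = minimum coins to make d cents; float('inf') means "not possible"
        N = [float('inf')] * (c + 1)
        N[0] = 0
        for d in range(1, c + 1):        # fill the table from small d upward
            for v in values:
                if v <= d and N[d - v] + 1 < N[d]:
                    N[d] = N[d - v] + 1
        return N[c]

For the coins 1, 3, 5 above, min_coins_table(8, [1, 3, 5]) gives 2 (a 5 and a 3), the same answer the recursion produces.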

In this case, you need only remember the last 5 values because that's the biggest coin value. Most DPs are similar. Only a few rows of the table are needed to get the next one.

The table is k-dimensional for k independent variables in the recursive expression.

Gene
  • Thanks Gene. So, to clarify the definition (that is what I am really after :o)), even if a recursive algorithm is used, and it *is* inefficient in that the same problems may be re-solved over and over again, does it still count as dynamic programming, simply because larger problems are first broken down into smaller problems? –  Oct 03 '13 at 03:30
  • @user2808302 No. Not as I've ever seen the term used. The assumption underlying a dynamic program or DP algorithm is that sub-problems are solved exactly once to get the larger problem solution. When it's described with a recursive formula, memoization or bottom-up table construction is implied. – Gene Oct 03 '13 at 03:47

We think of a dynamic programming approach to a problem if it has

  1. overlapping subproblems
  2. optimal substructure

In very simple words, we can say dynamic programming has two faces: the top-down and bottom-up approaches.

In your case, if you are talking about the recursion, it is the top-down approach. In the top-down approach, we write a recursive (brute-force) solution and memoize the results, so that when the same subproblem comes up again we reuse the stored result; it is brute force + memoization. That brute-force part can be expressed with a simple recurrence relation.
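For instance, a minimal Python sketch of this pattern applied to the question's Fibonacci, using functools.lru_cache as the memo:

    import functools

    @functools.lru_cache(maxsize=None)   # caches each n, so each subproblem is solved once
    def fib(n):
        if n == 1 or n == 2:
            return 1
        return fib(n - 1) + fib(n - 2)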

Mehant Kammakomati