44

In the book I am using, *Introduction to the Design & Analysis of Algorithms*, dynamic programming is said to focus on the Principle of Optimality: "An optimal solution to any instance of an optimization problem is composed of optimal solutions to its subinstances."

The greedy technique, on the other hand, focuses on expanding partially constructed solutions until you arrive at a solution to the complete problem. Each expansion is then said to require "the best local choice among all feasible choices available on that step".

Since both involve local optimality, isn't one a subset of the other?

Irwin

7 Answers

54

Dynamic programming is applicable to problems exhibiting the properties of:

  • overlapping subproblems, and
  • optimal substructure.

Optimal substructure means that you can greedily solve subproblems and combine the solutions to solve the larger problem. The difference between dynamic programming and greedy algorithms is that with dynamic programming, there are overlapping subproblems, and those subproblems are solved using memoization. "Memoization" is the technique of saving the solutions to subproblems so that they can be used to solve other subproblems more quickly.

This answer has gotten some attention, so I'll give some examples.

Consider the problem "Making change with dollars, nickels, and pennies." This is a greedy problem. It exhibits optimal substructure because you can solve for the number of dollars. Then, solve for the number of nickels. Then the number of pennies. You can then combine the solutions to these subproblems efficiently. It does not really exhibit overlapping subproblems since solving each subproblem doesn't help much with the others (maybe a little bit).

Consider the problem "Fibonnaci numbers." It exhibits optimal substructure because you can solve F(10) from F(9) and F(8) efficiently (by addition). These subproblems overlap because they both share F(7). If you memoize the result of F(7) when you're solving F(8), you can solve F(9) more quickly.

In response to the comment about dynamic programming having to do with "reconsidering decisions": This is obviously not true for any linear dynamic programming algorithm like the maximum subarray problem or the Fibonacci problem above.

Essentially, imagine a problem having optimal substructure as a directed acyclic graph whose nodes represent subproblems (wherein the whole problem is represented by a node whose indegree is zero), and whose directed edges represent dependencies between subproblems. Then, a greedy problem is a tree (all nodes except the root have unit indegree). A dynamic programming problem has some nodes with indegree greater than one. This illustrates the overlapping subproblems.

Neil G
  • This is the best of the 4 answers here. Predictably, it is the only one with no votes... – j_random_hacker Dec 05 '12 at 09:49
  • It is also incorrect. "The difference between dynamic programming and greedy algorithms is that the subproblems overlap" is not true. Both dynamic programming and the greedy approach can be applied to the same problem (which may have overlapping subproblems); the difference is that the greedy approach does not reconsider its decisions, whereas dynamic programming will/may keep on refining choices. Source: http://en.wikipedia.org/wiki/Greedy_algorithm#Specifics – Mayank Sep 29 '14 at 04:28
  • What do you mean by `The difference between dynamic programming and greedy algorithms is that the subproblems overlap`? Are you sure you wanted to say this? – nbro May 24 '15 at 19:02
  • @Xenomorph: It means that there is shared work between subproblems that can be exploited by memoization. For example: let F be the Fibonacci function. F(10) involves the subproblems F(9) and F(8). These subproblems overlap because they both share F(7). If you memoize the result of F(7) when you're solving F(8), you can solve F(9) more quickly. Also, did you try Google? http://en.wikipedia.org/wiki/Overlapping_subproblems – Neil G May 24 '15 at 19:11
  • I understand, but it is not the difference, it is the characteristic of what is a dynamic programming algorithm, no? What does it have to do with greedy algorithms? Your sentence is not correct. You are saying the difference between the 2 techniques is that the subproblems overlap, and this does not make sense... – nbro May 24 '15 at 19:13
  • No, greedy algorithms have nothing to do with overlapping subproblems. Consider a greedy algorithm like "Making change with pennies and nickels and dollars". There are no overlapping subproblems. – Neil G May 24 '15 at 19:14
  • `The difference between dynamic programming and greedy algorithms is that the subproblems overlap`. This sentence is not well expressed. It should be `The difference between dynamic programming and greedy algorithms is that in dynamic programming algorithms the subproblems overlap` – nbro May 24 '15 at 19:19
  • Yes, that's what I meant, and I assume how most people read it. – Neil G May 24 '15 at 19:20
  • Well, not me, so I suppose that other people will have and have had the same problem... Explicit is better than implicit. – nbro May 24 '15 at 19:21
  • @xenomorph I've updated the sentence. Is English your mother-tongue? English tends to be implicit compared with Latinate languages. – Neil G May 24 '15 at 19:21
  • No, it is not, maybe this is the reason of our discussion. Anyway, I think that native speakers will still understand this version of your answer ;) – nbro May 24 '15 at 19:23
  • @NeilG Anyway, I think that your answer could still be improved (with more info) For example, look at this answer on math.exchange: http://math.stackexchange.com/questions/62282/do-dynamic-programming-and-greedy-algorithms-solve-the-same-type-of-problems – nbro May 24 '15 at 19:25
  • @xenomorph: I've added some examples and an illustration. – Neil G May 24 '15 at 19:38
  • @mayank: actually, dynamic programming has nothing to do with reconsidering decisions. I've updated my answer to reflect your misconception. – Neil G May 29 '15 at 00:02
  • @NeilG : I'm also having problems with that line you have in bold up there. Created this post http://stackoverflow.com/questions/32652971/dynamic-vs-greedy-algorithms-the-debate-regarding-neil-gs-answer-on-the-same – Somjit Sep 18 '15 at 13:18
  • @NeilG I don't think this is a misconception. The difference between greedy and dynamic programming is not that in one the subproblems overlap, but that a greedy solution makes decisions based on local information. Greedy typically does not perform a full search. Dynamic programming, on the other hand, is a way to perform a full search and thus it will end up finding the globally optimal solution. – Ivaylo Strandjev Sep 18 '15 at 13:32
  • @IvayloStrandjev: Are you replying to my assertion that it is a misconception that "reconsidering decisions" has anything to do with dynamic programming? – Neil G Sep 18 '15 at 13:36
  • @NeilG mostly about the `It is also incorrect. "The difference between dynamic programming and greedy algorithms is that the subproblems overlap" is not true.` This statement is correct and it is not a misconception. amit's answer stresses the actual difference - greedy makes decisions based on **local** information. This has nothing to do with subproblems overlapping – Ivaylo Strandjev Sep 18 '15 at 13:39
  • @IvayloStrandjev: No, that statement is correct. You can verify from any reference that the two ingredients of dynamic programming are "optimal substructure" and "overlapping subproblems". Do you have a reference? According to amit's answer, Dijkstra's algorithm is not dynamic programming. Clearly Dijkstra's algorithm is dynamic programming. Also, his top-down/bottom-up idea is ridiculous since any bottom-up algorithm can be rewritten as top-down (with memoization) and remain a dynamic programming algorithm with the same runtime. – Neil G Sep 18 '15 at 13:42
  • @NeilG the fact that `dynamic programming are "optimal substructure" and "overlapping subproblems"` which is correct does not mean that this is the difference between greedy and dynamic programming. Dijkstra is a special case where taking the greedy solution results in an optimal solution; still, the solution can be considered **both** greedy and dynamic. The two concepts are not mutually exclusive. Take a look at this question: https://www.quora.com/Is-Dijkstras-Algorithm-a-greedy-algorithm-or-a-dynamic-programming-algorithm – Ivaylo Strandjev Sep 18 '15 at 13:44
  • @IvayloStrandjev: No, Dijkstra's without the heap would be a greedy algorithm. The heap introduces a dependency whereby the decision made about a solution to a subproblem (the shortest path to a vertex) is used to facilitate the decisions for the other subproblems. This is why Dijkstra's is not simply a greedy algorithm (or else you would have to make the decisions independently). This agrees with the top Quora answer. – Neil G Sep 18 '15 at 13:54
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/90030/discussion-between-ivaylo-strandjev-and-neil-g). – Ivaylo Strandjev Sep 18 '15 at 13:55
  • But this is wrong! The difference between dynamic programming and greedy algorithm **is not** that there are overlapping sub-problems in dynamic programming. Consider the Fibonacci series example: a simple recursive implementation of Fibonacci calculation (without memoization) would be greedy despite the fact that it includes overlapping sub-problems. A dynamic programming algorithm for Fibonacci calculation would be one where **memoization was used to avoid solving overlapping sub-problems repeatedly**, and the solution was constructed **using a bottom-up approach**. – Saif Khan Jun 02 '16 at 18:45
  • @سیفخان Yes, all dynamic programming algorithms make use of memoization so that the overlapping subproblems are solved efficiently. I think that's clearly stated in this answer, but I've clarified it. However, the bottom-up approach is not necessary. You can write the solution top-down so long as you memoize as you go. – Neil G Jun 02 '16 at 20:12
12

The difference is that dynamic programming requires you to remember the answer for the smaller states, while a greedy algorithm is local in the sense that all the information needed is in the current state. Of course, there is some intersection.
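One way to see this contrast (my own illustration, not part of this answer) is change-making with the made-up denominations {1, 3, 4}: the greedy rule uses only the current remaining amount, while the DP remembers the answer for every smaller amount:

```python
def greedy_coins(amount, coins=(4, 3, 1)):
    """Always take the largest coin that fits; uses only the current state."""
    count = 0
    for c in coins:
        take, amount = divmod(amount, c)
        count += take
    return count

def dp_coins(amount, coins=(4, 3, 1)):
    """Remember the optimal answer for every smaller amount and reuse it."""
    best = [0] + [float("inf")] * amount
    for a in range(1, amount + 1):
        best[a] = 1 + min(best[a - c] for c in coins if c <= a)
    return best[amount]

print(greedy_coins(6), dp_coins(6))  # 3 coins (4+1+1) vs 2 coins (3+3)
```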

bcurcio
9

The key distinction is that greedy algorithms compose solutions "statically" in the sense that each local choice in the solution can be finalized without needing to know anything about the other local choices made. Dynamic algorithms, however, create sets of possible solutions to sub-problems and only generate a single solution to the global problem when all sub-problems have been considered. The Wikipedia page on greedy algorithms puts it well:

The choice made by a greedy algorithm may depend on choices made so far but not on future choices or all the solutions to the subproblem. It iteratively makes one greedy choice after another, reducing each given problem into a smaller one. In other words, a greedy algorithm never reconsiders its choices. This is the main difference from dynamic programming, which is exhaustive and is guaranteed to find the solution. After every stage, dynamic programming makes decisions based on all the decisions made in the previous stage, and may reconsider the previous stage's algorithmic path to solution.

Kyle Strand
6

DP algorithms use the fact that (for some problems) an optimal solution to a problem of size n is composed of an optimal solution to a problem of size n' < n, and they use this to build the solution bottom-up, from the smallest problem up to the required size.

This fits the principle of recursion very well (reduce the problem to a smaller sub-problem and invoke recursively), and indeed DP solutions are often expressed as a recursive formula.

Greedy algorithms look at a local point and make a choice based on the data at that point. For some problems (shortest path without negative weights, for example) this local choice will lead to an optimal solution.

A good example of the differences between the two approaches is for the shortest path problem:

  • Dijkstra's algorithm is a greedy approach (at each step, choose the node whose current path is shortest; the choice is made greedily based on the local state of the algorithm).
  • The Bellman-Ford algorithm is a DP solution ("relax" ALL edges repeatedly, effectively reducing the problem); see the sketch below.
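As a rough sketch of the DP side (mine, not amit's code), Bellman-Ford can be read as computing, round by round, the shortest distance using at most i edges, so each round is built from the previous round's answers:

```python
def bellman_ford(n, edges, source):
    """edges: list of (u, v, weight). Returns shortest distances from source.

    After round i, dist holds the length of the shortest path from source
    using at most i edges -- the answer for size i is built from size i - 1.
    """
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):          # at most n - 1 edges in any shortest path
        for u, v, w in edges:       # "relax" ALL edges
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

print(bellman_ford(4, [(0, 1, 5), (0, 2, 2), (2, 1, 1), (1, 3, 1)], 0))
# [0, 3, 2, 4]
```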
amit
  • Dijkstra's algorithm is an example of dynamic programming even by your definition: the subproblem being solved is the distance from the root function applied to each node. There are even three references on the Wikipedia page you linked that suggest the same thing. – Neil G Dec 04 '12 at 23:57
  • -1 I'm afraid. Every bottom-up DP algorithm can be rewritten in memoised top-down form, and I suspect the reverse is also true. Memoised DP is just recursion in which you have noticed that some subproblems appear multiple times, so you save time by solving them only once each and remembering the answer. Greedy algorithms are just recursions in which you only consider one way of solving each subproblem instead of all the possible ways, either because you can prove you don't need to, or because you're only interested in a "good enough" heuristic solution anyway. – j_random_hacker Dec 05 '12 at 10:00
  • So @j_random_hacker what are you saying, it's all just recursive techniques? That's a bit more general than I was going for. – Irwin Dec 05 '12 at 21:53
  • @Irwin: Maybe that was confusing, sorry. Iteration can often be used instead for either one, and may be simpler and faster (or not), but both can (like all kinds of repetition) be performed using recursion, and that is sometimes the clearest way to think about them. If you do write a recursive solution of each kind, the clearest difference will be that the DP one will encounter the same subproblem more than once. – j_random_hacker Dec 06 '12 at 04:01
  • @Irwin: If you read Python, then you might be interested in this [memoize decorator](http://wiki.python.org/moin/PythonDecoratorLibrary#Memoize) that turns many recursive solutions into their dynamic programming equivalent. – Neil G Dec 06 '12 at 15:06
  • @j_random_hacker what does your comment add to invalidate amit's answer? It is correct and it stresses the most important part of a greedy algorithm - that it makes a choice based on **local** information. The difference stressed by the currently accepted answer has nothing to do with that and is not correct – Ivaylo Strandjev Sep 18 '15 at 13:37
1

Greedy method:

  1. Focuses on expanding partially constructed solutions.
  2. Provides a feasible solution, which is not necessarily optimal.
  3. Usually more efficient.

Dynamic programming:

  1. Focuses on the principle of optimality.
  2. Provides a specific, optimal answer.
  3. Usually less efficient.
Paulo Tomé
0

The difference between DP and greedy is that DP looks for the global optimum at each subproblem, while greedy only looks for the local optimum. So, think about this scenario:

Suppose you are climbing a mountain, and you want to climb as high as possible. The road up the mountain has several branches, and at each intersection you need to decide which branch to take; that decision is the subproblem of this climbing problem (the goal is the same, only the starting point is different).

For the greedy algorithm, you always choose whichever branch seems to go up most steeply. This is a locally optimal decision and is not guaranteed to lead to the best result.

For DP, at each intersection you should already know the highest altitude each branch will lead you to (assuming your evaluation order is reversed, i.e. from the end points back to the starting point), and you choose the branch with the largest altitude. This decision is built upon the global optima of the future subproblems and will therefore be globally optimal for this subproblem.
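A minimal sketch of this idea (my own, with a made-up trail map), computing the best reachable altitude for each intersection from the end points backward:

```python
from functools import lru_cache

# Hypothetical trail map: intersection -> list of (next intersection, altitude gained).
trails = {
    "base": [("a", 100), ("b", 300)],
    "a": [("summit1", 900)],
    "b": [("summit2", 500)],
    "summit1": [],
    "summit2": [],
}

@lru_cache(maxsize=None)  # remember the best answer for each intersection
def best_altitude(point):
    """Highest total altitude gain reachable from this intersection."""
    if not trails[point]:
        return 0
    # Each branch's value is the already-global optimum of its subproblem.
    return max(gain + best_altitude(nxt) for nxt, gain in trails[point])

print(best_altitude("base"))  # 1000, via "a", even though "b" looks steeper at first
```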

NoSegfault
-1

The concepts of greedy and dynamic solutions are not mutually exclusive, and I think this is causing a lot of confusion in most answers. I believe amit's answer stresses the most important property: a greedy solution makes decisions based on local information. As a consequence, a greedy solution may end up finding a local optimum instead of a global one. Dynamic solutions break a problem into smaller subproblems and then aggregate the results to obtain the answer to a bigger, more complex problem. So, is it possible for a problem to be both dynamic and greedy? The answer is: yes, it is possible. An example would be Dijkstra's algorithm. For this algorithm you make a greedy choice on each step, and yet you reduce the problem to a simpler subproblem.

Still, there are also examples of greedy algorithms that are not DPs: hill climbing, say, is a greedy algorithm that does not break a problem into multiple subproblems - it only ever solves one. There are also examples of DPs that are not greedy - e.g. computing the n-th Fibonacci number using memoization is not greedy.
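As a rough illustration of the Dijkstra point (my sketch, not the answer's code): the heap pop is the greedy choice, while the settled distances are solved subproblems that later steps reuse:

```python
import heapq

def dijkstra(graph, source):
    """graph: dict mapping node -> list of (neighbor, weight), weights >= 0."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)   # greedy choice: closest unsettled node
        if d > dist.get(u, float("inf")):
            continue                 # stale heap entry
        for v, w in graph[u]:
            nd = d + w               # reuse the settled subproblem dist[u]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {"s": [("a", 2), ("b", 5)], "a": [("b", 1)], "b": []}
print(dijkstra(graph, "s"))  # {'s': 0, 'a': 2, 'b': 3}
```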

Ivaylo Strandjev
  • You're wrong about Dijkstra's and I explained why in the comments: Dijkstra's without the heap would be a greedy algorithm. The heap introduces a dependency whereby the decision made about a solution to a subproblem (the shortest path to a vertex) is used to facilitate the decisions for the other subproblems. This is why Dijkstra's is not simply a greedy algorithm (or else you would have to make the decisions independently). This agrees with the top Quora answer you linked. – Neil G Sep 18 '15 at 13:56
  • This is not correct (and I suspect you downvoted correct answers due to your misconception). For future readers: please see [the chat with Neil G](http://chat.stackoverflow.com/rooms/90030/discussion-between-ivaylo-strandjev-and-neil-g). – Kyle Strand Sep 18 '15 at 16:23
  • I did not downvote anyone. If you say this is not correct please elaborate. I don't think the chat proves me wrong but we've also not agreed who is right. – Ivaylo Strandjev Sep 18 '15 at 16:25
  • I made my contribution in that chat. Since you were not convinced by @NeilG's arguments, I don't expect you to find my comment there conclusive, but I do want future readers to be able to see the transcript. – Kyle Strand Sep 18 '15 at 16:31
  • I will read your comments when I can. You are free to write in his support in the meantime. Still I am convinced my answer is correct – Ivaylo Strandjev Sep 18 '15 at 16:33
  • Okay. I won't leap to conclusions about downvotes :) (Not that I think it would be wrong of you to downvote me, since presumably you do disagree with my answer.) – Kyle Strand Sep 18 '15 at 16:39
  • As per [this answer](http://meta.stackoverflow.com/a/306367/1858225), here's a link to the transcript of the above chat instead of the chat room itself: http://chat.stackoverflow.com/transcript/90030 – Kyle Strand Sep 18 '15 at 19:59