9

When I have a problem with optimal substructure and no subproblems share sub-subproblems, then I can use a divide-and-conquer algorithm to solve it?

But when the subproblems share sub-subproblems (overlapping subproblems), then I can use dynamic programming to solve the problem?

Is this correct?

And how are greedy algorithms similar to dynamic programming?

Guest

2 Answers

8

When I have a problem with optimal substructure and no subproblems share sub-subproblems, then I can use a divide-and-conquer algorithm to solve it?

Yes, as long as you can find an optimal algorithm for each kind of subproblem.
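For instance (a minimal sketch of my own, not part of the question): merge sort is the textbook divide & conquer case, because the two halves are independent subproblems that share no sub-subproblems, so each can be solved once and the results combined.

```python
# Divide & conquer sketch: merge sort.
# The left and right halves are independent subproblems (no overlap),
# so each is solved exactly once and the results are merged.
def merge_sort(items):
    if len(items) <= 1:               # base case: already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # solve left half independently
    right = merge_sort(items[mid:])   # solve right half independently
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # combine step: merge
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 4, 7, 1, 3]))  # [1, 2, 3, 4, 5, 7]
```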

But when the subproblems share sub-subproblems (overlapping subproblems), then I can use dynamic programming to solve the problem?

Is this correct?

Yes. Dynamic programming is basically a special case of the family of Divide & Conquer algorithms, where all subproblems are the same.
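To see what "overlapping subproblems" means in practice, here is a minimal sketch using the standard Fibonacci example: the naive recursion re-solves the same subproblems over and over, while memoization stores each subproblem's answer so it is solved only once.

```python
from functools import lru_cache

# Overlapping subproblems: fib(n-2) is needed by both fib(n) and fib(n-1).
# Memoization (top-down dynamic programming) caches each answer so every
# subproblem is computed only once, turning exponential time into linear.
@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(40))  # 102334155, computed without re-solving shared subproblems
```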

And how are greedy algorithms similar to dynamic programming?

They're different.
Dynamic programming gives you the optimal solution.
A greedy algorithm usually gives a good/fair solution in a small amount of time, but it doesn't guarantee reaching the optimum.

It is, let's say, similar in that it usually divides the construction of the solution into several stages, at each of which it makes a locally optimal choice. But if those stages are not optimal substructures of the original problem, this normally doesn't lead to the best solution.

EDIT:

As pointed out by @rrenaud, some greedy algorithms have been proven to be optimal (e.g. Dijkstra's, Kruskal's, Prim's).
So, to be more precise, the main difference between greedy and dynamic programming is that the former is not exhaustive over the space of solutions while the latter is.
In fact, greedy algorithms are short-sighted on that space: each choice made during solution construction is never reconsidered.
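To make the contrast concrete, here is a small sketch (my own example, with illustrative names) using coin change with denominations 1, 3 and 4: the greedy largest-coin-first choice is never reconsidered, while dynamic programming exhaustively checks every choice at every amount.

```python
# Greedy vs. dynamic programming on minimum-coin change, denominations {1, 3, 4}.

def greedy_coins(amount, coins=(4, 3, 1)):
    """Largest-coin-first: each locally optimal pick is never reconsidered."""
    count = 0
    for c in coins:
        count += amount // c
        amount %= c
    return count

def dp_coins(amount, coins=(1, 3, 4)):
    """Bottom-up DP: best[a] = fewest coins summing to a, trying every coin."""
    best = [0] + [float("inf")] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a:
                best[a] = min(best[a], best[a - c] + 1)
    return best[amount]

print(greedy_coins(6))  # 3 coins (4 + 1 + 1): greedy is short-sighted here
print(dp_coins(6))      # 2 coins (3 + 3): DP finds the true optimum
```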

digEmAll
  • Some greedy algorithms are optimal. Consider Dijkstra's shortest-paths algorithm or the maximum subarray sum problem. Greedy algorithms tend not to branch on different possibilities, whereas dynamic programming algorithms tend to branch on different possibly optimal choices and then determine which of those choices was the best. – Rob Neuhaus May 28 '11 at 21:27
  • Can you please explain _"Dynamic programming is basically a special case of the family of Divide & Conquer algorithms, where all subproblems are the same."_? I do not understand this. What does "subproblems are the same" mean? – saplingPro Nov 24 '12 at 04:58
  • @grassPro: what I mean is that dynamic programming requires a problem that is separable into subproblems of the same type, because you always use the same routine to solve the main problem and the subproblems recursively. – digEmAll Nov 24 '12 at 11:13
  • A rule of thumb I learned is that dynamic programming is ideal for problems that can be decomposed into "discrete, overlapping subsets". The overlapping part is what makes dynamic programming work. – Andrew Nov 23 '16 at 18:03
-2

Dynamic programming uses a bottom-up approach: it saves previous solutions and refers back to them, which lets it pick the optimal solution among all the available ones. The greedy approach, in contrast, is top-down: it takes the locally optimal choice at each step and never revisits previous-level solutions, which can lead to a less optimal result.

Dynamic programming = bottom-up, optimal solution. Greedy = top-down, possibly less optimal, less time-consuming.
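A minimal sketch of the bottom-up "save previous solutions and refer to them" idea, using the classic stair-climbing count (an illustrative example, not from the answer above):

```python
# Bottom-up dynamic programming: count the ways to climb n stairs
# taking 1 or 2 steps at a time. Each entry is built only from
# previously saved entries, never recomputed.
def count_ways(n):
    ways = [1, 1] + [0] * (n - 1)          # ways[0] and ways[1] are base cases
    for step in range(2, n + 1):
        ways[step] = ways[step - 1] + ways[step - 2]   # reuse saved solutions
    return ways[n]

print(count_ways(10))  # 89
```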