A Greedy algorithm is an algorithmic paradigm that builds up a solution piece by piece, always choosing the next piece that offers the most obvious and immediate benefit. Problems where a locally optimal choice also leads to a globally optimal solution are the best fit for the Greedy approach.
For example, consider the Fractional Knapsack Problem. The locally optimal strategy is to choose the item with the maximum value-to-weight ratio. This strategy also leads to a globally optimal solution because we are allowed to take fractions of an item.
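The greedy strategy above can be sketched as follows; this is a minimal illustration, where items are assumed to be given as `(value, weight)` pairs:

```python
def fractional_knapsack(items, capacity):
    """Greedy fractional knapsack: items is a list of (value, weight) pairs."""
    # Sort by value-to-weight ratio, highest first (the greedy choice).
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)      # take the whole item, or a fraction
        total += value * (take / weight)  # proportional value for a fraction
        capacity -= take
    return total

# With items (60,10), (100,20), (120,30) and capacity 50, the greedy
# choice takes the first two whole and 2/3 of the third, giving 240.0.
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))
```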
Dynamic Programming is mainly an optimization over plain recursion. Wherever we see a recursive solution that has repeated calls for the same inputs, we can optimize it using Dynamic Programming. The idea is simply to store the results of subproblems so that we do not have to re-compute them when needed later. This optimization reduces time complexity from exponential to polynomial. For example, a plain recursive solution for Fibonacci Numbers takes exponential time, but if we store the solutions of subproblems, the time complexity drops to linear.
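The Fibonacci example can be made concrete with a short sketch: the naive recursion repeats work, while caching each subproblem's result (here via Python's `functools.lru_cache`) makes each value computed only once:

```python
from functools import lru_cache

def fib_naive(n):
    # Plain recursion: fib_naive(n-2) is recomputed inside fib_naive(n-1),
    # so the call tree grows exponentially in n.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Memoized recursion: each subproblem is solved once and cached,
    # so the total work is linear in n.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

print(fib_naive(20))  # feasible only for small n
print(fib_memo(50))   # fast even for larger n
```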
Below are some major differences between the Greedy method and Dynamic Programming:
| Feature | Greedy method | Dynamic programming |
|---|---|---|
| Feasibility | In a greedy algorithm, we make whatever choice seems best at the moment, in the hope that it leads to a globally optimal solution. | In Dynamic Programming, we make a decision at each step by considering the current problem and the solutions to previously solved subproblems. |
| Optimality | The Greedy method offers no general guarantee of an optimal solution. | Dynamic Programming is guaranteed to produce an optimal solution, as it considers all possible cases and then chooses the best one. |
| Recursion | A greedy method follows the problem-solving heuristic of making the locally optimal choice at each stage. | Dynamic Programming is usually based on a recurrence relation that reuses previously computed states. |
| Memoization | It is more memory-efficient, as it never looks back or revises previous choices. | It requires a DP table for memoization, which increases its memory complexity. |
| Time complexity | Greedy methods are generally faster. For example, Dijkstra's shortest-path algorithm takes O((E + V) log V) time. | Dynamic Programming is generally slower. For example, the Bellman–Ford algorithm takes O(VE) time. |
| Fashion | The greedy method computes its solution by making its choices in a serial, forward fashion, never looking back or revising previous choices. | Dynamic Programming computes its solution bottom-up or top-down, synthesizing it from smaller optimal sub-solutions. |
| Example | Fractional knapsack problem. | 0/1 knapsack problem. |
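The contrast in the last row can be illustrated with the 0/1 knapsack, where greedy choices fail and a DP table is needed; below is a minimal sketch using the standard one-dimensional DP formulation:

```python
def knapsack_01(values, weights, capacity):
    """0/1 knapsack via dynamic programming.

    dp[c] holds the best total value achievable with capacity c
    using the items processed so far.
    """
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacity downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# For values (60, 100, 120) with weights (10, 20, 30) and capacity 50,
# the optimum is 220 (items 2 and 3); the fractional greedy answer of
# 240 is unattainable because items cannot be split here.
print(knapsack_01([60, 100, 120], [10, 20, 30], 50))
```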