Basic steps of a greedy algorithm:
1) Determine the optimal substructure of the problem.
2) Design a recursive solution.
3) Prove that at any stage of the recursion, one of the optimal choices is always the greedy choice, so it is always safe to make the greedy choice.
4) Prove that after making the greedy choice, only one subproblem remains.
5) Design a recursive algorithm that implements the greedy strategy.
6) Convert the recursive algorithm into an iterative one.
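As a sketch of steps 5) and 6), here is the classic activity-selection problem (the concrete example is my assumption, not taken from the text above): a recursive greedy procedure, followed by the same greedy choice expressed as a loop.

```python
# Activity selection: choose a maximum-size set of non-overlapping
# activities, given as (start, finish) pairs sorted by finish time.

def recursive_select(acts, k, n):
    """Step 5: recursive greedy. Pick the first activity that starts
    after activity k finishes, then recurse on what remains."""
    m = k + 1
    while m < n and acts[m][0] < acts[k][1]:
        m += 1
    if m < n:
        return [acts[m]] + recursive_select(acts, m, n)
    return []

def iterative_select(acts):
    """Step 6: the same greedy choice converted into an iteration."""
    chosen = [acts[0]]
    last_finish = acts[0][1]
    for start, finish in acts[1:]:
        if start >= last_finish:
            chosen.append((start, finish))
            last_finish = finish
    return chosen

activities = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9),
              (6, 10), (8, 11), (8, 12), (2, 14), (12, 16)]  # sorted by finish
# The recursive form always takes the first activity, then recurses:
print([activities[0]] + recursive_select(activities, 0, len(activities)))
print(iterative_select(activities))
```

Both versions return the same set of activities; the iterative form simply replaces the tail recursion with a loop.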
More generally, the method can be described as follows:
1) Cast the optimization problem as one in which a choice is made first, leaving the remaining subproblem to solve;
2) Prove that there is always an optimal solution to the original problem that makes the greedy choice, so the greedy choice is always safe;
3) Show that, after the greedy choice is made, the remaining subproblem has the following property: combining an optimal solution of the subproblem with the greedy choice made earlier yields an optimal solution of the original problem.
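The three-step scheme can be illustrated with coin change for a canonical coin system (an assumed example; the coin values below are US denominations): each greedy choice takes the largest coin that fits, leaving exactly one smaller subproblem.

```python
def greedy_change(amount, coins=(25, 10, 5, 1)):
    """Make the greedy choice (largest coin <= amount), then solve the
    single remaining subproblem (amount - coin). This is optimal for
    canonical coin systems such as (25, 10, 5, 1), but not for
    arbitrary denominations."""
    result = []
    for coin in coins:  # coins must be in decreasing order
        while amount >= coin:
            result.append(coin)   # the greedy choice
            amount -= coin        # the remaining subproblem
    return result

print(greedy_change(63))  # [25, 25, 10, 1, 1, 1]
```

Combining the greedy choice (one coin) with an optimal solution of the remaining amount gives an optimal solution of the whole amount, which is exactly property 3) above.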
Greedy-choice property: a globally optimal solution can be reached by making locally optimal (greedy) choices.
In a greedy algorithm, we always make the choice that looks best at the moment, and then solve the subproblem that remains after that choice.
Optimal substructure: a problem exhibits optimal substructure if an optimal solution to the problem contains optimal solutions to its subproblems.
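Optimal substructure can be seen in the rod-cutting problem (an assumed illustration; the price table below is a common textbook sample): the best revenue for a rod of length n is built from the best revenues of the shorter rods left after the first cut.

```python
def cut_rod(prices, n):
    """prices[i] is the price of a piece of length i (prices[0] = 0).
    r[length] is optimal because it combines a first cut of length i
    with an optimal solution r[length - i] of the subproblem --
    the optimal-substructure property."""
    r = [0] * (n + 1)
    for length in range(1, n + 1):
        r[length] = max(prices[i] + r[length - i]
                        for i in range(1, length + 1))
    return r[n]

prices = [0, 1, 5, 8, 9, 10, 17, 17, 20]  # sample price table
print(cut_rod(prices, 8))  # 22
```

If any r[length - i] inside an optimal solution were not itself optimal, replacing it with a better subproblem solution would improve the whole solution, contradicting optimality.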
Differences between greedy algorithms and dynamic programming
Dynamic programming and greedy algorithms are both methods that derive a globally optimal solution from locally optimal solutions to subproblems.
Differences:
Dynamic programming
The globally optimal solution must include some locally optimal solution, but not necessarily the immediately preceding one, so all earlier optimal solutions must be recorded.
Conditions: optimal substructure; overlapping subproblems.
Method: build up solutions to subproblems bottom-up.
Examples: the maximum subsequence sum problem, the skiing problem.
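A bottom-up sketch of the maximum subsequence sum problem mentioned above (the exact statement is my assumption: find the maximum sum over all contiguous runs of the array):

```python
def max_subsequence_sum(nums):
    """Bottom-up DP: best_ending_here is the maximum sum of a
    contiguous run ending at the current index. It depends only on
    the overlapping subproblem at the previous index, so a single
    running value suffices instead of a full table."""
    best_ending_here = best_overall = nums[0]
    for x in nums[1:]:
        best_ending_here = max(x, best_ending_here + x)
        best_overall = max(best_overall, best_ending_here)
    return best_overall

print(max_subsequence_sum([-2, 11, -4, 13, -5, -2]))  # 20
```

Note how the global optimum (the run 11, -4, 13) is recovered from recorded subproblem optima, not from a single greedy step.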
Greedy algorithm
Condition: the optimal solution at each step must depend on the optimal solution of the previous step.
Method: start from an initial solution of the problem and approach the given goal step by step, obtaining a better solution as quickly as possible; when no step of the algorithm can proceed further, the algorithm stops.
For details, see the 0-1 knapsack and fractional knapsack problems.
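The knapsack pair makes the contrast concrete (the weights and values below are an assumed example): the 0-1 variant needs dynamic programming because greedy choices can be wrong, while the fractional variant is solved greedily by value density.

```python
def knapsack_01(weights, values, capacity):
    """0-1 knapsack: items cannot be split, greedy by density fails,
    so we use bottom-up DP. dp[c] = best value within capacity c."""
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):  # reverse: each item used once
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

def knapsack_fractional(weights, values, capacity):
    """Fractional knapsack: items can be split, so greedily taking the
    highest value-per-weight item first is optimal."""
    items = sorted(zip(weights, values), key=lambda wv: wv[1] / wv[0],
                   reverse=True)
    total = 0.0
    for w, v in items:
        take = min(w, capacity)
        total += v * take / w
        capacity -= take
        if capacity == 0:
            break
    return total

weights, values = [10, 20, 30], [60, 100, 120]
print(knapsack_01(weights, values, 50))          # 220
print(knapsack_fractional(weights, values, 50))  # 240.0
```

On this instance the greedy density order would pick the 10- and 20-weight items first; for the 0-1 case that leaves the 30-weight item stuck at value 160, while DP finds 220, so the greedy-choice property genuinely fails for 0-1 knapsack.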
Reference: Introduction to Algorithms (Second Edition), China Machine Press