This lesson covers: policy evaluation, policy iteration, value iteration, extensions to dynamic programming (DP), and contraction mappings.
Dynamic programming is a method for solving complex problems: break the problem into sub-problems, solve each sub-problem, and then combine their solutions. Problems suited to it usually have two properties:
1. Optimal substructure: the optimal solution can be decomposed into solutions of sub-problems.
2. Overlapping sub-problems: the same sub-problems recur many times, so their solutions can be cached and reused.
An MDP satisfies both properties: the Bellman equation gives the recursive decomposition, and the value function stores and reuses the sub-problem solutions. Dynamic programming can therefore be used for planning in an MDP. (Lecture 01 distinguishes planning from reinforcement learning; this lesson is about planning, not reinforcement learning.)
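As a tiny illustration of these two properties (not from the lecture), the memoized Fibonacci function below has optimal substructure (`fib(n)` is built from `fib(n-1)` and `fib(n-2)`) and overlapping sub-problems (the cache stores each `fib(k)` so it is computed only once):

```python
from functools import lru_cache

@lru_cache(maxsize=None)             # cache: overlapping sub-problems are solved once and reused
def fib(n: int) -> int:
    if n < 2:                        # base cases
        return n
    return fib(n - 1) + fib(n - 2)   # optimal substructure: the answer is combined from sub-answers

print(fib(50))                       # 12586269025, computed in linear rather than exponential time
```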
Given a policy $\pi$, evaluate it iteratively, i.e., compute its value function $v_\pi$ (actions will later be chosen based on these values):
$$v_\pi(s) = \mathsf{E}[R_{t+1} + \gamma R_{t+2} + \dots \mid S_t = s]$$
$$v_{k+1}(s) = \sum_{a \in \mathcal{A}} \pi(a \mid s) \left( \mathcal{R}_s^a + \gamma \sum_{s' \in \mathcal{S}} \mathcal{P}_{ss'}^a \, v_k(s') \right)$$
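Below is a minimal sketch of this synchronous backup in Python (illustrative, not code from the lecture). It assumes the MDP is given as tabular NumPy arrays: a transition tensor `P` of shape `(A, S, S)`, an expected-reward matrix `R` of shape `(S, A)`, and a stochastic policy `pi` of shape `(S, A)`; these names and shapes are this sketch's own conventions:

```python
import numpy as np

def iterative_policy_evaluation(P, R, pi, gamma=0.9, theta=1e-6):
    """Synchronous iterative policy evaluation for a known finite MDP.

    P  : transition tensor, shape (A, S, S), P[a, s, s'] = Pr(s' | s, a)
    R  : expected rewards,  shape (S, A),    R[s, a]     = E[R_{t+1} | S_t = s, A_t = a]
    pi : stochastic policy, shape (S, A),    pi[s, a]    = pi(a | s)
    """
    v = np.zeros(R.shape[0])
    while True:
        # Bellman expectation backup:
        # v_{k+1}(s) = sum_a pi(a|s) * (R_s^a + gamma * sum_{s'} P_{ss'}^a * v_k(s'))
        q = R + gamma * np.einsum('ast,t->sa', P, v)   # q_k(s, a) under the current estimate v_k
        v_new = np.sum(pi * q, axis=1)
        if np.max(np.abs(v_new - v)) < theta:          # stop when a full sweep barely changes v
            return v_new
        v = v_new
```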
Improve the policy by acting greedily with respect to $v_\pi$:
$$\pi' = \mathrm{greedy}(v_\pi)$$
Acting greedily means that in each state the new policy selects the action with the largest action value $q_\pi(s, a)$:
$$\pi'(s) = \mathop{\arg\max}_{a \in \mathcal{A}} q_\pi(s, a)$$
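A sketch of greedy improvement and the resulting policy-iteration loop, under the same assumed array layout and reusing `iterative_policy_evaluation` from the evaluation sketch above (again, illustrative code, not the lecture's):

```python
import numpy as np

def greedy_improvement(P, R, v, gamma=0.9):
    """pi'(s) = argmax_a q_pi(s, a), with q_pi(s, a) = R_s^a + gamma * sum_{s'} P_{ss'}^a v_pi(s')."""
    q = R + gamma * np.einsum('ast,t->sa', P, v)
    n_states, n_actions = R.shape
    pi_new = np.zeros((n_states, n_actions))
    pi_new[np.arange(n_states), np.argmax(q, axis=1)] = 1.0   # deterministic greedy policy
    return pi_new

def policy_iteration(P, R, gamma=0.9):
    """Alternate evaluation and greedy improvement until the greedy policy stops changing."""
    n_states, n_actions = R.shape
    pi = np.full((n_states, n_actions), 1.0 / n_actions)      # start from the uniform random policy
    while True:
        v = iterative_policy_evaluation(P, R, pi, gamma)      # defined in the evaluation sketch above
        pi_new = greedy_improvement(P, R, v, gamma)
        if np.array_equal(pi_new, pi):                        # policy is stable, so it is optimal
            return pi_new, v
        pi = pi_new
```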