Preface
At present, many deep reinforcement learning methods are built on classical reinforcement learning algorithms, with the value function or policy function replaced by a deep neural network. This post therefore attempts to summarize the classical algorithms of reinforcement learning.
This article mainly refers to:
1 Reinforcement Learning: An Introduction
2 Reinforcement Learning Course by David Silver

1 Preliminary Knowledge
This requires a basic understanding of reinforcement learning, including MDPs and the Bellman equation.
For details, see the earlier post: Deep Reinforcement Learning Basics (DQN).
Many algorithms are based on solving the Bellman equation: Value Iteration, Policy Iteration, Q-Learning, SARSA.

2 Policy Iteration
The purpose of policy iteration is to make the policy converge to the optimal policy by iteratively computing the value function.
Policy iteration is essentially a direct application of the Bellman expectation equation:
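In standard notation (following Sutton & Barto), the iterative policy-evaluation update is:

$$v_{k+1}(s) = \sum_{a}\pi(a\mid s)\sum_{s',r} p(s',r\mid s,a)\big[r + \gamma\, v_k(s')\big]$$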
Policy iteration is generally divided into two steps:
1. Policy Evaluation: update the value function of the current policy.
2. Policy Improvement: use a greedy policy over the new values to produce a new policy, which is then fed back into the next round of policy evaluation.
In essence, the current policy is used to generate new samples, the new samples are used to update the current policy, and this process is repeated. Theory proves that the final policy converges to the optimal one.
The specific algorithm is as follows:
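The original shows the algorithm as a figure. As a minimal Python sketch of tabular policy iteration, assuming the model is given as P[s][a] = list of (prob, next_state, reward) tuples (all names here are illustrative):

```python
import numpy as np

def policy_evaluation(policy, P, n_states, gamma=0.99, theta=1e-8):
    """Iteratively evaluate v_pi for a fixed policy until the updates are small."""
    V = np.zeros(n_states)
    while True:
        delta = 0.0
        for s in range(n_states):
            v_new = sum(prob * (r + gamma * V[s2])
                        for prob, s2, r in P[s][policy[s]])
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < theta:   # stopping criterion, discussed below
            return V

def policy_improvement(V, P, n_states, n_actions, gamma=0.99):
    """Return the greedy policy with respect to the current value function."""
    policy = np.zeros(n_states, dtype=int)
    for s in range(n_states):
        q = [sum(prob * (r + gamma * V[s2]) for prob, s2, r in P[s][a])
             for a in range(n_actions)]
        policy[s] = int(np.argmax(q))
    return policy

def policy_iteration(P, n_states, n_actions, gamma=0.99):
    policy = np.zeros(n_states, dtype=int)       # start from an arbitrary policy
    while True:
        V = policy_evaluation(policy, P, n_states, gamma)
        new_policy = policy_improvement(V, P, n_states, n_actions, gamma)
        if np.array_equal(new_policy, policy):   # policy stable -> done
            return policy, V
        policy = new_policy
```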
Note the policy evaluation step: the inner iteration requires the state transition probability p, i.e. it depends on a model of the environment. Moreover, the evaluation itself iterates repeatedly until convergence, so in practice a stopping criterion is needed, such as a convergence threshold or a maximum number of iterations.

3 Value Iteration
Value iteration is obtained directly from the Bellman optimality equation:
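In standard notation (following Sutton & Barto):

$$v_*(s) = \max_{a}\sum_{s',r} p(s',r\mid s,a)\big[r + \gamma\, v_*(s')\big]$$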
This is then turned into an iterative update:
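In the same notation:

$$v_{k+1}(s) = \max_{a}\sum_{s',r} p(s',r\mid s,a)\big[r + \gamma\, v_k(s')\big]$$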
The algorithm for value iteration is as follows:
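Again as a minimal Python sketch, under the same assumed model format P[s][a] = list of (prob, next_state, reward) as above (names are illustrative):

```python
import numpy as np

def value_iteration(P, n_states, n_actions, gamma=0.99, theta=1e-8):
    """Sweep all states with the Bellman optimality backup until convergence."""
    V = np.zeros(n_states)
    while True:
        delta = 0.0
        for s in range(n_states):
            v_new = max(sum(prob * (r + gamma * V[s2]) for prob, s2, r in P[s][a])
                        for a in range(n_actions))
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < theta:
            break
    # Extract the greedy (optimal) policy from the converged values.
    policy = np.array([
        int(np.argmax([sum(prob * (r + gamma * V[s2]) for prob, s2, r in P[s][a])
                       for a in range(n_actions)]))
        for s in range(n_states)])
    return policy, V
```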
So the question is: what is the essential difference between policy iteration and value iteration? Why is one called policy iteration and the other value iteration?
The reason is easy to understand. Policy iteration uses the Bellman expectation equation to update the value, and the value it converges to is $v_\pi$, the value of the current policy (hence the name policy evaluation); its purpose is to provide the basis for the subsequent policy improvement step, which yields a new policy.
Value iteration, on the other hand, updates the value using the Bellman optimality equation, and the value it converges to is $v_*$, the optimal value for each state. Therefore, once it converges, the optimal policy is obtained directly. Since this method works purely by updating values, it is called value iteration.
From the above analysis, value iteration is more direct than policy iteration. But the problem is the same: the computation requires the state transition probability p. Both methods essentially depend on a model, and in principle they need to sweep over all states, which is almost impossible for any slightly more complex problem.

4 Asynchronous Updates
The core of the above algorithms is updating the value of every state. Asynchronous updates can be realized by running multiple instances in parallel and updating states as samples are collected.
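As a rough illustration only (this is not DeepMind's method), an in-place asynchronous variant backs up one state at a time, in an arbitrary order, always using the latest values; the random state selection below is purely illustrative:

```python
import numpy as np

def async_value_iteration(P, n_states, n_actions, gamma=0.99, n_backups=100_000, seed=0):
    """In-place value iteration: back up one randomly chosen state at a time."""
    rng = np.random.default_rng(seed)
    V = np.zeros(n_states)
    for _ in range(n_backups):
        s = int(rng.integers(n_states))          # state order could also come from sampling
        V[s] = max(sum(prob * (r + gamma * V[s2]) for prob, s2, r in P[s][a])
                   for a in range(n_actions))
    return V
```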
Based on the idea of asynchronous updates, DeepMind published a very good paper:
Asynchronous Methods for Deep Reinforcement Learning
This paper greatly improved performance on Atari games.

5 Summary
Reinforcement learning has many classical algorithms, and many of them are based on the derivations above. Due to space constraints, the next post will analyze algorithms based on Monte Carlo methods.