David Silver Reinforcement Learning 6: Value Function Approximation

First: Introduction

We want an RL method that scales to realistic problems with large state spaces. The previous lectures represented the value function as a table, with one entry Q(s, a) per state-action pair (an |S| x |A| table). When the state space is large, such a table takes too much memory, and learning the value of every state individually is too slow. Moreover, performance can be very poor on states that were never seen before (no generalization ability).

Second: Value function approximation - incremental (online) methods

A parameterized value function $\hat{v}(s, w)$ is used to approximate $v_\pi(s)$, or $\hat{q}(s, a, w)$ to approximate $q_\pi(s, a)$. Common approximators include linear combinations of features, neural networks, and so on. The task is then to keep optimizing the parameters of this approximation.
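As a minimal sketch (not taken from the lecture) of the linear case, $\hat{v}(s, w) = x(s)^\top w$ for some hand-crafted feature vector $x(s)$; the feature map below is a made-up example for a 2-D grid state:

```python
import numpy as np

def features(state):
    """Hypothetical feature map for a 2-D grid state (x, y)."""
    x, y = state
    return np.array([1.0, x, y, x * y])

def v_hat(state, w):
    """Approximate value v_hat(s, w) = x(s) . w (linear in the weights)."""
    return features(state) @ w

w = np.zeros(4)            # weight vector to be learned
print(v_hat((2, 3), w))    # 0.0 before any training
```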

When training the approximator, note that the data in RL are non-stationary and non-i.i.d. Training is done by gradient descent, taking as the objective the mean squared error between the true value function $v_\pi$ and the estimated value function (the $V$ function is used as the example here; the $Q$ function is analogous):

$$J(w) = \mathbb{E}_\pi\left[\left(v_\pi(S) - \hat{v}(S, w)\right)^2\right]$$

However, in RL the true value function is not known, so in practice each method substitutes its own target:

In MC, the target is the return $G_t$; in TD(0), the target is $R_{t+1} + \gamma \hat{v}(S_{t+1}, w)$; in TD($\lambda$), the target is the $\lambda$-return $G_t^\lambda$.
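A hedged sketch of how these targets plug into the gradient update for a linear approximator; the feature map, step size, and discount factor below are illustrative choices, not values from the lecture:

```python
import numpy as np

def features(state):
    """Hypothetical feature map for a 2-D grid state (x, y)."""
    x, y = state
    return np.array([1.0, x, y, x * y])

def v_hat(state, w):
    return features(state) @ w

def td0_update(w, s, r, s_next, done, alpha=0.01, gamma=0.99):
    """Semi-gradient TD(0): target = R + gamma * v_hat(S', w) (0 if terminal)."""
    target = r + (0.0 if done else gamma * v_hat(s_next, w))
    td_error = target - v_hat(s, w)
    return w + alpha * td_error * features(s)   # gradient of a linear v_hat is x(s)

def mc_update(w, s, g_return, alpha=0.01):
    """Gradient MC: target = the observed return G_t from state s."""
    return w + alpha * (g_return - v_hat(s, w)) * features(s)

w = np.zeros(4)
w = td0_update(w, s=(1, 1), r=1.0, s_next=(1, 2), done=False)
w = mc_update(w, s=(1, 1), g_return=5.0)
```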

In this way, approximate policy evaluation amounts to using the MC or TD target together with gradient descent to fit a function close to the true value function.

Third: Value function approximation - batch methods

State-value pairs are sampled from the stored (shuffled) dataset D and used to optimize the approximation function. The optimization objective here is the mean squared error averaged over all samples (an empirical expectation).

Shuffling plus random sampling weakens the correlation between consecutive samples.

This is the experience replay technique used in DQN.
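A minimal sketch of such a replay buffer; the capacity and the uniform-sampling choice are assumptions for illustration, not values from the lecture:

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal experience-replay buffer: store transitions, sample random minibatches."""

    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)     # oldest transitions are dropped first

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random sampling breaks the temporal correlation between
        # consecutive transitions before they are used for a gradient step.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```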

The second technique used in DQN is the fixed Q-target: two networks with the same architecture are used, but the network that computes the target keeps older parameters, which are periodically copied over from the learning network. If the target were computed with the constantly updated parameters, training would not be stable. This trick has little theoretical foundation; it is used mainly because it works well in practice.
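A rough sketch of the fixed Q-target idea, assuming a linear Q function and a made-up sync period; it only illustrates that the TD target is computed with the older, periodically copied parameters:

```python
import copy
import numpy as np

def q_hat(state, action, w):
    """Hypothetical linear Q: one weight vector per action, features = the raw state."""
    return np.dot(state, w[action])

n_actions, n_features = 2, 4
w_online = {a: np.zeros(n_features) for a in range(n_actions)}   # updated every step
w_target = copy.deepcopy(w_online)                               # frozen older copy

alpha, gamma, sync_every = 0.01, 0.99, 1000

for step in range(1, 5001):
    # ... act in the environment, store the transition, sample from the replay buffer ...
    s, a, r, s_next, done = np.ones(n_features), 0, 1.0, np.ones(n_features), False

    # The target is computed with the OLD parameters w_target, not w_online.
    best_next = max(q_hat(s_next, b, w_target) for b in range(n_actions))
    target = r + (0.0 if done else gamma * best_next)
    td_error = target - q_hat(s, a, w_online)
    w_online[a] += alpha * td_error * s          # gradient of the linear Q w.r.t. w[a]

    # Periodically copy the learning network's parameters into the target network.
    if step % sync_every == 0:
        w_target = copy.deepcopy(w_online)
```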

The final results table shows that DQN training depends heavily on these two tricks; without them, the performance is poor.

Original address: http://cairohy.github.io/2017/09/04/deeplearning/%E3%80%8ADavid%20Silver%E5%BC%BA%E5%8C%96%E5%AD%A6%E4%B9%A0%e5%85%ac%e5%bc%80%e8%af%be%e3%80%8b-6%ef%bc%9avalue%20function%20appro/
