Off-policy and On-policy in Reinforcement Learning (Learning Notes)


Reinforcement learning methods can be divided into two settings: off-policy and on-policy. In my understanding, whether a reinforcement learning algorithm is off-policy or on-policy depends on whether the policy (value function) used to generate the samples is the same as the policy (value function) whose parameters are being updated.

The classic off-policy algorithm is Q-learning, and the classic on-policy algorithm is SARSA. The flow of both algorithms is shown below.

Q-learning algorithm:

Initialize Q(s, a) arbitrarily
For each episode:
    Initialize state s
    While s is not terminal:
        Choose action a from s using an ε-greedy policy
        Take action a, observe reward r and next state s'
        Q(s, a) <- Q(s, a) + α[r + γ·max_a' Q(s', a') - Q(s, a)]
        s <- s'
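
Below is a minimal tabular Q-learning sketch in Python. It assumes a hypothetical environment object env with reset() returning an integer state and step(a) returning (next state, reward, done); these names and the hyperparameter values are assumptions for illustration, not part of the original notes.

import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1):
    # Q-table initialised to zeros; env is assumed to expose reset() -> s
    # and step(a) -> (s_next, r, done) with integer states and actions.
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # epsilon-greedy behavior policy generates the sample
            if np.random.rand() < epsilon:
                a = np.random.randint(n_actions)
            else:
                a = int(np.argmax(Q[s]))
            s_next, r, done = env.step(a)
            # Target uses max over a': the greedy target policy, which may
            # differ from the epsilon-greedy behavior policy (off-policy).
            Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
            s = s_next
    return Q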

SARSA algorithm:

Initialize Q(s, a) arbitrarily
For each episode:
    Initialize state s
    Choose action a from s using an ε-greedy policy
    While s is not terminal:
        Take action a, observe reward r and next state s'
        Choose action a' from s' using an ε-greedy policy
        Q(s, a) <- Q(s, a) + α[r + γ·Q(s', a') - Q(s, a)]
        s <- s'; a <- a'
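
For comparison, here is a corresponding SARSA sketch under the same assumed env interface; the key point is that the next action a' is chosen by the same ε-greedy policy that is being learned and will actually be executed.

import numpy as np

def sarsa(env, n_states, n_actions, episodes=500,
          alpha=0.1, gamma=0.99, epsilon=0.1):
    # Same hypothetical env interface as in the Q-learning sketch above.
    def eps_greedy(s):
        if np.random.rand() < epsilon:
            return np.random.randint(n_actions)
        return int(np.argmax(Q[s]))

    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = env.reset()
        a = eps_greedy(s)
        done = False
        while not done:
            s_next, r, done = env.step(a)
            # a' is drawn from the policy being learned itself (on-policy).
            a_next = eps_greedy(s_next)
            Q[s, a] += alpha * (r + gamma * Q[s_next, a_next] - Q[s, a])
            s, a = s_next, a_next
    return Q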

The flow of the two algorithms is basically the same; the only difference lies in the Q-function update:

Q-learning uses a max operation when computing the expected return of the next state, i.e. it assumes the optimal action will be taken, whereas the current behavior policy (ε-greedy) does not necessarily choose the optimal action. The policy that generates the samples therefore differs from the policy being learned, so Q-learning is an off-policy algorithm.

SARSA, by contrast, selects the next action directly according to the current policy and then uses that very sample to update the current policy. The sample-generating policy and the learned policy are the same, so SARSA is an on-policy algorithm.
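
To make the single-line difference concrete, here is a tiny numerical example; the Q-table, states, reward, and discount below are made-up values purely for illustration.

import numpy as np

Q = np.array([[0.0, 1.0],
              [2.0, 0.5]])                 # Q[s, a] for 2 states and 2 actions
s_next, a_next, r, gamma = 1, 1, 1.0, 0.9  # suppose epsilon-greedy exploration picked a' = 1

# Q-learning target: max over a' (the greedy target policy) -> off-policy
target_q_learning = r + gamma * np.max(Q[s_next])   # 1.0 + 0.9 * 2.0 = 2.8

# SARSA target: the a' the behavior policy will actually take -> on-policy
target_sarsa = r + gamma * Q[s_next, a_next]         # 1.0 + 0.9 * 0.5 = 1.45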

The experience-replay mechanism used in recent deep reinforcement learning decouples sample generation from training: the samples used for an update were most likely generated by an earlier, different version of the policy. DRL algorithms that use experience replay are therefore essentially off-policy algorithms.
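
A minimal experience-replay buffer sketch is shown below; the class name and interface are assumptions for illustration, with transitions stored as (s, a, r, s', done) tuples. The point is simply that a sampled minibatch mixes transitions collected by older versions of the policy, which is why algorithms trained this way have to be off-policy.

import random
from collections import deque

class ReplayBuffer:
    # Stores transitions produced by whatever policy was acting at the time;
    # sampled batches therefore mix data from older policies.
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def push(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def sample(self, batch_size=32):
        # Uniformly sample a minibatch of stored transitions.
        batch = random.sample(self.buffer, batch_size)
        return tuple(zip(*batch))  # tuples of states, actions, rewards, next states, dones

    def __len__(self):
        return len(self.buffer)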
