Proximal Policy Optimization Algorithms
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov
(Submitted on 20 Jul 2017)

We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.
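The surrogate the paper proposes is a clipped probability-ratio objective. The sketch below is a minimal PyTorch rendering of that idea, assuming advantage estimates and old-policy log-probabilities have already been computed from the sampled trajectories; the function name ppo_clip_loss and the default clipping parameter of 0.2 are illustrative choices, not prescriptions from this abstract.

```python
import torch

def ppo_clip_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """Clipped surrogate objective, negated so it can be minimized with SGD.

    log_probs_new : log pi_theta(a_t | s_t) under the current policy
    log_probs_old : log pi_theta_old(a_t | s_t) from the policy that collected the data
    advantages    : advantage estimates A_t, treated as constants
    clip_eps      : clipping parameter epsilon (0.2 is a common choice)
    """
    # Probability ratio r_t(theta) = pi_theta / pi_theta_old.
    ratio = torch.exp(log_probs_new - log_probs_old.detach())
    # Unclipped and clipped surrogate terms.
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Take the elementwise minimum (pessimistic bound) and average over the batch.
    return -torch.min(unclipped, clipped).mean()
```

In use, this loss is minimized for several epochs of minibatch SGD over each batch of trajectories collected by the previous policy, which is what lets PPO take multiple gradient updates per data sample rather than the single update of standard policy gradient methods.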
Subjects: Learning (cs.LG)
Cite as: arXiv:1707.06347 [cs.LG]
  (or arXiv:1707.06347v1 [cs.LG] for this version)
Submission history
From: John Schulman
[v1] Thu, 20 Jul 2017 02:32:33 GMT (2178kb,d)