Why Study Reinforcement Learning
Reinforcement Learning is one of the fields I'm most excited about. Over the past few years amazing results like learning to play Atari games from raw pixels and mastering the game of Go have gotten a lot of attention, but RL is also widely used in robotics, image processing and natural language processing.
Combining Reinforcement Learning and Deep Learning techniques works extremely well. Both fields heavily influence each other. On the Reinforcement Learning side, deep neural networks are used as function approximators to learn good representations, e.g. to process Atari game images or to understand the board state in Go. In the other direction, RL techniques are making their way into supervised problems usually tackled by Deep Learning. For example, RL techniques are used to implement attention mechanisms in image processing, or to optimize long-term rewards in conversational interfaces and neural translation systems. Finally, as Reinforcement Learning is concerned with making optimal decisions, it has some extremely interesting parallels to human psychology and neuroscience (and many other fields).
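To make the function approximation idea concrete, here is a minimal sketch of a linear action-value approximator in plain NumPy. The class name and interface are hypothetical; the deep Q-network examples referenced later replace this linear map with a neural network that learns its own features from raw inputs such as Atari frames.

```python
import numpy as np

class LinearQApproximator:
    """Hypothetical linear approximator: Q(s, a) ~ w[a] . phi(s).
    Instead of storing one table entry per state, we learn weights over
    state features, which is what lets RL scale beyond small state spaces."""

    def __init__(self, n_features, n_actions, lr=0.01):
        self.w = np.zeros((n_actions, n_features))
        self.lr = lr

    def predict(self, features):
        # Q-values for every action in the state described by `features`.
        return self.w @ features

    def update(self, features, action, td_target):
        # Semi-gradient update toward a TD target, e.g. r + gamma * max_a' Q(s', a').
        td_error = td_target - self.w[action] @ features
        self.w[action] += self.lr * td_error * features
```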
With lots of open problems and opportunities for fundamental research I believe we'll be seeing multiple Reinforcement Learning breakthroughs in the coming years. And what could be more fun than teaching machines to play StarCraft and Doom?

How to Study Reinforcement Learning
There are many excellent Reinforcement Learning resources out there. Two I recommend the most are:

- David Silver's Reinforcement Learning Course
- Richard Sutton's & Andrew Barto's Reinforcement Learning: An Introduction (2nd Edition) book
The latter is still a work in progress but it's about 80% complete. The course is based on the book so the two work quite well together. In fact, these two cover almost everything you need to know to understand most of the recent research. The prerequisites are basic math and some knowledge of Machine Learning.
That covers the theory. But what about practical resources? What about actually implementing the algorithms that are covered in the book/course? That's where this post and the GitHub repository come in. I've tried to implement most of the standard Reinforcement Learning algorithms using Python, OpenAI Gym and TensorFlow. I separated them into chapters (with brief summaries), exercises and solutions so that you can use them to supplement the theoretical material above. All of this is in the GitHub repository.
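As a taste of what the exercises build on, here is a minimal sketch of the standard OpenAI Gym interaction loop, assuming the classic Gym API (reset returning an observation, step returning a 4-tuple). The CartPole environment and the random policy are just placeholders; the exercises swap in real environments and learned policies.

```python
import gym

env = gym.make("CartPole-v0")

for episode in range(5):
    observation = env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = env.action_space.sample()  # random policy as a stand-in
        observation, reward, done, info = env.step(action)
        total_reward += reward
    print("Episode {} finished with return {}".format(episode, total_reward))
```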
Some of the more time-intensive algorithms are still a work in progress. I'll update this post as I implement them.

Table of Contents

- Introduction to RL problems, OpenAI Gym
- MDPs and Bellman Equations
- Dynamic Programming: Model-Based RL, Policy Iteration and Value Iteration
- Monte Carlo Model-Free Prediction & Control
- Temporal Difference Model-Free Prediction & Control
- Function Approximation
- Deep Q-Learning (WIP)
- Policy Gradient Methods (WIP)
- Learning and Planning (WIP)
- Exploration and Exploitation (WIP)

List of Implemented Algorithms
- Dynamic Programming Policy Evaluation
- Dynamic Programming Policy Iteration
- Dynamic Programming Value Iteration
- Monte Carlo Prediction
- Monte Carlo Control with Epsilon-Greedy Policies
- Monte Carlo Off-Policy Control with Importance Sampling
- SARSA (On-Policy TD Learning)
- Q-Learning (Off-Policy TD Learning) (see the minimal sketch after this list)
- Q-Learning with Linear Function Approximation
- Deep Q-Learning for Atari Games
- Double Deep-Q Learning for Atari Games
- Deep Q-Learning with Prioritized Experience Replay (WIP)
- Policy Gradient: REINFORCE with Baseline
- Policy Gradient: Actor Critic with Baseline
- Policy Gradient: Actor Critic with Baseline for Continuous Action Spaces
- Deterministic Policy Gradients for Continuous Action Spaces (WIP)
- Deep Deterministic Policy Gradients (DDPG) (WIP)
- Asynchronous Advantage Actor Critic (A3C) (WIP)

Source: http://www.wildml.com/2016/10/learning-reinforcement-learning/
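To give a flavor of the listed algorithms, here is a minimal tabular Q-Learning (off-policy TD control) sketch. It is not the repository's implementation; it assumes a Gym-style environment with discrete, hashable states and a discrete action space, and the hyperparameters are arbitrary.

```python
import numpy as np
from collections import defaultdict

def q_learning(env, num_episodes, alpha=0.5, gamma=0.99, epsilon=0.1):
    # One row of Q-values per state, one entry per action.
    Q = defaultdict(lambda: np.zeros(env.action_space.n))
    for _ in range(num_episodes):
        state = env.reset()
        done = False
        while not done:
            # Epsilon-greedy behavior policy.
            if np.random.rand() < epsilon:
                action = env.action_space.sample()
            else:
                action = int(np.argmax(Q[state]))
            next_state, reward, done, _ = env.step(action)
            # Off-policy TD update toward the greedy target.
            td_target = reward + gamma * np.max(Q[next_state]) * (not done)
            Q[state][action] += alpha * (td_target - Q[state][action])
            state = next_state
    return Q
```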