Objective
OpenAI is an AI research organization founded at the end of 2015, co-chaired by Elon Musk, backed by a pledged investment of one billion dollars, and staffed by several top researchers in artificial intelligence. In effect, a new DeepMind has been born, except that OpenAI is a non-profit that belongs to no company.
Why should you care about OpenAI?
Because OpenAI's research largely represents the frontier of AI. Thanks to its non-profit status and its location in California's Silicon Valley, it stands a good chance of attracting more top talent, and it is very likely to compete with DeepMind. The arrival of OpenAI means that top-level AI research will no longer be monopolized by Google (mainly Google, though Microsoft, Facebook, Baidu, IBM, NVIDIA, and others are also in the game).
OpenAI website: www.openai.com
OpenAI AMA: ama website
Many OpenAI members will be familiar names: students of Hinton, Fei-Fei Li, Pieter Abbeel, and Andrew Ng. The heavyweight Ian Goodfellow also joined recently; he is the lead author of the Deep Learning textbook.
The most important reason to follow OpenAI is that it helps you understand the frontier of AI research.
What are the frontier research directions in AI?
OpenAI has raised three:
-Training Generative Models
-Algorithms for inferring algorithms from data
-New approaches to reinforcement learning
So what does each of these three categories represent?
Deep Generative Models
The first category concerns generative models, whose main task is to generate new information; it spans both supervised and unsupervised learning. A typical example is sequence-to-sequence learning. Translation: input English, output Chinese. Chat: input utterance A, output reply B. Input text, output handwriting. There is also automatically generated text (e.g., from otoro.net), music, and art (Deep Dream, neural style transfer). The category also includes one-shot learning: from a single example image, the model derives its variants.
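To make "sampling new data from a learned model" concrete, here is a toy sketch. It is a character-level bigram model, far simpler than the RNN/seq2seq systems described above, and the corpus and function names are invented for illustration; the core idea of learning a distribution from data and then sampling from it is the same.

```python
import random
from collections import defaultdict

# Tiny training corpus (made up for illustration).
corpus = "the cat sat on the mat. the dog sat on the log."

# "Training": count, for each character, which characters follow it.
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def generate(seed_char="t", length=30, rng=None):
    """Sample a new string by repeatedly drawing a plausible next character."""
    rng = rng or random.Random(0)
    out = [seed_char]
    for _ in range(length - 1):
        followers = transitions.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return "".join(out)

print(generate())  # a new string that locally resembles the corpus
```

A real generative model replaces the bigram counts with an RNN that conditions on the whole history, but the sample-from-a-learned-distribution loop is unchanged.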
What is the significance of this research? In my view, it explores the perceptual abilities of AI, in two senses: perceiving different types of data, and perceiving quickly from little data. On the first front, where we once only recognized images, we now recognize artistic styles and extract features from text for translation and dialogue. RNNs turn out to be remarkably versatile at extracting information automatically: the same seq2seq network can be used for translation, for chatting, even for understanding Hearthstone cards. RNNs can model content in almost any form. On the second front, we want machines to perceive as quickly as humans do, recognizing things at a glance, without a huge amount of training data.
Learning Algorithms & Neural Turing Machines
Since RNN-based computers can in principle learn anything, that includes algorithms and programs. The Neural Turing Machine is designed to let computers learn programs, and thereby gain the ability to infer. For example: show the computer many addition problems, and it learns to add. That is probably the simplest case, but it captures the idea. The Neural Turing Machine adds external memory, whereas RNNs and LSTMs carry their memory internally. Imagine a future computer that really is a "brain": one huge neural network mapping inputs to outputs.
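The addition example above can be sketched in a drastically simplified form: instead of a Neural Turing Machine, a plain least-squares fit discovers the "program" for addition purely from (a, b) → a + b examples. This is an illustrative toy under my own assumptions, not how an NTM actually works, but it shows the flavor of inferring a rule from input/output pairs.

```python
import numpy as np

# Generate training examples: pairs (a, b) with target a + b.
rng = np.random.default_rng(0)
X = rng.integers(0, 100, size=(50, 2)).astype(float)
y = X.sum(axis=1)

# Solve for weights w minimizing ||X w - y||. The "program" the model
# should discover for addition is w = [1, 1].
w, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.round(w, 6))   # close to [1. 1.]
print(float(X[0] @ w))  # reproduces X[0].sum() on a training pair
```

An NTM replaces the fixed linear form with a learned controller plus external read/write memory, so it can represent algorithms (loops, copying, sorting) rather than a single arithmetic rule.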
The first line of research aims to give AI stronger perception; this one is more ambitious, aiming to make AI not only understand but also infer. At bottom, though, it is the same kind of problem: perception is also a form of understanding. Ultimately both are about extracting features or knowledge and being able to generate from them. The tools are still RNNs; the newest variants are NTMs trained with reinforcement learning. In other words, self-learning also strengthens understanding.
Part of this line of work even targets theorem proving, this time using a neural network to prove formulas.
In fact, once a computer can understand via RNNs, every task becomes a variation on the same theme.
Deep Reinforcement Learning
The two categories above mainly rely on existing knowledge; their purpose is to give AI strong learning ability. But for AI to surpass humans, it must learn on its own. Everyone knows AlphaGo can teach itself, and the key is its use of reinforcement learning.
Deep reinforcement learning therefore focuses on using reinforcement learning to achieve self-learning. Many tasks, especially in robot control, do not come with many samples, and such problems badly need self-learning ability. The analogy is human motor skill: everyone knows that playing basketball well takes long practice, not a single glance. Deep reinforcement learning, the ultimate weapon on the road to AGI, gives AI the ability to learn by itself, given only a goal.
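Here is a minimal sketch of the trial-and-error loop at the heart of reinforcement learning: tabular Q-learning on a small invented environment (a 1-D chain where the agent must walk right to a goal). The environment, constants, and variable names are my own toy choices; deep RL replaces the Q table with a neural network, but the update rule is the same.

```python
import random

# A chain of 6 states; the agent starts at state 0 and the goal is state 5.
# Reward is 1 only on reaching the goal, so the agent must learn from
# delayed feedback alone -- the "self-learning given a goal" described above.
N_STATES, GOAL = 6, 5
ACTIONS = [-1, +1]                 # move left / move right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration
Q = [[0.0, 0.0] for _ in range(N_STATES)]
rng = random.Random(0)

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if rng.random() < EPS:
            a = rng.randrange(2)
        else:
            a = max((0, 1), key=lambda i: Q[s][i])
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: bootstrap off the best action in the next state.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy should choose "right" (action 1)
# in every non-goal state.
policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES)]
print(policy)
```

The same loop scales up to AlphaGo-style systems: replace the table lookup with a deep network, and the chain with Go positions, and the principle of improving behavior from reward alone carries over.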
Summary
The development of artificial intelligence is beyond imagination. Progress along OpenAI's research directions will give AI stronger learning ability, which is to say a higher level of intelligence! The three categories of research are in fact interdependent, though each has its own focus, and all of them are very cool. And at the root of them all is the RNN, which brings to mind the great Jürgen Schmidhuber.
Picking any one of these three directions to work on would be very worthwhile!
Learn a bit about OpenAI, and learn the frontiers of deep learning research.