Professor Yang Qiang's speech at the opening ceremony of the 2016 Global Artificial Intelligence Technology Conference (GAITC) and the 60th Anniversary of Artificial Intelligence commemorative event _ Academic Frontier
Source: Internet
Author: Long
Link: http://www.zhihu.com/question/46485555/answer/101551275
Source: Zhihu
Copyright belongs to the author. For commercial reprints, please contact the author for authorization; for non-commercial reprints, please indicate the source.
Professor Yang Qiang pointed out that the combination of search and learning is the direction in which artificial intelligence is developing. We cannot yet rely entirely on machines to learn by themselves in a fully automated way: machine learning suffers from sample bias, so human intervention is still needed. In the future, transfer learning will be the way to address this problem, and transfer learning will also allow AI to shed its heavy reliance on big data.
The success of deep neural networks lets us observe the same data at different levels of abstraction, which gives us what we call the bigger picture.
Reinforcement learning should be regarded as a powerful tool for planning in artificial intelligence, but it is not the only one. Compared with deep learning, this field is actually older, yet it went through a long period of silence because it faced a serious computational bottleneck and could not make use of large amounts of data. For a long time, reinforcement learning could only solve toy-sized problems with very little data.
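To make the "toy-sized problems" point concrete, here is a minimal sketch (not from the talk) of classical tabular Q-learning on a hypothetical one-dimensional corridor. The Q-table keeps one entry per (state, action) pair, which is exactly why this style of reinforcement learning does not scale to large state spaces.

```python
import random

# Tabular Q-learning on a toy corridor: states 0..9, actions left/right,
# reward 1 only at the rightmost state. The Q-table stores one value per
# (state, action) pair, so memory and data needs grow with the state count.

N_STATES = 10
ACTIONS = (-1, +1)                 # move left / move right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(s):
    """Best known action in state s, breaking ties at random."""
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

def step(state, action):
    """Toy environment: clamp to the corridor, reward at the right end."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):
    s = 0
    for _ in range(200):                              # cap episode length
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        s2, r, done = step(s, a)
        best_next = 0.0 if done else max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])  # TD update
        s = s2
        if done:
            break

# Learned state values rise toward the rewarding right end of the corridor.
print([round(max(Q[(s, a)] for a in ACTIONS), 2) for s in range(N_STATES)])
```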
What Google DeepMind did recently was to combine deep learning with reinforcement learning, so that the states, one of the bottlenecks reinforcement learning needed to break through, can now be represented implicitly by a neural network. This implicit representation allows reinforcement learning to handle data at scale. It also highlights the idea of end-to-end learning, which is why reinforcement learning is seen as the next breakthrough.
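The talk gives no technical detail, but the core idea can be sketched as follows: instead of a table with one cell per state, Q(s, a) becomes a parametric function of state features, so states never visited during training can still be scored. The sketch below is only an illustration under that assumption, using a linear model with four weights on a larger version of the corridor above (a real DQN uses a deep network plus experience replay and target networks, all omitted here); a small shaping reward is added so the toy converges quickly.

```python
import random

# Q-learning with function approximation: 4 shared weights replace a table
# that would need 2 * 1000 cells. The corridor now has 1000 states and a
# small shaping reward for moving right so the example learns fast.

N_STATES = 1000
ACTIONS = (-1, +1)
ALPHA, GAMMA, EPS = 0.01, 0.9, 0.1

def features(s, a):
    # hypothetical hand-made features: normalised position, action, interaction, bias
    x = s / (N_STATES - 1)
    return (x, float(a), x * a, 1.0)

w = [0.0, 0.0, 0.0, 0.0]           # shared parameters instead of table cells

def q(s, a):
    return sum(wi * fi for wi, fi in zip(w, features(s, a)))

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 0.01 * (nxt - state)                  # shaping: small bonus for moving right
    if nxt == N_STATES - 1:
        reward += 1.0                              # big reward at the right end
    return nxt, reward, nxt == N_STATES - 1

for episode in range(200):
    s = random.randrange(N_STATES - 1)
    for _ in range(200):                           # cap episode length
        a = random.choice(ACTIONS) if random.random() < EPS \
            else max(ACTIONS, key=lambda act: q(s, act))
        s2, r, done = step(s, a)
        target = r if done else r + GAMMA * max(q(s2, a2) for a2 in ACTIONS)
        td_error = target - q(s, a)
        # semi-gradient update of the shared parameters
        w = [wi + ALPHA * td_error * fi for wi, fi in zip(w, features(s, a))]
        s = s2
        if done:
            break

# The preference for moving right generalises to states the agent never saw.
print("q(500, right) - q(500, left) =", round(q(500, +1) - q(500, -1), 3))
```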
What AlphaGo reveals to us is that combining [search and learning] gives a complete intelligent machine. We can call this the generality of artificial intelligence: the two techniques can be combined in different proportions, for example a little more search and a little less machine learning.
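A hedged illustration of that "more search, less learning" dial, on a toy take-away game (remove 1-3 counters; whoever takes the last one wins) rather than Go: a depth-limited search looks ahead a few plies and, at the horizon, trusts a value estimate that stands in for a learned evaluation network. The value function here is a hypothetical hand-written heuristic, not anything from AlphaGo; the search_depth parameter is the knob between the two components.

```python
# Search + learning on a toy take-away game: negamax search up to a fixed
# depth, with a (placeholder) learned value estimate at the leaves.

MOVES = (1, 2, 3)

def value_estimate(pile):
    """Placeholder for a learned value network: a crude score in [-1, 1]
    from the point of view of the player about to move (an assumption,
    not a trained model)."""
    return 1.0 if pile % 4 != 0 else -1.0

def negamax(pile, depth):
    """Depth-limited search; below the horizon, trust the value estimate."""
    if pile == 0:
        return -1.0          # the previous player took the last counter and won
    if depth == 0:
        return value_estimate(pile)
    return max(-negamax(pile - m, depth - 1) for m in MOVES if m <= pile)

def best_move(pile, search_depth=3):
    """More search_depth = more search; shallower search leans on the estimate."""
    moves = [m for m in MOVES if m <= pile]
    return max(moves, key=lambda m: -negamax(pile - m, search_depth - 1))

print(best_move(10))   # takes 2, leaving the opponent a losing multiple of 4
```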
At present we cannot rely entirely on machines to learn by themselves in a fully automated way; at least so far we have not found such a path. Machine learning suffers from a serious problem of self-bias, which corresponds to an important concept in statistics: the data we obtain may be biased. We may build a model for which most of the data is useful, yet there are special cases. How we deal with these exceptions, and how we handle the deviation between our training data and the data the model is applied to, is what we need to study next.
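A small numerical illustration (mine, not the speaker's) of that bias: a quantity estimated from a conveniently collected sample can differ sharply from its value in the population the model is applied to, and a simple reweighting narrows the gap when we know, or can model, how the sampling was biased.

```python
import random

# Sample bias demo: positives are over-sampled, so the naive estimate of the
# positive rate is inflated; weighting each example by the inverse of its
# sampling probability recovers (approximately) the population rate.

random.seed(0)

# Population: 30% of individuals have value 1, the rest 0.
population = [1] * 3000 + [0] * 7000

# Biased collection: value-1 individuals are sampled with probability 0.9,
# value-0 individuals with probability 0.3.
sample = [x for x in population if random.random() < (0.9 if x == 1 else 0.3)]

naive = sum(sample) / len(sample)

# p[i] is the probability that example i was sampled; weighting by 1/p[i]
# (a Horvitz-Thompson style correction) undoes the over-representation.
p = [(0.9 if x == 1 else 0.3) for x in sample]
corrected = sum(x / pi for x, pi in zip(sample, p)) / sum(1 / pi for pi in p)

print(f"true rate ~0.30, naive estimate {naive:.2f}, reweighted estimate {corrected:.2f}")
```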
A very promising technique is called transfer learning. A deep model can also be run in reverse and become a generative model: it can not only make decisions about data, it can also generate new data, such as images, as a description.
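To illustrate just the transfer part (the generative direction is left aside), here is a small synthetic sketch, again my own and not Prof. Yang's method: a linear classifier is first fit on a large "source" task, and its weights then initialise training on a related "target" task with only a handful of labelled examples. In this synthetic setup the transferred model usually beats the one trained from scratch on the small target set.

```python
import numpy as np

# Parameter-transfer sketch: pretrain on a big source task, fine-tune on a
# tiny related target task, and compare with training from scratch.

rng = np.random.default_rng(0)

def make_task(w_true, n, noise=0.5):
    """Hypothetical task: noisy linear labelling rule over 5 features."""
    X = rng.normal(size=(n, 5))
    y = (X @ w_true + noise * rng.normal(size=n) > 0).astype(float)
    return X, y

def train_logreg(X, y, w_init, lr=0.1, steps=500):
    """Plain gradient descent on logistic loss, starting from w_init."""
    w = w_init.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)         # gradient step
    return w

def accuracy(w, X, y):
    return float((((X @ w) > 0).astype(float) == y).mean())

w_source = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
w_target = w_source + np.array([0.2, 0.0, -0.1, 0.1, 0.0])   # related task

X_src, y_src = make_task(w_source, 5000)        # plenty of source data
X_tgt, y_tgt = make_task(w_target, 20)          # only 20 target examples
X_test, y_test = make_task(w_target, 2000)      # held-out target data

w_pre = train_logreg(X_src, y_src, np.zeros(5))       # pretrain on source
scratch = train_logreg(X_tgt, y_tgt, np.zeros(5))     # small data only
transferred = train_logreg(X_tgt, y_tgt, w_pre)       # fine-tune pretrained weights

print("scratch on target test:    ", accuracy(scratch, X_test, y_test))
print("transferred on target test:", accuracy(transferred, X_test, y_test))
```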
If we get transfer learning right, the next question is whether we can string together, along a timeline, all the learning tasks a human experiences, so that a machine, like a human, grows in learning ability and intelligence over time, while the effort required and the number of samples needed gradually decrease. That is one of the directions we are trying.
A recent paper also illustrates the importance of transfer learning. It is about Bayesian Program Learning (one-shot learning), which can learn from a single example, whereas deep learning, as we know, needs thousands of examples. It uses a concept we rarely exploited in the past, namely structure: if we understand the structure of a problem, then a concrete instance of that structure can be learned from just one example. The other part, the one that requires many examples, consists of the parameters, the statistics, and those we can in fact acquire through transfer learning. That means the circle is complete; it becomes a closed loop.

Looking back over the history of artificial intelligence, with its failures and its successes, what lessons can we draw now? I think the current successes of artificial intelligence are inseparable from high-quality big data, but that does not mean future successes must depend on big data. We should ask whether, in the future, artificial intelligence can also succeed with small data. Open big data, rich applications, and large computing power really come from industry; training talent and research on small data depend on academia. Combining the two is a direction for our future development.
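The structure/parameters split can be made concrete with a deliberately simplified sketch that is not the Lake et al. generative-program model: the expensive, shared "structure" is a feature map (standing in for a representation learned from lots of prior data), and once it exists, a brand-new class can be recognised from a single example by nearest-neighbour comparison in that feature space. The feature map below is a hypothetical fixed projection, purely for illustration.

```python
import numpy as np

# One-shot classification via shared structure: reuse a fixed feature map,
# store one example per new class, and classify queries by nearest prototype.

def feature_map(x):
    # Hypothetical shared structure: a fixed projection plus a nonlinearity.
    # In practice this would be a representation learned from large prior data.
    W = np.array([[0.7, -0.3, 0.5],
                  [0.2,  0.9, -0.4]])
    return np.tanh(W @ x)

# One labelled example per new class ("one shot").
prototypes = {
    "class_a": feature_map(np.array([1.0, 0.0, 0.0])),
    "class_b": feature_map(np.array([0.0, 1.0, 1.0])),
}

def classify(x):
    f = feature_map(x)
    return min(prototypes, key=lambda c: np.linalg.norm(f - prototypes[c]))

# A query close to class_a's single example is recognised with no training.
print(classify(np.array([0.9, 0.1, -0.1])))
```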
Reinforcement learning is more useful than you might think; it is not limited to Go (Weiqi) or computer games. In finance, in our daily lives, even in education, and in robot planning, reinforcement learning is indispensable.
What we will see tomorrow is transfer learning, because transfer learning lets us migrate models built on big data to small-data settings, so that millions of people can benefit and everyone can enjoy the dividends of artificial intelligence.