This past Spring Festival gave programmers a rare holiday break, but artificial intelligence kept advancing right through it. Facebook AI director Yann LeCun, Yang Qiang, head of the Department of Computer Science and Engineering at the Hong Kong University of Science and Technology, and other leading AI figures offered sober reflections on the current AI boom, while Google demonstrated a game-playing AI that surpasses human performance under specific conditions. Here is a look at the New Year insights these AI heavyweights have brought us.
Yann LeCun: IBM TrueNorth is "cargo cult science"; unsupervised learning is the future
Facebook's AI director Yann LeCun had an in-depth conversation with IEEE Spectrum's Lee Gomes about deep learning, the current hype in artificial intelligence, and the field's future direction. He argues that the analogy between deep learning and the brain lends the field a magical aura that could lead to another AI winter. The WeChat account "Machine Heart" has translated the dialogue into Chinese; some of LeCun's main points from that translation are excerpted below:
IBM TrueNorth is "cargo cult science"
Spectrum: You seem to go out of your way to distance your work from neuroscience and biology. For example, you speak of convolutional networks rather than convolutional neural networks, and of "units" in your algorithms rather than "neurons".
LeCun: That's true. Some parts of our models are inspired by neuroscience, but many parts have nothing to do with neuroscience at all; they come from theory, intuition, and empirical exploration. Our models are not meant to be models of the brain, and we make no claims of neuroscientific relevance. At the same time, I am happy to say that convolutional networks are inspired by basic knowledge of the visual cortex. Some people are indirectly inspired by neuroscience but refuse to admit it; I do admit it, and that inspiration has been very helpful. But I am careful not to use words that spark hype, because there has been a frenzy of hype in this field, and that is dangerous. It sets expectations for funding agencies, the public, potential customers, startups, and investors; they come to believe we are building systems as powerful as the brain, when in reality we are far from that goal. That can easily lead to another "winter cycle".
There is also "cargo cult science" here. In cargo cult science, you copy the outward appearance of the machinery without deeply understanding the principles behind it. In aviation, for example, you might build a plane that reproduces the appearance of a bird: its feathers, its wings, and so on. People in the 19th century were fond of doing this, and the results were very limited.
The same is true in artificial intelligence. Some researchers try to replicate every detail of the neurons and synapses we know about, then switch on a huge simulated neural network on a supercomputer and hope artificial intelligence will emerge. That is cargo cult artificial intelligence. And there are many serious researchers who receive large amounts of funding and basically believe in it.
Spectrum: Do you think IBM's TrueNorth project (IBM's brain-inspired chip, which integrates 5.4 billion transistors, 4,096 cores, 1 million "neurons", and 256 million "synapses") belongs to cargo cult science?
LeCun: That sounds a little harsh. But I do think the IBM team's claims are somewhat biased and misleading. On the surface their announcements are impressive, but they haven't actually achieved anything of value. Before TrueNorth, the team used an IBM supercomputer to "simulate a mouse-scale brain", but it was just a random neural network that did nothing except consume CPU cycles.
The tragedy of the TrueNorth chip is that it could have been useful if it hadn't hewn so closely to biology and hadn't used the "spiking integrate-and-fire neuron" model. In my opinion (and I used to be a chip designer), when you develop a chip you have to be sure it can do something useful. If you build a convolutional-network chip, and you know exactly how to do that, it can be applied immediately to computing devices. IBM built the wrong thing, and we can't use it to do anything useful.
Spectrum: Are there any other examples?
LeCun: Fundamentally, much of the EU's Human Brain Project is based on the idea that we should build chips that simulate neuron function as closely as possible, then use those chips to build a supercomputer, and that when we switch it on with some learning rules, AI will emerge. I think that's sheer nonsense.
Admittedly, I am referring to the claims of the European Union's Human Brain Project, not to everyone involved in it. Many people participate simply because the project offers huge amounts of funding that they cannot refuse.
Unsupervised learning is the future
Spectrum: How much of machine learning in the broad sense remains to be discovered?
LeCun: A great deal. The learning styles we use in practical deep learning systems are limited. What actually works in practice is "supervised learning". You show the system a picture and tell it it's a car, and it adjusts its parameters so that next time it says "car". Then you show it a chair, and a person. After hundreds of thousands of examples and somewhere between days and weeks of computing time (depending on the size of the system), it gets it.
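The supervised-learning loop LeCun describes can be sketched in a few lines. This is a minimal toy: a logistic-regression "system" trained by gradient descent on synthetic two-class data (standing in for "car" vs. "chair" pictures); all features and labels here are made up for illustration.

```python
import numpy as np

# Toy supervised learning: the system sees labeled examples and adjusts its
# parameters so it answers correctly next time. Features/labels are synthetic.
rng = np.random.default_rng(0)

# Two synthetic feature clusters: class 0 ("chair") and class 1 ("car").
X = np.vstack([rng.normal(-1.0, 0.5, (50, 2)), rng.normal(1.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w, b = np.zeros(2), 0.0
for _ in range(500):                          # repeated exposure to labeled examples
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # predicted probability of "car"
    grad_w = X.T @ (p - y) / len(y)           # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w                         # nudge parameters toward correct answers
    b -= 0.5 * grad_b

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.2f}")
```

Real systems differ in scale (deep networks, millions of images), but the mechanism of "show example, compare answer, adjust parameters" is the same.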
But humans and animals don't learn that way. When you were a baby, no one told you the names of all the objects you saw. Yet you learned the concepts behind those objects: you learned that the world is three-dimensional, and that when one object is placed behind another, the hidden object still exists. These concepts are not innate; you learn them. We call this kind of learning "unsupervised" learning.
In the mid-2000s, many of us were involved in the deep learning renaissance, including Geoff Hinton, Yoshua Bengio, and myself (the so-called "deep learning community"), as well as Andrew Ng. The idea of using unsupervised learning rather than supervised learning began to rise: unsupervised learning could be used to "pre-train" a deep network. We made a lot of progress in this direction, but in the end what proved practical was the good old supervised learning we had been doing since the 1980s.
But from a research point of view, we have always been interested in how to do unsupervised learning properly. We now have workable unsupervised techniques, but the problem is that we can beat them simply by collecting more data and using supervised learning. That is why, at the current stage of the industry, deep learning applications are basically all supervised. But it won't be that way in the future.
In essence, the brain is far better than our models at unsupervised learning, which means our artificial learning systems are still missing many basic principles of biological learning.
The next frontier is NLP
Spectrum: Facebook recently unveiled the face recognition algorithm DeepFace, and many reports say its accuracy approaches human performance. But weren't those results obtained on a carefully curated dataset? If the system encountered random images from the internet, could it achieve the same success?
LeCun: Certainly the system is more sensitive to image quality than humans are. Humans can recognize faces across many different hairstyles and kinds of facial hair, and the computer system has little advantage there. But the system can identify a person out of a very large collection of people, a collection far beyond what any human could handle.
Spectrum: How does deep learning perform in areas beyond image recognition, especially on problems related to general intelligence, such as natural language?
LeCun: A large part of our work at Facebook focuses on this. How do we combine the advantages of deep learning with capabilities it currently lacks: the ability to represent the world, the ability to accumulate knowledge from transient signals (such as language), reasoning, and the ability to store knowledge in ways different from today's deep learning systems? Current deep learning systems are like learning a motor skill; we train them the way we teach ourselves to ride a bike. You acquire a skill, but no large body of factual memory or knowledge is involved.
But other kinds of learning require you to remember facts; you have to remember and store things. At Facebook, Google, and many other places, a lot of our work involves building a neural network with a separate memory module attached, which can be used in areas such as natural language understanding.
We are beginning to see deep learning augmented with memory modules achieve impressive results in natural language processing. These systems are based on the idea of representing words and sentences with continuous vectors, transforming those vectors through multiple layers of a deep architecture, and storing them in an associative memory. This works very well for question answering and for language translation. One example of such a model is the Memory Network, recently proposed by Facebook scientists Jason Weston, Sumit Chopra, and Antoine Bordes. Scientists at Google/DeepMind have also proposed a related concept, the "Neural Turing Machine".
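The associative-memory idea behind these models can be illustrated with a toy sketch: store facts as continuous vectors, then retrieve the best match for a question by similarity. This is not the actual Memory Network architecture; the sentences are invented, and bag-of-words vectors stand in for the learned embeddings a real system would use.

```python
import numpy as np

# Toy associative memory: facts become vectors, and a question retrieves the
# best-matching fact by cosine similarity. Real Memory Networks learn their
# embeddings and can chain multiple retrievals; this shows only the lookup.
facts = ["john went to the kitchen",
         "mary picked up the ball",
         "mary went to the garden"]
question = "who went to the garden"

vocab = sorted({w for s in facts + [question] for w in s.split()})

def encode(sentence):
    # Unit-normalized bag-of-words vector over the shared vocabulary.
    v = np.array([sentence.split().count(w) for w in vocab], dtype=float)
    return v / (np.linalg.norm(v) or 1.0)

memory = np.array([encode(f) for f in facts])  # the stored memory module
q = encode(question)
scores = memory @ q                            # similarity of question to each fact
best = facts[int(np.argmax(scores))]
print("retrieved:", best)
```

A learned system would go further and produce the answer ("mary") from the retrieved fact, but the retrieval step is the core of the memory-module idea.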
Spectrum: So you don't think deep learning alone will be the key to unlocking general AI?
LeCun: It will be part of the solution. To some extent the solution will look like a huge and complex neural network, but one quite different from anything seen in the literature so far. That said, you can already see some relevant papers. Many people are studying "recurrent neural networks", in which the output is fed back to the input, so that the network can form a chain of inference. You can use them to process sequential signals such as speech, audio, video, and language, and the preliminary results are quite good. The next frontier of deep learning is natural language understanding.
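The feedback loop LeCun mentions, output fed back into input, is the defining feature of a recurrent network, and its single step is simple to write down. The sketch below uses random, untrained weights and made-up input; it demonstrates only the recurrence, not a learned model.

```python
import numpy as np

# Minimal recurrent step: the hidden state is fed back in at every time step,
# so information from earlier inputs can influence later processing.
rng = np.random.default_rng(1)
n_in, n_hidden = 3, 4
W_xh = rng.normal(0, 0.5, (n_hidden, n_in))      # input -> hidden weights
W_hh = rng.normal(0, 0.5, (n_hidden, n_hidden))  # hidden -> hidden (the feedback loop)
b = np.zeros(n_hidden)

def run(sequence):
    h = np.zeros(n_hidden)               # memory starts empty
    for x in sequence:                   # one step per element of the sequence
        h = np.tanh(W_xh @ x + W_hh @ h + b)
    return h

seq = [rng.normal(size=n_in) for _ in range(5)]
h_final = run(seq)
print("final hidden state:", h_final)
```

Because the hidden state carries history forward, the same inputs in a different order generally produce a different final state, which is exactly what makes these networks suitable for speech and language.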
Spectrum: If everything goes well, what can we expect machines to do that they can't do now?
LeCun: You may see better speech recognition systems, though in a sense they will be invisible. Your digital assistant will become better; there will be better question-answering systems; you will be able to talk to your computer, ask it questions, and have it find answers in a knowledge base; machine translation will become more accurate; you will also see self-driving cars and smarter robots, and the self-driving cars will use convolutional networks.
How do machines acquire common sense?
Spectrum: The Winograd Schema Challenge, an improvement on the Turing test, involves not only natural language and common sense but also an understanding of how modern society works. How can computers meet these challenges?
LeCun: The key to this question is how to represent knowledge. In "traditional" artificial intelligence, factual knowledge is entered by hand in the form of graphs (sets of symbols or entities and the relations between them). But we all know that AI systems should acquire knowledge automatically through learning. So the question becomes: "How can machines learn to represent knowledge about facts and relations?"
Deep learning is undoubtedly part of the solution, but not all of it. The problem with symbols is that they are just meaningless strings of bits. In deep learning systems, entities are instead represented by large vectors learned from data and the features they capture, and reasoning is learned by learning functions that operate on those vectors. Facebook researchers Jason Weston, Ronan Collobert, Antoine Bordes, and Tomas Mikolov have led the effort to represent words and language with vectors.
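The word-vector idea can be made concrete with a toy example. The vectors below are hand-crafted (three dimensions loosely meaning "royalty", "male", "female") rather than learned from data as in the work cited above; real embeddings have hundreds of learned dimensions, but the vector arithmetic works the same way.

```python
import numpy as np

# Hand-crafted stand-ins for learned word embeddings. The famous analogy
# "king - man + woman ≈ queen" falls out of simple vector arithmetic.
vec = {
    "king":  np.array([0.9, 0.9, 0.1]),
    "queen": np.array([0.9, 0.1, 0.9]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
    "child": np.array([0.1, 0.5, 0.5]),
}

def nearest(target, exclude):
    # Cosine-similarity search over the tiny vocabulary.
    best, best_sim = None, -1.0
    for word, v in vec.items():
        if word in exclude:
            continue
        sim = v @ target / (np.linalg.norm(v) * np.linalg.norm(target))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

target = vec["king"] - vec["man"] + vec["woman"]
answer = nearest(target, exclude={"king", "man", "woman"})
print(answer)
```

Because meaning lives in the geometry of the vectors rather than in opaque symbols, relations between entities become functions on vectors that a network can learn.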
Spectrum: One of the classic problems of artificial intelligence is giving machines common sense. What is the view on this in the deep learning field?
LeCun: I think some common sense can be acquired through predictive unsupervised learning. For example, I might have a machine watch many videos of objects being thrown or dropped. I would train it by showing it a video and asking: "What happens next? What will the frame look like one second from now?" By training the machine to predict what the world will look like a second, a minute, an hour, or a day later, it will acquire a good representation of the world. This lets the machine learn the many constraints of the physical world, such as "an object thrown into the air falls after a while", "an object cannot be in two places at once", or "an object still exists while it is occluded". Understanding the constraints of the physical world will enable machines to "fill in the gaps" and predict the state of the world when told a story containing a series of events. Jason Weston, Sumit Chopra, and Antoine Bordes are building such systems using the Memory Networks I just mentioned.
Yang Qiang: Could relying on computing power and big data lead to an AI winter?
Baidu's chief scientist, Professor Andrew Ng, has said: "With the combination of big data, new artificial intelligence algorithms keep getting better; in the future we may, for the first time, close the entire virtuous cycle of artificial intelligence." Indeed, rising computing power and falling computing costs have let big data drive the current AI "summer". But Yang Qiang, head of Computer Science and Engineering at the Hong Kong University of Science and Technology, who has long studied artificial intelligence and big data, reflected after the 2015 Winter Davos forum on another hidden danger of AI's dependence on data: when summer comes, can winter be far behind?
Amid the collective carnival over artificial intelligence, Professor Yang Qiang calmly observed:
The current achievements of artificial intelligence are concentrated on the interfaces between people and computers: speech, vision, text. But the highest manifestation of human intelligence is abstract reasoning and association, which lets us connect one event to another and one piece of knowledge to another. Can this so-called "strong AI" capability be obtained simply by stacking up large numbers of single-purpose "weak AI" systems? Today, computers are nowhere near this kind of cross-domain learning ability; we do not even know where the entrance to that temple of knowledge lies. The main reasons are that our computing power is still not strong enough, and that in these areas there is still no big data that fully reflects human thought. Our learning algorithms also demand endless big data as the "fuel" that keeps the AI machine running, and preparing that big data requires expensive human labor and enjoys no snowballing economy of scale. These shortcomings may well keep us from obtaining truly intelligent tools, leaving a big gap in Dr. Ng's "virtuous closed loop".
What is fatal about these gaps is that they recall the embarrassment of the AI winter thirty years ago, when the field ran out of fuel: a beautiful Tesla sits before us today, but we cannot find a charging station anywhere!
I believe IBM's Watson and Baidu's Minwa computing platform still have plenty of room for improvement, but computation that must feed on endless data will soon hit a bottleneck, to say nothing of how difficult it remains to find big data that comprehensively reflects human thought.
So, even as we study the applications of deep learning, we should take time to ponder Professor Yang Qiang's question: "In the collective frenzy over artificial intelligence, might we overlook the most essential things and inadvertently slip back into the AI winter of thirty years ago?"
Google: DNN + reinforcement learning brings AI close to humans on complex tasks
Google DeepMind's AI taught itself to play 49 games (by watching video of the game to find patterns, manipulating the controller, receiving feedback through the score, and continually adjusting its control based on that feedback), even beating professional human players in 23 of them. The Google DeepMind team published a paper in Nature explaining how this more capable game-playing AI was built:
The core of DeepMind's AI design is getting the computer to discover the patterns that exist in data. The solution combines deep neural networks with reinforcement learning. The AI does not know the rules of the game; it uses a deep neural network to understand the state of the game and to discover which actions lead to the highest score.
This is possible partly because increased computing power now lets the AI handle much larger datasets; "seeing" an Atari game means processing the equivalent of 2 million pixels of data per second. It also benefits from DeepMind's use of reinforcement learning to train the AI, the first end-to-end reinforcement learning applied to high-dimensional sensory input. Compared with earlier milestones in computer game playing, such as chess, these games are closer to the messiness of the real world. Demis Hassabis, who leads the effort at Google, says this is the first algorithm to perform at human level across a range of complex tasks.
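The reinforcement-learning half of this recipe can be sketched with tabular Q-learning on a tiny invented corridor "game". This is not DeepMind's system: they replace the value table below with a deep network reading raw pixels, but the trial-action-score-adjust loop is the same idea.

```python
import numpy as np

# Tabular Q-learning on a 5-cell corridor: the agent never sees the rules; it
# only acts (left/right), observes the score, and adjusts its value estimates.
n_states = 5                      # cells 0..4; the reward waits at cell 4
Q = np.zeros((n_states, 2))       # value estimate per (state, action); 0=left, 1=right
alpha, gamma, eps = 0.5, 0.9, 0.2
rng = np.random.default_rng(3)

for _ in range(500):              # episodes of trial, error, and score feedback
    s = 0
    while s != n_states - 1:
        # Mostly act greedily, sometimes explore at random.
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == n_states - 1 else 0.0       # score feedback from the game
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2

policy = np.argmax(Q, axis=1)     # learned behavior: best-scoring action per cell
print("greedy action per cell:", policy)
```

After training, the greedy policy heads right toward the reward from every non-terminal cell, purely from score feedback. Scaling this to Atari required the deep network to generalize across the astronomically many pixel states no table could hold.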