"Editor's note" At present, the major technology giants, including Google, Microsoft and so on are vigorously developing in-depth learning technology, through various ways to dig deep learning talent, Mark Zuckerberg appointed Yann LeCun as director of the Facebook Artificial Intelligence Laboratory. These High-tech companies are exploring a special form of depth learning-convolution neural networks, which lecun more than others for visualizing convolution neural networks.
The following is the original text:
Mark Zuckerberg carefully selected deep learning expert Yann LeCun as head of the Facebook Artificial Intelligence Laboratory, which was established at the end of last year. A professor at New York University, Yann LeCun has had great success in deep learning research and won the Neural Network Pioneer Award at the IEEE World Congress on Computational Intelligence. Deep learning, a form of artificial intelligence, aims to mimic the human brain more closely. Initially, most AI researchers openly scoffed at deep learning, but in a few short years it suddenly spread across the high-tech landscape, spanning Google, Microsoft, Baidu and Twitter.
These high-tech companies are exploring a special form of deep learning, convolutional neural networks, with the aim of building Web services that automatically understand natural language and recognize images. The speech recognition system on Google's Android phones was built on neural networks. Baidu uses neural networks to develop a new visual search engine. Many scholars study deep learning, but few as successfully as LeCun. "For visual convolutional neural networks, LeCun has given far more than anyone else," said Leon Bottou, a Microsoft machine-learning expert who worked with LeCun early on.
Yann LeCun, director of the Facebook AI Lab
Faced with widespread doubt, LeCun stuck with neural networks. A powerful computer and a large dataset are needed to make a neural network work, and in the early 1980s the conditions available to LeCun could not support this new field. In that era of computing, scientists eagerly anticipated artificial intelligence, but neural networks were limited by the technology of the time and could not live up to that vision, so they were not viewed favorably. It was difficult to publish articles related to neural networks in authoritative academic journals, and the situation did not improve in the 1990s or even the early 21st century.
But LeCun persisted. "He was like a torch in the dark," said Geoffrey Hinton, a core scholar of deep learning. "At last, computing technology moved forward, providing the technical support necessary for deep learning to develop its potential."
LeCun's LeNet
For more than 20 years before joining Facebook, LeCun worked at Bell Labs, during which time he developed a system for recognizing handwritten digits, called LeNet. Bell Labs, the world's most famous computing research laboratory, is the birthplace of the transistor, the Unix operating system, and the C language.
LeNet could automatically read bank checks, marking the first time a convolutional neural network was applied in practice. Bottou said, "The convolutional network started out as a small toy, and LeCun turned it toward a much wider range of practical problems."
In the 1970s and early 1980s, models such as the Cognitron and the Neocognitron could learn to recognize patterns from data on their own, without human cues. But such models were quite complex, and researchers could not fully work out how to make them perform correctly. "What was missing was a supervised learning algorithm, what we now call backpropagation," LeCun said. This algorithm can effectively minimize the error rate.
Convolutional neural networks
Convolutional networks are composed of interconnected convolutional layers, much like the visual cortex that processes visual information in the brain. What sets convolutional networks apart is that they reuse the same filter at multiple locations in a single image. For example, once a convolutional network learns to recognize a face in one position, it can automatically recognize a face in any other position. The same principle applies to sound waves and handwritten text.
Andrew Ng, director of Baidu Research, argues that this allows artificial neural networks to be trained quickly, because "the memory footprint is small and there is no need to store a separate filter for each location in the image, which makes neural networks ideal for creating scalable deep nets." It also gives convolutional neural networks their advantage at recognizing patterns, as the sketch below illustrates.
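To make the filter-sharing idea concrete, here is a minimal sketch in Python, assuming a toy 6x6 "image" and a single hand-picked 3x3 vertical-edge filter (both values are illustrative, not LeNet's): the same nine weights are reused at every position, so the parameter count does not grow with the size of the image.

    # A minimal sketch of filter sharing in a convolutional layer.
    # The filter values and toy image are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.random((6, 6))              # toy grayscale input
    filt = np.array([[1.0, 0.0, -1.0],
                     [1.0, 0.0, -1.0],
                     [1.0, 0.0, -1.0]])     # one shared filter: 9 weights total

    out = np.zeros((4, 4))                  # one response per location
    for i in range(4):
        for j in range(4):                  # slide the *same* filter everywhere
            out[i, j] = np.sum(image[i:i + 3, j:j + 3] * filt)

    print(out.shape)                        # (4, 4): 16 responses from 9 weights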
When a convolutional neural network receives an image (the input), it converts it into an array of numbers representing features, and the "neurons" in each convolutional layer are tuned to recognize certain patterns in those numbers. Lower-level neurons recognize basic shapes, while higher-level neurons recognize more complex forms, such as dogs or humans. Each convolutional layer communicates with the adjacent layers, and the information is averaged as it propagates through the network. Finally, the network produces an output: its guess at what is in the image.
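A hedged sketch of that forward pass, again with made-up numbers: one convolutional stage, an averaging (subsampling) stage between layers, and a final guess. The random filter, random output weights and class names are assumptions for illustration, not LeNet's actual parameters.

    # Sketch of a forward pass: convolution -> averaging -> output guess.
    # All weights here are random placeholders, not a trained network.
    import numpy as np

    def conv2d(img, filt):
        # valid convolution: slide one shared filter over the image
        fh, fw = filt.shape
        oh, ow = img.shape[0] - fh + 1, img.shape[1] - fw + 1
        out = np.zeros((oh, ow))
        for i in range(oh):
            for j in range(ow):
                out[i, j] = np.sum(img[i:i + fh, j:j + fw] * filt)
        return out

    def avg_pool2x2(fmap):
        # average adjacent 2x2 blocks: the "averaging" between layers
        h, w = fmap.shape
        return fmap[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    rng = np.random.default_rng(0)
    image = rng.random((8, 8))                     # the input image
    filt = rng.standard_normal((3, 3))             # a low-level feature detector
    features = np.tanh(conv2d(image, filt))        # convolutional layer + squashing
    pooled = avg_pool2x2(features)                 # 6x6 feature map -> 3x3
    scores = pooled.flatten() @ rng.standard_normal((9, 3))  # crude output layer
    print("guess:", ["dog", "human", "other"][int(np.argmax(scores))])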
If the network gets the answer wrong, the connections between layers can be fine-tuned until it produces the correct one, and a neural network can perform this fine-tuning on itself, improving with each pass. This is where the backpropagation algorithm comes into play.
The backpropagation algorithm
The principle of backpropagation is to compute the error and update the strengths of the connections in each layer according to that error. In the mid-1980s, David Rumelhart, Geoffrey Hinton and Ronald Williams proposed a backpropagation algorithm that computes the errors for many inputs at once and averages them; the averaged error is then transmitted back through the network, from the output layer to the input layer.
LeCun's take on backpropagation was different: rather than averaging, he computed the error for each individual sample. His method worked just as well, and it was faster.
According to Bottou, LeCun arrived at this approach largely by accident. "The computers we used in France were not very powerful," he said. "We had to figure out how to compute the error as quickly as possible on as little hardware as possible." What seemed like a kludge at the time is now an important part of the artificial intelligence toolkit: the stochastic gradient descent algorithm.
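A minimal sketch of the contrast between the two update styles, on a toy one-weight least-squares problem (the single weight, the learning rate and the data are assumptions made for brevity): the batch style averages the error gradient over all samples before each update, while the stochastic style updates after every single sample, as in LeCun's approach.

    # Batch gradient descent vs. stochastic gradient descent on a toy
    # problem: fit y = w*x where the true weight is 3.0.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(100)
    y = 3.0 * x + rng.normal(scale=0.1, size=100)
    lr = 0.05

    # Batch style: average the gradient over every sample, then update once.
    w = 0.0
    for _ in range(50):
        grad = np.mean(2 * (w * x - y) * x)   # averaged error gradient
        w -= lr * grad
    print("batch result:", w)

    # Stochastic style: update immediately after each sample.
    w = 0.0
    for _ in range(5):
        for xi, yi in zip(x, y):
            w -= lr * 2 * (w * xi - yi) * xi  # per-sample error gradient
    print("SGD result:", w)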
LeCun's LeNet went on to be widely used in automatic teller machines and banks around the world to read the handwriting on checks. But skeptics remained. "The progress we had made was not enough to convince the computer vision community of the value of convolutional neural networks," LeCun said. Part of the reason is that, although convolutional neural networks are powerful, no one knew why they were so powerful; the principle underlying the technique had yet to be uncovered.
The prospect of deep learning
There was no shortage of criticism. Among the critics was the mathematician Vladimir Vapnik, founder of the support vector machine (SVM), one of the most widely used artificial intelligence models.
One afternoon in March 1995, Vapnik and Larry Jackel made a bet. Jackel held that by 2000 the intrinsic principles of deep artificial neural networks would be clarified; Vapnik insisted on pushing the deadline to 2005. They solemnly put the stakes down on paper and signed it in front of several witnesses. LeCun and Bottou were both present.
In the end, neither side could claim a clean win. In 2000, the core principles of neural networks remained shrouded in mystery, and even today researchers cannot fully fathom them mathematically. By 2005, deep neural networks were already in wide use in ATMs and banks even though their core principles had not been mastered; the research LeCun did in the early-to-mid 1990s laid an important foundation for eventually decoding deep neural networks.
LeCun points out: "Very few techniques, published 20 or 25 years earlier and left basically unchanged, turn out under the test of time to be the best. The speed with which people have accepted it is amazing. I've never seen a situation like this before."
Today's most widely used convolutional neural networks rely almost entirely on supervised learning. This means that if you want a neural network to learn to recognize a particular object, you must annotate a number of samples. Unsupervised learning, by contrast, means learning from unlabeled data, which is closer to how the human brain learns, and some deep learning researchers are now exploring that field.
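A brief sketch of the distinction, using made-up one-dimensional data (the data, the thresholding rule and the 2-means loop are all illustrative assumptions): supervised learning consumes annotated (sample, label) pairs, while unsupervised learning must find structure in the samples alone.

    # Supervised vs. unsupervised learning on toy 1-D data.
    import numpy as np

    rng = np.random.default_rng(1)
    samples = np.concatenate([rng.normal(-2, 0.5, 50), rng.normal(2, 0.5, 50)])

    # Supervised: every sample comes annotated with its class label.
    labels = np.array([0] * 50 + [1] * 50)
    threshold = (samples[labels == 0].mean() + samples[labels == 1].mean()) / 2

    # Unsupervised: no labels; a simple 2-means clustering instead.
    centers = np.array([samples.min(), samples.max()])
    for _ in range(10):
        assign = np.abs(samples[:, None] - centers).argmin(axis=1)
        centers = np.array([samples[assign == k].mean() for k in (0, 1)])

    print("supervised threshold:", threshold)
    print("unsupervised centers:", centers)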
"We are almost completely unfamiliar with how the brain learns," says LeCun. Neuronal synapses have been known to adjust themselves, but the mechanism of the neocortex is not yet clear. We know that the final answer is unsupervised learning, but it is not able to answer. ”
The backpropagation algorithm is unlikely to reflect how the human brain works, so researchers are exploring other algorithms. Moreover, convolutional networks are imperfect in how they pool data and take averages, and researchers are currently trying to improve this. "Convolutional networks lose information," Hinton says.
Take the human face as an example. If a system learns to recognize facial features such as eyes and lips, it can effectively detect that there is a face in an image, but it cannot tell different faces apart, nor can it pinpoint the exact position of the eyes on the face. For high-tech companies and governments that want to build detailed digital profiles of users or residents, the flaws above become an unavoidable weakness.
LeCun's research may not be perfect, but it is now the cutting edge of the field.
Original link: Deep learning is all the rage among the giants, and Facebook's brain depends on it (Editor: Wei)