Teaching Machines to Understand Us, Part 2: The History of Deep Learning

Source: Internet
Author: User
Tags: dnn

Deep history

The roots of deep learning reach back further than LeCun's time at Bell Labs. He and a few others who pioneered the technique were actually resuscitating a long-dead idea in artificial intelligence.

When the field got started, in the 1950s, biologists were just beginning to develop simple mathematical theories of how intelligence and learning emerge from signals passing between neurons in the brain. The core idea, still current today, was that the links between neurons are strengthened if those cells communicate frequently. The fusillade of neural activity triggered by a new experience adjusts the brain's connections so it can understand it better the second time around.

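That core idea can be written down as a simple weight update. The snippet below is a minimal sketch for illustration only; the variable names, learning rate, and toy activity vectors are assumptions, not anything from the 1950s theories the article describes.

import numpy as np

# Minimal sketch of a Hebbian-style update: connections between neurons that
# are active together get strengthened. Names and values are illustrative
# assumptions, not taken from the article.
def hebbian_update(weights, pre_activity, post_activity, learning_rate=0.01):
    # Each weight grows in proportion to how strongly its two neurons fired together.
    return weights + learning_rate * np.outer(post_activity, pre_activity)

weights = np.zeros((3, 4))              # 4 input neurons feeding 3 output neurons
pre = np.array([1.0, 0.0, 1.0, 1.0])    # activity triggered by a new experience
post = np.array([0.0, 1.0, 1.0])        # the response it produces downstream
weights = hebbian_update(weights, pre, post)  # repeated co-activity strengthens these links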

In 1956, the psychologist Frank Rosenblatt used those theories to invent a way of making simple simulations of neurons in software and hardware. The New York Times announced his work with the headline "Electronic 'Brain' Teaches Itself." Rosenblatt's Perceptron, as he called his design, could learn to sort simple images into categories, for instance triangles and squares. Rosenblatt usually implemented his ideas in giant machines thickly tangled with wires, but they established the basic principles at work in artificial neural networks today.

One computer he built had eight simulated neurons, made from motors and dials, connected to 400 light detectors. Each of the neurons received a share of the signals from the light detectors, combined them, and, depending on what they added up to, spit out either a 1 or a 0. Together those digits amounted to the Perceptron's "description" of what it saw. Initially the results were garbage, but Rosenblatt used a method called supervised learning to train the Perceptron to generate results that correctly distinguished different shapes. He would show the Perceptron an image along with the correct answer. Then the machine would tweak how much attention each neuron paid to its incoming signals, shifting those "weights" toward settings that would produce the right answer. After many examples, those tweaks endowed the computer with enough smarts to correctly categorize images it had never seen before. Today's deep-learning networks use sophisticated algorithms and have millions of simulated neurons, with billions of connections between them. But they are trained in the same way.

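The supervised-learning loop described above can be sketched in a few lines of modern code. This is only an illustration: it assumes tiny 3x3 "images" of two shapes rather than Rosenblatt's 400 light detectors, and the data and parameter values are invented for the example.

import numpy as np

rng = np.random.default_rng(0)

# Each row is a flattened 3x3 image: a filled square versus a diagonal line.
X = np.array([
    [1, 1, 1, 1, 1, 1, 1, 1, 1],   # square
    [1, 0, 0, 0, 1, 0, 0, 0, 1],   # diagonal
], dtype=float)
y = np.array([1, 0])               # correct answers supplied by the teacher

weights = rng.normal(scale=0.1, size=X.shape[1])
bias = 0.0
learning_rate = 0.1

for epoch in range(20):
    for inputs, target in zip(X, y):
        prediction = 1 if inputs @ weights + bias > 0 else 0   # spit out a 1 or a 0
        error = target - prediction
        # Shift the weights toward settings that produce the right answer.
        weights += learning_rate * error * inputs
        bias += learning_rate * error

print([1 if x @ weights + bias > 0 else 0 for x in X])  # should match y after training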

Rosenblatt predicted that Perceptrons would soon be capable of feats like greeting people by name, and his idea became a linchpin of the nascent field of artificial intelligence. Work focused on making Perceptrons with more complex networks, arranged into a hierarchy of multiple learning layers. Passing images or other data successively through the layers would allow a Perceptron to tackle more complex problems. Unfortunately, Rosenblatt's learning algorithm didn't work on multiple layers. In 1969 the AI pioneer Marvin Minsky, who had gone to high school with Rosenblatt, published a book-length critique of Perceptrons that killed interest in neural networks at a stroke. Minsky claimed that getting more layers working wouldn't make Perceptrons powerful enough to be useful. Artificial-intelligence researchers abandoned the idea of making software that learned. Instead, they turned to using logic to craft working facets of intelligence, such as an aptitude for chess. Neural networks were shoved to the margins of computer science.

Nonetheless, LeCun was mesmerized when he read about Perceptrons as an engineering student in Paris in the early 1980s. "I was amazed that this was working and wondered why people abandoned it," he says. He spent days at a library near Versailles, hunting for papers published before Perceptrons went extinct. Then he discovered that a small group of researchers in the United States were covertly working on neural networks again. "This was a very underground movement," he says. In papers carefully purged of words like "neural" and "learning" to avoid rejection by reviewers, they were working on something very much like Rosenblatt's old problem of how to train neural networks with multiple layers.

LeCun joined the underground after he met its central figures in 1985, including a wry Brit named Geoff Hinton, who now works at Google and the University of Toronto. They immediately became friends, mutual admirers, and the nucleus of a small community that revived the idea of neural networking. They were sustained by a belief that using a core mechanism seen in natural intelligence was the only way to build artificial intelligence. "The only method that we knew worked was a brain, so in the long run it had to be that systems something like that could be made to work," says Hinton.

LeCun's success at Bell Labs came about after he, Hinton, and others perfected a learning algorithm for neural networks with multiple layers. It was known as backpropagation, and it sparked a rush of interest from psychologists and computer scientists. But after LeCun's check-reading project ended, backpropagation proved tricky to adapt to other problems, and a new way to train software to sort data was invented by a Bell Labs researcher down the hall from LeCun. It didn't involve simulated neurons and was seen as mathematically more elegant. Very quickly it became a cornerstone of Internet companies such as Google, Amazon, and LinkedIn, which use it to train systems that block spam or suggest things for you to buy.

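Backpropagation itself can be shown compactly on a toy problem. The sketch below trains a two-layer network on XOR by pushing the output error backward through the layers; the architecture, learning rate, and task are assumptions chosen for brevity, not a description of LeCun's check-reading network.

import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(scale=1.0, size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(scale=1.0, size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)
lr = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: push the data through successive layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error back through each layer and
    # nudge every weight in the direction that reduces it.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))   # typically close to [[0], [1], [1], [0]] after training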

After LeCun got to NYU in 2003, he, Hinton, and a third collaborator, University of Montreal professor Yoshua Bengio, formed what LeCun calls "the deep-learning conspiracy." To prove that neural networks would be useful, they quietly developed ways to make them bigger, train them with larger data sets, and run them on more powerful computers. LeCun's handwriting recognition system had had five layers of neurons, but now they could have 10 or more. Around 2010, what is now dubbed deep learning started to beat established techniques on real-world tasks like sorting images. Microsoft, Google, and IBM added it to speech recognition systems. But neural networks were still alien to most researchers and not considered widely useful. In early 2012 LeCun wrote a fiery letter, initially published anonymously, after a paper claiming to set a new record on a standard vision task was rejected by a leading conference. He accused the reviewers of being "clueless" and "negatively biased."

Everything changed six months later. Hinton and two grad students used a network like the one LeCun made for reading checks to rout the field in the leading contest for image recognition. Known as the ImageNet Large Scale Visual Recognition Challenge, it asks software to identify 1,000 types of objects as diverse as mosquito nets and mosques. The Toronto entry correctly identified the object in an image within five guesses about 85 percent of the time, more than 10 points better than the second-best system (see Innovators Under 35: Ilya Sutskever, page 47). The deep-learning software's initial layers of neurons optimized themselves for finding simple things like edges and corners, with the layers after that looking for successively more complex features like basic shapes and, eventually, dogs or people.

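That layered division of labor is easiest to see in the structure of a modern convolutional network. The sketch below, assuming PyTorch is available, stacks a few convolutional layers in front of a 1,000-way classifier; the layer sizes are invented for illustration and are not the Toronto team's 2012 architecture.

import torch
import torch.nn as nn

# Early layers respond to simple patterns like edges and corners; later layers
# combine them into increasingly complex features before the final 1,000-way
# classification. All sizes here are illustrative assumptions.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),   # edges, corners
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # simple shapes
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),  # object parts
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 1000),                                     # scores for 1,000 categories
)

scores = model(torch.randn(1, 3, 224, 224))   # one fake RGB image
print(scores.shape)                           # torch.Size([1, 1000])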

LeCun recalls seeing the community that had mostly ignored neural networks pack into the room where the winners presented a paper on their results. "You could see right there that a lot of senior people in the community just flipped," he says. "They said, 'Okay, now we buy it. That's it; now you won.'"

Journey of Acceptance

1956: Psychologist Frank Rosenblatt uses theories about how brain cells work to design the Perceptron, an artificial neural network that can be trained to categorize simple shapes.

1969: AI pioneers Marvin Minsky and Seymour Papert write a book critical of Perceptrons that quashes interest in neural networks for decades.

1986: Yann LeCun and Geoff Hinton perfect backpropagation to train neural networks that pass data through successive layers of artificial neurons, allowing them to learn more complex skills.

1987: Terry Sejnowski at Johns Hopkins University creates a system called NETtalk that can be trained to pronounce text, going from random babbling to recognizable speech.

1990: At Bell Labs, LeCun uses backpropagation to train a network that can read handwritten text. AT&T later uses it in machines that can read checks.

1995: Bell Labs mathematician Vladimir Vapnik publishes an alternative method for training software to categorize data such as images. This sidelines neural networks again.

2006: Hinton's group at the University of Toronto develops ways to train much larger networks with tens of layers of artificial neurons.

June 2012: Google uses deep learning to cut the error rate of its speech recognition software by 25 percent.

October 2012: Hinton and two colleagues from the University of Toronto win the largest challenge for software that recognizes objects in photos, almost halving the previous error rate.

March 2013: Google buys DNNresearch, the company founded by the Toronto team to develop their ideas. Hinton starts working at Google.

March 2014: Facebook starts using deep learning to power its facial recognition feature, which identifies people in uploaded photos.

May 2015: Google Photos launches. The service uses deep learning to group photos of the same people and lets you search your snapshots using terms like "beach" or "dog."

Academics working on computer vision quickly abandoned their old methods, and deep learning suddenly became one of the main strands in artificial intelligence. Google bought a company founded by Hinton and the two others behind the 2012 result, and Hinton started working there part time on a team known as Google Brain. Microsoft and other companies created new projects to investigate deep learning. In December 2013, Facebook CEO Mark Zuckerberg stunned academics by showing up at the largest neural-network research conference, hosting a party where he announced that LeCun was starting FAIR (though he still works at NYU one day a week).

LeCun still harbors mixed feelings about the 2012 result that brought the world around to his point of view. "To some extent this should have come out of my lab," he says. Hinton shares that assessment. "It was a bit unfortunate for Yann that he wasn't the one who actually made the breakthrough system," he says. LeCun's group had done more work than anyone else to prove out the techniques used to win the ImageNet challenge. The victory could have been his had student graduation schedules and other commitments not prevented his own group from taking on ImageNet, he says. LeCun's hunt for deep learning's next breakthrough is now a chance to even the score.
