Deep Learning Study Notes Series (I)


Deep Learning Study Notes Series

[Email protected]

Http://blog.csdn.net/zouxy09

zouxy09

Version 1.0 2013-04-08

Statement:

1) This Deep Learning series is compiled from the selfless contributions of many online experts and machine learning researchers. Please see the references for the specific sources; version statements for specific material are also given in the original literature.

2) This article is for academic exchange only, not for commercial use, so the specific references for each part are not matched in detail. If any part inadvertently infringes on anyone's interests, please forgive me, and contact the blogger to have it deleted.

3) My own knowledge is limited, so errors are inevitable in this summary; I hope my seniors will point them out. Thank you.

4) Reading this article requires some background in machine learning, computer vision, neural networks, and so on (and if you don't have it, never mind — you can still read on and get the general idea, hehe).

5) This is the first version; if there are errors, it will be revised and corrected over time. I also welcome everyone's suggestions. If we each share a little, together we promote the Motherland's scientific research (hehe, what a noble goal). Please contact: [Email protected]

Directory:

I. Overview

II. Background

III. The visual mechanism of the human brain

IV. About features

4.1 The granularity of feature representation

4.2 Primary (shallow) feature representation

4.3 Structural feature representation

4.4 How many features are needed?

V. The basic idea of deep learning

VI. Shallow learning and deep learning

VII. Deep learning and neural networks

VIII. The deep learning training process

8.1 Training methods of traditional neural networks

8.2 The deep learning training process

IX. Common models and methods of deep learning

9.1 AutoEncoder

9.2 Sparse Coding

9.3 Restricted Boltzmann Machine (RBM)

9.4 Deep Belief Networks

9.5 Convolutional Neural Networks

X. Summary and outlook

XI. References and deep learning resources

I. Overview

Artificial Intelligence (AI) is one of humanity's most beautiful dreams, like immortality and interstellar travel. Although computer technology has made great strides, so far no computer has produced "self" consciousness. Yes, with the help of humans and lots of ready-made data, a computer can appear very powerful; but without that help, it can't even tell a cat from a dog.

Turing (we all know him — the father of both the computer and artificial intelligence, corresponding to his famous "Turing machine" and "Turing test") proposed the idea of the Turing test in a 1950 paper: converse through a wall, and you will not be able to tell whether you are talking to a person or a computer. This undoubtedly set high expectations for computers, and for artificial intelligence in particular. But half a century later, progress in artificial intelligence was still far from passing the Turing test. This not only left those who had waited for years frustrated; some came to see artificial intelligence as a sham and related fields as "pseudoscience."

But since 2006, the field of machine learning has made breakthrough progress. The Turing test, at least, no longer seems so out of reach. The technology relies not only on the parallel processing capability of cloud computing over big data, but also on an algorithm — namely, deep learning. With the help of deep learning algorithms, humanity has finally found a way to tackle the age-old problem of "abstract concepts."

In June 2012, the New York Times disclosed the Google Brain project, attracting wide public attention. The project, led by the renowned Stanford University machine learning professor Andrew Ng and the world-class expert in large-scale computer systems Jeff Dean, used a parallel computing platform of 16,000 CPU cores to train a machine learning model called a deep neural network (DNN), with about 1 billion internal nodes. (This network naturally cannot be compared with a human neural network: the human brain has more than 15 billion neurons, and the number of interconnections — synapses — is like the sands of the Galaxy. It has been estimated that if the axons and dendrites of all the neurons in a single person's brain were connected end to end and stretched into a straight line, it would reach from the Earth to the Moon and back.) The project achieved great success in areas such as speech recognition and image recognition.

Andrew Ng, one of the project leaders, said: "We did not frame the boundaries in advance as usual, but instead fed massive amounts of data directly into the algorithm, letting the data speak for itself and the system learn automatically from it." "We never told the machine during training: 'This is a cat,'" said Jeff Dean, the other leader. "The system essentially invented, or understood, the concept of 'cat' on its own."

In November 2012, Microsoft demonstrated a fully automatic simultaneous interpretation system at an event in Tianjin, China: the speaker spoke in English, and in the background the computer completed, in one go, speech recognition, English-to-Chinese machine translation, and Chinese speech synthesis, with very smooth results. Reportedly, the key technology behind it is also DNN, or deep learning (DL).

In January 2013, at Baidu's annual meeting, founder and CEO Robin Li announced with much fanfare the establishment of Baidu Research, whose first institute is the "Institute of Deep Learning" (IDL).

Why are Internet companies with big data scrambling to invest so many resources in deep learning technology? It sounds like deep learning is really something. So what is deep learning? Why deep learning? How did it come about? What can it do? What are the current difficulties? The simple answers to these questions all take time to unfold. Let us start with the background of machine learning (the core of AI).

II. Background

Machine learning is the discipline that studies how computers simulate or implement human learning behavior, in order to acquire new knowledge or skills and to reorganize existing knowledge structures so as to continually improve their own performance. Can machines learn the way humans do? In 1959, Samuel designed a checkers program with the ability to learn, which improved its play through continual games against itself. Four years later, the program defeated its own designer; three years after that, it defeated a champion in the United States who had gone undefeated for eight years. This program showed people the power of machine learning and raised many thought-provoking social and philosophical questions (hehe, AI's main track has not developed all that much, but these side questions of philosophy and ethics have developed quite fast: machines becoming more and more like people, and people more and more like machines; machines turning against humanity, with the ATM firing the first shot; and so on. The human imagination is endless.)

Although machine learning has been developing for decades, there remain many problems that are not well solved: for example, image recognition, speech recognition, natural language understanding, weather prediction, gene expression, content recommendation, and so on. So far, the way we have tried to solve these problems through machine learning goes like this (taking visual perception as an example):

We start by acquiring data through sensors (e.g., CMOS). Then come preprocessing, feature extraction, feature selection, and finally inference, prediction, or recognition. That last part is the machine learning part: the vast majority of the work — and of the papers and research — is done there.

The three parts in the middle can be summed up as feature representation. A good feature representation plays a key role in the accuracy of the final algorithm, and most of the system's computation and testing effort is spent on this part. In practice, however, this part is generally done by hand: features are extracted manually.
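To make the pipeline above concrete, here is a minimal pure-Python sketch of it. Everything in it — the tiny image, the two hand-crafted features, and the threshold rule standing in for the final learned model — is invented for illustration; a real system would use far richer features and a trained classifier.

```python
# Classical pipeline: raw data -> preprocessing -> hand-crafted
# feature extraction -> prediction/recognition.

def preprocess(image):
    """Normalize 8-bit pixel values to [0, 1]."""
    return [[px / 255.0 for px in row] for row in image]

def extract_features(image):
    """Hand-crafted features: mean brightness and a crude horizontal-edge score."""
    pixels = [px for row in image for px in row]
    mean = sum(pixels) / len(pixels)
    edges = sum(abs(row[i + 1] - row[i])
                for row in image for i in range(len(row) - 1))
    return [mean, edges]

def classify(features, threshold=0.5):
    """Toy rule standing in for the final learned prediction stage."""
    return "bright" if features[0] > threshold else "dark"

image = [[200, 210, 190], [30, 40, 20], [220, 230, 210]]
label = classify(extract_features(preprocess(image)))
```

The point is where the human effort goes: `extract_features` is the part that, in the classical approach, an engineer designs by hand.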

Up to now, many good features have emerged (a good feature should be invariant — to size, scale, and rotation — and discriminative). SIFT, for example, was a milestone in the field of local image feature descriptors: because SIFT is invariant to scale and rotation, and to viewpoint and illumination changes within a certain range, while also being highly discriminative, it made solutions to many problems possible. But it is not a cure-all.

However, manually designing features is a laborious, heuristic approach (it requires expert knowledge); whether good features are found depends to a large extent on experience and luck, and tuning them takes a great deal of time. Since manual feature design works so poorly, can features be learned automatically? The answer is yes! That is exactly what deep learning does, as one of its aliases — unsupervised feature learning — suggests: "unsupervised" means that no human takes part in the feature-selection process.
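To give a first taste of what "learning features without supervision" can mean, here is a toy linear autoencoder in pure Python — a sketch for illustration only, not the method of any particular system (autoencoders are discussed properly in section 9.1). The data, learning rate, and epoch count are all invented. It compresses 2-D points lying on a line down to a single learned number, using nothing but the reconstruction error as its training signal:

```python
import random

random.seed(0)

# Toy 2-D data lying exactly on a line: one learned number (the "code")
# is enough to reconstruct each point.
data = [(t, 2.0 * t) for t in [0.1 * i for i in range(-5, 6)]]

w = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]  # encoder weights
v = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]  # decoder weights
lr = 0.1

def loss():
    """Total squared reconstruction error over the data set."""
    total = 0.0
    for a, b in data:
        c = w[0] * a + w[1] * b
        total += (v[0] * c - a) ** 2 + (v[1] * c - b) ** 2
    return total

before = loss()
for epoch in range(200):
    for a, b in data:
        c = w[0] * a + w[1] * b              # encode: 2 numbers -> 1 code
        err = [v[0] * c - a, v[1] * c - b]   # reconstruction error
        g = 2 * (err[0] * v[0] + err[1] * v[1])
        v[0] -= lr * 2 * err[0] * c          # gradient step on the decoder
        v[1] -= lr * 2 * err[1] * c
        w[0] -= lr * g * a                   # gradient step on the encoder
        w[1] -= lr * g * b
after = loss()
```

No one ever labels the data; the encoder weights discover, from reconstruction error alone, the one direction along which the data varies, and `after` should end up far below `before`.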

So how does it learn? How does it know which features are good and which are not? We said that machine learning is the discipline of how computers simulate or implement human learning behavior. Well then, how does our own human visual system work? Why, among a boundless crowd of ordinary mortals, can we still pick out that one special face (because you exist deep in my mind, in my dream, in my heart, in my song...)? The human brain is so powerful — can we take it as a reference and imitate it? (It does seem that features and algorithms resembling the human brain are all considered good ones, though whether that resemblance is artificially imposed, to make one's own work seem sacred and elegant, who can say.)

In recent decades, progress in cognitive neuroscience, biology, and related disciplines has made our mysterious and magical brain much less unfamiliar to us — and this, in turn, has driven the development of artificial intelligence.

III. The visual mechanism of the human brain

The 1981 Nobel Prize in Physiology or Medicine was awarded to David Hubel (a Canadian-born American biologist), Torsten Wiesel, and Roger Sperry. The main contribution of the first two was "the discovery of information processing in the visual system": that the visual cortex is hierarchical.

Let us see what they did. In 1958, David Hubel and Torsten Wiesel, at Johns Hopkins University, studied the correspondence between the pupil area and neurons in the cerebral cortex. They opened a small 3-millimeter hole in a cat's skull and inserted electrodes through the hole to measure how actively the neurons fired.

Then, in front of the kitten, they displayed objects of various shapes and brightness levels, and while presenting each object they also varied its position and angle. Their hope was that, in this way, the cat's pupil would receive stimuli of different types and different strengths.

The purpose of the experiment was to test a conjecture: that there is some correspondence between the stimulation of the pupil and particular visual neurons in the cerebral cortex — once the pupil receives a certain kind of stimulus, certain neurons in the cortex become active. After many days of tedious experiments, at the expense of several poor kittens, David Hubel and Torsten Wiesel discovered a kind of neuron they called the orientation-selective cell: when the pupil detects the edge of an object in front of it, and that edge points in a particular direction, these neurons become active.

This discovery stimulated further thinking about the nervous system: the working process of the nervous system — of the brain — may be a continual process of iteration and abstraction.

There are two key words here: abstraction and iteration. Starting from the raw signal, low-level abstractions are made first, then iterated step by step toward higher-level abstractions. Human logical thinking often works with highly abstract concepts.

For example, the process starts with the intake of the raw signal (the pupil takes in pixels), then does initial processing (certain cells in the cortex detect edges and orientations), then abstracts (the brain decides that the shape of the object in front of it is round), and then abstracts further (the brain further decides that the object is a balloon).

This physiological discovery contributed to the breakthrough development of computer artificial intelligence some forty years later.

In general, the information processing of the human visual system is hierarchical: edge features are extracted in the low-level V1 area, then shapes or parts of the target in V2, and then, at higher levels, the whole target and even the target's behavior. In other words, high-level features are combinations of low-level features; from low layers to high layers, features become ever more abstract and ever better at expressing semantics or intent. And the more abstract the representation, the fewer possible interpretations remain, which is better for classification. For example, the correspondence between word sets and sentences is many-to-one, as is the correspondence between sentences and meanings, and between meanings and intentions — a hierarchical system.
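The layered story above can be caricatured in a few lines of pure Python: a toy "plus"-shaped image, two low-level orientation-selective edge features (layer 1, in the spirit of Hubel and Wiesel's cells), and a higher-level feature built only from those responses (layer 2). The image and features are invented for illustration:

```python
image = [
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]  # a "plus" sign

def edge_response(img, dr, dc):
    """Low-level feature: total intensity change along direction (dr, dc)."""
    h, w = len(img), len(img[0])
    return sum(abs(img[r][c] - img[r + dr][c + dc])
               for r in range(h - dr) for c in range(w - dc))

# Layer 1: orientation-selective "cells".
vertical = edge_response(image, 0, 1)    # responds to vertical edges
horizontal = edge_response(image, 1, 0)  # responds to horizontal edges

# Layer 2: a higher-level feature combining the low-level responses.
cross_like = min(vertical, horizontal)   # strong only if BOTH edge types fire
```

The point is purely structural: the higher-level feature never looks at pixels directly — it is a combination of lower-level responses, just as V2 builds on V1.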

Sharp readers will have noticed the key word: hierarchy. And isn't the "depth" of deep learning precisely a matter of how many layers there are? Right. So how can deep learning borrow from this process? After all, in the end a computer has to handle it, so the question we face is how to model this process.

Since what we want to learn is feature representations, we need a somewhat deeper understanding of features — and of this hierarchy of features — before going further. So before we talk about deep learning, we need to talk a bit more about features (hehe, actually it would have been a pity not to include such a good explanation of features here, so in it went).

(To be continued in the next installment of this series.)
