A Complete Collection of Current ANN (Artificial Neural Network) Algorithms

Overview

1 BP Neural network

1.1 Main functions

1.2 Advantages and Limitations

2 RBF (radial basis function) neural network

2.1 Main functions

2.2 Advantages and Limitations

3 Perceptron Neural Network

3.1 Main functions

3.2 Advantages and Limitations

4 Linear neural networks

4.1 Main functions

4.2 Advantages and Limitations

5 self-organizing neural networks

5.1 Self-organizing competitive network

5.2 Self-organizing feature Map Network

5.3 Learning Vector Quantization network

5.4 Main functions

5.5 Advantages and Limitations

6 Feedback Neural Network

6.1 Elman Neural Network

6.2 Hopfield Network

6.3 Main applications

6.4 Advantages and Limitations

7 Other (deep learning, etc.)

Overview

This article surveys the neural networks in common use today, their main applications, and the advantages and limitations of each.

1 BP Neural network

The BP (back-propagation) neural network is a layered network trained by a supervised learning algorithm. It consists of an input layer, a middle (hidden) layer, and an output layer, where the middle layer can be expanded to multiple layers. Neurons in adjacent layers are fully connected, while neurons within the same layer are not connected to each other. Training is supervised: when a pair of training patterns is presented, each neuron computes its response and the network produces an output through the current connection weights. The connection weights are then modified from the output layer back through each middle layer toward the input layer, in the direction that reduces the error between the desired output and the actual output. This process is repeated until the global error of the network falls below a given minimum; that repetition is the learning process.

Choosing the initial weights and thresholds: a common empirical rule is to draw the initial weights and thresholds from a uniform distribution over a small interval, approximately (-2.4/F, 2.4/F), where F is the number of input-layer nodes connected to the unit.
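
A minimal back-propagation sketch in Python/numpy, assuming a toy 1-5-1 network fitting sin(x); the architecture, learning rate, and task are illustrative choices rather than anything prescribed by the text, and the weights use the uniform (-2.4/F, 2.4/F) heuristic above:

    import numpy as np

    rng = np.random.default_rng(0)

    def init(fan_in, fan_out):
        # Uniform initial weights in (-2.4/F, 2.4/F), F = fan-in of the unit
        limit = 2.4 / fan_in
        W = rng.uniform(-limit, limit, size=(fan_in, fan_out))
        b = rng.uniform(-limit, limit, size=fan_out)
        return W, b

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Toy task: approximate y = sin(x) on [0, pi]
    X = np.linspace(0, np.pi, 50).reshape(-1, 1)
    Y = np.sin(X)

    W1, b1 = init(1, 5)   # input -> hidden
    W2, b2 = init(5, 1)   # hidden -> output
    lr = 0.5

    for epoch in range(5000):
        # Forward pass
        H = sigmoid(X @ W1 + b1)          # hidden activations
        out = H @ W2 + b2                 # linear output unit
        err = out - Y                     # error to back-propagate
        # Backward pass: output layer first, then hidden layer
        dW2 = H.T @ err / len(X)
        db2 = err.mean(axis=0)
        dH = (err @ W2.T) * H * (1 - H)   # sigmoid derivative
        dW1 = X.T @ dH / len(X)
        db1 = dH.mean(axis=0)
        # Gradient-descent weight update
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1

    print("final MSE:", float((err ** 2).mean()))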

1.1 Main functions

(1) Function approximation: Train a network with input vectors and corresponding output vectors to approximate a function.

(2) Pattern recognition: Associate a specific target output vector with each input vector.

(3) Classification: Classify input vectors in an appropriate, problem-defined way.

(4) Data compression: Reduce the number of output vector dimensions for transmission or storage.

1.2 Advantages and Limitations

The main advantage of the BP neural network is its very strong nonlinear mapping ability. In theory, a three-layer BP network can approximate a nonlinear function with arbitrary precision as long as the number of hidden-layer neurons is sufficient. Second, the BP neural network has associative memory for external stimuli and input information. Because it processes information in a distributed, parallel fashion, retrieval must work by association, activating all the relevant neurons together. Through pre-stored information and its learning mechanism, a BP network can adaptively recover the original complete information from incomplete inputs corrupted by noise, an ability with important applications in image restoration, speech processing, pattern recognition, and so on. Third, the BP neural network is strong at identifying and classifying externally supplied samples: its powerful nonlinear processing lets it carry out nonlinear classification well, solving a long-standing problem in the history of neural networks. In addition, the BP neural network can perform optimization: BP training is essentially a nonlinear optimization problem, finding a set of parameters under known constraints that minimizes the objective function those parameters determine. However, this optimization suffers from local minima, which must be addressed by improved training methods.

Because stable BP training requires a small learning rate, plain gradient descent makes training slow. The momentum method is usually faster than simple gradient descent because it tolerates a higher learning rate, but in practice it is still not fast enough; both methods are typically used only for incremental training.

A multilayer network can be applied to both linear and nonlinear systems to approximate arbitrary functions (the perceptron and linear networks can, of course, handle the linear case). However, even when approximation is theoretically feasible, BP training does not always find a solution.

For nonlinear systems, choosing a suitable learning rate is an important problem. As in a linear network, a learning rate that is too large makes the training process unstable, while one that is too small makes training take too long. Unlike linear networks, there is no simple way to choose a good learning rate for a nonlinear multilayer network. For the fast training algorithms, the default parameter values are generally the most effective settings.

The error surface of a nonlinear network is much more complicated than that of a linear network: the nonlinear transfer functions in a multilayer network produce multiple local optima. The choice of initial point is therefore very important to the optimization process; if the initial point is closer to a local optimum than to the global optimum, training will not reach the correct result, which is one reason multilayer networks fail to find the optimal solution. To address this, training in practice should be run from several initial points to improve the chance that the result is globally optimal.
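
A toy illustration of that multiple-initial-point strategy, assuming a made-up one-dimensional "error surface" with several local minima rather than a real network: gradient descent is run from several random starting points and the best result is kept.

    import numpy as np

    rng = np.random.default_rng(0)
    f  = lambda x: np.sin(3 * x) + 0.1 * x ** 2    # bumpy "error surface"
    df = lambda x: 3 * np.cos(3 * x) + 0.2 * x     # its gradient

    best_x, best_f = None, np.inf
    for start in rng.uniform(-4, 4, 5):            # several initial points
        x = start
        for _ in range(200):
            x -= 0.05 * df(x)                      # plain gradient descent
        if f(x) < best_f:                          # keep the best minimum
            best_x, best_f = x, f(x)

    print(f"best minimum found: f({best_x:.3f}) = {best_f:.3f}")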

The number of hidden neurons also affects the network: too few neurons leave the network unable to fit the data, while too many harm its ability to generalize.

2 RBF (radial basis function) neural network

The radial basis function (RBF) neural network was proposed by J. Moody and C. Darken in the late 1980s. It is a three-layer feed-forward network with a single hidden layer. Because it mimics the locally tuned, overlapping receptive-field structure of the human brain, the RBF network is a local approximation network that can approximate any continuous function with arbitrary precision, and it is especially suitable for classification problems.
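
A minimal RBF network sketch in Python/numpy, assuming Gaussian basis functions, centers picked at random from the training samples, and output weights solved in one step by least squares; the target function, number of centers, and width are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)

    X = np.linspace(-3, 3, 60).reshape(-1, 1)
    Y = np.tanh(X)                          # toy target function

    centers = X[rng.choice(len(X), 10, replace=False)]  # pick 10 centers
    width = 0.5

    def design(X):
        # Hidden-layer matrix: one Gaussian response per (sample, center)
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * width ** 2))

    Phi = design(X)
    # The output is linear in the hidden responses, so the output
    # weights can be fit in a single least-squares step
    w, *_ = np.linalg.lstsq(Phi, Y, rcond=None)

    pred = design(X) @ w
    print("MSE:", float(((pred - Y) ** 2).mean()))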

2.1 Main functions

Image processing, speech recognition, time-series prediction, radar source localization, medical diagnosis, fault detection, pattern recognition, and so on. RBF networks are used most often for classification, above all for pattern recognition problems, followed by time-series analysis.

2.2 Advantages and Limitations

(i) Advantages:

The RBF neural network has strong nonlinear fitting ability and can map arbitrarily complex nonlinear relations; its learning rules are simple and easy to implement on a computer. It has strong robustness, memory ability, nonlinear mapping ability, and self-learning ability, so it enjoys broad application prospects.

① It has the unique best-approximation property, and no local minimum problem exists.

② The RBF neural network has a strong input-output mapping capability, and theory shows that among feed-forward networks the RBF network is the best at performing this mapping.

③ The network output is linearly related to the connection weights.

④ Good classification ability.

⑤ The learning process converges quickly.

(ii) Limitations:

① The most serious problem is that the network cannot explain its reasoning process or the basis for its conclusions.

② It cannot ask the user necessary questions, and when data are insufficient the network simply cannot work.

③ It turns every problem feature into a number and all reasoning into numerical computation, which inevitably loses information.

④ Its theory and learning algorithms still need to be further refined and improved.

⑤ The centers of the hidden-layer basis functions are selected from the input sample set, which in many cases makes it difficult to reflect the system's true input-output relationship, and the number of initial centers can be too large; in addition, ill-conditioned data can appear during the optimization process.

3 Perceptron Neural Network

The perceptron is a neural network with a single layer of computational neurons whose transfer function is a linear threshold unit. The original perceptron network had only one neuron. It was designed mainly to model the perceptual behavior of the human brain; because it uses a threshold unit as its transfer function, it can only output two values, which makes it suitable for simple pattern classification problems. When the perceptron is used for two-class pattern classification, it is equivalent to separating the two sample classes in a high-dimensional sample space with a hyperplane, but the single-layer perceptron can only handle linearly separable problems and is powerless against nonlinear, linearly inseparable ones. Suppose p is an input vector, W the weight matrix, and b the threshold vector. Because the transfer function is a threshold unit, the so-called hard-limit function, the decision boundary of the perceptron is Wp + b = 0: when Wp + b >= 0, class 1 is chosen; otherwise, class 2.
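
A minimal perceptron sketch in Python/numpy using the hard-limit rule just described (class 1 if Wp + b >= 0, class 2 otherwise, encoded here as +1/-1); the two Gaussian clusters and epoch limit are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    # Two linearly separable clusters, labeled +1 and -1
    X = np.vstack([rng.normal( 2, 0.5, (20, 2)),
                   rng.normal(-2, 0.5, (20, 2))])
    y = np.hstack([np.ones(20), -np.ones(20)])

    w = np.zeros(2)
    b = 0.0
    for epoch in range(20):
        errors = 0
        for p, t in zip(X, y):
            pred = 1.0 if w @ p + b >= 0 else -1.0  # hard-limit output
            if pred != t:              # misclassified: move the boundary
                w += t * p
                b += t
                errors += 1
        if errors == 0:                # converged: every sample correct
            break

    print("weights:", w, "bias:", b, "epochs:", epoch + 1)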

3.1 Main functions

Mainly used for classification.

3.2 Advantages and Limitations

The perceptron model is simple and easy to implement; its disadvantage is that it can only solve linearly separable problems. Linearly inseparable problems can be approached in two ways: first, use a multilayer perceptron model; second, choose a more powerful neural network model.

4 Linear neural networks

The linear neural network is a comparatively simple network consisting of one or more linear neurons. It uses a linear function as its transfer function, so the output can take any value. A linear neural network can adjust its weights and thresholds with the Widrow-Hoff learning rule, which is based on least mean squares (LMS), but it can only represent linear mappings between the input and output sample vector spaces and hence can only handle linearly separable problems. Linear neural networks are widely used in function fitting, signal filtering, prediction, control, and so on. Like the perceptron network, its transfer function keeps input and output in a simple proportional relationship, and it may contain multiple neurons. A linear network with a single neuron differs from the perceptron only in its transfer function: the former uses a linear function, the latter a threshold unit.
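
A minimal Widrow-Hoff (LMS) sketch in Python/numpy: a single linear neuron fits a noisy line by stepping each weight in proportion to the error; the target line, noise level, and learning rate are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, 100)
    Y = 2 * X + 1 + rng.normal(0, 0.05, 100)   # noisy linear target

    w, b, lr = 0.0, 0.0, 0.1
    for epoch in range(50):
        for x, t in zip(X, Y):
            out = w * x + b
            e = t - out        # LMS rule: step proportional to the error
            w += lr * e * x
            b += lr * e

    print(f"learned w={w:.3f}, b={b:.3f}")     # close to w=2, b=1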

4.1 Main functions

(1) Linear prediction;

(2) Adaptive filtering noise cancellation;

(3) Adaptive filtering system identification.

4.2 Advantages and Limitations

A linear neural network can only represent linear mappings between the input and output sample vector spaces. Because its error surface is a multidimensional paraboloid with a single minimum, training based on least-squares gradient descent can always find the optimal solution, provided the learning rate is small enough. Even so, training cannot always reach zero error: performance is limited by the network size and the training set. If the network's degrees of freedom (the total number of weights and thresholds) are fewer than the number of input-output vectors in the sample space and the sample vectors are linearly independent, the network cannot reach zero error and can only attain the solution that minimizes the error. Conversely, if the network's degrees of freedom exceed the number of samples, there are infinitely many solutions that bring the network error to zero.

In addition, overdetermined systems, underdetermined systems, and linearly dependent input vectors impose further restrictions on linear networks.

5 self-organizing neural networks

Biological neural systems contain feature-sensitive cells that respond only to particular features of external stimuli, a sensitivity formed through self-learning. In the cerebral cortex, the perception and processing of external stimuli is partitioned into regions, and some scholars believe these regions, each sensitive to a different property, develop adaptively through competitive learning among neighboring neurons. Based on this phenomenon, the Finnish scholar Kohonen proposed the self-organizing feature map neural network model. In his view, when a neural network receives external input patterns it adaptively learns the features of the input signals and organizes itself into different regions, each with its own response characteristics to the input patterns. In the output space the neurons form a map in which functionally similar neurons lie close together and functionally different neurons lie apart; hence the name self-organizing feature map network.

The self-organizing mapping process is accomplished through competitive learning, in which the neurons of a layer compete with one another for the right to modify their connection weights. Competitive learning is an unsupervised learning method: during learning the network needs only training samples, not ideal target outputs, and it self-organizes a mapping based on the features of the input samples, sorting and classifying them automatically.

Self-organizing neural networks include the self-organizing competitive network, the self-organizing feature map network, the learning vector quantization network, and other structural forms.

5.1 Self-organizing competitive network

The structure of the competitive learning network: suppose the network input is R-dimensional and the output has S neurons. A typical competitive learning network consists of an input layer and a competition layer. Unlike the RBF network model, the input to the competitive transfer function is the negative of the distance between the input vector p and the neuron weight vector w, plus the threshold b, that is, n_i = -||w_i - p|| + b_i. The network output consists of the outputs of the competition-layer neurons: every neuron except the winner outputs 0, the winner being the neuron whose competitive-transfer-function input is largest, and its output is fixed at 1.

Training of the competitive learning network: the network is trained with the Kohonen learning rule and the threshold learning rule. At each learning step, the neuron whose weight vector is closest to the current input vector wins the competition, and the network adjusts that neuron's weights according to the Kohonen rule. If the i-th neuron in the competition layer wins, its weight vector W_i is modified as W_i(k) = W_i(k-1) + alpha * (p(k) - W_i(k-1)). Under this rule the modified weight vector moves closer to the current input, so the next time the network receives a similar vector this neuron is likely to win again, while for a very different input vector it is likely to lose. As training progresses, every node in the network comes to represent a class of similar vectors; when an input from a certain class arrives, the corresponding neuron wins the competition, giving the network its classification ability.
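
A minimal competitive-learning sketch in Python/numpy applying the Kohonen rule above to the winning neuron only; the three point clouds, number of neurons, and alpha are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    # Three point clouds the network should discover as classes
    data = np.vstack([rng.normal(c, 0.3, (30, 2))
                      for c in ([0, 0], [3, 0], [0, 3])])
    rng.shuffle(data)

    W = rng.normal(1, 1, (3, 2))  # one weight vector per competing neuron
    alpha = 0.1

    for epoch in range(20):
        for p in data:
            winner = np.argmin(np.linalg.norm(W - p, axis=1))  # closest wins
            W[winner] += alpha * (p - W[winner])  # Kohonen rule: move
                                                  # winner toward the input

    print("prototypes after training:\n", W.round(2))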

5.2 Self-organizing feature Map Network

The structure of the self-organizing feature map (SOFM) network is modeled on the human cerebral cortex, where the perception and processing of external stimuli is partitioned into regions. A SOFM network therefore not only responds differently to different signals, giving it the same classification ability as the competitive learning network, but also clusters neurons with the same function together in space. Accordingly, during training the SOFM network adjusts not only the weights of the winning neuron but also the weights of all neurons in the winner's neighborhood, so that similar neurons acquire the same function. The structure of the SOFM network is identical to that of the competitive learning network; only the learning algorithm differs.

Once the network stabilizes, all nodes within a neighborhood produce similar outputs for a given input, and the probability distribution of the clusters approximates the probability distribution of the input patterns.
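
A minimal SOFM sketch in Python/numpy on a one-dimensional chain of neurons: besides the winner, all neurons within a shrinking neighborhood radius are updated, so neighboring neurons come to respond to nearby inputs; the chain length, decay schedules, and uniform 2-D data are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.uniform(0, 1, (500, 2))      # uniform 2-D input distribution

    n = 20                                   # neurons on a 1-D chain
    W = rng.uniform(0, 1, (n, 2))

    for t in range(2000):
        p = data[rng.integers(len(data))]
        winner = np.argmin(np.linalg.norm(W - p, axis=1))
        alpha = 0.5 * (1 - t / 2000)         # decaying learning rate
        radius = max(1, int(5 * (1 - t / 2000)))  # shrinking neighborhood
        lo, hi = max(0, winner - radius), min(n, winner + radius + 1)
        for i in range(lo, hi):
            W[i] += alpha * (p - W[i])       # update winner and neighbors

    # After training, adjacent neurons map to nearby input regions
    print(W.round(2))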

5.3 Learning Vector Quantization network

A learning vector quantization (LVQ) network consists of a competition layer and a linear layer. The competition layer still performs classification, but it first divides the input vectors into finer-grained subclasses; the linear layer then merges the competition layer's subclasses into the user-defined target classes. Consequently, the number of neurons in the linear layer must be less than the number of neurons in the competition layer.

Training the LVQ network: when an LVQ network is created, the connection weight matrix between the competition layer and the linear layer is fixed. If a competition-layer neuron's subclass belongs to a given linear-layer class, the connection weight between those two neurons is 1; otherwise it is 0. This weight matrix thus implements the merging of subclasses into target classes. By this rule, each column of the matrix contains a single 1 with all other entries 0, and the position of that 1 indicates the target class to which the competition layer's subclass is assigned. When the network is created, the known proportion of each class in the data determines how many competition-layer neurons are merged into each linear-layer output. Because the competition-to-linear weight matrix is fixed in advance, training only needs to adjust the weight matrix of the competition layer.
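
A minimal LVQ sketch in Python/numpy: each competition-layer prototype carries a fixed target class (standing in for the fixed 0/1 linear-layer matrix), and training moves the winning prototype toward inputs of its own class and away from others (the LVQ1 rule); the data, prototypes, and rate are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal( 2, 0.5, (30, 2)),
                   rng.normal(-2, 0.5, (30, 2))])
    y = np.hstack([np.zeros(30), np.ones(30)]).astype(int)

    protos = np.array([[ 1.0,  1.0], [ 2.0,  2.0],    # subclasses of class 0
                       [-1.0, -1.0], [-2.0, -2.0]])   # subclasses of class 1
    proto_class = np.array([0, 0, 1, 1])  # fixed subclass-to-class mapping
    alpha = 0.05

    for epoch in range(30):
        for p, t in zip(X, y):
            w = np.argmin(np.linalg.norm(protos - p, axis=1))  # winning subclass
            if proto_class[w] == t:
                protos[w] += alpha * (p - protos[w])  # attract: correct class
            else:
                protos[w] -= alpha * (p - protos[w])  # repel: wrong class

    acc = np.mean([proto_class[np.argmin(np.linalg.norm(protos - p, axis=1))] == t
                   for p, t in zip(X, y)])
    print("training accuracy:", acc)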

5.4 Main functions

It is especially suitable for pattern classification and recognition applications.

5.5 Advantages and Limitations

The biggest advantage of the SOFM (self-organizing feature map) network is the topological structure introduced in its output layer, which simulates the competitive process of biological neural networks.

The LVQ (learning vector quantization) network introduces a supervised learning algorithm on the foundation of competitive learning and is regarded as an extension of the SOFM algorithm.

A common approach is to use the learning vector quantization algorithm as a supplement to the self-organizing map algorithm: adopt the self-organizing map network structure, with its topological output layer, and train the network twice, first with the self-organizing map learning algorithm and then with the learning vector quantization algorithm.

6 Feedback Neural Network

All the networks discussed above are feed-forward networks; another kind used in practice is the feedback network. In a feedback network, information is transmitted both forward and backward; the feedback may occur between neurons in different layers or be confined to the neurons within one layer. Because a feedback network is a dynamic network, it reaches a stable state after running for some time only if its stability conditions are satisfied. Typical representatives of feedback networks are the Elman network and the Hopfield network.

6.1 Elman Neural Network

The Elman network is composed of one or more hidden layers and an output layer, with feedback connections in the hidden layer. The hidden neurons use a tangent sigmoid transfer function and the output neurons a purely linear one. Given enough hidden neurons, an Elman network can approximate any nonlinear function with arbitrary precision.
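
A minimal Elman forward-pass sketch in Python/numpy: the hidden state is fed back as context at the next time step, with tanh (tangent sigmoid) hidden units and a linear output; the layer sizes and random weights are illustrative, and training (e.g. back-propagation through time) is omitted:

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 1, 8, 1

    Wx = rng.normal(0, 0.5, (n_in, n_hidden))      # input -> hidden
    Wc = rng.normal(0, 0.5, (n_hidden, n_hidden))  # context feedback
    Wo = rng.normal(0, 0.5, (n_hidden, n_out))     # hidden -> linear output

    def run(sequence):
        h = np.zeros(n_hidden)       # context starts empty
        outputs = []
        for x in sequence:
            # New hidden state depends on the input AND the previous state
            h = np.tanh(np.atleast_1d(x) @ Wx + h @ Wc)
            outputs.append(h @ Wo)   # purely linear output layer
        return np.array(outputs)

    print(run([0.1, 0.5, -0.3, 0.8]).ravel())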

6.2 Hopfield Network

The Hopfield network is mainly used for associative memory and optimization. Associative memory means that after a vector is presented, the network evolves through feedback until its output yields another vector; the network is said to associate the initial input with a stable memory, that is, an equilibrium point of the network. Optimization applies when a problem has many candidate solutions: one designs an objective function and then seeks the solution that best satisfies it. In many cases an energy function serves as the objective, and the optimal solution is the minimum of that energy function, which is precisely a stable equilibrium point. In short, a Hopfield network is designed so that, starting from the initial input, the feedback computation finally settles into a stable state whose output is the equilibrium point the user needs.
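
A minimal Hopfield associative-memory sketch in Python/numpy: two bipolar patterns are stored with a Hebbian rule, and a corrupted input is relaxed by asynchronous updates until it settles at an equilibrium point; the patterns and update count are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    patterns = np.array([[ 1, -1,  1, -1,  1, -1],
                         [ 1,  1,  1, -1, -1, -1]])

    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)          # symmetric weights, zero diagonal

    def recall(state, steps=50):
        state = state.copy()
        for _ in range(steps):
            i = rng.integers(n)                       # asynchronous update
            state[i] = 1 if W[i] @ state >= 0 else -1
        return state

    noisy = patterns[0].copy()
    noisy[0] *= -1                   # corrupt one bit of the stored memory
    print("recalled:", recall(noisy))  # settles back at pattern 0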

6.3 Main applications

The Elman network is mainly used for signal detection and prediction, while the Hopfield network is mainly used for associative memory, clustering, and optimization.

6.4 Advantages and Limitations

(i) Hopfield neural network

Hopfield neural networks face the following problems:

(1) A concrete implementation must ensure that the connection weight matrix is symmetric;

(2) In any real implementation there are always delays in information transmission, and these delays affect the network's characteristics;

(3) There is a scale problem in realizing the network, namely the problem of integration.

(ii) Elman neural network

Like other neural network models, the Elman neural network has an input layer, a hidden layer, and an output layer, and it has both a learning phase and a working phase, so it is self-organizing and self-learning. In addition, the Elman model adds feedback from the hidden-layer and output-layer nodes, which further improves the accuracy and fault tolerance of network learning.

Models built with Elman networks have good application prospects in other fields with nonlinear time-series characteristics: they offer strong robustness, good generalization, wide applicability, and objectivity, fully demonstrating the superiority and soundness of the neural network approach. Using neural network methods for prediction and evaluation in other fields will likewise have good practical value.

7 Other (deep learning, etc.)

The following information was collected separately and overlaps somewhat with the material above:

ANNs here refer to algorithms that use first- or second-generation neuron models.


Unsupervised learning (clustering):

1. Other clustering networks:
   SOM
   Autoencoder

2. Deep learning, divided into three categories whose methods are completely different, even down to the neuron models:
   Feed-forward prediction: see 3
   Feedback prediction: stacked sparse autoencoder (clustering), predictive coding (belongs to the RNN class, clustering)
   Interactive prediction: deep belief net (DBN, belongs to the RNN class, clustering + classification)

3. Feed-forward neural networks (classification):
   Perceptron
   BP
   RBF
   Feed-forward deep learning: convolutional neural network (CNN)

4. Recurrent NN class:
   Hopfield
   Boltzmann machine and variants
   Echo state network

5. Other engineering adaptations of these algorithms; there are too many to list, so only a few examples:
   Reinforcement learning, such as TD
   PCA
