Self-organizing neural network model and learning algorithm

Source: Internet
Author: User

A self-organizing neural network, also known as a self-organizing competitive neural network, is especially suited to pattern classification and recognition problems. The model is a feedforward neural network trained with an unsupervised learning algorithm. Its basic working idea is to let the neurons of the competition layer compete to match the input pattern; in the end only one neuron wins the competition, and the output of this winning neuron represents the classification of the input pattern.

Commonly used self-organizing competitive neural networks include adaptive resonance theory (ART) networks, self-organizing feature map (SOM) networks, counter-propagation (CP) networks, and cooperative neural networks.

Learning algorithm of the self-organizing feature map network

The Kohonen self-organizing feature mapping algorithm automatically discovers similarities among the input data and arranges similar inputs close to each other on the network, so it forms a network that responds selectively to its input data. The steps of its learning algorithm are as follows:

1. Network initialization: set the weights between the input layer and the mapping layer to random numbers.
2. Input vector: present the input vector $x = (x_1, x_2, \cdots, x_n)^T$ to the input layer.
3. Compute distances: for each neuron in the mapping layer, compute the Euclidean distance between its weight vector and the input vector. The distance between the $j$-th mapping-layer neuron and the input vector is $d_j = \sqrt{\sum_{i=1}^{n} (x_i - w_{ij})^2}$, where $w_{ij}$ is the weight between the $i$-th neuron of the input layer and the $j$-th neuron of the mapping layer.
4. Select the winning neuron: find the neuron whose distance $d_j$ between weight vector and input vector is the smallest, call it the winning neuron, denote it $j^*$, and define a set of neighboring neurons around it.
5. Update weights: adjust the weights of the winning neuron and of the neurons in its neighborhood according to $\Delta w_{ij} = \eta\, h(j, j^*)(x_i - w_{ij})$, where $\eta$ is a constant with $0 < \eta < 1$ and $h(j, j^*) = \exp\!\left(-\frac{|j - j^*|^2}{\sigma^2}\right)$. Since $\sigma^2$ decreases as learning progresses, $h(j, j^*)$ narrows the updated neighborhood over time.
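The steps above can be sketched in NumPy as a one-dimensional map. This is a minimal illustration, not a production implementation: the function name `train_som`, the linear decay schedule for $\sigma$, and the fixed learning rate are assumptions chosen for clarity, not taken from the original text.

```python
import numpy as np

def train_som(data, n_map, n_epochs=50, eta=0.5, sigma=2.0):
    """Train a 1-D self-organizing feature map on `data`.

    data:   (n_samples, n_features) array of input vectors
    n_map:  number of neurons in the mapping layer
    Returns the (n_map, n_features) weight matrix.
    """
    rng = np.random.default_rng(0)
    # Step 1: initialize input-to-map weights with random numbers
    w = rng.random((n_map, data.shape[1]))
    positions = np.arange(n_map)  # neuron indices j on the 1-D map
    for epoch in range(n_epochs):
        # sigma decreases as learning progresses, shrinking the neighborhood
        sig = sigma * (1.0 - epoch / n_epochs) + 1e-3
        for x in data:  # Step 2: present each input vector
            # Step 3: Euclidean distance d_j from x to each weight vector
            d = np.linalg.norm(w - x, axis=1)
            # Step 4: the winning neuron j* has the smallest distance
            j_star = np.argmin(d)
            # Step 5: neighborhood function h(j, j*) and weight update
            h = np.exp(-((positions - j_star) ** 2) / sig ** 2)
            w += eta * h[:, None] * (x - w)
    return w
```

After training, classifying a new input amounts to repeating steps 3 and 4: compute the distances and report the index of the winning neuron.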
