A self-organizing neural network, also known as a self-organizing competitive neural network, is especially suitable for pattern classification and recognition problems. The model belongs to the feedforward neural network family and uses an unsupervised learning algorithm. Its basic working idea is to let each neuron of the competition layer compete to match the input pattern; in the end only one neuron becomes the winner of the competition, and the output of this winning neuron represents the classification of the input pattern.
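The winner-take-all competition described above can be illustrated with a toy sketch: each competition-layer neuron scores how well it matches the input, and only the best-matching neuron "fires", its index serving as the class label. All names and values here are illustrative, not from the original text.

```python
import numpy as np

def winner_take_all(weights, x):
    """Return the index of the single winning neuron for input x."""
    scores = weights @ x           # each neuron matches the input pattern
    return int(np.argmax(scores))  # only one neuron wins the competition

# Two neurons, each tuned to a different 2-D pattern (hypothetical values).
weights = np.array([[1.0, 0.0],   # neuron 0 prefers patterns like (1, 0)
                    [0.0, 1.0]])  # neuron 1 prefers patterns like (0, 1)

print(winner_take_all(weights, np.array([0.9, 0.1])))  # prints 0
print(winner_take_all(weights, np.array([0.1, 0.9])))  # prints 1
```

The winner's index is the network's classification of the input, matching the description above.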
Commonly used self-organizing competitive neural networks include adaptive resonance theory (ART) networks, self-organizing feature mapping (SOM) networks, counter-propagation (CP) networks, and synergetic neural networks (SNN), among others.

Learning algorithm of the self-organizing feature map network
The Kohonen self-organizing feature mapping algorithm can automatically discover similarities among the input data and arrange similar inputs at nearby locations on the network, so it constitutes a network that responds selectively to its input data. The steps of its learning algorithm are as follows:

1. Network initialization: use random numbers to set the weights between the input layer and the mapping layer.
2. Input the vector: present the input vector $x = (x_1, x_2, \cdots, x_n)^T$ to the input layer.
3. Compute the distances between the mapping-layer weight vectors and the input vector: for each neuron of the mapping layer, compute the Euclidean distance between its weight vector and the input vector. The distance between the $j$-th neuron of the mapping layer and the input vector is $d_j = \sqrt{\sum_{i=1}^n (x_i - w_{ij})^2}$, where $w_{ij}$ is the weight between the $i$-th neuron of the input layer and the $j$-th neuron of the mapping layer.
4. Select the winning neuron: find the neuron whose distance $d_j$ between its weight vector and the input vector is smallest, call it the winning neuron, write it as $j^*$, and give its set of neighboring neurons.
5. Update the weights of the winning neuron and of its neighboring neurons according to $\Delta w_{ij} = \eta\, h(j, j^*)(x_i - w_{ij})$, where $\eta$ is a constant greater than 0 and less than 1, and $h(j, j^*) = \exp\!\left(-\frac{|j - j^*|^2}{\sigma^2}\right)$. Since $\sigma^2$ decreases as learning progresses, $h(j, j^*)$ narrows, so the neighborhood of the winning neuron shrinks over time.
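The steps above can be sketched as a minimal 1-D Kohonen SOM in NumPy. The function names, map size, learning rate, and decay schedule below are illustrative assumptions, not from the original text.

```python
import numpy as np

def som_train(X, n_map=10, eta=0.5, sigma=2.0, epochs=20, seed=0):
    """Train a 1-D self-organizing map on the rows of X (samples x n).

    Hypothetical parameter choices: n_map mapping-layer neurons arranged
    on a line, learning rate eta in (0, 1), initial neighborhood width sigma.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    # Step 1: initialize input-to-map weights with random numbers.
    W = rng.random((n_map, n))
    positions = np.arange(n_map)  # 1-D positions of the mapping-layer neurons
    for epoch in range(epochs):
        # sigma^2 decreases as learning progresses (assumed exponential decay),
        # so the neighborhood function h(j, j*) narrows over time.
        s = sigma * np.exp(-epoch / epochs)
        for x in X:
            # Steps 2-3: Euclidean distance d_j from x to every neuron's weights.
            d = np.linalg.norm(W - x, axis=1)
            # Step 4: the winning neuron j* has the smallest distance.
            j_star = np.argmin(d)
            # Step 5: delta w_ij = eta * h(j, j*) * (x_i - w_ij),
            # with h(j, j*) = exp(-|j - j*|^2 / sigma^2).
            h = np.exp(-((positions - j_star) ** 2) / s ** 2)
            W += eta * h[:, None] * (x - W)
    return W

def som_classify(W, x):
    """The winning neuron's index represents the classification of input x."""
    return int(np.argmin(np.linalg.norm(W - x, axis=1)))
```

After training on data drawn from two distinct clusters, inputs from different clusters should win at different map neurons, while similar inputs land at nearby positions on the map, which is exactly the selective response described above.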