In 1987, the American scholar Robert Hecht-Nielsen proposed the counterpropagation network (CPN), a dual-propagation neural network model first used to implement sample-selection matching systems. A CPN can store binary or analog pattern pairs, so the model can also be used for associative memory, pattern classification, function approximation, statistical analysis, and data compression.
1. Network structure and operating principle
In terms of network structure, neurons in adjacent layers are fully interconnected. Topologically, a CPN resembles a three-layer BP network, but it is in fact a combination of a self-organizing (Kohonen) network and a Grossberg network. The hidden layer is a competitive layer trained with an unsupervised competitive learning rule; the output layer is the Grossberg layer, trained with a supervised rule such as the Widrow-Hoff rule or the Grossberg rule.
After the two layers have been trained with their respective learning rules, the network enters the running phase. An input vector is fed to the network and the hidden-layer neurons compete over it; the winner becomes the representative of the current input pattern class and is the only active neuron, as shown in (a): its output is 1 while all other hidden neurons are inactive with output 0. The winning hidden neuron then excites the output-layer neurons, producing the output pattern shown in (b). Neurons that lose the competition output 0 and contribute nothing to the output layer's weighted sum, so the network output is determined solely by the outer-star weights of the winning neuron.
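The running phase described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the original article's code; the function name `cpn_forward` and the use of Euclidean distance for the competition are assumptions for the sketch.

```python
import numpy as np

def cpn_forward(x, W_in, W_out):
    """One forward pass of a CPN (illustrative sketch).

    x     : (n_in,)         input vector
    W_in  : (hidden, n_in)  inner-star weights of the competitive (hidden) layer
    W_out : (hidden, n_out) outer-star weights to the Grossberg (output) layer
    """
    # Competition: the hidden neuron whose inner-star weight vector is
    # closest to x wins; its output is 1, all other hidden outputs are 0.
    distances = np.linalg.norm(W_in - x, axis=1)
    winner = np.argmin(distances)
    hidden = np.zeros(W_in.shape[0])
    hidden[winner] = 1.0
    # Because only the winner is active, the network output is exactly
    # the winner's outer-star weight vector.
    y = hidden @ W_out  # equals W_out[winner]
    return winner, y
```

Note that the output-layer "integration" collapses to a simple row lookup: with a single active hidden neuron, the weighted sum over the hidden layer reduces to the winner's outer-star row.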
2. Learning Algorithms
Network learning proceeds in two stages: first, a competitive learning algorithm trains the inner-star weight vectors of the hidden-layer neurons; second, an outer-star learning algorithm trains the outer-star weight vectors from the hidden layer to the output layer.
Because the inner-star weight vectors are trained with the competitive learning rule, whose steps are essentially the same as the algorithms introduced in the previous posts, they are not repeated here. It is worth noting that this competitive algorithm uses no winning neighborhood: only the winning neuron's inner-star weight vector is adjusted.
The following outlines the training steps for the outer-star weight vectors:
(1) Input a pattern and its corresponding desired output, and compute the net inputs of the hidden-layer nodes; the inner-star weight vectors of the hidden nodes are those obtained in the first training stage.
(2) Determine the winning neuron and set its output to 1.
(3) Adjust the outer-star weight vectors from the hidden layer to the output layer. Writing j* for the winning hidden neuron, the adjustment takes the standard outer-star form
W_j*(t+1) = W_j*(t) + β(t) [d − o(t)]
where β(t) is the outer-star learning rate, an annealing function that decreases over time, d is the desired output vector, and o(t) is the output vector of the output-layer neurons.
The above rule shows that only the winning neuron's outer-star weights are adjusted, pulling them toward the desired output; the desired output is thus encoded into the winner's outer-star weight vector.
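The three training steps above can be sketched as a single update function. This is an illustrative sketch under the same assumptions as before (Euclidean-distance competition, frozen inner-star weights from stage one); the name `train_outer_star` is hypothetical.

```python
import numpy as np

def train_outer_star(x, d, W_in, W_out, beta):
    """One outer-star training step of a CPN (illustrative sketch).

    x    : input pattern
    d    : desired output for x
    beta : outer-star learning rate, annealed toward 0 over training
    Only the winning neuron's outer-star weight vector is adjusted,
    pulling it toward the desired output d (W_out is updated in place).
    """
    # Steps (1)-(2): find the winner using the already-trained inner-star weights.
    winner = np.argmin(np.linalg.norm(W_in - x, axis=1))
    # Step (3): since the winner's output row *is* the network output o(t),
    # the rule W <- W + beta * (d - o) reduces to a row update.
    W_out[winner] += beta * (d - W_out[winner])
    return winner
```

Repeating this step drives the winner's outer-star row toward d, which is exactly the "desired output encoded into the outer-star weight vector" behavior described above.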
3. Improved CPN networks
(1) Double-winner CPN
In the running phase, after training is complete, two hidden-layer neurons are allowed to win the competition; both winners take the value 1 and all other neurons take the value 0, so two winning neurons influence the network output simultaneously. It can be shown by example that, for a composite input pattern, the CPN then linearly superposes the outputs of the training samples contained in it, which makes this variant suitable for applications such as image superposition.
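The double-winner variant changes only the competition step: the two closest hidden neurons both fire, and the output becomes the sum of their outer-star rows. A minimal sketch, assuming the same distance-based competition as before (the function name is hypothetical):

```python
import numpy as np

def cpn_forward_two_winners(x, W_in, W_out):
    """CPN forward pass where the TWO closest hidden neurons win (sketch).

    Both winners output 1, all others 0, so the network output is the
    linear superposition of the two winners' outer-star weight vectors.
    """
    distances = np.linalg.norm(W_in - x, axis=1)
    winners = np.argsort(distances)[:2]   # indices of the two closest neurons
    hidden = np.zeros(W_in.shape[0])
    hidden[winners] = 1.0
    return winners, hidden @ W_out        # sum of the two winning rows
```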
(2) Bidirectional CPN
The input and output layers of the CPN are each split into two groups, as shown in the figure. The advantage of the bidirectional CPN is that it can learn two functions at the same time, for example: Y = f(X); X = g(Y).
When the two functions are mutually inverse, g(f(X)) = X and f(g(Y)) = Y. A bidirectional CPN can therefore be used for data compression and decompression, with f as the compression function and its inverse g as the decompression function.
In fact, a bidirectional CPN does not require the two mutually inverse functions to have analytical forms; more generally, f and g need only be mutually inverse mappings, so hetero-association between pattern pairs can be realized with a bidirectional CPN.
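The bidirectional idea can be sketched as follows: the hidden layer competes on the combined input (X, Y), and the winner's outer-star weights reconstruct both halves of the stored pair. All names and shapes here are assumptions for illustration; the dot-product competition assumes weight vectors of comparable norm.

```python
import numpy as np

def bidirectional_cpn_forward(x, y, Wx, Wy, Vx, Vy):
    """Bidirectional CPN forward pass (illustrative sketch).

    Wx : (hidden, n_x)  inner-star weights for the X half of the input
    Wy : (hidden, n_y)  inner-star weights for the Y half of the input
    Vx : (hidden, n_x)  outer-star weights producing the reconstruction x'
    Vy : (hidden, n_y)  outer-star weights producing the reconstruction y'
    Feeding only x (with y zeroed) yields y' ~ f(x); feeding only y
    (with x zeroed) yields x' ~ g(y), the two associated mappings.
    """
    net = Wx @ x + Wy @ y            # net input of each hidden neuron
    winner = np.argmax(net)          # winner-take-all on the combined input
    return Vx[winner], Vy[winner]    # reconstructed pair (x', y')
```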
4. CPN application
This section presents the use of a CPN for the color pattern classification of tobacco leaves. The input samples are distributed in the three-dimensional color space shown in (a); each point in the space is represented by a three-dimensional vector whose components are the leaf's average hue H, average luminance L, and average saturation S. The color patterns fall into 4 classes, corresponding to red-brown, orange, lemon, and cyan. (b) shows the CPN structure: 10 hidden-layer neurons, 4 output-layer neurons, and a learning rate that decreases with training time. After 2000 iterations, the network's classification accuracy reaches 96%.
2015-8-15
Less art
Copyright notice: this is an original article by the blog author; do not reproduce without the author's permission.