Generalized Regression Neural Network (GRNN)
The generalized regression neural network (GRNN) is an improvement on the radial basis function (RBF) neural network.
Structural Analysis:
It can be seen that this structure is very similar to the radial basis network discussed before; the differences are the addition of a summation layer, and the removal of the weighted connections between the hidden layer and the output layer.
1. The input layer is a vector of dimension M; the number of samples is n; the transfer function is linear.
2. The hidden layer is fully connected to the input layer, with no connections within the layer. The number of hidden-layer neurons equals the number of samples, n, and the transfer function is a radial basis function.
3. The summation layer has two nodes: the first sums the outputs of all the hidden-layer nodes, and the second is the weighted sum of the expected results, weighted by each hidden-layer node's output.
4. The output of the output layer is the second node divided by the first node.

Theoretical Basis
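The four-layer structure just described can be sketched as a single forward pass. Below is a minimal Python re-implementation; the function name, the restriction to 1-D inputs, and the spread scaling are illustrative assumptions, not the article's MATLAB code:

```python
import numpy as np

def grnn_forward(X, P, T, spread=1.0):
    """Forward pass through the four GRNN layers described above.

    X: (m,) query inputs, P: (n,) training inputs, T: (n,) training targets.
    All names and the spread convention are assumptions for illustration.
    """
    # Input layer: X passes through unchanged (linear transfer function).
    # Hidden layer: one radial basis neuron per training sample.
    dist2 = (X[:, None] - P[None, :]) ** 2       # (m, n) squared distances
    H = np.exp(-dist2 / spread)                  # radial basis outputs
    # Summation layer: two nodes.
    s_first = H.sum(axis=1)                      # sum of hidden outputs
    s_second = H @ T                             # targets weighted by hidden outputs
    # Output layer: second node divided by the first node.
    return s_second / s_first
```

Because the output is a weighted average of the targets, predicting a constant target function returns that constant exactly.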
The regression performed by the generalized regression neural network is defined differently from the RBF network's least-squares superposition of Gaussian values: it uses the probability density function to predict the output.
Assume that x and y are two random variables with joint probability density f(x, y).
We get the following formula: ŷ(x0) = ∫ y·f(x0, y) dy / ∫ f(x0, y) dy, where ∫ denotes integration over y.
ŷ(x0) is the predicted output of y under the condition x = x0; x0 is the observed value.
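As a sanity check of this conditional-expectation formula, the integrals can be discretised for an assumed joint density. This toy example (not from the article) takes y as 2x plus Gaussian noise, so the prediction at x0 should come out as 2·x0:

```python
import numpy as np

# Assumed toy joint density: f(x, y) ∝ exp(-(y - 2x)^2 / (2*sigma^2)),
# i.e. y = 2x + Gaussian noise, so the conditional mean of y given x0 is 2*x0.
sigma = 0.5
f = lambda x, y: np.exp(-(y - 2 * x) ** 2 / (2 * sigma ** 2))

x0 = 1.5
ys = np.linspace(-10, 10, 4001)      # discretisation grid for y
fy = f(x0, ys)

# Discretised version of  y(x0) = ∫ y·f(x0, y) dy / ∫ f(x0, y) dy
# (the constant grid spacing dy cancels between numerator and denominator).
y_hat = (ys * fy).sum() / fy.sum()   # should be close to 2 * x0 = 3.0
```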
The remaining unknown is f(x, y).
How do we estimate the density function from the known sample values when the distribution is unknown? Here we use the Parzen non-parametric estimation method.
The window function is selected as a Gaussian window.
Substituting this estimate, we get: ŷ(x0) = Σ y·exp(−D) / Σ exp(−D), where the sums run over the training samples.
D represents the distance from the sample centre, and exp(−D) is the output of the hidden layer of the radial basis function network.

Program Explanation
First, we set up a data set, just as for the radial basis function neural network.
P=-9:1:8;
X=-9:.2:8;
T=[129,-32,-118,-138,-125,-97,-55,-23,-4,2,1,-31,-72,-121,-142,-174,-155,-77];
P and T represent the input and output respectively, and X is the test sample. Unlike the radial basis function network, all of these data are supplied together when the network is used, and no weight training of the RBF kind is needed: GRNN has no weights, so the network cannot be saved, and it is fitted directly at the time of use.
And precisely because GRNN has no weights, the advantage of needing no training shows in its speed, which is genuinely fast, and its curve fitting is very natural. Its accuracy on the samples is lower than the radial basis network's, but in actual testing it has even surpassed BP.
Next comes the processing of the hidden layer. Unlike the RBF network, no constant-1 element is appended to the hidden layer's output vector.
spread=1;
chdis=dist(X',P);
chgdis=exp(-chdis.^2/spread);
chgdis=chgdis';
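The hidden-layer step can be mirrored in Python to see the shapes involved (a sketch assuming 1-D inputs; MATLAB's dist computes pairwise Euclidean distances, which for scalars is just the absolute difference):

```python
import numpy as np

# Small illustrative inputs (assumed, not the article's data).
X = np.array([-1.0, 0.0, 1.0])           # test points
P = np.array([0.0, 2.0])                 # training points
spread = 1.0

chdis = np.abs(X[:, None] - P[None, :])  # dist(X', P), shape (3, 2)
chgdis = np.exp(-chdis ** 2 / spread).T  # shape (2, 3) after the transpose

# A test point sitting exactly on a training sample gets kernel output 1.
```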
Finally come the summation layer and the output layer; here we write the two formulas together.
y=T*chgdis./sum(chgdis);
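Putting the snippets together, the whole example can be replicated in Python (an assumed equivalent of the MATLAB above, using the same P, T, and spread, with linspace standing in for MATLAB's colon ranges):

```python
import numpy as np

P = np.arange(-9.0, 9.0)                 # P = -9:1:8  (18 training inputs)
X = np.linspace(-9.0, 8.0, 86)           # X = -9:.2:8 (86 test inputs)
T = np.array([129, -32, -118, -138, -125, -97, -55, -23, -4, 2,
              1, -31, -72, -121, -142, -174, -155, -77], dtype=float)

spread = 1.0
chdis = np.abs(X[:, None] - P[None, :])  # dist(X', P): pairwise distances
chgdis = np.exp(-chdis ** 2 / spread).T  # Gaussian hidden layer, shape (18, 86)

y = (T @ chgdis) / chgdis.sum(axis=0)    # summation and output layers together
```

Because each prediction is a weighted average of T, the fitted curve always stays between min(T) and max(T).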
Look at the output effect:
Compare it with the radial basis function network trained with weights:
Concluding remarks
Although GRNN is not as accurate as the radial basis network, it has a great advantage in classification and fitting, especially when the precision of the data is poor.