The radial basis function (RBF) method for multivariable interpolation was proposed by Powell in 1985. In 1988, Moody and Darken proposed a neural network structure based on it, the RBF neural network. It is a feedforward network that can approximate any continuous function with arbitrary precision and is especially well suited to classification problems.

The structure of an RBF network is similar to that of a multilayer feedforward network; it is a three-layer feedforward network. The input layer is composed of signal-source nodes. The second layer is the hidden layer; the number of hidden units depends on the problem being described, and the transfer function of each hidden unit is an RBF, a nonnegative nonlinear function that is radially symmetric about a center point and decays with distance from it. The third layer is the output layer, which produces the response to the input pattern. The transformation from the input space to the hidden-layer space is nonlinear, while the transformation from the hidden-layer space to the output layer is linear.

The basic idea of an RBF network is to use RBFs as the "basis" of the hidden units to form the hidden-layer space, so that input vectors can be mapped directly into that space without a weighted connection. Once the center of each RBF is determined, this mapping is fixed. The mapping from the hidden-layer space to the output space is linear: the network output is a linear weighted sum of the hidden-unit outputs, and these weights are the network's tunable parameters. The network's mapping from input to output is therefore nonlinear, while the output is linear in the tunable parameters. The weights can thus be solved directly from a system of linear equations, which greatly speeds up learning and avoids local minimum problems.

**RBF Neural network model**

The activation function of an RBF network is a radial basis function, usually defined as a monotone function of the Euclidean distance from any point in the space to a center. The activation function takes the distance $\lVert \mathrm{dist} \rVert$ between the input vector and the weight vector as its independent variable. Its general form is

$$R(\lVert \mathrm{dist} \rVert) = e^{-\lVert \mathrm{dist} \rVert^2}$$
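As a minimal sketch, the Gaussian radial basis activation above can be written as a small function (the helper name `rbf_activation` is ours, chosen for illustration):

```python
import numpy as np

def rbf_activation(x, w):
    """Gaussian radial basis activation R(||dist||) = exp(-||dist||^2),
    where dist is the Euclidean distance between input x and the weight
    (center) vector w."""
    dist = np.linalg.norm(np.asarray(x) - np.asarray(w))
    return np.exp(-dist ** 2)

print(rbf_activation([1.0, 2.0], [1.0, 2.0]))  # distance 0 -> output 1.0
print(rbf_activation([1.0, 2.0], [3.0, 2.0]))  # larger distance -> smaller output
```

Note that the output peaks at 1 exactly when the input coincides with the center, which is the behavior described in the next paragraph.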

As the distance between the weight vector and the input vector decreases, the output of the neuron increases; when the input vector and the weight vector are identical, the neuron outputs 1. Here b is a threshold for adjusting the sensitivity of the neuron. Generalized regression neural networks can be built from radial basis neurons and linear neurons; such networks are well suited to function approximation. Radial basis neurons and competitive neurons can be combined into probabilistic neural networks, which are suited to classification problems. The output layer and the hidden layer have different tasks, so their learning strategies differ: the output layer adjusts linear weights using a linear optimization strategy, so it learns quickly, while the hidden layer adjusts the parameters of the activation function (a Green's function or, most commonly, a Gaussian function) using a nonlinear optimization strategy, so it learns slowly.
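The role of the threshold b can be sketched as follows. We assume here the MATLAB-style radial basis convention, in which the bias scales the distance before it enters the Gaussian, so a larger b makes the neuron respond more narrowly around its center:

```python
import numpy as np

def rbf_with_bias(x, w, b):
    # Assumed convention (MATLAB radbas style): the bias b multiplies the
    # distance before the Gaussian, so b controls the neuron's sensitivity.
    dist = np.linalg.norm(np.asarray(x) - np.asarray(w))
    return np.exp(-(b * dist) ** 2)

# Same input at distance 1 from the center, two different thresholds:
wide = rbf_with_bias([1.0], [0.0], b=0.5)    # broad, insensitive response
narrow = rbf_with_bias([1.0], [0.0], b=2.0)  # sharp, sensitive response
print(wide, narrow)  # wide > narrow
```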

Although the output of an RBF network is a linear weighting of the hidden-unit outputs and learning is accordingly fast, this does not mean that RBF networks can replace other feedforward networks. An RBF network is likely to require many more hidden-layer neurons than a BP network to do the same job.

**RBF Network Learning Algorithm**

Learning in an RBF network requires solving for three sets of parameters: the centers of the basis functions, the variances (widths), and the weights from the hidden layer to the output layer. Depending on how the radial basis function centers are selected, RBF networks have many learning methods. The following introduces the learning method based on self-organizing selection of centers. This method consists of two phases:

1. The self-organizing learning phase, an unsupervised process that solves for the centers and variances of the hidden-layer basis functions;

2. The supervised learning phase, which solves for the weights from the hidden layer to the output layer.
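The two phases above can be sketched in a few lines. This is an illustrative toy, not a production implementation: the target function, the number of hidden nodes, and the simple k-means loop are our choices, and the width is set by the common heuristic sigma = d_max / sqrt(2h) rather than a formula from this text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: approximate a 1-D function (illustrative assumption).
X = np.linspace(-3, 3, 60).reshape(-1, 1)
d = np.sin(X).ravel()

h = 8  # number of hidden nodes (chosen for this sketch)

# Phase 1 (unsupervised): select centers with a few simple k-means steps.
centers = X[rng.choice(len(X), h, replace=False)]
for _ in range(20):
    labels = np.argmin(np.abs(X - centers.T), axis=1)
    for i in range(h):
        if np.any(labels == i):
            centers[i] = X[labels == i].mean()

# Width from the common heuristic sigma = d_max / sqrt(2h),
# where d_max is the largest distance between centers.
d_max = max(np.linalg.norm(ci - cj) for ci in centers for cj in centers)
sigma = d_max / np.sqrt(2 * h)

# Phase 2 (supervised): hidden outputs are now fixed, so the output
# weights solve a linear least-squares problem.
G = np.exp(-np.square(X - centers.T) / (2 * sigma ** 2))
w, *_ = np.linalg.lstsq(G, d, rcond=None)

y = G @ w
print("max abs error:", np.max(np.abs(y - d)))
```

Because phase 2 is linear in the weights, no gradient descent is needed, which is exactly the speed advantage described earlier.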

The radial basis function most commonly used in RBF networks is the Gaussian function, so the activation function can be expressed as:

$$R(x_p - c_i) = \exp\left(-\frac{1}{2\sigma^2}\lVert x_p - c_i \rVert^2\right)$$

Thus, from the structure of the RBF network, the output of the network is:

$$y_j = \sum_{i=1}^{h} w_{ij}\exp\left(-\frac{1}{2\sigma^2}\lVert x_p - c_i \rVert^2\right),\qquad j = 1, 2, \cdots, n$$

Here $x_p$ is the $p$-th input sample and $h$ is the number of nodes in the hidden layer.
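A tiny worked instance of this output formula, with h = 2 hidden nodes and a single output (all values chosen for illustration):

```python
import numpy as np

# y = sum_i w_i * exp(-||x_p - c_i||^2 / (2 sigma^2)), h = 2, one output.
x_p = np.array([0.5, 0.5])
centers = np.array([[0.0, 0.0], [1.0, 1.0]])  # c_1, c_2
w = np.array([2.0, -1.0])                      # output weights
sigma = 1.0

phi = np.exp(-np.sum((x_p - centers) ** 2, axis=1) / (2 * sigma ** 2))
y = w @ phi
print(y)  # both centers are at squared distance 0.5, so y = (2 - 1) * exp(-0.25)
```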

If $d$ is the expected output of the samples, the variance of the basis function can be expressed as:

$$\sigma = \frac{1}{P}\sum_{j}^{m}\lVert d_j - y_j c_i \rVert$$