BP Neural Network Model and Learning Algorithm

Source: Internet
Author: User

In the learning algorithms for the perceptron and linear neural network models, the difference between the ideal output and the actual output is used to estimate the error of a neuron's connection weights. Once multilayer networks were introduced to solve linearly inseparable problems, estimating the error of hidden-layer neurons became a difficult problem, because in practice the ideal output of a hidden-layer neuron cannot be known. In 1985, Rumelhart and McClelland proposed the error back propagation (BP) learning algorithm for neural networks, realizing the multilayer network that Minsky had envisioned.

The BP algorithm uses the output-layer error to estimate the error of the layer directly preceding the output layer, then uses that estimate to estimate the error of the layer before it, and so on: propagating layer by layer in reverse, it obtains error estimates for all the other layers. A multilayer network trained with the BP algorithm is called a BP network, which belongs to the feedforward type of neural network. Although the accuracy of these error estimates decreases as they propagate backward, the method still provides an effective way to train multilayer networks, and a multilayer feedforward neural network can approximate any nonlinear function.

BP Neural Network Model

The neurons that make up a BP network are ordinary neurons, but the BP algorithm requires their activation function to be differentiable everywhere, so the S-type (sigmoid) function is generally used. For a single neuron, the network input can be expressed as: $net = \vec{x} \cdot \vec{\omega}$

where $\vec{x}$ is the input the neuron receives and $\vec{\omega}$ is its vector of connection weights.

The output of the neuron is: $y = f(net) = \frac{1}{1+e^{-net}}$

Further, we can take the derivative of $y$ with respect to $net$:

$f'(net) = \frac{e^{-net}}{(1+e^{-net})^2} = \frac{1+e^{-net}-1}{(1+e^{-net})^2} = \frac{1}{1+e^{-net}} - \frac{1}{(1+e^{-net})^2} = y(1-y)$

Obviously we can note that $\lim_{net\to+\infty} \frac{1}{1+e^{-net}} = 1$ and $\lim_{net\to-\infty} \frac{1}{1+e^{-net}} = 0$.
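The identity $f'(net) = y(1-y)$ derived above is easy to verify numerically. A minimal Python sketch (the function names are my own, not from the original text):

```python
import math

def sigmoid(net):
    """S-type (logistic) activation: f(net) = 1 / (1 + e^(-net))."""
    return 1.0 / (1.0 + math.exp(-net))

def sigmoid_prime(net):
    """Derivative expressed through the output: f'(net) = y * (1 - y)."""
    y = sigmoid(net)
    return y * (1.0 - y)

# The shortcut form y(1 - y) agrees with the quotient-rule form
# e^(-net) / (1 + e^(-net))^2 at an arbitrary point:
net = 0.7
direct = math.exp(-net) / (1.0 + math.exp(-net)) ** 2
assert abs(direct - sigmoid_prime(net)) < 1e-12

# The derivative peaks at 0.25, exactly where y = 0.5 (i.e. net = 0):
print(sigmoid(0.0), sigmoid_prime(0.0))  # 0.5 0.25
```

This shortcut is what makes the sigmoid convenient for BP: the derivative needed during back propagation comes for free from the forward-pass output.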

Since the S-type activation function maps into (0, 1), the range of $y$ is (0, 1); consequently the range of $f'(net)$ is (0, 0.25], and $f'(net)$ attains its maximum value 0.25 when $y = 0.5$ (that is, at $net = 0$).

The Standard Learning Algorithm of BP Networks

The standard BP algorithm is a learning algorithm based on gradient descent: the learning process adjusts the weights and thresholds so that the mean square error between the network's actual output and the expected output is minimized. However, it uses only first-order derivative (gradient) information about the mean square error function with respect to the weights and thresholds, so it converges slowly and easily falls into local minima, among other defects.

Defined:
* Input vector $\vec{x}$
* Hidden-layer input vector $\vec{hi}$
* Hidden-layer output vector $\vec{ho}$
* Output-layer input vector $\vec{yi}$
* Output-layer output vector $\vec{yo}$
* Expected output vector $\vec{d}$
* Connection weights between the input and hidden layers $w_{ih}$
* Connection weights between the hidden and output layers $w_{ho}$
* Threshold of each hidden-layer neuron $b_h$
* Threshold of each output-layer neuron $b_o$
* Sample number $k$
* Activation function $f(\cdot)$
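For concreteness, these quantities could be held as NumPy arrays. A sketch under assumed layer sizes (the sizes $n$, $p$, $q$ are illustrative choices, not given in the text), initialized in (-1, 1) as the algorithm below requires:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes: n input, p hidden, q output neurons.
n, p, q = 3, 4, 2

# Connection weights and thresholds, drawn uniformly from (-1, 1).
w_ih = rng.uniform(-1.0, 1.0, size=(p, n))  # input -> hidden weights
w_ho = rng.uniform(-1.0, 1.0, size=(q, p))  # hidden -> output weights
b_h = rng.uniform(-1.0, 1.0, size=p)        # hidden-layer thresholds
b_o = rng.uniform(-1.0, 1.0, size=q)        # output-layer thresholds
```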

The implementation steps of the standard BP algorithm are as follows:
1. Network initialization: assign $w_{ih}$, $w_{ho}$, $b_h$, $b_o$ random values in the interval (-1, 1), and define the error function for sample $k$ as
$E = \frac{1}{2}\sum_{o=1}^{q}(d_o(k) - yo_o(k))^2$. Give the required computational precision $\varepsilon$ and the maximum number of learning iterations $M$.
2. Randomly select the $k$-th input sample $\vec{x}(k)$ and the corresponding expected output $\vec{d}(k)$.
3. Compute the input $hi_h(k)$ of each hidden-layer neuron, then its output $ho_h(k)$ through the activation function; do the same for the output layer:
$hi_h(k) = \sum_{i=1}^{n} w_{ih}\, x_i(k) - b_h \qquad ho_h(k) = f(hi_h(k))$
$yi_o(k) = \sum_{h=1}^{p} w_{ho}\, ho_h(k) - b_o \qquad yo_o(k) = f(yi_o(k))$
4. Using the expected output $\vec{d}(k)$ and the network's actual output $yo_o(k)$, compute the partial derivative $\delta_o(k)$ of the error function with respect to each output-layer neuron:
$\delta_o(k) = (d_o(k) - yo_o(k))\, yo_o(k)\,(1 - yo_o(k))$
5. Using the hidden-to-output connection weights $w_{ho}$, the output-layer terms $\delta_o(k)$, and the hidden-layer outputs $ho_h(k)$, compute the partial derivative $\delta_h(k)$ of the error function with respect to each hidden-layer neuron:
$\delta_h(k) = \left[\sum_{o=1}^{q} \delta_o(k)\, w_{ho}\right] ho_h(k)\,(1 - ho_h(k))$
6. Using the term $\delta_o(k)$ of each output-layer neuron and the output $ho_h(k)$ of each hidden-layer neuron, correct the connection weights $w_{ho}(k)$ and thresholds $b_o(k)$:
$w_{ho}^{n+1}(k) = w_{ho}^{n}(k) + \eta\, \delta_o(k)\, ho_h(k) \qquad b_o^{n+1}(k) = b_o^{n}(k) + \eta\, \delta_o(k)$
Here $n$ denotes the value before the adjustment and $n+1$ the value after; $\eta$ is the learning rate, taken in (0, 1).
7. Using the term $\delta_h(k)$ of each hidden-layer neuron and the inputs $x_i(k)$ of the input-layer neurons, correct the connection weights and thresholds:
$w_{ih}^{n+1} = w_{ih}^{n} + \eta\, \delta_h(k)\, x_i(k) \qquad b_h^{n+1}(k) = b_h^{n}(k) + \eta\, \delta_h(k)$
8. Compute the global error $E$:
$E = \frac{1}{2m}\sum_{k=1}^{m}\sum_{o=1}^{q}(d_o(k) - yo_o(k))^2$
9. Determine whether the network error meets the requirement: when $E < \varepsilon$, or the number of learning iterations exceeds the maximum $M$, the algorithm ends. Otherwise, randomly select the next learning sample and its corresponding expected output, return to step 3, and enter the next round of learning.
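The nine steps above can be sketched end to end with NumPy. This is a minimal sketch, not the original author's code: the function name `train_bp`, the hidden-layer size `p`, and the XOR data in the usage example are my own choices, and I fold the thresholds in with a $net = \vec{w} \cdot \vec{x} + b$ sign convention (rather than $-b$) so that the $+\eta\delta$ update form of steps 6-7 carries over directly to the thresholds.

```python
import numpy as np

def sigmoid(x):
    """S-type activation f(net) = 1 / (1 + e^(-net))."""
    return 1.0 / (1.0 + np.exp(-x))

def train_bp(X, D, p=4, eta=0.5, eps=1e-3, max_iters=20000, seed=0):
    """Standard BP (steps 1-9) for one hidden layer of p neurons.

    X: (m, n) input samples; D: (m, q) expected outputs.
    Thresholds use the '+ b' convention, so they update with + eta * delta.
    """
    m, n = X.shape                 # m samples, n input neurons
    q = D.shape[1]                 # q output neurons
    rng = np.random.default_rng(seed)

    # Step 1: random weights and thresholds in (-1, 1).
    w_ih = rng.uniform(-1, 1, (p, n))
    w_ho = rng.uniform(-1, 1, (q, p))
    b_h = rng.uniform(-1, 1, p)
    b_o = rng.uniform(-1, 1, q)

    E = np.inf
    for _ in range(max_iters):
        # Step 2: randomly pick the k-th sample.
        k = rng.integers(m)
        x, d = X[k], D[k]

        # Step 3: forward pass, hidden layer then output layer.
        ho = sigmoid(w_ih @ x + b_h)    # hidden-layer outputs ho_h(k)
        yo = sigmoid(w_ho @ ho + b_o)   # output-layer outputs yo_o(k)

        # Step 4: output-layer error term delta_o(k).
        delta_o = (d - yo) * yo * (1.0 - yo)
        # Step 5: hidden-layer error term delta_h(k), using w_ho
        # before it is updated.
        delta_h = (w_ho.T @ delta_o) * ho * (1.0 - ho)

        # Step 6: correct hidden->output weights and output thresholds.
        w_ho += eta * np.outer(delta_o, ho)
        b_o += eta * delta_o
        # Step 7: correct input->hidden weights and hidden thresholds.
        w_ih += eta * np.outer(delta_h, x)
        b_h += eta * delta_h

        # Step 8: global error over all m samples.
        YO = sigmoid(sigmoid(X @ w_ih.T + b_h) @ w_ho.T + b_o)
        E = 0.5 / m * np.sum((D - YO) ** 2)

        # Step 9: stop once the precision requirement is met.
        if E < eps:
            break
    return w_ih, w_ho, b_h, b_o, E

# Example: the linearly inseparable XOR problem mentioned earlier.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
D = np.array([[0], [1], [1], [0]], dtype=float)
w_ih, w_ho, b_h, b_o, E = train_bp(X, D, p=4, eta=0.9)
```

On XOR the global error typically falls well below its initial value, though, as the text warns, gradient descent with a fixed $\eta$ can stall in a local minimum for some initializations.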
