BP neural network

A BP (back-propagation) neural network is a multilayer feedforward neural network. Its defining characteristic is that the signal propagates forward while the error propagates backward. Concretely, consider a network with a single hidden layer (a three-layer BP neural network model).

The operation of a BP neural network is divided into two stages. In the first stage the signal propagates forward, from the input layer through the hidden layer to the output layer. In the second stage the error propagates backward, from the output layer to the hidden layer and finally to the input layer: first the weights and biases from the hidden layer to the output layer are adjusted, then those from the input layer to the hidden layer.

The flow of the BP neural network

Knowing these characteristics, we construct the whole network around the forward propagation of the signal and the backward propagation of the error.

1. Initialization of the network. Assume the input layer has $n$ nodes, the hidden layer has $l$ nodes, and the output layer has $m$ nodes. Let $w_{ij}$ be the weights from the input layer to the hidden layer and $w_{jk}$ the weights from the hidden layer to the output layer; let $a_j$ be the biases of the hidden layer and $b_k$ the biases of the output layer. Let $\eta$ be the learning rate and $g(x)$ the activation function. Here we take $g(x)$ to be the sigmoid function:

$$g(x) = \frac{1}{1+e^{-x}}$$

2. Output of the hidden layer. For the three-layer BP network above, the output of the hidden layer is

$$H_j = g\left(\sum_{i=1}^{n} w_{ij} x_i + a_j\right), \quad j = 1, 2, \ldots, l$$

3. Output of the output layer.

$$O_k = \sum_{j=1}^{l} H_j w_{jk} + b_k, \quad k = 1, 2, \ldots, m$$

4. Calculation of the error. We take the error formula

$$E = \frac{1}{2} \sum_{k=1}^{m} (Y_k - O_k)^2$$

where $Y_k$ is the desired output. Writing $e_k = Y_k - O_k$, the error can be expressed as $E = \frac{1}{2} \sum_{k=1}^{m} e_k^2$. A minimal sketch of this forward pass and error calculation is shown below.
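To make steps 2 through 4 concrete, here is a minimal MATLAB sketch of one forward pass, assuming the sigmoid function g defined at the end of this post. The layer sizes match the main program further below; the sample x, the target Y, and the random initialization in [-1, 1] are illustrative assumptions:

% one forward pass for a single sample
n = 24; l = 50; m = 4;                   % layer sizes (matching the main program)
x  = rand(n,1);                          % one input sample (illustrative)
w1 = 2*rand(n,l)-1;  a = 2*rand(l,1)-1;  % input-to-hidden weights and biases in [-1,1]
w2 = 2*rand(l,m)-1;  b = 2*rand(m,1)-1;  % hidden-to-output weights and biases
H  = g(w1'*x + a);                       % hidden layer output H_j (step 2)
O  = w2'*H + b;                          % output layer output O_k (step 3, linear)
Y  = [1 0 0 0]';                         % desired output (illustrative)
e  = Y - O;                              % e_k = Y_k - O_k
E  = 0.5*sum(e.^2);                      % error (step 4)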
5. Updating the weights. Before stating the update formulas, we need to explain their origin: this is the error back-propagation step. Our goal is to minimize the error function $E$, so we use gradient descent, moving each weight against the gradient of $E$:

$$w \leftarrow w - \eta \frac{\partial E}{\partial w}$$
- Weight update from the hidden layer to the output layer. The formula for updating the weights is

$$w_{jk} = w_{jk} + \eta H_j e_k$$

- Weight update from the input layer to the hidden layer. Here the derivative of the sigmoid, $g'(x) = g(x)(1-g(x))$, enters, and the formula for updating the weights is

$$w_{ij} = w_{ij} + \eta H_j (1 - H_j)\, x_i \sum_{k=1}^{m} w_{jk} e_k$$

(Both formulas are derived in the sketch after this list.)
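To make the origin of these formulas explicit, here is a chain-rule derivation sketched in LaTeX, using the definitions of $E$, $O_k$, and $H_j$ from steps 2 through 4. Since the output layer is linear, no derivative of $g$ appears in the first gradient:

$$\frac{\partial E}{\partial w_{jk}} = \frac{\partial E}{\partial O_k}\,\frac{\partial O_k}{\partial w_{jk}} = -e_k H_j \quad\Rightarrow\quad \Delta w_{jk} = -\eta\,\frac{\partial E}{\partial w_{jk}} = \eta H_j e_k$$

$$\frac{\partial E}{\partial w_{ij}} = \sum_{k=1}^{m} \frac{\partial E}{\partial O_k}\,\frac{\partial O_k}{\partial H_j}\,\frac{\partial H_j}{\partial w_{ij}} = -H_j (1-H_j)\, x_i \sum_{k=1}^{m} w_{jk} e_k \quad\Rightarrow\quad \Delta w_{ij} = \eta H_j (1-H_j)\, x_i \sum_{k=1}^{m} w_{jk} e_k$$

The bias gradients below follow the same pattern, with the factor $x_i$ (and, for the output layer, $H_j$) replaced by $1$.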
6. Updating the biases.

- Bias update from the hidden layer to the output layer. The update formula is

$$b_k = b_k + \eta e_k$$

- Bias update from the input layer to the hidden layer. The update formula is

$$a_j = a_j + \eta H_j (1 - H_j) \sum_{k=1}^{m} w_{jk} e_k$$

7. Determining whether the iteration has finished. There are many ways to decide whether the algorithm has converged; common ones include running for a specified number of iterations, or checking whether the difference between two successive errors is smaller than a specified tolerance. A minimal sketch of the latter criterion follows.
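The sketch below stops training when the change in the total error between two successive epochs falls below a tolerance. The names maxepoch and tol, and the helper train_one_epoch, are illustrative assumptions, not part of the main program below:

maxepoch = 1000;       % upper bound on the number of epochs
tol = 1e-4;            % tolerance on the change in error
E = zeros(1,maxepoch);
for r = 1:maxepoch
    E(r) = train_one_epoch();   % hypothetical helper: one pass over the training set
    if r > 1 && abs(E(r) - E(r-1)) < tol
        break;                  % two adjacent errors are close enough: stop
    end
end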
Third, the experiment simulation. In this experiment we use the BP neural network to handle a four-class classification problem; the final classification result is the per-class accuracy (rightridio) computed at the end of the main program. MATLAB code:

Main program
%% main function of the BP network
% clear the workspace
clear all;
clc;
% import the data
load data;
% randomly sort the indices from 1 to 2000
k = rand(1,2000);
[m,n] = sort(k);
% input/output data
input = data(:,2:25);
output1 = data(:,1);
% turn the output from a class label into a 4-dimensional (one-hot) vector
for i = 1:2000
    switch output1(i)
        case 1
            output(i,:) = [1 0 0 0];
        case 2
            output(i,:) = [0 1 0 0];
        case 3
            output(i,:) = [0 0 1 0];
        case 4
            output(i,:) = [0 0 0 1];
    end
end
% randomly extract 1600 samples for training and 400 samples for testing
traincharacter = input(n(1:1600),:);
trainoutput = output(n(1:1600),:);
testcharacter = input(n(1601:2000),:);
testoutput = output(n(1601:2000),:);
% normalize the training features
[traininput,inputps] = mapminmax(traincharacter');

%% initialization of the parameters
inputnum = 24;   % number of nodes in the input layer
hiddennum = 50;  % number of nodes in the hidden layer
outputnum = 4;   % number of nodes in the output layer
% initialization of the weights and biases
w1 = rands(inputnum,hiddennum);
b1 = rands(hiddennum,1);
w2 = rands(hiddennum,outputnum);
b2 = rands(outputnum,1);
% learning rate
yita = 0.1;

%% training of the network
for r = 1:30
    E(r) = 0;  % accumulated error for this epoch
    for m = 1:1600
        % forward propagation of the signal
        x = traininput(:,m);
        % output of the hidden layer
        for j = 1:hiddennum
            hidden(j,:) = w1(:,j)'*x + b1(j,:);
            hiddenoutput(j,:) = g(hidden(j,:));
        end
        % output of the output layer (linear, matching the update rules above)
        outputoutput = w2'*hiddenoutput + b2;
        % calculate the error
        e = trainoutput(m,:)' - outputoutput;
        E(r) = E(r) + sum(abs(e));
        % adjust the weights and biases
        % hidden-to-output layer adjustments
        dw2 = hiddenoutput*e';
        db2 = e;
        % input-to-hidden layer adjustments
        for j = 1:hiddennum
            partone(j) = hiddenoutput(j)*(1-hiddenoutput(j));  % g'(x) = g(x)(1-g(x))
            parttwo(j) = w2(j,:)*e;                            % back-propagated error
        end
        for i = 1:inputnum
            for j = 1:hiddennum
                dw1(i,j) = partone(j)*x(i,:)*parttwo(j);
                db1(j,:) = partone(j)*parttwo(j);
            end
        end
        w1 = w1 + yita*dw1;
        w2 = w2 + yita*dw2;
        b1 = b1 + yita*db1;
        b2 = b2 + yita*db2;
    end
end

%% classification of the speech feature signals
testinput = mapminmax('apply',testcharacter',inputps);
for m = 1:400
    for j = 1:hiddennum
        hiddentest(j,:) = w1(:,j)'*testinput(:,m) + b1(j,:);
        hiddentestoutput(j,:) = g(hiddentest(j,:));
    end
    outputoftest(:,m) = w2'*hiddentestoutput + b2;
end

%% analysis of the results
% from the network output, find out which class each sample belongs to
for m = 1:400
    output_fore(m) = find(outputoftest(:,m) == max(outputoftest(:,m)));
end
% prediction error of the BP network
error = output_fore - output1(n(1601:2000))';
k = zeros(1,4);
% find out which class each misclassified sample belongs to
for i = 1:400
    if error(i) ~= 0
        [b,c] = max(testoutput(i,:));
        switch c
            case 1
                k(1) = k(1)+1;
            case 2
                k(2) = k(2)+1;
            case 3
                k(3) = k(3)+1;
            case 4
                k(4) = k(4)+1;
        end
    end
end
% count the total number of individuals in each class
kk = zeros(1,4);
for i = 1:400
    [b,c] = max(testoutput(i,:));
    switch c
        case 1
            kk(1) = kk(1)+1;
        case 2
            kk(2) = kk(2)+1;
        case 3
            kk(3) = kk(3)+1;
        case 4
            kk(4) = kk(4)+1;
    end
end
% correct rate per class
rightridio = (kk-k)./kk
Activation function
%% activation function
function [y] = g(x)
    y = 1./(1+exp(-x));
end
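Note that the derivative of this sigmoid satisfies $g'(x) = g(x)(1-g(x))$, which is exactly the hiddenoutput(j)*(1-hiddenoutput(j)) factor used in the weight updates of the main program. A quick numerical check, with an illustrative test point x0 and step h:

x0 = 0.3; h = 1e-6;
numeric  = (g(x0+h) - g(x0-h))/(2*h);   % central-difference derivative
analytic = g(x0)*(1-g(x0));             % g'(x) = g(x)(1-g(x))
disp(abs(numeric - analytic));          % should be close to zero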
Reprint: About BP neural network