A simple and easy-to-learn machine learning algorithm: the BP neural network


I. The concept of the BP neural network

A BP (back-propagation) neural network is a multilayer feedforward neural network. Its basic characteristics are that the signal propagates forward while the error propagates backward. To describe it in detail, take a neural network model with only one hidden layer, such as the following:
(Figure: a three-layer BP neural network model)

The process of a BP neural network is divided into two main stages. The first stage is the forward propagation of the signal, which passes from the input layer through the hidden layer and finally reaches the output layer. The second stage is the backward propagation of the error, from the output layer through the hidden layer and finally back to the input layer, adjusting in turn the weights and biases from the hidden layer to the output layer and then the weights and biases from the input layer to the hidden layer.

II. The flow of the BP neural network

Knowing the characteristics of the BP neural network, we construct the whole network according to the forward propagation of the signal and the backward propagation of the error.

1. Initialization of the network

Let the number of nodes in the input layer be $n$, the number of nodes in the hidden layer be $l$, and the number of nodes in the output layer be $m$.

Denote the weights from the input layer to the hidden layer by $w_{ij}$ and the weights from the hidden layer to the output layer by $w_{jk}$; denote the biases from the input layer to the hidden layer by $a_j$ and the biases from the hidden layer to the output layer by $b_k$. The learning rate is $\eta$ and the excitation function is $g(x)$. Here the excitation function takes the sigmoid function, whose form is:

$$g(x) = \frac{1}{1 + e^{-x}}$$
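A useful property of the sigmoid is that its derivative satisfies $g'(x) = g(x)\,(1 - g(x))$; this is what produces the $H_j(1-H_j)$ factor in the update formulas of steps 5 and 6 below. A minimal MATLAB sketch to verify it numerically (the test points are arbitrary):

% minimal sketch: the sigmoid and its derivative identity g'(x) = g(x)*(1-g(x))
g = @(x) 1./(1 + exp(-x));                       % sigmoid activation
x = linspace(-5, 5, 11);                         % arbitrary test points
num_deriv = (g(x + 1e-6) - g(x - 1e-6)) / 2e-6;  % central-difference derivative
max(abs(num_deriv - g(x).*(1 - g(x))))           % prints a value near zero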


2. Output of the hidden layer

As can be seen from the three-layer BP network above, the output of the hidden layer is

$$H_j = g\!\left(\sum_{i=1}^{n} w_{ij}\, x_i + a_j\right), \qquad j = 1, 2, \ldots, l$$
3. Output of the output layer

$$O_k = \sum_{j=1}^{l} H_j\, w_{jk} + b_k, \qquad k = 1, 2, \ldots, m$$
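Putting steps 1 to 3 together, here is a minimal MATLAB sketch of the forward pass for a single sample. It mirrors the full program at the end of the article (the 24-50-4 layer sizes come from the experiment there; the random input is only a stand-in, and rands is the same toolbox function the full program uses):

% minimal sketch of the forward pass for one sample
g  = @(z) 1./(1 + exp(-z));            % sigmoid activation (function g in the full program)
x  = rand(24,1);                       % stand-in input sample with 24 features
w1 = rands(24,50);  b1 = rands(50,1);  % input -> hidden weights and biases
w2 = rands(50,4);   b2 = rands(4,1);   % hidden -> output weights and biases
H  = g(w1'*x + b1);                    % hidden layer: H_j = g(sum_i w_ij x_i + a_j)
O  = w2'*H + b2                        % output layer: O_k = sum_j H_j w_jk + b_k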
4. Calculation of the error

We take the error formula to be

$$E = \frac{1}{2} \sum_{k=1}^{m} \left(Y_k - O_k\right)^2$$

where $Y_k$ is the desired output.

Writing $e_k = Y_k - O_k$, the error can be expressed as

$$E = \frac{1}{2} \sum_{k=1}^{m} e_k^2$$

In the formulas above, $i = 1, \ldots, n$; $j = 1, \ldots, l$; $k = 1, \ldots, m$.
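Continuing the forward-pass sketch, the error of one sample under this formula looks as follows (the desired output y is an assumed 1-of-4 coding, the scheme used in the experiment below):

% minimal sketch: the error of one sample, continuing the forward-pass sketch
y = [1 0 0 0]';        % assumed desired output (1-of-4 coding)
e = y - O;             % e_k = Y_k - O_k
E = 0.5*sum(e.^2)      % E = (1/2) * sum over k of e_k^2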


5. Updating the weights

The update formulas for the weights are:

$$w_{ij} = w_{ij} + \eta\, H_j (1 - H_j)\, x_i \sum_{k=1}^{m} w_{jk}\, e_k$$

$$w_{jk} = w_{jk} + \eta\, H_j\, e_k$$
Here we explain where these formulas come from. This is the error back-propagation process: our goal is to make the error function reach its minimum, so we use the gradient descent method, moving each weight a step of size $\eta$ against the gradient of $E$. The two cases are worked out below, followed by a short derivation sketch.
    • Weight update from the hidden layer to the output layer

Since $\partial E / \partial w_{jk} = -e_k H_j$, the formula for updating the weights is:

$$w_{jk} = w_{jk} + \eta\, H_j\, e_k$$
    • Weight update from the input layer to the hidden layer

Here

$$\frac{\partial E}{\partial w_{ij}} = -H_j (1 - H_j)\, x_i \sum_{k=1}^{m} w_{jk}\, e_k$$

so the formula for updating the weights is:

$$w_{ij} = w_{ij} + \eta\, H_j (1 - H_j)\, x_i \sum_{k=1}^{m} w_{jk}\, e_k$$
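For completeness, a chain-rule sketch of where these gradients come from, using the definitions of $E$, $O_k$, and $H_j$ given above:

$$\frac{\partial E}{\partial w_{jk}} = \frac{\partial E}{\partial O_k}\,\frac{\partial O_k}{\partial w_{jk}} = -\left(Y_k - O_k\right) H_j = -e_k\, H_j$$

$$\frac{\partial E}{\partial w_{ij}} = \sum_{k=1}^{m} \frac{\partial E}{\partial O_k}\,\frac{\partial O_k}{\partial H_j}\,\frac{\partial H_j}{\partial w_{ij}} = -\left(\sum_{k=1}^{m} e_k\, w_{jk}\right) H_j (1 - H_j)\, x_i$$

The last step uses the sigmoid identity $g'(z) = g(z)(1 - g(z))$ noted in step 1. Substituting these gradients into the gradient descent rule $w \leftarrow w - \eta\, \partial E / \partial w$ yields exactly the two update formulas above.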
6. Updating the biases

The update formulas for the biases are obtained in the same way:
    • Bias update from the hidden layer to the output layer

Since $\partial E / \partial b_k = -e_k$, the update formula for the bias is:

$$b_k = b_k + \eta\, e_k$$
    • Bias update from the input layer to the hidden layer

Here

$$\frac{\partial E}{\partial a_j} = -H_j (1 - H_j) \sum_{k=1}^{m} w_{jk}\, e_k$$

so the update formula for the bias is:

$$a_j = a_j + \eta\, H_j (1 - H_j) \sum_{k=1}^{m} w_{jk}\, e_k$$
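Steps 5 and 6 together make up one complete gradient-descent update. A minimal vectorized sketch, continuing the variables of the sketches above (e as computed in the error sketch; yita is the learning rate, named as in the full program):

% minimal sketch of one weight-and-bias update for one sample
yita  = 0.1;                  % learning rate, as in the full program
delta = H.*(1-H).*(w2*e);     % back-propagated term: H_j(1-H_j) * sum_k w_jk e_k
w2 = w2 + yita*(H*e');        % w_jk = w_jk + eta * H_j * e_k
b2 = b2 + yita*e;             % b_k  = b_k  + eta * e_k
w1 = w1 + yita*(x*delta');    % w_ij = w_ij + eta * x_i * delta_j
b1 = b1 + yita*delta;         % a_j  = a_j  + eta * delta_j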
7. Deciding whether the algorithm has finished iterating

There are many ways to decide whether the algorithm has converged. Common ones are running for a specified number of iterations, checking whether the error has fallen below a specified value, and checking whether the difference between two consecutive errors is less than a specified value; a sketch of all three follows.
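A minimal sketch of those three stopping criteria (the threshold values are illustrative, and the per-pass error E(r) here is a stand-in for the accumulated training error computed in the full program):

% minimal sketch of three common stopping criteria
max_pass = 100;     % 1) fixed number of training passes
tol      = 1e-3;    % 2) absolute error threshold (illustrative value)
dtol     = 1e-6;    % 3) threshold on the change between passes (illustrative value)
for r = 1:max_pass
    E(r) = 1/r;     % stand-in for the accumulated error of training pass r
    if E(r) < tol, break; end                          % error small enough
    if r > 1 && abs(E(r) - E(r-1)) < dtol, break; end  % error no longer changing
end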

III. Simulation experiment

In this experiment we use a BP neural network to deal with a four-class classification problem: 2000 samples with 24 features each, of which 1600 are used for training and 400 for testing. The final classification result is shown below:

(Figure: classification results of the four-class experiment)
MATLAB code: main program
%% BP main program
% clear the workspace
clear all; clc;

% import the data
load data;

% generate a random ordering of the 2000 samples
k = rand(1,2000);
[m,n] = sort(k);

% input and output data
input = data(:,2:25);
output1 = data(:,1);

% convert the output from a class label (1 to 4) to a 1-of-4 vector
for i = 1:2000
    switch output1(i)
        case 1
            output(i,:) = [1 0 0 0];
        case 2
            output(i,:) = [0 1 0 0];
        case 3
            output(i,:) = [0 0 1 0];
        case 4
            output(i,:) = [0 0 0 1];
    end
end

% randomly take 1600 samples for training and 400 samples for testing
traincharacter = input(n(1:1600),:);
trainoutput    = output(n(1:1600),:);
testcharacter  = input(n(1601:2000),:);
testoutput     = output(n(1601:2000),:);

% normalize the training features
[traininput,inputps] = mapminmax(traincharacter');

%% parameter initialization
inputnum  = 24;  % number of nodes in the input layer
hiddennum = 50;  % number of nodes in the hidden layer
outputnum = 4;   % number of nodes in the output layer

% initialization of the weights and biases
w1 = rands(inputnum,hiddennum);
b1 = rands(hiddennum,1);
w2 = rands(hiddennum,outputnum);
b2 = rands(outputnum,1);

% learning rate
yita = 0.1;

%% network training
for r = 1:10                 % number of training passes (assumed value)
    E(r) = 0;                % accumulated error of this pass
    for m = 1:1600
        % forward propagation of the signal
        x = traininput(:,m);
        % output of the hidden layer
        for j = 1:hiddennum
            hidden(j,:) = w1(:,j)'*x + b1(j,:);
            hiddenoutput(j,:) = g(hidden(j,:));
        end
        % output of the output layer
        outputoutput = w2'*hiddenoutput + b2;

        % calculate the error
        e = trainoutput(m,:)' - outputoutput;
        E(r) = E(r) + sum(abs(e));

        % adjust the weights and biases
        % hidden layer to output layer
        dw2 = hiddenoutput*e';
        db2 = e;
        % input layer to hidden layer
        for j = 1:hiddennum
            partone(j) = hiddenoutput(j)*(1-hiddenoutput(j));
            parttwo(j) = w2(j,:)*e;
        end
        for i = 1:inputnum
            for j = 1:hiddennum
                dw1(i,j) = partone(j)*x(i,:)*parttwo(j);
                db1(j,:) = partone(j)*parttwo(j);
            end
        end
        w1 = w1 + yita*dw1;
        w2 = w2 + yita*dw2;
        b1 = b1 + yita*db1;
        b2 = b2 + yita*db2;
    end
end

%% classification of the speech feature signals
testinput = mapminmax('apply',testcharacter',inputps);
for m = 1:400
    for j = 1:hiddennum
        hiddentest(j,:) = w1(:,j)'*testinput(:,m) + b1(j,:);
        hiddentestoutput(j,:) = g(hiddentest(j,:));
    end
    outputoftest(:,m) = w2'*hiddentestoutput + b2;
end

%% result analysis
% decide which class each sample belongs to from the network output
for m = 1:400
    output_fore(m) = find(outputoftest(:,m) == max(outputoftest(:,m)));
end

% prediction error of the BP network
error = output_fore - output1(n(1601:2000))';

% count the misclassified samples of each class
k = zeros(1,4);
for i = 1:400
    if error(i) ~= 0
        [b,c] = max(testoutput(i,:));
        switch c
            case 1
                k(1) = k(1) + 1;
            case 2
                k(2) = k(2) + 1;
            case 3
                k(3) = k(3) + 1;
            case 4
                k(4) = k(4) + 1;
        end
    end
end

% count the total number of samples of each class
kk = zeros(1,4);
for i = 1:400
    [b,c] = max(testoutput(i,:));
    switch c
        case 1
            kk(1) = kk(1) + 1;
        case 2
            kk(2) = kk(2) + 1;
        case 3
            kk(3) = kk(3) + 1;
        case 4
            kk(4) = kk(4) + 1;
    end
end

% per-class correct rate
rightridio = (kk-k)./kk

MATLAB code: activation function

% activation function: the sigmoid g(x) = 1/(1+e^(-x))
function [y] = g(x)
    y = 1./(1 + exp(-x));
end

