Data classification based on a BP neural network


BP (back propagation) networks, proposed in 1986 by a team of scientists led by Rumelhart and McClelland, are multilayer feedforward networks trained by the error back-propagation algorithm, and are currently among the most widely used neural network models. A BP network can learn and store a large number of input-output pattern mappings without requiring a mathematical equation describing the mapping in advance. Its learning rule is steepest descent: the network weights and thresholds are adjusted repeatedly through back propagation so as to minimize the sum of squared errors at the output. The topology of a BP network comprises an input layer, one or more hidden layers, and an output layer.

1 A brief introduction to the traditional BP algorithm

The BP algorithm is a supervised learning algorithm. Its main idea is: present the learning samples to the network and use back propagation to adjust the weights and biases repeatedly, so that the output vector comes as close as possible to the expected vector; once the sum of squared errors at the output layer falls below the specified tolerance, training is complete and the network's weights and biases are saved. The specific steps are as follows:

(1) Initialization: randomly assign the connection weights and thresholds.

(2) From the given input-output pattern pair, compute the output of each unit in the hidden layer and the output layer.

(3) Compute the new connection weights and thresholds.

(4) Select the next input pattern and return to step (2); repeat the training until the network's output error meets the requirement, then stop.
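The four steps above can be sketched in a few lines of NumPy (a Python stand-in for illustration only; the layer sizes, learning rate, epoch count, and XOR sample patterns are arbitrary choices, not from the original):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# (1) Randomly initialize connection weights and thresholds (biases).
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

# Sample pattern pairs (XOR) standing in for the training set.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

lr = 0.5
for epoch in range(5000):
    # (2) Forward pass: output of each hidden- and output-layer unit.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)

    # (3) Back-propagate the error and update weights/thresholds
    #     by steepest descent on the squared error.
    dY = (Y - T) * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)

# (4) Training stops once the squared output error is small enough.
loss = float(np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - T) ** 2))
print(loss)
```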

In essence, the traditional BP algorithm transforms a set of sample input/output pairs into a nonlinear optimization problem and solves for the weights iteratively by negative gradient descent. However, its convergence is slow and it easily falls into local minima, so this paper proposes a new algorithm based on Gaussian elimination.

2 Improved BP network algorithm

2.1 Overview of the improved algorithm

Earlier work suggested solving for the unknown weights by arbitrarily choosing a set of free weights and building a system of linear equations through the inverse of the transfer function. In this paper, the given target outputs are used directly as the right-hand side of the linear system, so the net outputs of the neurons no longer have to be recovered by inverting the transfer function, which simplifies the computation. Because no error feedback is involved, the network trained this way is equivalent to one trained by the traditional algorithm. The basic idea is: apply the given input and output patterns to the network to set up a system of linear equations, solve that system by Gaussian elimination, and take the solution as the unknown weights, instead of searching for an optimum through the nonlinear error feedback of the traditional BP network.
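A rough sketch of this idea (in Python/NumPy for illustration; the layer sizes and synthetic data are invented for the example): fix the input-to-hidden weights at random, then solve a square linear system whose right-hand side is the target output. `np.linalg.solve` performs LU factorization, i.e. Gaussian elimination, internally.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# 10 synthetic sample patterns with 8 inputs and one binary target
# (the labelling rule is purely illustrative).
X = rng.normal(0, 1, (10, 8))
T = ((X[:, :1] + X[:, 1:2]) > 0) * 1.0

# Randomly chosen, fixed weights between input and hidden layer;
# the actual hidden-layer output comes from the transfer function.
W1 = rng.normal(0, 1, (8, 10))
H = sigmoid(X @ W1)

# The hidden-to-output weights W2 are the unknowns of the linear
# system H @ W2 = T, with the target outputs as right-hand side.
# No error feedback is needed; one direct solve yields the weights.
W2 = np.linalg.solve(H, T)
err = float(np.max(np.abs(H @ W2 - T)))
print(err)
```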

2.2 Specific steps of the improved algorithm

For a given set of sample pattern pairs, randomly choose a set of free weights and fix them as the weights between the input layer and the hidden layer; compute the actual output of the hidden layer through the transfer function; treat the weights between the hidden layer and the output layer as the unknowns; and use the target outputs directly as the right-hand side of the equations to solve for them.

(1) Randomly assign the initial weights between the input-layer and hidden-layer neurons.

(2) Compute the actual output of the hidden layer from the given sample inputs.

(3) Compute the weights between the hidden layer and the output layer: taking the r-th neuron of the output layer as the object, set up its equations using the given target outputs as the constant terms.

(4) Repeating step (3) for each of the m output-layer neurons yields the output-layer weights; these, together with the randomly fixed weights between the input layer and the hidden layer, form the final trained weight matrices of the network.

3 Computational example

%% Clear the environment
clc
clear

%% Load the training and prediction data
data = importdata('test.txt');

% Random permutation of the 768 samples
k = rand(1,768);
[m,n] = sort(k);

% Input and output data (8 features, 1 class label)
input = data(:,1:8);
output = data(:,9);

% Randomly take 500 samples for training and 268 for prediction
input_train = input(n(1:500),:)';
output_train = output(n(1:500),:)';
input_test = input(n(501:768),:)';
output_test = output(n(501:768),:)';

% Normalize the input data
[inputn,inputps] = mapminmax(input_train);

%% Initialize the BP network structure (10 hidden neurons)
net = newff(inputn,output_train,10);

net.trainParam.epochs = 1000;
net.trainParam.lr = 0.1;
net.trainParam.goal = 0.0000004;

%% Train the network
net = train(net,inputn,output_train);

%% BP network prediction
% Normalize the prediction data with the training-set mapping
inputn_test = mapminmax('apply',input_test,inputps);

% Network prediction output
bpoutput = sim(net,inputn_test);

%% Result analysis
% Threshold the network output to obtain the predicted class
bpoutput(bpoutput < 0.5) = 0;
bpoutput(bpoutput >= 0.5) = 1;

% Plot the predicted categories against the actual categories
figure(1)
plot(bpoutput,'og')
hold on
plot(output_test,'r*');
legend('Predicted category','Actual category')
title('BP network predicted classification vs. actual category','FontSize',12)
ylabel('Category label','FontSize',12)
xlabel('Sample number','FontSize',12)
ylim([-0.5 1.5])

% Prediction accuracy
rightnumber = 0;
for i = 1:size(output_test,2)
    if bpoutput(i) == output_test(i)
        rightnumber = rightnumber + 1;
    end
end
rightratio = rightnumber/size(output_test,2)*100;

sprintf('Test accuracy = %0.2f%%',rightratio)
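For readers without MATLAB, a rough Python counterpart of the script above might look like the following, assuming scikit-learn is available; synthetic data stands in for the 768-row test.txt, and the labelling rule is invented for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for test.txt: 768 rows, 8 input columns,
# one binary label column (the labelling rule is illustrative).
data = rng.normal(size=(768, 8))
labels = (data[:, 0] + data[:, 1] > 0).astype(int)

# Random shuffle, then 500 training / 268 prediction samples,
# mirroring the MATLAB script.
order = rng.permutation(768)
tr, te = order[:500], order[500:]

# Normalize inputs to [-1, 1] (mapminmax's default range).
scaler = MinMaxScaler(feature_range=(-1, 1)).fit(data[tr])

# One hidden layer with 10 neurons, as in newff(inputn,output_train,10).
net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0)
net.fit(scaler.transform(data[tr]), labels[tr])

# Accuracy on the held-out samples, as a percentage.
rightratio = 100.0 * net.score(scaler.transform(data[te]), labels[te])
print(f"Test accuracy = {rightratio:.2f}%")
```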









