The MATLAB Implementation of BP (Backpropagation)


The main script below trains a fully connected network by backpropagation, accumulating the weight changes over all samples and applying them once per epoch (batch update), then measures accuracy on a held-out test set.

% 2015.04.26 Kang Yongxin ---- v2
% Complete implementation of the BP algorithm; the weights are updated in batch mode.
%%% Input data format
%   x matrix: number of samples * feature dimension
%   y matrix: number of samples * number of classes (one-hot, e.g. 0 1 0 0 0)
close all; clear all; clc;
load data.mat;

% Hold out part of the original data as a test set
x_test = x(1:3:30, :);
y_test = y(1:3:30, :);
x_train = [x(2:3:30, :); x(3:3:30, :)];
y_train = [y(2:3:30, :); y(3:3:30, :)];

%%% Define variables and initialize the network
d = size(x_train, 2);                % feature dimension = number of input-layer nodes
num_trains = size(x_train, 1);       % number of training samples
n_class = size(y_train, 2);          % number of classes
node_layer = [d 4 n_class];          % nodes per layer; e.g. [d 3 5 n_class] builds a deeper network
num_layer = size(node_layer, 2) - 1; % number of weight layers
for i = 1:1:num_layer-1
    f_name{i} = 'sigmoid';           % activation function of each hidden layer
end
f_name{num_layer} = 'tanh';          % activation function of the output layer ('sigmoid' also works)
eta = 0.08;                          % learning rate
theta = 10e-4;                       % termination threshold
W = cell(num_layer, 1);              % initialize the weight matrices randomly
for i = 1:1:num_layer
    W{i} = rand(node_layer(i), node_layer(i+1));
end
w_init = W;                          % keep a copy of the initial weights

%%% Main loop
item = 1;
while item > 0 && item < 1500
    %% reset the accumulated weight increments
    for layer = 1:1:num_layer
        delta_sum{layer} = zeros(size(W{layer}));
    end
    %% loop over every training sample
    for k = 1:1:num_trains
        % forward pass: compute and store the output y_out of every layer
        x_in = x_train(k, :);
        for layer = 1:1:num_layer
            y_out{layer} = forward(x_in, W{layer}, f_name{layer});
            x_in = y_out{layer};
        end
        % backpropagation; the output layer is handled separately
        delta_out{layer} = y_train(k, :) - y_out{layer};   % difference between output and true value
        J(k) = 0.5 * sum(delta_out{layer}.^2);             % squared error of this sample
        delta_error{layer} = delta_out{layer} .* d_function(y_out{layer}, f_name{num_layer});
        delta_w{layer} = eta * (y_out{layer-1})' * delta_error{layer};  % weight change of the output layer
        while layer > 1                  % propagate the error backwards, saving delta_w
            layer = layer - 1;
            delta_error{layer} = delta_error{layer+1} * (W{layer+1})' .* d_function(y_out{layer}, f_name{layer});
            if layer ~= 1                % not the first layer: the previous layer's output is the input
                delta_w{layer} = eta * (y_out{layer-1})' * delta_error{layer};
            else                         % first layer: the training sample x itself is the input
                delta_w{layer} = eta * (x_train(k, :))' * delta_error{layer};
            end
        end
        % batch update: accumulate every sample's contribution to the weight change
        for layer = 1:1:num_layer
            delta_sum{layer} = delta_sum{layer} + delta_w{layer};
        end
    end  % end of the loop over the k samples

    figure(1);
    JW(item) = sum(J);                   % total error of this epoch
    if item > 10
        delta_JW = abs(JW(item) - JW(item-1));
        if delta_JW < theta
            break;                       % loop termination condition
        end
    end
    plot(item, JW(item), '.');
    hold on;
    item = item + 1;
    % update the weights of every layer; the position of this update decides
    % between a batch update (here) and a per-sample update (inside the k loop)
    for layer = 1:1:num_layer
        W{layer} = W{layer} + delta_sum{layer};  % batch update
    end
end

%%% Compute the accuracy on the test set
x_in = x_test;
for layer = 1:1:num_layer                % forward pass for each layer, reusing the trained W
    y_out{layer} = forward(x_in, W{layer}, f_name{layer});
    x_in = y_out{layer};
end
[c, i] = max(y_out{layer}, [], 2);       % predicted class = index of the largest output
[c, i_true] = max(y_test, [], 2);        % true class from the one-hot labels
trues = find((i - i_true) == 0);
precision = size(trues, 1) / size(y_test, 1)
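The script loads a data.mat with matrices x and y of 30 rows each, which the original post does not provide. A minimal sketch that fabricates a compatible toy set (the feature dimension, class means, and noise scale below are assumptions, not from the original):

% Hypothetical generator for data.mat: 30 samples, 2 features, 3 classes
% (10 samples per class, labels stored in one-hot form).
rng(0);                              % reproducibility (assumption)
n_per = 10; d = 2; n_class = 3;
mu = [0 0; 3 3; 0 3];                % assumed class means
x = zeros(n_per*n_class, d);
y = zeros(n_per*n_class, n_class);
for c = 1:n_class
    rows = (c-1)*n_per + (1:n_per);
    x(rows, :) = 0.5*randn(n_per, d) + repmat(mu(c, :), n_per, 1);
    y(rows, c) = 1;                  % one-hot label
end
save('data.mat', 'x', 'y');

Because the samples sit in blocks of ten per class, the 1:3:30, 2:3:30, and 3:3:30 index patterns used above each pick up samples from all three classes.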
function [y] = forward(x, w, f_name)
%FORWARD forward computation of one layer: y = f(x*w)
%   input  x: input vector 1*m
%          w: weight matrix m*n
%          f_name: activation name ('sigmoid' and 'tanh' supported for now)
%   output y: output vector 1*n
if strcmp(f_name, 'sigmoid')
    y = sigmoid(x*w);
elseif strcmp(f_name, 'tanh')
    y = tanh(x*w);               % MATLAB built-in
else
    disp('wrong function name');
end
end
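Since forward is just a matrix product followed by an elementwise activation, it also accepts a whole batch of input rows at once; the accuracy block above relies on this when it pushes all of x_test through the network in one call. A quick sanity check with made-up sizes:

% Hypothetical sanity check: push a 5x3 batch through a 3->4 layer
x_batch = rand(5, 3);
w_demo  = rand(3, 4);
y_batch = forward(x_batch, w_demo, 'sigmoid');
assert(isequal(size(y_batch), [5 4]));         % one output row per input row
assert(all(y_batch(:) > 0 & y_batch(:) < 1));  % sigmoid outputs lie in (0,1)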
function [d_f] = d_function(y, f_name)
%D_FUNCTION derivative of the activation function,
%   written in terms of the layer's output value y
if strcmp(f_name, 'sigmoid')
    d_f = y .* (1 - y);          % sigmoid'(z) expressed through y = sigmoid(z)
elseif strcmp(f_name, 'tanh')
    d_f = 1 - y.^2;              % tanh'(z) expressed through y = tanh(z)
else
    disp('wrong function name at d_function');
end
end
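Note that d_function takes the layer's output y rather than the pre-activation, using the identities sigmoid'(z) = y.*(1-y) and tanh'(z) = 1-y.^2; this is what lets the backward pass reuse the stored y_out values instead of recomputing anything. A finite-difference check of those identities (step size and tolerance are arbitrary choices):

% Verify d_function against a central finite difference.
z = linspace(-2, 2, 9);
h = 1e-6;
y = sigmoid(z);
num_grad = (sigmoid(z + h) - sigmoid(z - h)) / (2*h);
assert(max(abs(d_function(y, 'sigmoid') - num_grad)) < 1e-6);
y = tanh(z);
num_grad = (tanh(z + h) - tanh(z - h)) / (2*h);
assert(max(abs(d_function(y, 'tanh') - num_grad)) < 1e-6);
disp('derivative check passed');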

function [y] = sigmoid(x)
%SIGMOID elementwise logistic function of the input vector x
y = 1 ./ (1 + exp(-x));          % note the minus sign: sigmoid(x) = 1/(1+e^(-x))
end
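As the comment in the main loop points out, where the weight update sits decides between batch and per-sample (online) learning. An online variant, sketched here as an assumption rather than part of the original post, would drop the delta_sum accumulation and the end-of-epoch update, applying each sample's delta_w immediately at the end of the k loop:

% Online (per-sample) variant: place this at the end of the k loop,
% right after delta_w has been computed for every layer.
for layer = 1:1:num_layer
    W{layer} = W{layer} + delta_w{layer};   % apply this sample's change immediately
end

The online variant usually calls for a smaller learning rate than eta = 0.08, since the weights then move num_trains times per epoch instead of once.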

