The first method: a hand-written BP network
%%% Solving the XOR problem with a hand-written BP neural network
clear; clc; close;
ms=4;                        % number of samples
a=[0 0;0 1;1 0;1 1];         % input vectors, one sample per row
y=[0,1,1,0];                 % target outputs
n=2;                         % number of inputs
m=3;                         % number of hidden-layer neurons
k=1;                         % number of output-layer neurons
w=rand(n,m);                 % initial input-to-hidden weights
v=rand(m,k);                 % initial hidden-to-output weights
yyuzhi=rand(1,m);            % hidden-layer thresholds
scyuzhi=rand(1,k);           % output-layer threshold
maxcount=10000;              % maximum number of training iterations
precision=0.0001;            % target error precision
speed=0.2;                   % learning rate
count=1;                     % iteration counter
while (count<=maxcount)
    cc=1;                    % cc indexes the current sample
    while (cc<=ms)
        % desired output of the output layer for sample cc
        for l=1:k
            o(l)=y(cc);
        end
        % input vector of sample cc
        for i=1:n
            x(i)=a(cc,i);
        end
        % hidden-layer input and output; b(j) is the hidden-layer
        % output and the transfer function is logsig
        for j=1:m
            s=0;
            for i=1:n
                s=s+w(i,j)*x(i);
            end
            s=s-yyuzhi(j);
            b(j)=1/(1+exp(-s));
        end
        % output-layer input and output; ll is the output-layer input,
        % c the output-layer output, transfer function logsig
        % (k is 1, so the loop over t runs only once)
        for t=1:k
            ll=0;
            for j=1:m
                ll=ll+v(j,t)*b(j);
            end
            ll=ll-scyuzhi(t);
        end
        c=1/(1+exp(-ll));    % logsig output; the original post hard-thresholded
                             % here (c=0 if ll<0, else c=1), but that makes the
                             % gradient term c*(1-c) below identically zero, so
                             % the logsig formula it also lists is used instead
        % squared error of sample cc
        errort=(o(l)-c)^2;
        errortt(cc)=errort;
        % generalized error of each output-layer unit
        scyiban=(o(l)-c)*c*(1-c);
        % generalized error of the hidden-layer units
        for j=1:m
            e(j)=scyiban*v(j)*b(j)*(1-b(j));
        end
        % update hidden-to-output weights and the output-layer threshold
        for j=1:m
            v(j)=v(j)+speed*scyiban*b(j);
        end
        scyuzhi=scyuzhi-speed*scyiban;
        % update input-to-hidden weights and hidden-layer thresholds
        for i=1:n
            for j=1:m
                w(i,j)=w(i,j)+speed*e(j)*x(i);
            end
        end
        for j=1:m
            yyuzhi(j)=yyuzhi(j)-speed*e(j);
        end
        cc=cc+1;
    end
    % error after one pass over all samples
    tmp=0;
    for i=1:ms
        tmp=tmp+errortt(i)*errortt(i);
    end
    tmp=tmp/ms;
    error(count)=sqrt(tmp);  % note: this array shadows MATLAB's error()
    % stop when the error drops below the target precision
    if (error(count)<precision)
        break;
    end
    count=count+1;
end
errortt
count
p=1:count-1;
plot(p,error(p))             % training-error curve
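Once the loop converges, the learned w, v, yyuzhi, and scyuzhi define the network. The following is a minimal verification sketch, not part of the original post: it reruns the forward pass on all four samples, reusing the variable names from the script above, and the 0.5 decision threshold is my assumption.

% Forward pass through the trained network for each sample (sketch);
% assumes w, v, yyuzhi, scyuzhi, a, y, ms, m are still in the workspace.
for cc=1:ms
    x=a(cc,:);                       % input vector of sample cc (1-by-n)
    for j=1:m
        s=x*w(:,j)-yyuzhi(j);        % hidden-layer input
        b(j)=1/(1+exp(-s));          % hidden-layer output (logsig)
    end
    ll=b*v-scyuzhi;                  % output-layer input
    c=1/(1+exp(-ll));                % output-layer output (logsig)
    fprintf('input [%d %d] -> %.4f, predicted %d (target %d)\n', ...
            a(cc,1), a(cc,2), c, c>=0.5, y(cc));
end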
The second method: the MATLAB Neural Network Toolbox
%% Solving XOR with the MATLAB Neural Network Toolbox
p=[0 0 1 1;0 1 0 1];         % p is the input
t=[0 1 1 0];                 % t is the ideal output
% the hidden layer has 2 neurons and the output layer has 1 neuron;
% the hidden-layer transfer function is logsig and the output-layer
% transfer function is purelin
net=newff(minmax(p),[2,1],{'logsig','purelin'},'trainlm');
net.trainParam.epochs=1000;  % maximum number of training epochs
net.trainParam.goal=0.0001;  % training goal (error precision) of 0.0001
net.trainParam.lr=0.1;       % learning rate of 0.1
net.trainParam.show=20;      % show training progress every 20 iterations
net=train(net,p,t);          % start training
out=sim(net,p)               % verify by simulation with the sim function
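newff and sim are the classic interface used in the original post; newff has since been removed from recent toolbox releases. Below is a sketch of an equivalent setup with feedforwardnet, assuming a newer Neural Network / Deep Learning Toolbox; it is not taken from the original post and has not been checked against every release.

% Equivalent setup with feedforwardnet (a sketch, my assumption for
% toolbox versions where newff is no longer available)
p=[0 0 1 1;0 1 0 1];
t=[0 1 1 0];
net=feedforwardnet(2,'trainlm');      % 2 hidden neurons, Levenberg-Marquardt
net.layers{1}.transferFcn='logsig';   % hidden-layer transfer function
net.layers{2}.transferFcn='purelin';  % output-layer transfer function
net.trainParam.epochs=1000;
net.trainParam.goal=0.0001;
net.trainParam.show=20;
net=train(net,p,t);
out=net(p)                            % the trained network object is callable

Rounding out (for example, round(out)) recovers the 0/1 labels, since the purelin output layer produces real values near the targets rather than exact binary outputs.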
"Neural network" BP algorithm solves XOR problem and MATLAB version