Reprint--About BP neural network

Source: Internet
Author: User

I. The concept of the BP neural network

A BP neural network is a multilayer feedforward neural network whose main characteristic is that the signal propagates forward while the error propagates backward. Concretely, consider a network with a single hidden layer:

(figure: three-layer BP neural network model)

The BP algorithm runs in two stages. In the first stage the signal propagates forward from the input layer, through the hidden layer, to the output layer. In the second stage the error propagates backward from the output layer, through the hidden layer, to the input layer: first the weights and biases from the hidden layer to the output layer are adjusted, then the weights and biases from the input layer to the hidden layer.

II. The flow of the BP neural network

Knowing these characteristics, we construct the whole network around the forward propagation of the signal and the backward propagation of the error.

1. Network initialization. Suppose the input layer has $n$ nodes, the hidden layer has $l$ nodes, and the output layer has $m$ nodes. Denote by $w_{ij}$ the weights from the input layer to the hidden layer and by $w_{jk}$ the weights from the hidden layer to the output layer; denote by $a_j$ the biases of the hidden layer and by $b_k$ the biases of the output layer. The learning rate is $\eta$, and the activation function is $g(x)$, taken here to be the sigmoid function:

$$g(x) = \frac{1}{1 + e^{-x}}$$

2. Output of the hidden layer. For the three-layer BP network above, the output of the hidden layer is

$$H_j = g\left(\sum_{i=1}^{n} w_{ij} x_i + a_j\right), \quad j = 1, 2, \ldots, l.$$

3. Output of the output layer. The output layer is linear (matching the MATLAB code below):

$$O_k = \sum_{j=1}^{l} H_j w_{jk} + b_k, \quad k = 1, 2, \ldots, m.$$

4. Error calculation. We take the error function

$$E = \frac{1}{2} \sum_{k=1}^{m} (Y_k - O_k)^2,$$

where $Y_k$ is the desired output. Writing $e_k = Y_k - O_k$, the error can be expressed as $E = \frac{1}{2} \sum_{k=1}^{m} e_k^2$.

5. Updating the weights. This is the error back-propagation step. Our goal is to drive the error function to a minimum, so we use gradient descent: each weight moves against the gradient of $E$, that is, $w \leftarrow w - \eta \, \partial E / \partial w$.

    • Weight update for the hidden layer to the output layer. Since $\partial E / \partial w_{jk} = -e_k H_j$, the update formula is

$$w_{jk} \leftarrow w_{jk} + \eta \, H_j e_k.$$

    • Weight update for the input layer to the hidden layer. Applying the chain rule through the sigmoid, whose derivative satisfies $g'(x) = g(x)\,(1 - g(x))$, gives $\partial E / \partial w_{ij} = -H_j (1 - H_j)\, x_i \sum_{k=1}^{m} w_{jk} e_k$, so the update formula is

$$w_{ij} \leftarrow w_{ij} + \eta \, H_j (1 - H_j)\, x_i \sum_{k=1}^{m} w_{jk} e_k.$$

6. Updating the biases. The same gradients, without the input factors, give the bias updates.

    • Bias update for the hidden layer to the output layer:

$$b_k \leftarrow b_k + \eta \, e_k.$$

    • Bias update for the input layer to the hidden layer:

$$a_j \leftarrow a_j + \eta \, H_j (1 - H_j) \sum_{k=1}^{m} w_{jk} e_k.$$

7. Deciding whether the iteration should end. There are many ways to decide whether the algorithm has converged; common ones are running a specified number of iterations, or checking whether the difference between two successive errors falls below a specified threshold.

III. Experiment simulation

In this experiment we use a BP neural network to handle a four-class classification problem. The final classification result is:

(figure: final classification result)
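Before the full listing, one sanity check on the mathematics. The step 5 derivation rests on the sigmoid derivative identity $g'(x) = g(x)(1 - g(x))$; the following minimal MATLAB sketch (standalone and illustrative, not part of the original program) verifies it against a central finite difference:

% numerical check of g'(x) = g(x).*(1-g(x)) for the sigmoid
x  = linspace(-5, 5, 101);                % sample points (illustrative)
gx = 1./(1 + exp(-x));                    % sigmoid values
analytic = gx.*(1 - gx);                  % claimed derivative
h  = 1e-6;                                % finite-difference step
numeric = (1./(1 + exp(-(x + h))) - 1./(1 + exp(-(x - h))))/(2*h);
max(abs(analytic - numeric))              % negligible (~1e-10): the identity holds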
MATLAB code

Main program:

%% BP main function
% clear the workspace
clear all;
clc;
% import the data
load data;
% generate a random ordering of 1 to 2000
k = rand(1,2000);
[m,n] = sort(k);
% input/output data
input = data(:,2:25);
output1 = data(:,1);
% convert the output from 1-D class labels to 4-D one-hot vectors
for i = 1:2000
    switch output1(i)
        case 1
            output(i,:) = [1 0 0 0];
        case 2
            output(i,:) = [0 1 0 0];
        case 3
            output(i,:) = [0 0 1 0];
        case 4
            output(i,:) = [0 0 0 1];
    end
end
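% (Equivalent one-hot construction without the loop, as a sketch; ind2vec
%  is from the Neural Network Toolbox, so this assumes that toolbox:
%  output = full(ind2vec(output1'))';)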
% randomly take 1600 samples for training and 400 samples for testing
traincharacter = input(n(1:1600),:);
trainoutput = output(n(1:1600),:);
testcharacter = input(n(1601:2000),:);
testoutput = output(n(1601:2000),:);
% normalize the training features
[traininput,inputps] = mapminmax(traincharacter');
%% parameter initialization
inputnum = 24;    % number of input-layer nodes
hiddennum = 50;   % number of hidden-layer nodes
outputnum = 4;    % number of output-layer nodes
% initialize the weights and biases
w1 = rands(inputnum,hiddennum);
b1 = rands(hiddennum,1);
w2 = rands(hiddennum,outputnum);
b2 = rands(outputnum,1);
% learning rate
yita = 0.1;
%% network training
for r = 1:30
    E(r) = 0;   % accumulated error for this epoch
    for m = 1:1600
        % forward propagation of the signal
        x = traininput(:,m);
        % output of the hidden layer
        for j = 1:hiddennum
            hidden(j,:) = w1(:,j)'*x + b1(j,:);
            hiddenoutput(j,:) = g(hidden(j,:));
        end
        % output of the output layer (linear)
        outputoutput = w2'*hiddenoutput + b2;
        % error calculation
        e = trainoutput(m,:)' - outputoutput;
        E(r) = E(r) + sum(abs(e));
        % adjust the weights and biases
        % weight and bias adjustment for the hidden layer to the output layer
        dw2 = hiddenoutput*e';
        db2 = e;
        % weight and bias adjustment for the input layer to the hidden layer
        for j = 1:hiddennum
            partone(j) = hiddenoutput(j)*(1-hiddenoutput(j));   % H_j(1-H_j)
            parttwo(j) = w2(j,:)*e;                             % sum_k w_jk * e_k
        end
        for i = 1:inputnum
            for j = 1:hiddennum
                dw1(i,j) = partone(j)*x(i,:)*parttwo(j);
                db1(j,:) = partone(j)*parttwo(j);
            end
        end
        w1 = w1 + yita*dw1;
        w2 = w2 + yita*dw2;
        b1 = b1 + yita*db1;
        b2 = b2 + yita*db2;
    end
end
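% (Step 7 notes: this program uses a fixed 30 epochs as its stopping rule.
%  An error-threshold alternative, as a sketch, would place
%      if r > 1 && abs(E(r) - E(r-1)) < 1e-3, break; end
%  just before the closing 'end' of the epoch loop above; the tolerance
%  1e-3 is illustrative, not from the original.)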
%% classification of the speech feature signals
testinput = mapminmax('apply',testcharacter',inputps);
for m = 1:400
    for j = 1:hiddennum
        hiddentest(j,:) = w1(:,j)'*testinput(:,m) + b1(j,:);
        hiddentestoutput(j,:) = g(hiddentest(j,:));
    end
    outputoftest(:,m) = w2'*hiddentestoutput + b2;
end
%% result analysis
% decide which class each sample belongs to from the network output
for m = 1:400
    output_fore(m) = find(outputoftest(:,m) == max(outputoftest(:,m)));
end
% BP network prediction error
error = output_fore - output1(n(1601:2000))';
k = zeros(1,4);
% count which class each misclassified sample belongs to
for i = 1:400
    if error(i) ~= 0
        [b,c] = max(testoutput(i,:));
        switch c
            case 1
                k(1) = k(1) + 1;
            case 2
                k(2) = k(2) + 1;
            case 3
                k(3) = k(3) + 1;
            case 4
                k(4) = k(4) + 1;
        end
    end
end
% count the total number of individuals in each class
kk = zeros(1,4);
for i = 1:400
    [b,c] = max(testoutput(i,:));
    switch c
        case 1
            kk(1) = kk(1) + 1;
        case 2
            kk(2) = kk(2) + 1;
        case 3
            kk(3) = kk(3) + 1;
        case 4
            kk(4) = kk(4) + 1;
    end
end
% per-class correct classification rate
rightridio = (kk-k)./kk
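The training loop stores the accumulated absolute error of each epoch in E but never displays it. A minimal sketch for visualizing convergence, assuming the main program above has just been run:

% plot the accumulated absolute error over the 30 training epochs
figure;
plot(1:30, E);
xlabel('training epoch');
ylabel('accumulated absolute error');
title('BP network training error');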

Activation function:

%% activation function (sigmoid)
function [y] = g(x)
    y = 1./(1 + exp(-x));
end
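For reference, the per-node loops in the training step can be collapsed into matrix operations that implement the step 2 to step 6 formulas directly. This vectorized rewrite is a sketch, not part of the original program; it reuses the main program's variable names (x, w1, b1, w2, b2, yita, trainoutput, m):

% one vectorized training step (same mathematics as the loops above)
hiddenoutput = g(w1'*x + b1);            % step 2: H = g(W1'x + a), 50x1
outputoutput = w2'*hiddenoutput + b2;    % step 3: O = W2'H + b, 4x1 (linear output)
e = trainoutput(m,:)' - outputoutput;    % step 4: e = Y - O
delta1 = hiddenoutput.*(1 - hiddenoutput).*(w2*e);  % H.*(1-H).*(W2*e), 50x1
w2 = w2 + yita*(hiddenoutput*e');        % step 5: hidden-to-output weights, 50x4
b2 = b2 + yita*e;                        % step 6: output biases
w1 = w1 + yita*(x*delta1');              % step 5: input-to-hidden weights, 24x50
b1 = b1 + yita*delta1;                   % step 6: hidden biases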
