A BP (back-propagation) neural network is a multi-layer feedforward neural network trained with the error back-propagation algorithm, and it is currently the most widely used type of neural network.
The training procedure of a BP (error back-propagation) neural network:
1. Initialize the weights and thresholds
2. Given P training samples Xp (p = 1, 2, ..., P) and the corresponding ideal outputs Dp (p = 1, 2, ..., P)
3. Forward propagation: compute the output of each layer of the network
4. Back-propagate the error
5. Adjust the weights and thresholds
6. Repeat steps 2-5 until all P samples have been trained once
7. Check whether the accuracy requirement is met; if so, stop training, otherwise go back to step 2
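The forward pass in step 3 typically uses the sigmoid as the activation function, and step 4 relies on a convenient property of it: the derivative can be written purely in terms of the sigmoid's own output. A minimal sketch of that pair (my own illustration, matching the defaults used in the code below):

```javascript
// Sigmoid activation and its derivative.
// Note: the derivative is expressed in terms of the sigmoid's OUTPUT y,
// i.e. f'(x) = y * (1 - y) where y = f(x).
const sigmoid = x => 1.0 / (1.0 + Math.exp(-x))
const dsigmoid = y => y * (1 - y)

console.log(sigmoid(0))     // 0.5
console.log(dsigmoid(0.5))  // 0.25 (the slope is steepest at the midpoint)
```

Writing the derivative in terms of the output means back-propagation never has to remember the pre-activation sums, only each layer's outputs.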
Following the steps above, the code can be written as:
```javascript
class BPNet {
  constructor(layernum, n, fn, fd, miu, iter, eps) {
    if (!(n instanceof Array)) { throw 'parameter error' }
    if (n.length !== layernum) { throw 'parameter error' }
    this.layernum = layernum
    this.n = n
    // activation function (defaults to the sigmoid)
    this.fn = fn || function (x) { return 1.0 / (1.0 + Math.exp(-x)) }
    // derivative of the activation, in terms of its output
    this.fd = fd || function (x) { return x * (1 - x) }
    this.w = new Array()     // weight matrices
    this.b = new Array()     // threshold (bias) vectors
    this.miu = miu || 0.5    // learning rate
    this.iter = iter || 500  // maximum number of iterations
    this.e = 0.0             // accumulated error
    this.eps = eps || 0.0001 // target accuracy
    // random initialization of weights and thresholds (step 1)
    for (let l = 1; l < this.layernum; l++) {
      let item = new Array()
      let bitem = new Array()
      for (let j = 0; j < n[l]; j++) {
        let temp = new Array()
        for (let i = 0; i < n[l - 1]; i++) {
          temp[i] = Math.random()
        }
        item.push(temp)
        bitem.push(Math.random())
      }
      this.w[l] = item
      this.b[l] = bitem
    }
  }
  // forward pass: compute the output of every layer (step 3)
  forward(x) {
    let y = new Array()
    y[0] = x
    for (let l = 1; l < this.layernum; l++) {
      y[l] = new Array()
      for (let j = 0; j < this.n[l]; j++) {
        let u = 0.0
        for (let i = 0; i < this.n[l - 1]; i++) {
          u = u + this.w[l][j][i] * y[l - 1][i]
        }
        u = u + this.b[l][j]
        y[l][j] = this.fn(u)
      }
    }
    return y
  }
  // back-propagate the error to get each layer's delta (step 4)
  calcDelta(d, y) {
    let delta = new Array()
    let last = new Array()
    // output layer
    for (let j = 0; j < this.n[this.layernum - 1]; j++) {
      last[j] = (d[j] - y[this.layernum - 1][j]) * this.fd(y[this.layernum - 1][j])
    }
    delta[this.layernum - 1] = last
    // hidden layers
    for (let l = this.layernum - 2; l > 0; l--) {
      delta[l] = new Array()
      for (let j = 0; j < this.n[l]; j++) {
        delta[l][j] = 0.0
        for (let i = 0; i < this.n[l + 1]; i++) {
          delta[l][j] += delta[l + 1][i] * this.w[l + 1][i][j]
        }
        delta[l][j] = this.fd(y[l][j]) * delta[l][j]
      }
    }
    return delta
  }
  // adjust weights and thresholds (step 5)
  update(y, delta) {
    for (let l = 1; l < this.layernum; l++) {
      for (let j = 0; j < this.n[l]; j++) {
        for (let i = 0; i < this.n[l - 1]; i++) {
          this.w[l][j][i] += this.miu * delta[l][j] * y[l - 1][i]
        }
        this.b[l][j] += this.miu * delta[l][j]
      }
    }
  }
  // train on all samples (steps 6-7)
  train(x, d) {
    for (let p = 0; p < this.iter; p++) {
      this.e = 0
      for (let i = 0; i < x.length; i++) {
        let y = this.forward(x[i])
        let delta = this.calcDelta(d[i], y)
        this.update(y, delta)
        // accumulate this sample's squared error
        let ep = 0.0
        let l1 = this.layernum - 1
        for (let l = 0; l < this.n[l1]; l++) {
          ep += (d[i][l] - y[l1][l]) * (d[i][l] - y[l1][l])
        }
        this.e += ep / 2.0
      }
      if (this.e < this.eps) { break }
    }
  }
}
```
How to use:
Using the BP neural network to implement XOR logic:
```javascript
let x = [[0,0], [0,1], [1,0], [1,1]]  // input samples
let d = [[0], [1], [1], [0]]          // ideal outputs
let bp = new BPNet(3, [2, 6, 1], undefined, undefined, 0.5, 5000, 0.0001)
bp.train(x, d)
let y = bp.forward([0, 1])
console.log(y[2][0])
let y2 = bp.forward([0, 0])
console.log(y2[2][0])
let y3 = bp.forward([1, 1])
console.log(y3[2][0])
let y4 = bp.forward([1, 0])
console.log(y4[2][0])
```
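To see the weight-update rule of steps 4-5 in isolation, here is a hedged single-neuron sketch (my own illustration, not part of the article's code) that learns the linearly separable AND function with the same delta rule the class applies at its output layer:

```javascript
// One sigmoid neuron trained with the same delta rule used by the class:
// delta = (d - y) * y * (1 - y), then w += miu * delta * input.
const sigmoid = x => 1.0 / (1.0 + Math.exp(-x))

let w = [Math.random(), Math.random()]  // two input weights
let b = Math.random()                   // threshold (bias)
const miu = 0.5                         // learning rate

const X = [[0, 0], [0, 1], [1, 0], [1, 1]]
const D = [0, 0, 0, 1]                  // AND truth table

const predict = x => sigmoid(w[0] * x[0] + w[1] * x[1] + b)

for (let epoch = 0; epoch < 5000; epoch++) {
  for (let p = 0; p < X.length; p++) {
    const y = predict(X[p])
    const delta = (D[p] - y) * y * (1 - y)  // output-layer delta
    w[0] += miu * delta * X[p][0]
    w[1] += miu * delta * X[p][1]
    b += miu * delta
  }
}

console.log(predict([1, 1]))  // close to 1
console.log(predict([0, 1]))  // close to 0
```

A single neuron suffices for AND; XOR is not linearly separable, which is exactly why the example above needs the hidden layer of six units.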
Results: the four printed outputs should approach 1, 0, 0, and 1 respectively, i.e. the XOR of each input pair.