Artificial Neural Network (ANN) Notes -- 2.1.3 The discrete multi-output perceptron training algorithm makes a separate threshold judgment for each output, which is why we call it a discrete multi-output perceptron.
Now replace that step with the update formula Wij = Wij + α(Yj - Oj)Xi.
The influence of the difference between the desired output Yj and the actual output Oj on the weight Wij is expressed through the term α(Yj - Oj)Xi.
The advantage of this is that it not only makes the control structure of the algorithm easier to understand, but also makes the algorithm more adaptable.
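As a minimal sketch of this update rule for a single sample, assuming NumPy arrays for X (length n), Y, and O (length m); the concrete values below are purely illustrative:

```python
import numpy as np

n, m, alpha = 3, 2, 0.5              # illustrative sizes and learning rate
X = np.array([1.0, 1.0, 0.0])        # input vector (X1..Xn)
Y = np.array([1.0, 0.0])             # desired output (Y1..Ym)
O = np.array([0.0, 1.0])             # actual output (O1..Om)
W = np.zeros((n, m))                 # weight matrix

# Wij = Wij + alpha * (Yj - Oj) * Xi, for all i, j at once via an outer product
W += alpha * np.outer(X, Y - O)
```

The outer product applies the correction to every weight in one step, which is equivalent to the double loop over i and j.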
The algorithm flow is as follows:
1. Initialize the weight matrix W with suitably small pseudorandom numbers;
2. Initialize the precision control parameter ε, the learning rate α, and the precision control variable d = ε + 1;
3. while d ≥ ε do
3.1 d = 0;
3.2 for each sample (X, Y) do
3.2.1 input X;
3.2.2 compute O = f(XW);
3.2.3 modify the weight matrix W:
for i = 1 to n, j = 1 to m do
Wij = Wij + α(Yj - Oj)Xi;
3.2.4 accumulate the error:
for j = 1 to m do
d = d + (Yj - Oj)^2
This already gives a good classifier for linearly separable problems.
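The loop above can be sketched as a short runnable program, assuming NumPy, a hard-threshold activation for f, and an epoch cap to guard against non-separable data; the function name, the AND/OR training set, and the leading bias input are all illustrative choices, not part of the original notes:

```python
import numpy as np

def step(x):
    # Discrete (hard-threshold) activation: 1 if the net input >= 0, else 0.
    return (x >= 0).astype(float)

def train_perceptron(samples, n, m, alpha=0.1, eps=1e-6, max_epochs=1000):
    """Discrete multi-output perceptron training.

    samples: list of (X, Y) pairs, X of length n, Y of length m.
    Returns the learned n-by-m weight matrix W.
    """
    rng = np.random.default_rng(0)
    # Step 1: initialize W with small pseudorandom numbers.
    W = rng.uniform(-0.05, 0.05, size=(n, m))
    # Step 2: precision control variable d starts above the threshold eps.
    d = eps + 1.0
    epoch = 0
    while d >= eps and epoch < max_epochs:   # step 3
        d = 0.0                              # step 3.1
        for X, Y in samples:                 # step 3.2
            X = np.asarray(X, dtype=float)
            Y = np.asarray(Y, dtype=float)
            O = step(X @ W)                  # step 3.2.2: O = f(XW)
            W += alpha * np.outer(X, Y - O)  # step 3.2.3: Wij += alpha*(Yj-Oj)*Xi
            d += np.sum((Y - O) ** 2)        # step 3.2.4: accumulate the error
        epoch += 1
    return W

# Illustrative data: each input carries a leading bias of 1;
# the two targets are (x1 AND x2, x1 OR x2), which are linearly separable.
samples = [([1, 0, 0], [0, 0]),
           ([1, 0, 1], [0, 1]),
           ([1, 1, 0], [0, 1]),
           ([1, 1, 1], [1, 1])]
W = train_perceptron(samples, n=3, m=2)
```

The outer loop stops once an entire pass over the samples accumulates an error below ε, i.e. every sample is already classified correctly.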