Recently I have been studying artificial neural networks (ANNs); these are some notes to organize my thoughts.
The discrete single-output perceptron algorithm, in the spirit of the legendary M-P (McCulloch-Pitts) model.
Two-valued network: both the independent variables and the function values take only 0 and 1; that is, every component of the vectors below is either 0 or 1.
Weight vector: w = (w1, w2, w3, ..., wn)
Input vector: x = (x1, x2, x3, ..., xn)
Training sample set:
{(x, y) | y is the desired output for input vector x}
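As a concrete illustration (my own example, not from the notes), the logical AND of two binary inputs gives a training set in exactly this form, and the quantity the perceptron thresholds is the dot product x·w:

```python
# Hypothetical example: training set for logical AND on 0/1 inputs.
# Each pair is (input vector x, desired output y), as in {(x, y) | ...}.
samples = [
    ((0, 0), 0),
    ((0, 1), 0),
    ((1, 0), 0),
    ((1, 1), 1),
]

# The net input is the dot product of an input vector with the weights:
w = (2, 1)                                         # an arbitrary weight vector
dot = sum(xi * wi for xi, wi in zip((1, 1), w))    # 1*2 + 1*1 = 3
```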
The training process is relatively simple, as follows:
1. Initialize the weight vector w.
2. Repeat the following until training is complete:
2.1 For each sample (x, y), repeat the following:
2.1.1 Input x.
2.1.2 Compute o = F(x·w).
2.1.3 If the output is incorrect, adjust the weights:
when o = 1 (but y = 0), w = w - x;
when o = 0 (but y = 1), w = w + x.
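The steps above can be sketched in Python. Two assumptions not stated in the notes: F is a step function that outputs 1 when the net input is positive and 0 otherwise, and each input vector is augmented with a constant 1 so a bias weight can be learned; the stopping criterion and epoch limit are also my own choices.

```python
def F(s):
    """Step activation (assumed): 1 if the net input is positive, else 0."""
    return 1 if s > 0 else 0

def train(samples, epochs=20):
    n = len(samples[0][0]) + 1           # +1 for the assumed bias component
    w = [0] * n                          # step 1: initialize the weight vector
    for _ in range(epochs):              # step 2: repeat until training completes
        errors = 0
        for x, y in samples:             # step 2.1: for each sample (x, y)
            xa = list(x) + [1]           # step 2.1.1: input x (augmented with bias)
            o = F(sum(xi * wi for xi, wi in zip(xa, w)))   # step 2.1.2: o = F(x.w)
            if o != y:                   # step 2.1.3: output incorrect, adjust w
                errors += 1
                if o == 1:               # o = 1 but y = 0: w = w - x
                    w = [wi - xi for wi, xi in zip(w, xa)]
                else:                    # o = 0 but y = 1: w = w + x
                    w = [wi + xi for wi, xi in zip(w, xa)]
        if errors == 0:                  # every sample classified correctly
            break
    return w

# Learn logical AND on two-valued (0/1) inputs.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train(samples)
```

Because AND is linearly separable, this rule converges; on inputs it cannot separate (such as XOR) the loop would simply exhaust its epoch budget without the error count reaching zero.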
The way the weight vector is adjusted is interesting. It is hard to see at first why it works, but if you only need an approximate solution it becomes easier to accept: the weights simply keep changing, step by step, approaching a workable value. Since the initial weights are random, the adjustment process is also somewhat random.
I have also been thinking about support vector machines (SVMs), so adjusting weights this way feels a bit tedious by comparison.
An SVM finds a global optimum, while an ANN typically settles into a local optimum, which is why the weight-adjustment process of an ANN is simpler.