Perceptron Model Analysis
Perceptron Neuron Model
Single-layer Perceptron model:
- Model total input → model output
- Number of rows of the weight matrix = number of outputs
- Number of columns of the weight matrix = number of inputs
- Number of rows of the bias vector = number of outputs
- Given total input y, the output is 0 when y < 0 and 1 when y >= 0 (a 0/1 step activation)
- wij is the connection weight between neuron i (in the later layer) and neuron j (in the previous layer)
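The forward pass described above can be sketched in NumPy as follows; the function name and the AND-gate example weights are illustrative choices, not part of the original notes:

```python
import numpy as np

def perceptron_forward(W, b, x):
    """Forward pass of a single-layer perceptron.

    W: (m, n) weight matrix -- rows = number of outputs, columns = number of inputs
    b: (m,)  bias vector    -- one entry per output neuron
    x: (n,)  input vector
    Returns a 0/1 vector via the step activation.
    """
    y = W @ x + b                 # total input to each output neuron
    return (y >= 0).astype(int)   # step: 0 if y < 0, 1 if y >= 0

# Example: 2 inputs, 1 output, weights chosen to realize logical AND
W = np.array([[1.0, 1.0]])
b = np.array([-1.5])
out = perceptron_forward(W, b, np.array([1.0, 1.0]))  # -> [1]
```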
Perceptron models for classification
Learning/training algorithm (learning = changing the weights)
T: ideal (target) output
Training Steps
1) Based on the problem, determine the input vector X and the target vector T, and hence the dimensions and network structure parameters n, m;
2) Initialize the parameters;
3) Set the maximum number of training cycles;
4) Compute the network output;
5) Check whether the output vector y equals the target vector T; if it does, or if the maximum number of cycles has been reached, training ends; otherwise go to 6);
6) Learn (update the weights) and return to 4).
Discrete single-output perceptron
Training algorithm for the discrete single-output perceptron
1. Initialize the weight vector w and threshold b;
2. Repeat the following process until training is complete:
2.1 For each sample (x, y), repeat the following steps:
2.1.1 Input x;
2.1.2 Compute the output o = f(w x^T + b);
2.1.3 Update the weight vector w and threshold b according to:
w = w + (y - o) x
b = b + (y - o)
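The steps above can be sketched as follows; the epoch cap and the AND-gate sample set are assumptions added for the demonstration:

```python
import numpy as np

def train_single_output(samples, n_inputs, epochs=100):
    """Discrete single-output perceptron training.

    samples: list of (x, y) pairs with x an n-vector and y in {0, 1}.
    Update rule: w = w + (y - o) x, b = b + (y - o).
    """
    w = np.zeros(n_inputs)   # 1. initialize weight vector w and threshold b
    b = 0.0
    for _ in range(epochs):  # 2. repeat until training is complete
        errors = 0
        for x, y in samples:                  # 2.1 for each sample (x, y)
            o = 1 if w @ x + b >= 0 else 0    # 2.1.2 output o = f(w x^T + b)
            if o != y:
                w = w + (y - o) * x           # 2.1.3 update weights
                b = b + (y - o)               #        and threshold
                errors += 1
        if errors == 0:      # every sample classified correctly: done
            break
    return w, b

# Example: learn logical AND (linearly separable, so training converges)
data = [(np.array(p), t) for p, t in
        [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]]
w, b = train_single_output(data, n_inputs=2)
```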
Discrete multi-output perceptron
Training algorithm for the discrete multi-output perceptron
Sample set: {(x, y) | y is the desired output for input vector x}
Input vector: x = (x1, x2, ..., xn)
Ideal output vector: y = (y1, y2, ..., ym)
Activation function: f
Weight matrix: W = (wij)
Threshold vector: b = (b1, b2, ..., bm)
Actual output vector: o = (o1, o2, ..., om)
1. Initialize the weight matrix W and threshold vector b;
2. Repeat the following process until training is complete:
2.1 For each sample (x, y), repeat the following steps:
2.1.1 Input x;
2.1.2 Compute the output o = f(W x^T + b);
2.1.3 For i = 1 to m, perform:
bi = bi + (yi - oi)
For j = 1 to n:
wij = wij + (yi - oi) * xj
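A vectorized sketch of this algorithm is below; the `np.outer` update is equivalent to the per-component loops over i and j, and the two-output AND/OR example is an assumption added for illustration:

```python
import numpy as np

def train_multi_output(samples, n, m, epochs=100):
    """Discrete multi-output perceptron training.

    samples: list of (x, y) pairs, x an n-vector, y an m-vector of 0/1 values.
    The vectorized update is equivalent to the per-component rules
    wij += (yi - oi) * xj and bi += (yi - oi).
    """
    W = np.zeros((m, n))   # 1. initialize weight matrix W
    b = np.zeros(m)        #    and threshold vector b
    for _ in range(epochs):                    # 2. repeat until complete
        converged = True
        for x, y in samples:                   # 2.1 for each sample (x, y)
            o = (W @ x + b >= 0).astype(int)   # 2.1.2 o = f(W x^T + b)
            if not np.array_equal(o, y):
                converged = False
                W += np.outer(y - o, x)        # 2.1.3 weight updates
                b += y - o                     #        threshold updates
        if converged:
            break
    return W, b

# Example: learn two outputs at once -- AND and OR of two binary inputs
data = [(np.array([p, q]), np.array([p & q, p | q]))
        for p in (0, 1) for q in (0, 1)]
W, b = train_multi_output(data, n=2, m=2)
```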
Continuous multi-output perceptron training algorithm (***)
1. Initialize the weight matrix W and threshold vector b;
2. Initialize the precision control parameter ε, the learning rate α, and the precision control variable d = ε + 1;
3. While d ≥ ε do
3.1 d = 0;
3.2 For each sample (x, y) do
3.2.1 Input x;
3.2.2 Compute the output o = f(W x^T + b);
3.2.3 Update the weight matrix W and threshold vector b;
3.2.4 Accumulate the error:
For i = 1 to m do
d = d + (yi - oi)^2
Correction: w[i] = w[i] + α * (t[i] - y[i]) * p[i];
b[i] = b[i] + α * (t[i] - y[i]);
α is a decimal between 0 and 1; here α = 0.2. t is the target value.
To assess the training effect, define an error measure: E = Σ (y - t)^2
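The continuous algorithm can be sketched as follows. This sketch assumes a linear (identity) activation so that the delta-rule correction applies directly; the epoch cap and the small linear sample set are illustrative additions, and α = 0.2 matches the value used above:

```python
import numpy as np

def train_continuous(samples, n, m, eps=1e-4, alpha=0.2, max_epochs=1000):
    """Continuous multi-output perceptron training with precision control.

    Loops while the accumulated squared error d >= eps, applying the
    delta rule w[i] += alpha * (t[i] - y[i]) * x and accumulating the
    error E = sum((y - t)^2). Identity activation is assumed here.
    """
    W = np.zeros((m, n))             # 1. initialize W and b
    b = np.zeros(m)
    d = eps + 1                      # 2. precision variable d = eps + 1
    epoch = 0
    while d >= eps and epoch < max_epochs:   # 3. while d >= eps
        d = 0.0                              # 3.1
        for x, t in samples:                 # 3.2 for each sample (x, t)
            y = W @ x + b                    # 3.2.2 output (identity f)
            W += alpha * np.outer(t - y, x)  # 3.2.3 delta-rule corrections
            b += alpha * (t - y)
            d += np.sum((t - y) ** 2)        # 3.2.4 accumulate error
        epoch += 1
    return W, b

# Example: fit the linear target t = x1 + x2 from three samples
samples = [(np.array([1.0, 0.0]), np.array([1.0])),
           (np.array([0.0, 1.0]), np.array([1.0])),
           (np.array([1.0, 1.0]), np.array([2.0]))]
W, b = train_continuous(samples, n=2, m=1)
```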