Perceptron Training Algorithm: Discrete Single Output

The perceptron is an early neural network model, proposed by the American scholar F. Rosenblatt in 1957. Because it was the first model to introduce the concept of learning, simulating the learning ability of the human brain in a mathematical model built on symbolic processing, it attracted wide attention.

The simple perceptron keeps the structure of the M-P (McCulloch-Pitts) model, but by using supervised learning it can gradually improve its ability to separate patterns, and in this way it achieves learning.


The perceptron processing unit forms a weighted sum of its n inputs and applies a threshold to it, namely:

o = f( w1*x1 + w2*x2 + ... + wn*xn )

where f is the threshold function: f(s) = 1 if s > 0, and f(s) = 0 otherwise.
The perceptron is similar in form to the M-P model; the difference between them lies in the connection weights between neurons. In the perceptron the connection weights are defined to be adjustable, and this is what gives the perceptron its learning ability.
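As a minimal sketch of this processing unit (the function names stepFunction and perceptronOutput are illustrative, not from the original text), the weighted sum and threshold can be written as:

#include <vector>

// Threshold (step) activation: 1 if the weighted sum is positive, otherwise 0.
int stepFunction(double s) {
    return s > 0 ? 1 : 0;
}

// Output of a single perceptron unit for n inputs: o = f(w1*x1 + ... + wn*xn)
int perceptronOutput(const std::vector<double>& weights,
                     const std::vector<double>& inputs) {
    double sum = 0.0;
    for (size_t i = 0; i < weights.size(); i++) {
        sum += weights[i] * inputs[i];
    }
    return stepFunction(sum);
}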

Discrete single-output perceptron training algorithm:
1. Initialize the weight vector w;
2. Repeat the following procedure until training is complete:
For each sample x in the sample set X:
(1) Input x;
(2) Compute o = f(xw);
(3) If the output is incorrect:
when o = 0, set w = w + x;
when o = 1, set w = w - x.
(Stated in terms of individual neurons, the rule is:

If the output of the i-th neuron is correct, that is ai = ti, then the weights wij and the bias bi connected to that neuron remain unchanged;
If the output of the i-th neuron is 0 but the expected output is 1, that is ai = 0 and ti = 1, then the correction is: the new weight wij equals the old weight wij plus the input vector pj; likewise, the new bias bi equals the old bias bi plus 1;
If the output of the i-th neuron is 1 but the expected output is 0, that is ai = 1 and ti = 0, then the correction is: the new weight wij equals the old weight wij minus the input vector pj; likewise, the new bias bi equals the old bias bi minus 1.
The essence of the perceptron learning rule is that the weight change equals the input vector, taken with a positive or negative sign.

Only incorrect outputs are corrected; correct outputs are left unchanged.

)
In the above algorithm, when o = 0 the weight vector w is modified as w + x. This is because the desired output is 1 but the actual output is 0, so the weights should be increased, raising the contribution this sample makes toward the correct side of the decision boundary. When o = 1 the situation is the opposite and w - x is used. A sketch of this update loop is given below.
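Below is a minimal sketch of this training loop, assuming {0, 1} targets and augmented input vectors with a constant 1 appended (so no separate bias term is needed); the names Sample and trainPerceptron are illustrative, not from the original text:

#include <vector>

// One sample: augmented input vector x (last component fixed at 1) and target t in {0, 1}.
struct Sample {
    std::vector<double> x;
    int t;
};

// Threshold output o = f(xw): 1 if the weighted sum is positive, otherwise 0.
static int output(const std::vector<double>& w, const std::vector<double>& x) {
    double s = 0.0;
    for (size_t i = 0; i < w.size(); i++) s += w[i] * x[i];
    return s > 0 ? 1 : 0;
}

// Discrete single-output perceptron training: repeat passes over the sample set,
// applying w = w + x when o = 0 (expected 1) and w = w - x when o = 1 (expected 0),
// until a full pass makes no correction or maxPasses is reached.
std::vector<double> trainPerceptron(const std::vector<Sample>& samples,
                                    size_t dim, int maxPasses = 100) {
    std::vector<double> w(dim, 0.0);           // initialize the weight vector
    for (int pass = 0; pass < maxPasses; pass++) {
        bool changed = false;
        for (const Sample& s : samples) {
            int o = output(w, s.x);
            if (o == s.t) continue;            // correct output: no correction
            changed = true;
            double sign = (o == 0) ? 1.0 : -1.0;
            for (size_t i = 0; i < dim; i++) w[i] += sign * s.x[i];
        }
        if (!changed) break;                   // all samples classified correctly
    }
    return w;
}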


For example:

Computing the weights for two-class classification:

Class 1 (ω1): wk^T x > 0

Class 2 (ω2): wk^T x < 0

1. If a sample is misclassified, correct wk:
if wk^T x ≤ 0 and x ∈ ω1 (class 1), then wk+1 = wk + ρk x
if wk^T x ≥ 0 and x ∈ ω2 (class 2), then wk+1 = wk - ρk x
2. If a sample is correctly classified, wk is not corrected:
if wk^T x > 0 and x ∈ ω1,
or wk^T x < 0 and x ∈ ω2,
then wk+1 = wk


Criteria for choosing ρk:

① Fixed-increment rule: ρk is a fixed positive constant (often taken as 1).

② Absolute-correction rule: ρk is chosen large enough that the corrected weight vector classifies the sample correctly, i.e. ρk > |wk^T x| / (x^T x).

③ Partial-correction rule: ρk = λ |wk^T x| / (x^T x), with 0 < λ ≤ 2.
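
A small sketch of how these three choices of ρk could be computed for a misclassified sample x and current weight vector wk (the helper names dot, rhoFixed, rhoAbsolute, and rhoPartial are illustrative, not from the original text):

#include <vector>
#include <cmath>

// Inner product w^T x.
static double dot(const std::vector<double>& w, const std::vector<double>& x) {
    double s = 0.0;
    for (size_t i = 0; i < w.size(); i++) s += w[i] * x[i];
    return s;
}

// ① Fixed-increment rule: rho_k is a constant, e.g. 1.
double rhoFixed() { return 1.0; }

// ② Absolute-correction rule: rho_k > |wk^T x| / (x^T x); take the bound plus a small
//    margin so the corrected weight vector puts x on the correct side.
double rhoAbsolute(const std::vector<double>& w, const std::vector<double>& x) {
    return std::fabs(dot(w, x)) / dot(x, x) + 1e-6;
}

// ③ Partial-correction rule: rho_k = lambda * |wk^T x| / (x^T x), with 0 < lambda <= 2.
double rhoPartial(const std::vector<double>& w, const std::vector<double>& x, double lambda) {
    return lambda * std::fabs(dot(w, x)) / dot(x, x);
}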

Algorithm example:

Using the perceptron fixed-increment (W-H) method to find the discriminant function:

#include <iostream>
#include <vector>
using namespace std;

struct Point {            // raw 2-D sample point
    int x;
    int y;
};

struct MyPoint {          // augmented sample vector
    int x1;
    int x2;
    int increx;           // fixed at 1: the constant (augmentation) component
};

struct Point2Type {
    MyPoint myPoint;
    int type;             // class label: 1 means class 1 (w^T x > 0), 2 means class 2 (w^T x < 0)
};

struct Weight {
    double w1;            // weight for x1
    double w2;            // weight for x2
    double w3;            // weight for the constant component
};

class MyPerceptron {
protected:
    vector<Point2Type> pointsAType;   // samples together with their class labels

public:
    MyPerceptron(const vector<Point>* pointst, const vector<int>& classest) {
        for (size_t i = 0; i < pointst->size(); i++) {
            MyPoint mp = { pointst->at(i).x, pointst->at(i).y, 1 };
            Point2Type p2t = { mp, classest.at(i) };
            pointsAType.push_back(p2t);
        }
    }

protected:
    // Inner product w^T x of the augmented weight and sample vectors.
    double getProduct(const Weight& w, const MyPoint& x) {
        return w.w1 * x.x1 + w.w2 * x.x2 + w.w3 * x.increx;
    }

    // Misclassified class-1 sample: w(k+1) = wk + rho * x.
    Weight fixedResultAdd(Weight w, const MyPoint& x, double ro) {
        w.w1 += ro * x.x1;
        w.w2 += ro * x.x2;
        w.w3 += ro * x.increx;
        return w;
    }

    // Misclassified class-2 sample: w(k+1) = wk - rho * x.
    Weight fixedResultSub(Weight w, const MyPoint& x, double ro) {
        w.w1 -= ro * x.x1;
        w.w2 -= ro * x.x2;
        w.w3 -= ro * x.increx;
        return w;
    }

    Weight getResultW(Weight w, int times, double ro) {
        if (times >= 100) {               // maximum number of passes over the sample set
            return w;
        }
        int flag = 0;                     // set to 1 if any weight is changed in this pass
        double product = 0;
        for (size_t i = 0; i < pointsAType.size(); i++) {
            cout << pointsAType.at(i).type;
            product = getProduct(w, pointsAType.at(i).myPoint);
            if (product <= 0 && pointsAType.at(i).type == 1) {
                flag = 1;
                w = fixedResultAdd(w, pointsAType.at(i).myPoint, ro);
            }
            if (product >= 0 && pointsAType.at(i).type == 2) {
                flag = 1;
                w = fixedResultSub(w, pointsAType.at(i).myPoint, ro);
            }
        }
        if (flag == 1) {                  // at least one correction: make another pass
            return getResultW(w, times + 1, ro);
        }
        return w;                         // no correction: all samples classified correctly
    }

public:
    Weight getFinalW() {
        Weight w = { 1, 1, 1 };           // initial weight vector
        return getResultW(w, 1, 1);       // rho = 1: fixed-increment rule
    }
};
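
A usage sketch for this class might look as follows; the sample points and the main function are assumptions for illustration, not part of the original example:

int main() {
    // Two linearly separable classes in the plane:
    // class 1 near the origin, class 2 further out along both axes.
    std::vector<Point> points = { {0, 0}, {1, 0}, {0, 1}, {3, 3}, {4, 2}, {2, 4} };
    std::vector<int> classes  = {  1,      1,      1,      2,      2,      2    };

    MyPerceptron perceptron(&points, classes);
    Weight w = perceptron.getFinalW();

    std::cout << "\nDiscriminant function: d(x) = "
              << w.w1 << "*x1 + " << w.w2 << "*x2 + " << w.w3 << std::endl;
    return 0;
}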
