Referring to the beginning of the perceptron chapter in Li Hang's "Statistical Learning Methods", the algorithm did not look too complicated to implement, so here is a Python version of Example 2.1.
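For reference, here is the rule the code below implements, restated in the book's notation (a summary added here, not text from the original post). A training point (x_i, y_i) is misclassified when its functional margin is non-positive, and stochastic gradient descent with learning rate \eta then updates the parameters:

\[
y_i \,(w \cdot x_i + b) \le 0
\quad\Longrightarrow\quad
w \leftarrow w + \eta\, y_i x_i, \qquad
b \leftarrow b + \eta\, y_i
\]

In the dual form, w is expanded as \(\sum_j \alpha_j y_j x_j\), so the misclassification test for point i becomes \(y_i \bigl(\sum_j \alpha_j y_j (x_j \cdot x_i) + b\bigr) \le 0\) with updates \(\alpha_i \leftarrow \alpha_i + \eta\), \(b \leftarrow b + \eta\, y_i\); the inner products \(x_j \cdot x_i\) only need to be computed once, as the Gram matrix.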
1 """2 the original form of perceptual machine learning algorithm3 Example 2.14 """5 ImportNumPy as NP6 7 classPerceptron:8 def __init__(self,w,b,alpha):9SELF.W =WTenSELF.B =b OneSelf.alpha =Alpha A - defloss (self,x,y): - returnNp.sum (y* (Np.dot (x, SELF.W) +self.b)) the - defSGD (self,x,y):#Stochastic gradient descent function -SELF.W + = Self.alpha * y *x -self.b + = Self.alpha *y + - defTrain (self,x,y): + while(True): AM = Len (X)#number of error classifications at forIinchRange (len (X)): - ifSelf.loss (X[i],y[i]) <=0: - SELF.SGD (X[i],y[i]) - Print "W:", SELF.W,"B:", self.b - Else: -M-= 1 in if notM: - Print "Final Optimal:","W:", SELF.W,"B:", self.b to Break + - classperceptron_dual: the def __init__(Self,alpha,b,ita): *Self.alpha =Alpha $SELF.B =bPanax NotoginsengSelf.ita =Ita - the defgram (self,x): + returnNp.dot (x,x.t) A the defTrain (self,x,y): +g =Self.gram (X) - $M = Len (X)#number of error classifications $ while(True): -M = Len (X)#number of error classifications - forJinchRange (len (X)): the ifY[J] * (Np.sum (Self.alpha * Y * g[j]) + self.b) <=0: -SELF.ALPHA[J] + =Self.itaWuyiself.b + = Self.ita *Y[j] the Print "A:", Self.alpha,"B:", self.b - Else: WuM-= 1 - ifM = =0: About Print "Final Optimal:","A:", Self.alpha,"B:", self.b $ Break - - if __name__=="__main__": - AX = Np.array ([[3,3],[4,3],[1,1]]) + theY = Np.array ([1,1,-1]) -Perc_d = Perceptron_dual (Np.zeros (y.shape), 0,1) $Perc_d.train (X, Y)