A Python implementation of the perceptron, from Statistical Learning Methods

Source: Internet
Author: User


Reference: http://shpshao.blog.51cto.com/1931202/1119113

 

 

#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# untitled.py
#
# Copyright 2013 t-dofan <t-dofan@t-dofan-pc>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
# MA 02110-1301, USA.


class perceptron:
    # initialize
    def __init__(self, learnrate, w0, w1, b):
        self.learnrate = learnrate
        self.w0 = w0
        self.w1 = w1
        self.b = b

    # model: the functional margin y * (w . x + b)
    def model(self, x):
        result = x[2] * (self.w0 * x[0] + self.w1 * x[1] + self.b)
        return result

    # strategy: a point is misclassified when y * (w . x + b) <= 0
    def iserror(self, x):
        result = self.model(x)
        if result <= 0:
            return True
        else:
            return False

    # algorithm: stochastic gradient descent
    # update rule: wi = wi + learnrate * yi * xi
    def gradientdescent(self, x):
        self.w0 = self.w0 + self.learnrate * x[2] * x[0]  # is the x[2] factor needed here, per the update rule?
        self.w1 = self.w1 + self.learnrate * x[2] * x[1]
        self.b = self.b + self.learnrate * x[2]

    # training: repeat until one full pass finds no misclassified point
    def traindata(self, data):
        times = 0
        done = False
        while not done:
            for i in range(len(data)):
                if self.iserror(data[i]):
                    self.gradientdescent(data[i])
                    times += 1
                    done = False
                    break
                else:
                    done = True
        print(times)
        print("rightparams: w0: %d, w1: %d, b: %d" % (self.w0, self.w1, self.b))

    def testmodel(self, x):
        result = self.w0 * x[0] + self.w1 * x[1] + self.b
        if result > 0:
            return 1
        else:
            return -1


def main():
    # NOTE: the hyperparameters and most coordinate values were lost in the
    # original post; the values below are stand-ins taken from the book's
    # Example 2.1 (learnrate = 1, zero initial weights).
    p = perceptron(1, 0, 0, 0)
    data = [[3, 3, 1], [4, 3, 1], [1, 1, -1]]
    testdata = [[5, 2, 1]]  # the other test points in the original were garbled
    p.traindata(data)
    for i in testdata:
        print("(%d, %d): %d" % (i[0], i[1], p.testmodel(i)))
    return 0


if __name__ == '__main__':
    main()

 

One question remains: the book's update rule is wi = wi + η * yi * xi, so is it necessary to multiply by x[2] in the update step?

--------------------------------

Answer: yes, the factor is needed; in this code, x[2] is yi.
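A tiny runnable check makes this concrete (the starting weights and point below are made up for illustration): for a misclassified point, one update including the y factor raises the functional margin y * (w . x + b), which is exactly what the algorithm needs.

```python
def margin(w0, w1, b, x):
    # functional margin y * (w . x + b); x = [x0, x1, y]
    return x[2] * (w0 * x[0] + w1 * x[1] + b)

w0, w1, b, eta = 1.0, 1.0, 0.0, 1.0
x = [1, 1, -1]                    # a negative example, so x[2] = y = -1
before = margin(w0, w1, b, x)     # -2.0: misclassified (<= 0)

# update with the y factor: wi <- wi + eta * y * xi, b <- b + eta * y
w0 += eta * x[2] * x[0]
w1 += eta * x[2] * x[1]
b += eta * x[2]

after = margin(w0, w1, b, x)      # 1.0: the margin increased
print(before, after)              # -> -2.0 1.0
```

Without the x[2] factor, the update would push the weights toward a negative example instead of away from it, and the margin would shrink.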

 

 

 
