A basic single-hidden-layer neural network model implemented in Python

Source: Internet
Author: User
Tags: random seed

At a friend's request I wrote a Python implementation of a single-hidden-layer BP ANN model. It has been a long time since I blogged, so I am posting it here. The code is fairly tidy and gives a fairly pure account of the basic principles of an ANN; students just beginning machine learning may find it a useful reference.


Some of the more important parameters in the model:

1. Learning Rate

The learning rate is an important factor influencing the convergence of the model. In general it should be adjusted flexibly for the specific scenario: too high a learning rate will make the function diverge quickly.
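This effect is easy to see outside the network, too. Below is a minimal illustration (my own sketch, not part of the article's code): gradient descent on f(x) = x², whose gradient is 2x. A small rate converges toward the minimum; a rate above 1 overshoots and diverges.

```python
# Gradient descent on f(x) = x^2 (gradient 2x), starting from x = 1.0.
def descend(lr, steps=20, x=1.0):
    for _ in range(steps):
        x = x - lr * 2 * x  # one gradient step
    return abs(x)

small = descend(0.1)  # shrinks toward the minimum at 0
large = descend(1.1)  # each step multiplies x by -1.2, so |x| grows
```

With lr = 0.1 each step multiplies x by 0.8, so the iterate decays; with lr = 1.1 the factor is -1.2 and the iterate blows up.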

2. Number of hidden elements

In general, increasing the number of neurons in the hidden layer is more effective than directly adding hidden layers, which is a characteristic of two-layer neural networks. For problems whose complexity is not too high, a single hidden layer performs better than multiple hidden layers.
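One way to see why widening is the natural knob here (my own arithmetic, not from the article): in a two-layer network with i inputs, h hidden units, and o outputs, the parameter count (connection weights plus thresholds) is i·h + h·o + h + o, which grows linearly as h increases.

```python
# Parameter count of a single-hidden-layer network:
# input-to-hidden weights + hidden-to-output weights + thresholds.
def param_count(i, h, o):
    return i * h + h * o + h + o

# Capacity grows linearly as the hidden layer widens (for the 2-3-2
# network used later in this article, then doubled twice).
counts = [param_count(2, h, 2) for h in (3, 6, 12)]
```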

3. Number of random seed bits

This parameter is added to the code to control the precision used when initializing the connection weights and thresholds. Because a neural network's initial weights and thresholds are randomly generated, the random precision has a certain effect on the results. When the numbers of input units and hidden units are large, adjusting the random precision can help reduce the error.
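The idea behind the parameter can be sketched as follows (a standalone sketch of the initialization scheme used in the code below): draw an integer in [-10^n, 10^n], scale it into [-1, 1], and round to n decimal places, so n controls the precision of the random initial value.

```python
import random

# Draw one initial weight in [-1, 1], rounded to `random_long` decimal places.
def random_weight(random_long):
    scale = 10 ** random_long
    return round(random.randint(-scale, scale) / float(scale), random_long)

w = random_weight(3)  # e.g. a value like -0.412
```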


The code uses a very simple training example; I made up the following rule:

Given two input variables: when variable a equals variable b, return class 1, encoded as the vector [1, 0]; when variable a does not equal variable b, return class 2, encoded as [0, 1].
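The rule above, written as a plain function (the full program below defines the equivalent `testfunc`):

```python
# Equal inputs map to class 1 ([1, 0]); unequal inputs to class 2 ([0, 1]).
def rule(a, b):
    return [1, 0] if a == b else [0, 1]
```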


The neural network is made to learn this simple rule, with 20 test samples given to verify it. In the end, after 5,000 training samples, it achieved 100% correct classification.


#--- Author: Woosler ---
#--- mail: [email protected] ---
#--- 2015/7/27 ---
import random
import math

#--- Neural network model ---
class ann:
    # Constructor: initialize parameters
    def __init__(self, i_num, h_num, o_num):
        # Tunable parameters
        self.learn_rate = 0.1    # learning rate
        self.num_long = 2        # number of digits in the output
        self.random_long = 10    # number of random seed digits
        # Input parameters
        self.input_num = i_num   # number of input units
        self.hidden_num = h_num  # number of hidden units
        self.output_num = o_num  # number of output units
        # Model parameters
        self.input = []          # input layer
        self.hidden = []         # hidden layer
        self.output = []         # output layer
        self.error = []          # error
        self.expectation = []    # expected output
        self.weight_ih = self.__ini_weight(self.input_num, self.hidden_num)   # input-to-hidden connection weights
        self.weight_ho = self.__ini_weight(self.hidden_num, self.output_num)  # hidden-to-output connection weights
        self.threshold_h = self.__ini_threshold(self.hidden_num)              # hidden-layer thresholds
        self.threshold_o = self.__ini_threshold(self.output_num)              # output-layer thresholds

    # Initial connection-weight generator
    def __ini_weight(self, x, y):
        result = []
        scale = 10 ** self.random_long
        for i in range(0, x, 1):
            res = []
            for j in range(0, y, 1):
                num = round(random.randint(-scale, scale) / float(scale), self.random_long)
                res.insert(j, num)
            result.insert(i, res)
        return result

    # Initial threshold generator
    def __ini_threshold(self, n):
        result = []
        scale = 10 ** self.random_long
        for i in range(0, n, 1):
            num = round(random.randint(-scale, scale) / float(scale), self.random_long)
            result.insert(i, num)
        return result

    # Activation function: sigmoid
    def excitation(self, value):
        sigma = 1.0 / (1 + math.exp(-1 * value))
        return sigma

    # Feed input data (and, optionally, the expected output)
    def input_param(self, data, expectation=[]):
        self.input = []
        for value in data:
            self.input.append(value)
        if expectation:
            self.expectation = []
            for value in expectation:
                self.expectation.append(value)

    # Hidden-layer computation
    def count_hidden(self):
        self.hidden = []
        for h in range(0, self.hidden_num, 1):
            hval = 0
            for i in range(len(self.input)):
                hval += self.input[i] * self.weight_ih[i][h]
            hval = self.excitation(hval + self.threshold_h[h])
            self.hidden.insert(h, hval)

    # Output-layer computation
    def count_output(self):
        self.output = []
        for o in range(0, self.output_num, 1):
            oval = 0
            for h in range(len(self.hidden)):
                oval += self.hidden[h] * self.weight_ho[h][o]
            oval += self.threshold_o[o]
            oval = round(oval, self.num_long)
            self.output.insert(o, oval)

    # Error computation
    def count_error(self):
        self.error = []
        for key in range(len(self.output)):
            self.error.insert(key, self.expectation[key] - self.output[key])

    # Connection-weight feedback training: input layer -> hidden layer
    def train_weight_ih(self):
        for i in range(len(self.weight_ih)):
            for h in range(len(self.weight_ih[i])):
                tmp = 0
                for o in range(0, self.output_num, 1):
                    tmp += self.weight_ho[h][o] * self.error[o]
                self.weight_ih[i][h] = self.weight_ih[i][h] + self.learn_rate * self.hidden[h] * (1 - self.hidden[h]) * self.input[i] * tmp

    # Connection-weight feedback training: hidden layer -> output layer
    def train_weight_ho(self):
        for h in range(len(self.weight_ho)):
            for o in range(len(self.weight_ho[h])):
                self.weight_ho[h][o] = self.weight_ho[h][o] + self.learn_rate * self.hidden[h] * self.error[o]

    # Threshold feedback training: hidden layer
    def train_threshold_h(self):
        for h in range(len(self.threshold_h)):
            tmp = 0
            for o in range(0, self.output_num, 1):
                tmp += self.weight_ho[h][o] * self.error[o]
            self.threshold_h[h] = self.threshold_h[h] + self.learn_rate * self.hidden[h] * (1 - self.hidden[h]) * tmp

    # Threshold feedback training: output layer
    def train_threshold_o(self):
        for o in range(len(self.threshold_o)):
            self.threshold_o[o] = self.threshold_o[o] + self.error[o]

    # Feedback training
    def train(self):
        self.train_weight_ih()
        self.train_weight_ho()
        self.train_threshold_h()
        self.train_threshold_o()

    # Normalization function
    def normal_num(self, max, min, data):
        data = (data - min) / (max - min)
        return data

#--- Business section (example) ---
# Rule to be trained: given two values, return [1, 0] if they are equal, [0, 1] otherwise
def testfunc(val):
    if val[0] == val[1]:
        return [1, 0]
    else:
        return [0, 1]

# Build the neural network model
ann = ann(2, 3, 2)

# Generate the training data: 5,000 random pairs drawn from [0,1], [1,0], [1,1], [0,0]
data = []
for i in range(0, 5000, 1):
    x = random.randint(0, 1)
    y = random.randint(0, 1)
    data.append([x, y])

# Find the maximum and minimum values in the training data
for i in range(len(data)):
    for j in range(len(data[i])):
        if i == 0 and j == 0:
            max = min = data[i][j]
        elif data[i][j] > max:
            max = data[i][j]
        elif data[i][j] < min:
            min = data[i][j]

# Normalize the training data
dataNormal = []
for i in range(len(data)):
    dataNormal.insert(i, [])
    for j in range(len(data[i])):
        dataNormal[i].append(ann.normal_num(max, min, data[i][j]))

# Compute the expected value for each training sample and run feedback training
for i in range(len(data)):
    # Compute the expected value
    exp = testfunc(data[i])
    # Feed the training data and expectation
    ann.input_param(dataNormal[i], exp)
    # Compute the hidden layer
    ann.count_hidden()
    # Compute the output layer
    ann.count_output()
    # Compute the error
    ann.count_error()
    # Feedback training
    ann.train()

# Generate the test data: 20 random pairs
testdata = []
for i in range(0, 20, 1):
    x = random.randint(0, 1)
    y = random.randint(0, 1)
    testdata.append([x, y])

# Run the tests, printing the network's prediction alongside the actual expectation
for i in range(len(testdata)):
    exp = testfunc(testdata[i])
    ann.input_param(testdata[i])
    ann.count_hidden()
    ann.count_output()
    print("ANN:")
    print(ann.output)
    print("EXP:")
    print(exp)
    print("\r")


This article is from the "Machine Learning on the Road" blog; please keep this source: http://10574403.blog.51cto.com/10564403/1679037

