Implementing a basic single-hidden-layer neural network model in Python


I wrote this Python implementation of a single-hidden-layer BP (backpropagation) neural network for a friend, and since I haven't written a blog post in a long time, I am sharing it here. The code is short and tidy; it illustrates the basic principles of an ANN and may serve as a reference for machine learning beginners.

 

Several important parameters in the model:

1. Learning Rate

The learning rate is an important factor in whether the model converges, and it generally needs to be tuned for the specific scenario. A learning rate that is too high will quickly make training diverge, as the short demo below shows.
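The effect is easy to see on a toy problem. This is a minimal, self-contained sketch (not part of the original code) of gradient descent on the one-dimensional function f(w) = w**2, whose gradient is 2*w; a step size of 0.1 converges while 1.1 diverges:

def descend(learn_rate, steps=10):
    # Gradient descent on f(w) = w**2, starting from w = 1.0.
    w = 1.0
    for _ in range(steps):
        w -= learn_rate * 2 * w  # the gradient of w**2 is 2*w
    return w

print(descend(0.1))  # about 0.107: converging toward the minimum at 0
print(descend(1.1))  # about 6.19: the iterates grow, training diverges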

2. Number of hidden units

In general, increasing the number of neurons in the hidden layer is more effective than adding further hidden layers; this is a characteristic of the single-hidden-layer network. For problems of low complexity, a single hidden layer can outperform multiple hidden layers (see the snippet below).
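As an illustration of widening rather than deepening, using the Ann class defined later in this post (the hidden-layer sizes here are hypothetical, chosen only for the example):

ann_narrow = Ann(2, 4, 2)   # 2 inputs, 4 hidden units, 2 outputs
ann_wide = Ann(2, 16, 2)    # same depth, four times the hidden units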

3. Random initialization precision

This parameter is added to the code to control the precision of the initial connection weights and thresholds. Because the initial weights and thresholds of the network are generated randomly, the precision of those random values has some effect on the results. When the number of input and hidden units is large, adjusting the random precision can reduce the error, as the example below shows.
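A quick way to see what this parameter does. This standalone sketch mirrors the _ini_weight logic in the listing below, but is not itself part of the original code:

import random

def random_weight(random_long):
    # Draw one initial weight on a grid with spacing 10**-random_long.
    scale = 10 ** random_long
    return round(random.randint(-scale, scale) / scale, random_long)

print(random_weight(2))   # e.g. -0.37: coarse, grid step 0.01
print(random_weight(10))  # e.g. 0.5821396347: a much finer grid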

 

The code includes a very simple training example. The author defines a rule:

Given two input variables, when variable A equals variable B, return class 1, encoded as the vector [1, 0]; when variable A does not equal variable B, return class 2, encoded as [0, 1]. For example, the input (1, 1) should map to [1, 0], and (0, 1) to [0, 1].

 

The neural network is made to learn this simple rule, and 20 test samples are given for verification. After training on 5000 samples, it classifies 100% of the test set correctly.

 

# --- Author: Wu Si Lei ---
# --- Mail: wusilei@1006.TV ---
# --- 2015/7/27 ---
import random
import math


# --- Neural network model ---
class Ann:
    # The constructor initializes the model parameters.
    def __init__(self, i_num, h_num, o_num):
        # Adjustable parameters
        self.learn_rate = 0.1   # learning rate
        self.num_long = 2       # number of decimal digits kept in the output
        self.random_long = 10   # precision of the random initialization
        # Input parameters
        self.input_num = i_num    # number of input units
        self.hidden_num = h_num   # number of hidden units
        self.output_num = o_num   # number of output units
        # Model parameters
        self.input = []          # input layer
        self.hidden = []         # hidden layer
        self.output = []         # output layer
        self.error = []          # error
        self.expectation = []    # expected output
        self.weight_ih = self._ini_weight(self.input_num, self.hidden_num)   # input -> hidden weights
        self.weight_ho = self._ini_weight(self.hidden_num, self.output_num)  # hidden -> output weights
        self.threshold_h = self._ini_threshold(self.hidden_num)  # hidden-layer thresholds
        self.threshold_o = self._ini_threshold(self.output_num)  # output-layer thresholds

    # Initial weight generator
    def _ini_weight(self, x, y):
        result = []
        scale = 10 ** self.random_long
        for i in range(x):
            res = []
            for j in range(y):
                num = round(random.randint(-scale, scale) / scale, self.random_long)
                res.insert(j, num)
            result.insert(i, res)
        return result

    # Initial threshold generator
    def _ini_threshold(self, n):
        result = []
        scale = 10 ** self.random_long
        for i in range(n):
            num = round(random.randint(-scale, scale) / scale, self.random_long)
            result.insert(i, num)
        return result

    # Excitation (activation) function: the sigmoid
    def excitation(self, value):
        sigma = 1 / (1 + math.exp(-1 * value))
        return sigma

    # Feed in input data (and, during training, the expected output)
    def input_param(self, data, expectation=None):
        self.input = []
        for value in data:
            self.input.append(value)
        if expectation:
            self.expectation = []
            for value in expectation:
                self.expectation.append(value)

    # Compute the hidden layer
    def count_hidden(self):
        self.hidden = []
        for h in range(self.hidden_num):
            Hval = 0
            for i in range(len(self.input)):
                Hval += self.input[i] * self.weight_ih[i][h]
            Hval = self.excitation(Hval + self.threshold_h[h])
            self.hidden.insert(h, Hval)

    # Compute the output layer
    def count_output(self):
        self.output = []
        for o in range(self.output_num):
            Oval = 0
            for h in range(len(self.hidden)):
                Oval += self.hidden[h] * self.weight_ho[h][o]
            Oval += self.threshold_o[o]
            Oval = round(Oval, self.num_long)
            self.output.insert(o, Oval)

    # Compute the error of each output unit
    def count_error(self):
        self.error = []
        for key in range(len(self.output)):
            self.error.insert(key, self.expectation[key] - self.output[key])

    # Feedback training of the input -> hidden connection weights
    def train_weight_ih(self):
        for i in range(len(self.weight_ih)):
            for h in range(len(self.weight_ih[i])):
                tmp = 0
                for o in range(self.output_num):
                    tmp += self.weight_ho[h][o] * self.error[o]
                self.weight_ih[i][h] = (self.weight_ih[i][h]
                                        + self.learn_rate * self.hidden[h]
                                        * (1 - self.hidden[h]) * self.input[i] * tmp)

    # Feedback training of the hidden -> output connection weights
    def train_weight_ho(self):
        for h in range(len(self.weight_ho)):
            for o in range(len(self.weight_ho[h])):
                self.weight_ho[h][o] = (self.weight_ho[h][o]
                                        + self.learn_rate * self.hidden[h] * self.error[o])

    # Feedback training of the hidden-layer thresholds
    def train_threshold_h(self):
        for h in range(len(self.threshold_h)):
            tmp = 0
            for o in range(self.output_num):
                tmp += self.weight_ho[h][o] * self.error[o]
            self.threshold_h[h] = (self.threshold_h[h]
                                   + self.learn_rate * self.hidden[h]
                                   * (1 - self.hidden[h]) * tmp)

    # Feedback training of the output-layer thresholds
    def train_threshold_o(self):
        for o in range(len(self.threshold_o)):
            self.threshold_o[o] = self.threshold_o[o] + self.error[o]

    # One feedback-training (backpropagation) step
    def train(self):
        self.train_weight_ih()
        self.train_weight_ho()
        self.train_threshold_h()
        self.train_threshold_o()

    # Normalization function
    def normal_num(self, max, min, data):
        data = (data - min) / (max - min)
        return data


# --- Business part (example) ---
# The rule to be trained: given two values, return [1, 0] if they are
# equal and [0, 1] otherwise.
def testFunc(val):
    if val[0] == val[1]:
        return [1, 0]
    else:
        return [0, 1]


# Construct the neural network model: 2 inputs, 2 outputs. (The hidden-layer
# size was lost in the source text; 3 here is only a placeholder.)
ann = Ann(2, 3, 2)

# Generate training data: 5000 random [x, y] pairs with x, y in {0, 1}
data = []
for i in range(5000):
    x = random.randint(0, 1)
    y = random.randint(0, 1)
    data.append([x, y])

# Obtain the maximum and minimum values of the training data
for i in range(len(data)):
    for j in range(len(data[i])):
        if i == 0 and j == 0:
            max = min = data[i][j]
        elif data[i][j] > max:
            max = data[i][j]
        elif data[i][j] < min:
            min = data[i][j]
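The listing is cut off in the source before the training and test driver. The updates it implements are the standard delta rule for this architecture: with linear output units, each hidden-to-output weight moves by learn_rate * hidden[h] * error[o], while each input-to-hidden weight additionally carries the sigmoid derivative hidden[h] * (1 - hidden[h]). A minimal sketch of the missing driver, assuming only the method names from the listing and the 5000-training / 20-test setup described above, might look like this:

# Hypothetical driver: the loop structure is an assumption; the method
# names come from the Ann class above.
for sample in data:
    inputs = [ann.normal_num(max, min, v) for v in sample]  # normalize inputs
    ann.input_param(inputs, testFunc(sample))  # inputs plus expected output
    ann.count_hidden()  # forward pass through the hidden layer
    ann.count_output()  # forward pass through the output layer
    ann.count_error()   # error against the expectation
    ann.train()         # one backpropagation step

# Verify on 20 fresh random samples.
correct = 0
for _ in range(20):
    sample = [random.randint(0, 1), random.randint(0, 1)]
    ann.input_param([ann.normal_num(max, min, v) for v in sample])
    ann.count_hidden()
    ann.count_output()
    # Take the larger output unit as the predicted class.
    predicted = [1, 0] if ann.output[0] > ann.output[1] else [0, 1]
    if predicted == testFunc(sample):
        correct += 1
print('correct: %d / 20' % correct)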