Python-based three-layer BP neural network algorithm example


This example describes a three-layer BP (back-propagation) neural network algorithm implemented in Python, shared here for your reference. The details are as follows:

This is a clean Python implementation of a three-layer back-propagation neural network. A natural next step is to extend it to a multi-layer back-propagation network; a sketch of that idea follows the listing below.

The listing below ends with a demo function that shows how to run the network. You will find that the prediction results are surprisingly good.

Tip: when running the demo function, try changing the number of nodes in the hidden layer to see whether more nodes improve the prediction accuracy (a usage sketch follows the listing).

import math
import random

random.seed(0)

# Generate a random number in the interval [a, b)
def rand(a, b):
    return (b - a) * random.random() + a

# Generate an I x J matrix, zero-filled by default (NumPy could of course
# be used to speed this up)
def makeMatrix(I, J, fill=0.0):
    m = []
    for i in range(I):
        m.append([fill] * J)
    return m

# Activation function: tanh is used here because it behaves better than the
# standard logistic function 1 / (1 + e^-x)
def sigmoid(x):
    return math.tanh(x)

# Derivative of the activation function, written in terms of the output y:
# since y = tanh(x), dy/dx = 1 - tanh(x)^2 = 1 - y^2
def dsigmoid(y):
    return 1.0 - y ** 2

class NN:
    '''Three-layer back-propagation neural network'''
    def __init__(self, ni, nh, no):
        # Numbers of input, hidden, and output layer nodes
        self.ni = ni + 1  # add a bias node
        self.nh = nh
        self.no = no
        # Activations for all nodes (vectors)
        self.ai = [1.0] * self.ni
        self.ah = [1.0] * self.nh
        self.ao = [1.0] * self.no
        # Create the weight matrices
        self.wi = makeMatrix(self.ni, self.nh)
        self.wo = makeMatrix(self.nh, self.no)
        # Initialize them with random values
        for i in range(self.ni):
            for j in range(self.nh):
                self.wi[i][j] = rand(-0.2, 0.2)
        for j in range(self.nh):
            for k in range(self.no):
                self.wo[j][k] = rand(-2.0, 2.0)
        # Finally, the momentum (last weight change) matrices
        self.ci = makeMatrix(self.ni, self.nh)
        self.co = makeMatrix(self.nh, self.no)

    def update(self, inputs):
        if len(inputs) != self.ni - 1:
            raise ValueError('does not match the number of input layer nodes!')
        # Activate the input layer
        for i in range(self.ni - 1):
            # self.ai[i] = sigmoid(inputs[i])
            self.ai[i] = inputs[i]
        # Activate the hidden layer
        for j in range(self.nh):
            total = 0.0
            for i in range(self.ni):
                total = total + self.ai[i] * self.wi[i][j]
            self.ah[j] = sigmoid(total)
        # Activate the output layer
        for k in range(self.no):
            total = 0.0
            for j in range(self.nh):
                total = total + self.ah[j] * self.wo[j][k]
            self.ao[k] = sigmoid(total)
        return self.ao[:]

    def backPropagate(self, targets, N, M):
        '''Back-propagation'''
        if len(targets) != self.no:
            raise ValueError('does not match the number of output layer nodes!')
        # Calculate the output layer error terms
        output_deltas = [0.0] * self.no
        for k in range(self.no):
            error = targets[k] - self.ao[k]
            output_deltas[k] = dsigmoid(self.ao[k]) * error
        # Calculate the hidden layer error terms
        hidden_deltas = [0.0] * self.nh
        for j in range(self.nh):
            error = 0.0
            for k in range(self.no):
                error = error + output_deltas[k] * self.wo[j][k]
            hidden_deltas[j] = dsigmoid(self.ah[j]) * error
        # Update the output layer weights
        for j in range(self.nh):
            for k in range(self.no):
                change = output_deltas[k] * self.ah[j]
                self.wo[j][k] = self.wo[j][k] + N * change + M * self.co[j][k]
                self.co[j][k] = change
                # print(N * change, M * self.co[j][k])
        # Update the input layer weights
        for i in range(self.ni):
            for j in range(self.nh):
                change = hidden_deltas[j] * self.ai[i]
                self.wi[i][j] = self.wi[i][j] + N * change + M * self.ci[i][j]
                self.ci[i][j] = change
        # Calculate the total error
        error = 0.0
        for k in range(len(targets)):
            error = error + 0.5 * (targets[k] - self.ao[k]) ** 2
        return error

    def test(self, patterns):
        for p in patterns:
            print(p[0], '->', self.update(p[0]))

    def weights(self):
        print('Input layer weights:')
        for i in range(self.ni):
            print(self.wi[i])
        print('Output layer weights:')
        for j in range(self.nh):
            print(self.wo[j])

    def train(self, patterns, iterations=1000, N=0.5, M=0.1):
        # N: learning rate; M: momentum factor
        for i in range(iterations):
            error = 0.0
            for p in patterns:
                inputs = p[0]
                targets = p[1]
                self.update(inputs)
                error = error + self.backPropagate(targets, N, M)
            if i % 100 == 0:
                print('error %-.5f' % error)

def demo():
    # A demo: teach the network to learn logical exclusive-or (XOR)
    # ---- you can also try your own data
    pat = [
        [[0, 0], [0]],
        [[0, 1], [1]],
        [[1, 0], [1]],
        [[1, 1], [0]],
    ]
    # Create a network: two input nodes, two hidden nodes, one output node
    n = NN(2, 2, 1)
    # Train it on these patterns
    n.train(pat)
    # Test the training result (don't be surprised)
    n.test(pat)
    # Inspect the trained weights (you could also persist them)
    # n.weights()

if __name__ == '__main__':
    demo()
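Following the tip above, here is a minimal usage sketch for comparing hidden-layer sizes. The sizes tried and the iteration count are illustrative choices, not from the original; the sketch assumes the NN class from the listing is already defined in the same module:

# Hypothetical experiment: vary the hidden layer size on the XOR data.
# Assumes the NN class from the listing above is in scope.
pat = [[[0, 0], [0]], [[0, 1], [1]], [[1, 0], [1]], [[1, 1], [0]]]
for nh in (2, 4, 8):
    print('--- hidden nodes:', nh)
    n = NN(2, nh, 1)  # 2 input nodes, nh hidden nodes, 1 output node
    n.train(pat, iterations=1000)
    n.test(pat)

More hidden nodes give the network more capacity, but on a four-pattern problem like XOR the returns diminish quickly, so prediction accuracy will not necessarily keep improving.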
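As for the multi-layer extension mentioned at the top, here is a minimal sketch of one way the idea could be generalized: store a list of layer sizes with one weight matrix and one bias vector per adjacent pair of layers, walk the list forwards in update() and backwards in backPropagate(). This is an assumption about how such an extension might look, not the author's implementation, and it omits the momentum term for brevity:

import math
import random

random.seed(0)

class MLP:
    '''Hypothetical multi-layer generalization of NN (a sketch only).'''
    def __init__(self, sizes):
        # sizes lists every layer width, e.g. [2, 3, 3, 1] for two hidden layers
        self.weights = [[[random.uniform(-0.5, 0.5) for _ in range(sizes[l + 1])]
                         for _ in range(sizes[l])]
                        for l in range(len(sizes) - 1)]
        self.biases = [[random.uniform(-0.5, 0.5) for _ in range(n)]
                       for n in sizes[1:]]

    def update(self, inputs):
        # Forward pass; keep every layer's activations for back-propagation
        self.acts = [list(inputs)]
        for w, b in zip(self.weights, self.biases):
            prev = self.acts[-1]
            self.acts.append([math.tanh(b[j] + sum(prev[i] * w[i][j]
                                                   for i in range(len(prev))))
                              for j in range(len(b))])
        return self.acts[-1]

    def backPropagate(self, targets, N=0.5):
        # Output-layer deltas, using the same tanh derivative 1 - y**2
        deltas = [(t - y) * (1.0 - y ** 2)
                  for t, y in zip(targets, self.acts[-1])]
        # Walk the weight matrices in reverse order
        for l in range(len(self.weights) - 1, -1, -1):
            w, b, prev = self.weights[l], self.biases[l], self.acts[l]
            # Deltas for the layer below, from the pre-update weights
            # (those computed for the raw input layer are simply discarded)
            prev_deltas = [(1.0 - prev[i] ** 2) *
                           sum(deltas[j] * w[i][j] for j in range(len(deltas)))
                           for i in range(len(prev))]
            for j in range(len(deltas)):
                b[j] += N * deltas[j]
                for i in range(len(prev)):
                    w[i][j] += N * deltas[j] * prev[i]
            deltas = prev_deltas

    def train(self, patterns, iterations=2000, N=0.5):
        for _ in range(iterations):
            for inputs, targets in patterns:
                self.update(inputs)
                self.backPropagate(targets, N)

Usage would mirror the demo above, e.g. MLP([2, 3, 3, 1]) for two hidden layers; whether the extra layers actually help on a toy problem like XOR is something to verify experimentally.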
