A simple neural network algorithm example in Python
This article describes a simple neural network algorithm implemented in Python and shares it for your reference. The details are as follows:
Python implementation of a 2-layer neural network
This network includes only an input layer and an output layer.
```python
# -*- coding: utf-8 -*-
import numpy as np

# sigmoid function and its derivative
def nonlin(x, deriv=False):
    if deriv:
        # here x is assumed to already be a sigmoid output
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

# input dataset: 4 samples, 3 features each
X = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]])

# output dataset: 4 labels
y = np.array([[0, 0, 1, 1]]).T

# seed the generator so the run is reproducible
np.random.seed(1)

# init weight values (3 inputs -> 1 output)
syn0 = 2 * np.random.random((3, 1)) - 1

# try 100, 1000, 10000, 100000 iterations
for iter in range(10000):
    l0 = X                          # the first layer: input
    l1 = nonlin(np.dot(l0, syn0))   # the second layer: output
    l1_error = y - l1
    l1_delta = l1_error * nonlin(l1, True)
    syn0 += np.dot(l0.T, l1_delta)

print("Output after training:")
print(l1)
```
Here,
l0: the input layer
l1: the output layer
syn0: the initial weights
l1_error: the error
l1_delta: the error-correction term
nonlin: the sigmoid function
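A point worth noting about nonlin is that the sigmoid's derivative can be written in terms of the sigmoid's own output, s' = s(1 - s), which is why nonlin(l1, True) is passed the layer's output rather than its pre-activation. The sketch below (my own check, not part of the original code) compares that formula against a finite-difference estimate:

```python
import numpy as np

def nonlin(x, deriv=False):
    # sigmoid, or its derivative expressed via the sigmoid output
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

x = np.linspace(-5, 5, 11)
s = nonlin(x)                       # sigmoid values
h = 1e-6
# central finite-difference derivative for comparison
numeric = (nonlin(x + h) - nonlin(x - h)) / (2 * h)
# analytic derivative: note it takes the sigmoid OUTPUT, not x
analytic = nonlin(s, deriv=True)
print(np.allclose(numeric, analytic, atol=1e-6))
```

This is also why the backpropagation step can stay so short: no extra exponentials are evaluated when computing the gradients.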
When the number of iterations is 100, the prediction result is:
When the number of iterations is 1000, the prediction result is:
When the number of iterations is 10000, the prediction result is:
When the number of iterations is 100000, the prediction result is:
It can be seen that the more iterations, the closer the prediction result is to the ideal value, but the longer the training takes.
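This trade-off can be measured directly. The sketch below (my own wrapper around the same 2-layer setup; the train helper is not part of the original code) retrains the network from the same seed with each iteration count and reports the mean absolute error:

```python
import numpy as np

def nonlin(x, deriv=False):
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
y = np.array([[0, 0, 1, 1]]).T

def train(n_iter):
    # restart from the same random weights each time
    np.random.seed(1)
    syn0 = 2 * np.random.random((3, 1)) - 1
    for _ in range(n_iter):
        l1 = nonlin(np.dot(X, syn0))
        l1_delta = (y - l1) * nonlin(l1, True)
        syn0 += np.dot(X.T, l1_delta)
    # mean absolute error of the trained network
    return np.mean(np.abs(y - nonlin(np.dot(X, syn0))))

for n in (100, 1000, 10000, 100000):
    print(n, train(n))
```

The printed error shrinks as the iteration count grows, while the runtime grows roughly linearly with it.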
Python implementation of a 3-layer neural network
This network includes an input layer, a hidden layer, and an output layer.
```python
# -*- coding: utf-8 -*-
import numpy as np

# sigmoid function and its derivative
def nonlin(x, deriv=False):
    if deriv:
        # here x is assumed to already be a sigmoid output
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

# input dataset
X = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]])

# output dataset
y = np.array([[0, 1, 1, 0]]).T

# seed the generator so the run is reproducible
np.random.seed(1)

# the input-to-hidden layer weights (3 inputs -> 4 hidden units)
syn0 = 2 * np.random.random((3, 4)) - 1
# the hidden-to-output layer weights (4 hidden units -> 1 output)
syn1 = 2 * np.random.random((4, 1)) - 1

for j in range(60000):
    l0 = X                          # the first layer: input
    l1 = nonlin(np.dot(l0, syn0))   # the second layer: hidden
    l2 = nonlin(np.dot(l1, syn1))   # the third layer: output
    l2_error = y - l2               # the hidden-to-output layer error
    if (j % 10000) == 0:
        print("Error: " + str(np.mean(np.abs(l2_error))))
    l2_delta = l2_error * nonlin(l2, deriv=True)
    l1_error = l2_delta.dot(syn1.T) # the input-to-hidden layer error
    l1_delta = l1_error * nonlin(l1, deriv=True)
    syn1 += l1.T.dot(l2_delta)
    syn0 += l0.T.dot(l1_delta)

print("Output after training:")
print(l2)
```
Running result:
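A simple way to check the running result without screenshots is to record the mean absolute error at each printing interval and compare the first and last values. This sketch (my own variant of the 3-layer network above, with the errors collected in a list instead of printed) shows the error shrinking over training:

```python
import numpy as np

def nonlin(x, deriv=False):
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
y = np.array([[0, 1, 1, 0]]).T

np.random.seed(1)
syn0 = 2 * np.random.random((3, 4)) - 1  # input-to-hidden weights
syn1 = 2 * np.random.random((4, 1)) - 1  # hidden-to-output weights

errors = []
for j in range(60000):
    l1 = nonlin(X.dot(syn0))             # hidden layer
    l2 = nonlin(l1.dot(syn1))            # output layer
    l2_error = y - l2
    if j % 10000 == 0:
        # record the mean absolute error every 10000 steps
        errors.append(np.mean(np.abs(l2_error)))
    l2_delta = l2_error * nonlin(l2, deriv=True)
    l1_delta = l2_delta.dot(syn1.T) * nonlin(l1, deriv=True)
    syn1 += l1.T.dot(l2_delta)
    syn0 += X.T.dot(l1_delta)

# the error shrinks as training proceeds
print(errors[0], errors[-1])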