Building on the earlier theoretical study and the analysis of the relationship between error and weight, we now put the derived formulas into practice by writing our own neural network in Python 3.5:
Following the Python introduction in the book, start with NumPy's zeros():
```python
import numpy

a = numpy.zeros([3, 2])
a[0, 0] = 1
a[1, 1] = 2
a[2, 1] = 5
print(a)
```

The result is:

```
[[1. 0.]
 [0. 2.]
 [0. 5.]]
```
You can use this method to generate a zero-filled matrix and then set individual elements.
Next, use Python to build the skeleton of the neural network:
Build a basic model:
```python
import numpy

class neuralNetwork:
    def __init__(self, inputnodes, hiddennodes, outputnodes, learningrate):
        self.inodes = inputnodes
        self.hnodes = hiddennodes
        self.onodes = outputnodes
        self.lr = learningrate
        # initialise the link weight matrices: samples drawn from a normal
        # distribution centred on 0.0, with standard deviation
        # 1/sqrt(number of incoming links into a node)
        self.wih = numpy.random.normal(0.0, pow(self.hnodes, -0.5),
                                       (self.hnodes, self.inodes))
        self.who = numpy.random.normal(0.0, pow(self.onodes, -0.5),
                                       (self.onodes, self.hnodes))
        pass

    def train(self):
        pass

    def query(self):
        pass

# test
input_nodes = 3
hidden_nodes = 3
output_nodes = 3
learning_rate = 0.5

# create an instance of the neural network
n = neuralNetwork(input_nodes, hidden_nodes, output_nodes, learning_rate)
```
The constructor takes the number of input nodes, hidden nodes, and output nodes, along with the learning rate.
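As a quick sanity check on what the constructor builds (a standalone sketch with made-up node counts, not part of the book's test): wih links the input layer to the hidden layer, so its shape is (hidden_nodes, input_nodes), and who links hidden to output, so its shape is (output_nodes, hidden_nodes).

```python
import numpy

# assumed node counts, chosen only to make the shapes distinct
input_nodes, hidden_nodes, output_nodes = 4, 5, 2

wih = numpy.random.normal(0.0, pow(hidden_nodes, -0.5),
                          (hidden_nodes, input_nodes))
who = numpy.random.normal(0.0, pow(output_nodes, -0.5),
                          (output_nodes, hidden_nodes))

print(wih.shape)  # (5, 4): one row per hidden node, one column per input node
print(who.shape)  # (2, 5): one row per output node, one column per hidden node
```

Laying the matrices out this way means a forward pass is just a matrix product, e.g. numpy.dot(wih, inputs).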
Each link weight matrix is initialized with NumPy random numbers; later, training data will be used to compute the error and propagate it backwards to adjust the weights.
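To make that backward step concrete, here is a hedged sketch of the gradient-descent update that train() will eventually perform on the hidden-to-output weights; the activation and target values below are invented purely for illustration.

```python
import numpy

learning_rate = 0.5
# made-up column vectors standing in for one training pass
hidden_outputs = numpy.array([[0.6], [0.4], [0.5]])
final_outputs = numpy.array([[0.7], [0.3], [0.5]])
targets = numpy.array([[0.9], [0.1], [0.5]])

# output-layer error: target minus actual
output_errors = targets - final_outputs

# weight change for the hidden-to-output matrix:
# lr * error * sigmoid gradient, combined with the hidden activations
delta_who = learning_rate * numpy.dot(
    output_errors * final_outputs * (1.0 - final_outputs),
    hidden_outputs.T)

print(delta_who.shape)  # (3, 3), matching the shape of self.who
```

Each weight moves in proportion to the output error it contributed to, scaled by the learning rate, which is exactly the error-versus-weight relationship derived earlier.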
Make Your Own Neural Network — study notes (II)