Python implementation of a deep neural network framework

Source: Internet
Author: User

Overview

This demo is well suited to beginners in AI and deep learning. It starts from the most basic knowledge; as long as you know a little calculus, statistics, and matrix algebra, you should be able to follow it clearly. The program is written from the ground up, without any third-party deep learning library. First, this article introduces what a neural network is, the characteristics of neural networks, the backpropagation (BP) algorithm, how neural networks are trained, activation functions, loss functions, weight initialization methods, weight regularization, and related topics. Second, building on that, it uses only basic Python to implement a neural network framework; with this framework you can build your own deep neural networks and add other features as needed. To make the source code easier to read and use, a short document is included. Third, on top of the framework, we build a deep neural network to classify handwritten digits. Detailed code download: http://www.demodashi.com/demo/13010.html

I. Introduction of basic knowledge

The introduction to neural network basics contains many formulas and figures, which the site's online editor cannot render well. I have therefore written a 13-page Word document and placed it in the compressed package; please download it to read. I have also recorded a video that you can browse for a rough overview.

II. Python code implementation of the neural network framework

If you do not yet understand neural networks, be sure to master the basics in Part I before reading this part; otherwise the source code will be hard to follow, because much of it is written directly from the formulas.

Here we divide a deep neural network into a number of layers: a data input layer, fully connected layers, activation function layers, a loss function layer, and, if desired, dropout layers. If you want to build a convolutional neural network, you can also add convolution layers, pooling layers, and so on. The neural network framework implemented in this demo is based on this layered structure; with each layer implemented, you can assemble your own neural network according to your needs, as the sketch below illustrates.
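Every layer in the framework follows the same small protocol, which is what makes the layers freely stackable. The following is a minimal sketch of that protocol; the class name Some_layer is only illustrative, while the four method names are the ones actually used by the Layer module shown later:

class Some_layer:
    def get_inputs_for_forward(self, inputs):
        # receive the outputs of the previous layer
        self.inputs = inputs
    def forward(self):
        # compute self.outputs from self.inputs
        pass
    def get_inputs_for_backward(self, grad_outputs):
        # receive the gradients flowing back from the next layer
        self.grad_outputs = grad_outputs
    def backward(self):
        # compute self.grad_inputs (and the parameter gradients, if any)
        pass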

The core modules and functions of this framework include:

Layer module: defines the layers of the neural network, including the data input layer, the fully connected layer, the activation function layer, the loss function layer, and so on.

Function_for_layer module: defines the activation functions, the loss functions, the weight initialization methods, and so on.

Update_method module: defines the learning-rate schedules and the weight update rules (such as mini-batch stochastic gradient descent with momentum).

Net module: here you can define your own neural network according to your needs.

Figure 1 shows the neural network framework.

In addition, the uploaded compressed package contains a document describing the neural network framework, which you can read alongside the source code. I have also recorded a short video that you can browse.

Layer module:

Data input layer:

import random
import numpy as np

class Data:
    def __init__(self):
        self.data_sample = 0
        self.data_label = 0
        self.output_sample = 0
        self.output_label = 0
        self.point = 0                       # remembers where the next pull_data starts

    def get_data(self, sample, label):
        # each row of sample is one sample; each row of label is the label of that sample
        self.data_sample = sample
        self.data_label = label

    def shuffle(self):
        # scramble the order of the samples
        random_sequence = random.sample(list(np.arange(self.data_sample.shape[0])),
                                        self.data_sample.shape[0])
        self.data_sample = self.data_sample[random_sequence]
        self.data_label = self.data_label[random_sequence]

    def pull_data(self):
        # push the next mini-batch (batch_size is a module-level global) to the outputs
        start = self.point
        end = start + batch_size
        output_index = np.arange(start, end)
        if end > self.data_sample.shape[0]:
            end = end - self.data_sample.shape[0]
            output_index = np.append(np.arange(start, self.data_sample.shape[0]),
                                     np.arange(0, end))
        self.output_sample = self.data_sample[output_index]
        self.output_label = self.data_label[output_index]
        self.point = end % self.data_sample.shape[0]
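A minimal usage sketch of the data layer, assuming it is run in the same file as the Data class above and that batch_size is defined as a global (in the real framework the Net class takes care of setting it):

batch_size = 4
data_layer = Data()
sample = np.random.randn(10, 784)                    # 10 samples with 784 features each
label = np.eye(10)[np.random.randint(0, 10, 10)]     # 10 one-hot labels
data_layer.get_data(sample, label)
data_layer.shuffle()
data_layer.pull_data()
print(data_layer.output_sample.shape)                # (4, 784)
print(data_layer.output_label.shape)                 # (4, 10)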

Fully connected layer:

# this layer relies on the module-level globals batch_size, weights_decay and update_function,
# and on the Function_for_layer module imported as ffl
class Fully_connected_layer:
    def __init__(self, num_neuron_inputs, num_neuron_outputs):
        self.num_neuron_inputs = num_neuron_inputs
        self.num_neuron_outputs = num_neuron_outputs
        self.inputs = np.zeros((batch_size, num_neuron_inputs))
        self.outputs = np.zeros((batch_size, num_neuron_outputs))
        self.weights = np.zeros((num_neuron_inputs, num_neuron_outputs))
        self.bias = np.zeros(num_neuron_outputs)
        self.weights_previous_direction = np.zeros((num_neuron_inputs, num_neuron_outputs))
        self.bias_previous_direction = np.zeros(num_neuron_outputs)
        self.grad_weights = np.zeros((batch_size, num_neuron_inputs, num_neuron_outputs))
        self.grad_bias = np.zeros((batch_size, num_neuron_outputs))
        self.grad_inputs = np.zeros((batch_size, num_neuron_inputs))
        self.grad_outputs = np.zeros((batch_size, num_neuron_outputs))

    def initialize_weights(self):
        self.weights = ffl.xavier(self.num_neuron_inputs, self.num_neuron_outputs)

    # used in forward propagation to receive the inputs
    def get_inputs_for_forward(self, inputs):
        self.inputs = inputs

    def forward(self):
        self.outputs = self.inputs.dot(self.weights) + np.tile(self.bias, (batch_size, 1))

    # used in backward propagation to receive the gradients flowing back
    def get_inputs_for_backward(self, grad_outputs):
        self.grad_outputs = grad_outputs

    def backward(self):
        # gradient of the weights; a three-dimensional array, because every sample has its own gradient
        for i in np.arange(batch_size):
            self.grad_weights[i, :] = np.tile(self.inputs[i, :], (1, 1)).T \
                                        .dot(np.tile(self.grad_outputs[i, :], (1, 1))) \
                                      + self.weights * weights_decay
        # gradient of the bias
        self.grad_bias = self.grad_outputs
        # gradient of the inputs
        self.grad_inputs = self.grad_outputs.dot(self.weights.T)

    def update(self):
        # update the weights and the bias
        grad_weights_average = np.mean(self.grad_weights, 0)
        grad_bias_average = np.mean(self.grad_bias, 0)
        (self.weights, self.weights_previous_direction) = update_function(
            self.weights, grad_weights_average, self.weights_previous_direction)
        (self.bias, self.bias_previous_direction) = update_function(
            self.bias, grad_bias_average, self.bias_previous_direction)
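To see what the loop in backward() computes, here is a small self-contained check (not part of the framework): each sample's weight gradient is just the outer product of the layer input with the gradient arriving from the next layer, plus the weight-decay term.

import numpy as np

np.random.seed(0)
x = np.random.randn(4)                 # one sample, 4 input neurons
g_out = np.random.randn(3)             # gradient w.r.t. the 3 output neurons
W = np.random.randn(4, 3)
weights_decay = 0.001

grad_W_framework = np.tile(x, (1, 1)).T.dot(np.tile(g_out, (1, 1))) + W * weights_decay
grad_W_outer = np.outer(x, g_out) + W * weights_decay
print(np.allclose(grad_W_framework, grad_W_outer))   # True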

Activation function layer:

class Activation_layer:
    def __init__(self, activation_function_name):
        if activation_function_name == 'sigmoid':
            self.activation_function = ffl.sigmoid
            self.der_activation_function = ffl.der_sigmoid
        elif activation_function_name == 'tanh':
            self.activation_function = ffl.tanh
            self.der_activation_function = ffl.der_tanh
        elif activation_function_name == 'relu':
            self.activation_function = ffl.relu
            self.der_activation_function = ffl.der_relu
        else:
            print('the input activation function name is wrong')
        self.inputs = 0
        self.outputs = 0
        self.grad_inputs = 0
        self.grad_outputs = 0

    def get_inputs_for_forward(self, inputs):
        self.inputs = inputs

    def forward(self):
        # apply the activation function
        self.outputs = self.activation_function(self.inputs)

    def get_inputs_for_backward(self, grad_outputs):
        self.grad_outputs = grad_outputs

    def backward(self):
        # multiply by the derivative of the activation function
        self.grad_inputs = self.grad_outputs * self.der_activation_function(self.inputs)

Loss function layer:

class Loss_layer:
    def __init__(self, loss_function_name):
        self.inputs = 0
        self.loss = 0
        self.accuracy = 0
        self.label = 0
        self.grad_inputs = 0
        if loss_function_name == 'SoftmaxWithLoss':
            self.loss_function = ffl.softmaxwithloss
            self.der_loss_function = ffl.der_softmaxwithloss
        elif loss_function_name == 'LeastSquareError':
            self.loss_function = ffl.least_square_error
            self.der_loss_function = ffl.der_least_square_error
        else:
            print('the input loss function name is wrong, please re-enter it')

    def get_label_for_loss(self, label):
        self.label = label

    def get_inputs_for_loss(self, inputs):
        self.inputs = inputs

    def compute_loss_and_accuracy(self):
        # compute the accuracy
        if_equal = np.argmax(self.inputs, 1) == np.argmax(self.label, 1)
        self.accuracy = np.sum(if_equal) / float(batch_size)
        # compute the training loss
        self.loss = self.loss_function(self.inputs, self.label)

    def compute_gradient(self):
        self.grad_inputs = self.der_loss_function(self.inputs, self.label)

Function_for_layer module:

Definitions of the activation functions:

import numpy as np
from scipy import stats          # used by the xavier initialization below

# definition of the sigmoid function and its derivative
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def der_sigmoid(x):
    return sigmoid(x) * (1 - sigmoid(x))

# definition of the tanh function and its derivative
def tanh(x):
    return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

def der_tanh(x):
    return 1 - tanh(x) * tanh(x)

# definition of the relu function and its derivative
def relu(x):
    temp = np.zeros_like(x)
    if_bigger_zero = (x > temp)
    return x * if_bigger_zero

def der_relu(x):
    temp = np.zeros_like(x)
    if_bigger_equal_zero = (x >= temp)      # the derivative at 0 is taken to be 1
    return if_bigger_equal_zero * np.ones_like(x)
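As a quick sanity check (a stand-alone sketch, assuming the definitions above are in scope), the analytic derivatives can be compared with a finite-difference approximation:

x = np.linspace(-3, 3, 13)
eps = 1e-6
numeric_sigmoid = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)
numeric_tanh = (tanh(x + eps) - tanh(x - eps)) / (2 * eps)
print(np.allclose(numeric_sigmoid, der_sigmoid(x), atol=1e-6))   # True
print(np.allclose(numeric_tanh, der_tanh(x), atol=1e-6))         # True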
   

Definitions of the loss functions:

# definition of the SoftmaxWithLoss function and its derivative
def softmaxwithloss(inputs, label):
    temp1 = np.exp(inputs)
    probability = temp1 / np.tile(np.sum(temp1, 1), (inputs.shape[1], 1)).T
    temp3 = np.argmax(label, 1)            # column index of the correct class of each sample
    temp4 = [probability[i, j] for (i, j) in zip(np.arange(label.shape[0]), temp3)]
    loss = -1 * np.mean(np.log(temp4))
    return loss

def der_softmaxwithloss(inputs, label):
    temp1 = np.exp(inputs)
    temp2 = np.sum(temp1, 1)               # a one-dimensional vector
    probability = temp1 / np.tile(temp2, (inputs.shape[1], 1)).T
    gradient = probability - label
    return gradient
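A tiny worked example (a stand-alone sketch using the definitions above) on a batch of two samples and three classes; the loss is the mean negative log-probability assigned to the correct classes, and the gradient is simply the softmax probability minus the one-hot label:

inputs = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.3]])
label = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
print(softmaxwithloss(inputs, label))        # about 0.319
print(der_softmaxwithloss(inputs, label))    # softmax probabilities minus the one-hot labels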

Weight initialization method:

# Xavier initialization method
def xavier(num_neuron_inputs, num_neuron_outputs):
    temp1 = np.sqrt(6) / np.sqrt(num_neuron_inputs + num_neuron_outputs + 1)
    weights = stats.uniform.rvs(-temp1, 2 * temp1, (num_neuron_inputs, num_neuron_outputs))
    return weights
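Note that stats.uniform.rvs(loc, scale, size) draws from the interval [loc, loc + scale], so the weights are sampled uniformly from [-temp1, temp1]. A quick stand-alone check, assuming the definition above:

W = xavier(784, 50)
print(W.shape)                               # (784, 50)
bound = np.sqrt(6.0 / (784 + 50 + 1))
print(W.min() >= -bound, W.max() <= bound)   # True True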

Update_method module:

Learning-rate update mechanism:

import numpy as np

# some module-level globals that are needed
momentum = 0.9
base_lr = 0          # initialized when the net is constructed
iteration = -1       # must be updated from outside during training

#####################################################
#      functions defining how the learning rate changes
#####################################################
# inv method
def inv(gamma=0.0005, power=0.75):
    if iteration == -1:
        assert False, 'the value of iteration in the update_method module must be set during training'
    return base_lr * np.power((1 + gamma * iteration), -power)

# fixed method
def fixed():
    return base_lr

Mini-batch stochastic gradient descent:

# mini-batch stochastic gradient descent with momentum
def batch_gradient_descent(weights, grad_weights, previous_direction):
    lr = inv()
    direction = momentum * previous_direction + lr * grad_weights
    weights_now = weights - direction
    return (weights_now, direction)
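To get a feel for the momentum update, here is a self-contained toy sketch (independent of the module globals above) that applies the same update rule to minimize f(w) = w**2:

momentum, lr = 0.9, 0.1
w, direction = 5.0, 0.0
for step in range(200):
    grad = 2 * w                            # derivative of f(w) = w**2
    direction = momentum * direction + lr * grad
    w = w - direction
print(w)                                    # oscillates while decaying, ending very close to 0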

Net module:

For example, define a four-layer neural network:

        # build a four-layer neural network
        self.inputs_train = layer.Data()                  # input layer for the training samples
        self.inputs_test = layer.Data()                   # input layer for the test samples
        self.fc1 = layer.Fully_connected_layer(784, 50)
        self.ac1 = layer.Activation_layer('tanh')
        self.fc2 = layer.Fully_connected_layer(50, 50)
        self.ac2 = layer.Activation_layer('tanh')
        self.fc3 = layer.Fully_connected_layer(50, 10)
        self.loss = layer.Loss_layer('SoftmaxWithLoss')
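The layer definitions above presumably live inside the constructor of the network class in the Net module. The constructor itself is not reproduced here (it ships in the downloadable package), but judging from the call net.Net(train_batch_size, lr, weight_decay) in train.py below, it roughly needs to set the module-level globals the layers rely on before building the layers. A hedged sketch of that idea, where the global names batch_size, weights_decay and base_lr are taken from the modules above but the exact wiring is an assumption:

import layer
import update_method

class Net:
    def __init__(self, batch_size, lr, weight_decay):
        # assumption: the layers read these module-level globals
        layer.batch_size = batch_size
        layer.weights_decay = weight_decay
        update_method.base_lr = lr
        # ... the layer definitions shown above would follow here ...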

Define some other functional interfaces of the network, such as loading the training and test samples:

    def load_sample_and_label_train(self, sample, label):
        self.inputs_train.get_data(sample, label)

    def load_sample_and_label_test(self, sample, label):
        self.inputs_test.get_data(sample, label)

Define the initialization interface for the network:

    def initial(self):
        self.fc1.initialize_weights()
        self.fc2.initialize_weights()
        self.fc3.initialize_weights()

Define the forward propagation and backward propagation of the network during training:

    def forward_train(self):
        self.inputs_train.pull_data()
        self.fc1.get_inputs_for_forward(self.inputs_train.output_sample)
        self.fc1.forward()
        self.ac1.get_inputs_for_forward(self.fc1.outputs)
        self.ac1.forward()
        self.fc2.get_inputs_for_forward(self.ac1.outputs)
        self.fc2.forward()
        self.ac2.get_inputs_for_forward(self.fc2.outputs)
        self.ac2.forward()
        self.fc3.get_inputs_for_forward(self.ac2.outputs)
        self.fc3.forward()
        self.loss.get_inputs_for_loss(self.fc3.outputs)
        self.loss.get_label_for_loss(self.inputs_train.output_label)
        self.loss.compute_loss_and_accuracy()

    def backward_train(self):
        self.loss.compute_gradient()
        self.fc3.get_inputs_for_backward(self.loss.grad_inputs)
        self.fc3.backward()
        self.ac2.get_inputs_for_backward(self.fc3.grad_inputs)
        self.ac2.backward()
        self.fc2.get_inputs_for_backward(self.ac2.grad_inputs)
        self.fc2.backward()
        self.ac1.get_inputs_for_backward(self.fc2.grad_inputs)
        self.ac1.backward()
        self.fc1.get_inputs_for_backward(self.ac1.grad_inputs)
        self.fc1.backward()

Define the forward propagation of the network during the testing process:

    def forward_test(self):
        self.inputs_test.pull_data()
        self.fc1.get_inputs_for_forward(self.inputs_test.output_sample)
        self.fc1.forward()
        self.ac1.get_inputs_for_forward(self.fc1.outputs)
        self.ac1.forward()
        self.fc2.get_inputs_for_forward(self.ac1.outputs)
        self.fc2.forward()
        self.ac2.get_inputs_for_forward(self.fc2.outputs)
        self.ac2.forward()
        self.fc3.get_inputs_for_forward(self.ac2.outputs)
        self.fc3.forward()
        self.loss.get_inputs_for_loss(self.fc3.outputs)
        self.loss.get_label_for_loss(self.inputs_test.output_label)
        self.loss.compute_loss_and_accuracy()

Define the update of the weights and biases:

    def update(self):
        self.fc1.update()
        self.fc2.update()
        self.fc3.update()

III. Using the neural network defined in the Net module to recognize handwritten digits

In the Net module of Part II we defined a 784-50-50-10 neural network; we now train this network to recognize handwritten digits.

Introduction to the handwritten digits: the dataset is maintained by Yann LeCun and others; it contains 60,000 training samples and 10,000 test samples and can be downloaded from the official website http://yann.lecun.com/exdb/mnist/index.html. The official files are stored in a binary format that is inconvenient to use directly, but there is no need to worry: I have converted them to the .mat format commonly used in MATLAB; see the data.mat file in the compressed package. The handwritten digits look like this:
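Before training, a quick way to inspect the converted file (a stand-alone sketch; the field names are the ones used by train.py below, and the expected shapes assume 784-dimensional flattened images with one-hot labels):

import scipy.io

data = scipy.io.loadmat('data.mat')
print(data['train_data'].shape, data['train_label'].shape)   # expected: (60000, 784) (60000, 10)
print(data['test_data'].shape, data['test_label'].shape)     # expected: (10000, 784) (10000, 10)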

Write a train.py file and use it to train and test the neural network.

# imports
import numpy as np
import scipy.io
import matplotlib.pyplot as plt
import net

# load the data
data = scipy.io.loadmat('data.mat')
train_label = data['train_label']
train_data = data['train_data']
test_label = data['test_label']
test_data = data['test_data']

# some important parameters
num_train = 800
lr = 0.1
weight_decay = 0.001
train_batch_size = 100
test_batch_size = 10000

# build the network and load the samples
solver = net.Net(train_batch_size, lr, weight_decay)
solver.load_sample_and_label_train(train_data, train_label)
solver.load_sample_and_label_test(test_data, test_label)

# initialize the weights
solver.initial()

# array for storing the training error
train_error = np.zeros(num_train)

# training
for i in range(num_train):
    print('iteration %d' % i)
    net.layer.update_method.iteration = i
    solver.forward_train()
    solver.backward_train()
    solver.update()
    train_error[i] = solver.loss.loss

plt.plot(train_error)
plt.show()

# testing
solver.turn_to_test(test_batch_size)
solver.forward_test()
print('the recognition rate on the test samples is: %f' % solver.loss.accuracy)

Running the train.py program gives the following results.

During network training, the training error falls as shown by the curve below:

The recognition rate on the test samples is:

Of course, the recognition rate can be changed by adjusting the parameters.

IV. List of project files

Code download: http://www.demodashi.com/demo/13010.html Note: the copyright belongs to the author; the article is published by Demodashi, and reprinting requires the author's authorization.
