A simple Python implementation of the DNN BP algorithm

Tags: shuffle, dnn

The BP (backpropagation) algorithm is the foundation and the most important part of neural networks. The loss function needs attention because the gradient can vanish or explode while the error is propagated backwards. In an LSTM, three gates built from sigmoids address the memory problem, and in the TensorFlow implementation gradient clipping is needed to prevent gradient explosion. The RNN's BPTT algorithm suffers from the same problem, so beyond about 5 time steps the memory effect drops off sharply. An LSTM can hold up for more than 30 steps, but not much longer than that. If longer memory is required, or more context needs to be considered, the LSTM outputs of several sentences can be combined and fed as input into another LSTM. Below is a plain DNN BP algorithm implemented in Python, with sigmoid activations.
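For the gradient clipping mentioned above, here is a minimal sketch of global-norm clipping, assuming the TensorFlow 1.x API; the toy variable and loss are only there to make the snippet self-contained and are not from the original post:

import tensorflow as tf

# toy variable and loss so the snippet is self-contained (illustrative only)
w = tf.Variable([2.0, -3.0])
loss = tf.reduce_sum(tf.square(w))

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
grads_and_vars = optimizer.compute_gradients(loss)
grads, variables = zip(*grads_and_vars)
# clip the global norm of all gradients before applying them
clipped, _ = tf.clip_by_global_norm(grads, clip_norm=5.0)
train_op = optimizer.apply_gradients(zip(clipped, variables))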

The handwritten notes are a bit scrawled, so make do with them; I am used to sketching by hand, a personal habit. The most important part is the implementation idea of the code below: each layer has multiple nodes, and layers are connected one-way (a feedforward network), so the data structure can be designed as a singly linked list of layers. Training is a typical recursion: the recursive call runs down to the last layer, then each layer feeds its back_weights back to the previous layer until the recursion unwinds. The code follows (it has not been optimized):
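For orientation (this summary is mine, not from the original post), the per-neuron updates performed by the code below are the usual sigmoid delta rules plus a momentum term:

\delta_{\mathrm{out}} = (t - o)\,\sigma'(\cdot), \qquad \delta_{\mathrm{hidden}} = \Big(\sum_k w_k \delta_k\Big)\,\sigma'(\cdot), \qquad \Delta w = \eta\,\delta\,x + 0.9\,\Delta w_{\mathrm{prev}}

where \sigma'(z) = \sigma(z)(1 - \sigma(z)) is the logistic_derivative function; note that this particular implementation evaluates the derivative on the neuron's stored input vector rather than on its pre-activation.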

Test code:

import numpy as np
import NeuralNetWork as nw

if __name__ == '__main__':
    print("Test neural network")

    data = np.array([[1, 0, 0, 0, 0, 0, 0, 0],
                     [0, 1, 0, 0, 0, 0, 0, 0],
                     [0, 0, 1, 0, 0, 0, 0, 0],
                     [0, 0, 0, 1, 0, 0, 0, 0],
                     [0, 0, 0, 0, 1, 0, 0, 0],
                     [0, 0, 0, 0, 0, 1, 0, 0],
                     [0, 0, 0, 0, 0, 0, 1, 0],
                     [0, 0, 0, 0, 0, 0, 0, 1]])

    np.set_printoptions(precision=3, suppress=True)

    for i in range(10):
        network = nw.NeuralNetWork([8, 20, 8])
        # the input data is equal to the output data
        network.fit(data, data, learning_rate=0.1, epochs=150)

        print("\n\n", i, "result")
        for item in data:
            print(item, network.predict(item))
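In other words, the test trains an 8-20-8 network to reproduce each one-hot input at its output (an identity, auto-encoder style task), repeated over ten independent runs, which makes it easy to see at a glance whether backpropagation is converging.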
NeuralNetWork.py:

# encoding: utf-8
# NeuralNetWork.py
import numpy as np


def logistic(in_x):
    return 1 / (1 + np.exp(-in_x))


def logistic_derivative(x):
    return logistic(x) * (1 - logistic(x))


class Neuron:
    """
    A single neuron unit. Each neuron has the following properties:
    1. input; 2. output; 3. back_weight; 4. deltas_item; 5. weights.
    Each neuron updates its own weights; several neurons form a layer,
    and their weights together form the layer's weight matrix.
    """
    def __init__(self, len_input):
        # initial weights: small random values (< 0.1)
        self.weights = np.random.random(len_input) * 0.1
        # input of the current instance
        self.input = np.ones(len_input)
        # output value passed to the next layer
        self.output = 1.0
        # error term
        self.deltas_item = 0.0
        # last weight increment, recorded so that momentum can be added
        self.last_weight_add = 0

    def calculate_output(self, x):
        # compute the output value
        self.input = x
        self.output = logistic(np.dot(self.weights, self.input))
        return self.output

    def get_back_weight(self):
        # value fed back to the previous layer
        return self.weights * self.deltas_item

    def update_weight(self, target=0, back_weight=0, learning_rate=0.1, layer="OUTPUT"):
        # update the weights
        if layer == "OUTPUT":
            self.deltas_item = (target - self.output) * logistic_derivative(self.input)
        elif layer == "HIDDEN":
            self.deltas_item = back_weight * logistic_derivative(self.input)
        # add momentum
        delta_weight = self.input * self.deltas_item * learning_rate + 0.9 * self.last_weight_add
        self.weights += delta_weight
        self.last_weight_add = delta_weight


class NetLayer:
    """
    Wraps one network layer and manages the layer's list of neurons.
    """
    def __init__(self, len_node, in_count):
        """
        :param len_node: number of neurons in the current layer
        :param in_count: number of inputs to the current layer
        """
        # list of neurons in the current layer
        self.neurons = [Neuron(in_count) for _ in range(len_node)]
        # reference to the next layer, which makes the recursion convenient
        self.next_layer = None

    def calculate_output(self, in_x):
        output = np.array([node.calculate_output(in_x) for node in self.neurons])
        if self.next_layer is not None:
            return self.next_layer.calculate_output(output)
        return output

    def get_back_weight(self):
        return sum([node.get_back_weight() for node in self.neurons])

    def update_weight(self, learning_rate, target):
        layer = "OUTPUT"
        back_weight = np.zeros(len(self.neurons))
        if self.next_layer is not None:
            back_weight = self.next_layer.update_weight(learning_rate, target)
            layer = "HIDDEN"
        for i, node in enumerate(self.neurons):
            target_item = 0 if len(target) <= i else target[i]
            node.update_weight(target=target_item, back_weight=back_weight[i],
                               learning_rate=learning_rate, layer=layer)
        return self.get_back_weight()


class NeuralNetWork:
    def __init__(self, layers):
        self.layers = []
        self.construct_network(layers)

    def construct_network(self, layers):
        last_layer = None
        for i, layer in enumerate(layers):
            if i == 0:
                continue
            cur_layer = NetLayer(layer, layers[i - 1])
            self.layers.append(cur_layer)
            if last_layer is not None:
                last_layer.next_layer = cur_layer
            last_layer = cur_layer

    def fit(self, x_train, y_train, learning_rate=0.1, epochs=100000, shuffle=False):
        """
        Train the network; by default the samples are visited in order.
        Method 1: train in the order of the training data.
        Method 2: pick samples at random (shuffle=True).
        :param x_train: input data
        :param y_train: output data
        :param learning_rate: learning rate
        :param epochs: number of weight-update passes
        :param shuffle: visit the training data in random order
        """
        indices = np.arange(len(x_train))
        for _ in range(epochs):
            if shuffle:
                np.random.shuffle(indices)
            for i in indices:
                self.layers[0].calculate_output(x_train[i])
                self.layers[0].update_weight(learning_rate, y_train[i])

    def predict(self, x):
        return self.layers[0].calculate_output(x)
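The fit method also accepts a shuffle flag (hence the "shuffle" tag). A minimal usage sketch, mine rather than the original author's, assuming both files above are on the import path, that trains the same one-hot task while visiting the samples in random order each epoch:

import numpy as np
import NeuralNetWork as nw

data = np.eye(8)  # the same 8 one-hot rows used in the test code
network = nw.NeuralNetWork([8, 20, 8])
# shuffle=True makes fit() visit the training samples in a random order each epoch
network.fit(data, data, learning_rate=0.1, epochs=150, shuffle=True)
print(network.predict(data[0]))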

 
