The first program most programmers write is "Hello World"; in deep learning, the equivalent of "Hello World" is a handwritten digit recognition program.
In this post we walk through that handwritten digit recognition program in detail, in order to build up the basic concepts of deep learning.
1. Initialize the weight and bias matrices to construct the neural network architecture
import random
import numpy as np

class Network():
    def __init__(self, sizes):
        # sizes lists the number of neurons in each layer, e.g. [784, 30, 10]
        self.num_layers = len(sizes)
        self.sizes = sizes
        # One bias column vector for each non-input layer
        self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
        # One weight matrix per pair of adjacent layers, shape (next_layer, prev_layer)
        self.weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])]
When a Network is instantiated, its weight and bias matrices are initialized. For example,
network0 = Network([784, 30, 10])
creates a 3-layer neural network whose layers contain 784, 30, and 10 neurons respectively.
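As a quick check of the initialization above, the shapes of the resulting matrices follow directly from the layer sizes (a minimal sketch; the printed shapes are implied by the list comprehensions in __init__):

network0 = Network([784, 30, 10])
print([w.shape for w in network0.weights])  # [(30, 784), (10, 30)]
print([b.shape for b in network0.biases])   # [(30, 1), (10, 1)]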
2. How does backpropagation compute the gradient of the cost function?
This process can be roughly summarized as follows (the equations it implements are sketched after this list):
(1) Forward propagation, obtaining each neuron's weighted input z and activation a
(2) Compute the error of the output layer
(3) Backpropagate to compute the error and gradient of each layer
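For reference, these are the standard backpropagation equations that the code below implements, assuming the quadratic cost and sigmoid activation used in the code:

$$\delta^L = (a^L - y) \odot \sigma'(z^L)$$
$$\delta^l = \left((w^{l+1})^T \delta^{l+1}\right) \odot \sigma'(z^l)$$
$$\frac{\partial C}{\partial b^l} = \delta^l, \qquad \frac{\partial C}{\partial w^l} = \delta^l \,(a^{l-1})^T$$

Here $\odot$ denotes element-wise multiplication, $L$ is the output layer, and $\sigma$ is the sigmoid function.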
The code implemented in Python is as follows:
def backprop(self, x, y):
    delta_w = [np.zeros(w.shape) for w in self.weights]
    delta_b = [np.zeros(b.shape) for b in self.biases]
    # Forward pass: compute every neuron's weighted input z and activation
    zs = []
    activation = x
    activations = [x]
    for b, w in zip(self.biases, self.weights):
        z = np.dot(w, activation) + b
        zs.append(z)
        activation = sigmoid(z)
        activations.append(activation)
    # Output-layer error (the quadratic cost function is used here)
    delta = (activations[-1] - y) * sigmoid_prime(zs[-1])
    delta_w[-1] = np.dot(delta, activations[-2].transpose())
    delta_b[-1] = delta
    # Backpropagate the error through the remaining layers
    for l in range(2, self.num_layers):
        delta = np.dot(self.weights[-l+1].transpose(), delta) * sigmoid_prime(zs[-l])
        delta_w[-l] = np.dot(delta, activations[-l-1].transpose())
        delta_b[-l] = delta
    return delta_w, delta_b
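The code above calls sigmoid and sigmoid_prime, which are not shown in the original listing; a minimal sketch of these helpers (assuming the standard logistic sigmoid) would be:

def sigmoid(z):
    # Logistic sigmoid, applied element-wise
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    # Derivative of the sigmoid
    return sigmoid(z) * (1.0 - sigmoid(z))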
3. How does gradient descent update the weights and biases?
Backpropagation gives the gradient of the cost with respect to each weight and bias; gradient descent then uses these gradients to update the weights and biases, stepping in the direction that decreases the cost.
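Concretely, for a mini-batch of $m$ samples and learning rate $\eta$, the update implemented below is the standard mini-batch gradient descent rule:

$$w \rightarrow w - \frac{\eta}{m}\sum_{x}\frac{\partial C_x}{\partial w}, \qquad b \rightarrow b - \frac{\eta}{m}\sum_{x}\frac{\partial C_x}{\partial b}$$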
def update_mini_batch(self, mini_batch, eta):
    delta_w = [np.zeros(w.shape) for w in self.weights]
    delta_b = [np.zeros(b.shape) for b in self.biases]
    for x, y in mini_batch:
        # Apply backpropagation to every sample in the mini-batch and
        # accumulate the resulting weight and bias gradients
        delta_w_p, delta_b_p = self.backprop(x, y)
        delta_w = [dt_w + dt_w_p for dt_w, dt_w_p in zip(delta_w, delta_w_p)]
        delta_b = [dt_b + dt_b_p for dt_b, dt_b_p in zip(delta_b, delta_b_p)]
    # Gradient descent step: move against the averaged gradient
    self.weights = [w - (eta / len(mini_batch)) * nw for w, nw in zip(self.weights, delta_w)]
    self.biases = [b - (eta / len(mini_batch)) * nb for b, nb in zip(self.biases, delta_b)]
def SGD(self, epochs, training_data, mini_batch_size, eta, test_data=None):
    if test_data:
        n_tests = len(test_data)
    n_training_data = len(training_data)
    for i in range(epochs):
        # Shuffle the training data and split it into mini-batches each epoch
        random.shuffle(training_data)
        mini_batches = [training_data[k:k + mini_batch_size]
                        for k in range(0, n_training_data, mini_batch_size)]
        for mini_batch in mini_batches:
            self.update_mini_batch(mini_batch, eta)
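Putting it together, a hypothetical usage sketch might look as follows; training_data and test_data are assumed to be lists of (x, y) pairs, where x is a (784, 1) input vector for a 28x28 image and y is a (10, 1) one-hot label vector (these variable names are illustrative, not part of the original program):

net = Network([784, 30, 10])
# 30 epochs, mini-batches of 10 samples, learning rate 3.0
net.SGD(30, training_data, 10, 3.0, test_data=test_data)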