I had originally planned to start translating the calculation part, but just as the last article was finished, MXNet updated its tutorial documents (ouch), including a much more detailed version of the earlier handwritten digit recognition example. So this article strikes while the iron is hot and translates the freshly updated tutorial. Since images cannot currently be uploaded to this blog, the relevant figures can be viewed at the original site: Handwritten Digit Recognition.
This tutorial guides you through a sample computer vision classification application: using artificial neural networks to recognize handwritten digits.
We first need to get the MNIST data, a commonly used dataset for handwritten digit recognition. Each image in this dataset has been scaled to 28*28 pixels, with grayscale values between 0 and 255. The following code downloads the dataset and loads the images and their corresponding labels into NumPy arrays.
image = np.fromstring(fimg.read(), dtype=np.uint8).reshape(len(label), rows, cols)
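For context, the line above sits inside the tutorial's loader for the MNIST IDX files. Below is a minimal sketch of the full download-and-parse routine it belongs to; the download_data/read_data helper names and the Python 2-style urllib.urlretrieve follow the era of the original tutorial, so treat the details as illustrative rather than canonical:

import os
import gzip
import struct
import urllib
import numpy as np

def download_data(url, force_download=False):
    # Download the file once and cache it locally under its own name
    fname = url.split("/")[-1]
    if force_download or not os.path.exists(fname):
        urllib.urlretrieve(url, fname)
    return fname

def read_data(label_url, image_url):
    # Labels: an 8-byte header (magic, count), then one byte per example
    with gzip.open(download_data(label_url)) as flbl:
        magic, num = struct.unpack(">II", flbl.read(8))
        label = np.fromstring(flbl.read(), dtype=np.int8)
    # Images: a 16-byte header (magic, count, rows, cols), then raw pixels
    with gzip.open(download_data(image_url), 'rb') as fimg:
        magic, num, rows, cols = struct.unpack(">IIII", fimg.read(16))
        image = np.fromstring(fimg.read(), dtype=np.uint8).reshape(len(label), rows, cols)
    return (label, image)

path = 'http://yann.lecun.com/exdb/mnist/'
(train_lbl, train_img) = read_data(path+'train-labels-idx1-ubyte.gz', path+'train-images-idx3-ubyte.gz')
(val_lbl, val_img) = read_data(path+'t10k-labels-idx1-ubyte.gz', path+'t10k-images-idx3-ubyte.gz')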
import mxnet as mx

def to4d(img):
    # Reshape to 4-D (batch, channel, height, width) and normalize pixels to [0, 1]
    return img.reshape(img.shape[0], 1, 28, 28).astype(np.float32)/255

batch_size = 100
train_iter = mx.io.NDArrayIter(to4d(train_img), train_lbl, batch_size, shuffle=True)
val_iter = mx.io.NDArrayIter(to4d(val_img), val_lbl, batch_size)
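As a quick sanity check (not part of the original tutorial, and assuming the DataBatch interface of the 0.x API), you can pull one batch from the iterator and inspect its shapes:

batch = train_iter.next()      # a DataBatch holding lists of data and label NDArrays
print(batch.data[0].shape)     # (100, 1, 28, 28): one batch of 4-D image data
print(batch.label[0].shape)    # (100,): one label per image
train_iter.reset()             # rewind the iterator before training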
Multilayer Perceptron
A multilayer perceptron contains several fully connected layers. A fully connected layer, with an n*m input matrix X and an n*k output matrix Y, where k is often called the hidden size, has two parameters: the m*k weight matrix W and the 1*k bias vector b (broadcast across the n rows). The output is computed by:

Y = XW + b
The output of a fully connected layer is usually fed into an activation layer, which applies a function element-wise. One of the most famous such functions is the sigmoid: f(x) = 1/(1+e^(-x)). Nowadays people more often use a simpler function called ReLU: f(x) = max(0, x).
The last fully connected layer usually has its hidden size equal to the number of classes in the dataset. Finally we stack a softmax layer on top, which maps its input into probability scores. Again assume the input X has size n*m and x_ij denotes its element in row i, column j; then element j of the i-th output row is:

y_ij = exp(x_ij) / (exp(x_i1) + exp(x_i2) + ... + exp(x_im))
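To make the algebra concrete, here is a small NumPy-only sketch (illustrative, not MXNet code) of one fully connected layer followed by ReLU and a row-wise softmax, using the n*m input and m*k weight shapes from above:

import numpy as np

n, m, k = 4, 784, 10              # batch size, input width, hidden size
X = np.random.rand(n, m)          # input matrix, one example per row
W = np.random.rand(m, k) * 0.01   # weight matrix
b = np.zeros(k)                   # bias vector, broadcast over the n rows

Y = X.dot(W) + b                  # fully connected layer: Y = XW + b
A = np.maximum(0, Y)              # ReLU: f(x) = max(0, x), element-wise

# Row-wise softmax: exponentiate, then normalize each row to sum to 1
E = np.exp(A - A.max(axis=1, keepdims=True))   # subtract the row max for numerical stability
P = E / E.sum(axis=1, keepdims=True)
print(P.sum(axis=1))              # each row now sums to 1.0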
Defining this multilayer perceptron in MXNet is straightforward, as shown below.
# Create a place holder variable for the input data
data = mx.sym.Variable('data')
# Flatten the data from 4-D shape (batch_size, num_channel, width, height)
# into 2-D (batch_size, num_channel*width*height)
data = mx.sym.Flatten(data=data)
# The first fully-connected layer
fc1 = mx.sym.FullyConnected(data=data, name='fc1', num_hidden=128)
# Apply relu to the output of the first fully-connected layer
act1 = mx.sym.Activation(data=fc1, name='relu1', act_type="relu")
# The second fully-connected layer and the according activation function
fc2 = mx.sym.FullyConnected(data=act1, name='fc2', num_hidden=64)
act2 = mx.sym.Activation(data=fc2, name='relu2', act_type="relu")
# The third fully-connected layer, note that the hidden size should be the number of unique classes
fc3 = mx.sym.FullyConnected(data=act2, name='fc3', num_hidden=10)
# The softmax and loss layer
mlp = mx.sym.SoftmaxOutput(data=fc3, name='softmax')
# We visualize the network structure with output size (the batch_size is ignored)
shape = {"data": (batch_size, 1, 28, 28)}
mx.viz.plot_network(symbol=mlp, shape=shape)
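Besides plotting, the symbol itself can report the shapes it infers for every layer. A quick check along these lines (assuming shape inference succeeds with only the data shape given) confirms that the final output is (batch_size, 10):

arg_shapes, out_shapes, aux_shapes = mlp.infer_shape(data=(batch_size, 1, 28, 28))
print(mlp.list_arguments())   # names of all learnable parameters, plus 'data' and the label
print(out_shapes)             # [(100, 10)]: one probability per class for each image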
Now both the network definition and the data iterators are ready, and we can start training:
import logging
logging.getLogger().setLevel(logging.DEBUG)

model = mx.model.FeedForward(
    symbol = mlp  # network structure
)
model.fit(
    X=train_iter,        # training data
    eval_data=val_iter,  # validation data
    batch_end_callback = mx.callback.Speedometer(batch_size, 200)  # output progress for every 200 data batches
)
INFO:root:Start training with [cpu(0)]
INFO:root:Epoch[0] Batch [200]  Speed: 26279.17 samples/sec  Train-accuracy=0.111550
INFO:root:Epoch[0] Batch [400]  Speed: 27424.98 samples/sec  Train-accuracy=0.111000
INFO:root:Epoch[0] Batch [600]  Speed: 27094.87 samples/sec  Train-accuracy=0.133200
INFO:root:Epoch[0] Resetting Data Iterator
INFO:root:Epoch[0] Time cost=2.320
INFO:root:Epoch[0] Validation-accuracy=0.276800
INFO:root:Epoch[1] Batch [200]  Speed: 17739.48 samples/sec  Train-accuracy=0.412650
INFO:root:Epoch[1] Batch [400]  Speed: 18869.69 samples/sec  Train-accuracy=0.753500
INFO:root:Epoch[1] Batch [600]  Speed: 25618.04 samples/sec  Train-accuracy=0.828750
INFO:root:Epoch[1] Resetting Data Iterator
INFO:root:Epoch[1] Time cost=2.988
INFO:root:Epoch[1] Validation-accuracy=0.854400
INFO:root:Epoch[2] Batch [200]  Speed: 21532.09 samples/sec  Train-accuracy=0.859750
INFO:root:Epoch[2] Batch [400]  Speed: 27919.08 samples/sec  Train-accuracy=0.888700
INFO:root:Epoch[2] Batch [600]  Speed: 26810.95 samples/sec  Train-accuracy=0.905550
INFO:root:Epoch[2] Resetting Data Iterator
INFO:root:Epoch[2] Time cost=2.408
INFO:root:Epoch[2] Validation-accuracy=0.916300
INFO:root:Epoch[3] Batch [200]  Speed: 28097.98 samples/sec  Train-accuracy=0.917300
INFO:root:Epoch[3] Batch [400]  Speed: 27490.20 samples/sec  Train-accuracy=0.925850
INFO:root:Epoch[3] Batch [600]  Speed: 27937.45 samples/sec  Train-accuracy=0.934900
INFO:root:Epoch[3] Resetting Data Iterator
INFO:root:Epoch[3] Time cost=2.167
INFO:root:Epoch[3] Validation-accuracy=0.938400
INFO:root:Epoch[4] Batch [200]  Speed: 26948.04 samples/sec  Train-accuracy=0.942450
INFO:root:Epoch[4] Batch [400]  Speed: 24250.66 samples/sec  Train-accuracy=0.943200
INFO:root:Epoch[4] Batch [600]  Speed: 22772.67 samples/sec  Train-accuracy=0.951550
INFO:root:Epoch[4] Resetting Data Iterator
INFO:root:Epoch[4] Time cost=2.456
INFO:root:Epoch[4] Validation-accuracy=0.951500
INFO:root:Epoch[5] Batch [200]  Speed: 27313.59 samples/sec  Train-accuracy=0.955500
INFO:root:Epoch[5] Batch [400]  Speed: 28061.48 samples/sec  Train-accuracy=0.955100
INFO:root:Epoch[5] Batch [600]  Speed: 26730.32 samples/sec  Train-accuracy=0.960500
INFO:root:Epoch[5] Resetting Data Iterator
INFO:root:Epoch[5] Time cost=2.206
INFO:root:Epoch[5] Validation-accuracy=0.956300
INFO:root:Epoch[6] Batch [200]  Speed: 28440.23 samples/sec  Train-accuracy=0.962700
INFO:root:Epoch[6] Batch [400]  Speed: 28832.82 samples/sec  Train-accuracy=0.962700
INFO:root:Epoch[6] Batch [600]  Speed: 27814.78 samples/sec  Train-accuracy=0.967150
INFO:root:Epoch[6] Resetting Data Iterator
INFO:root:Epoch[6] Time cost=2.131
INFO:root:Epoch[6] Validation-accuracy=0.960300
INFO:root:Epoch[7] Batch [200]  Speed: 20942.23 samples/sec  Train-accuracy=0.967550
INFO:root:Epoch[7] Batch [400]  Speed: 22264.85 samples/sec  Train-accuracy=0.967750
INFO:root:Epoch[7] Batch [600]  Speed: 21294.69 samples/sec  Train-accuracy=0.971500
INFO:root:Epoch[7] Resetting Data Iterator
INFO:root:Epoch[7] Time cost=2.805
INFO:root:Epoch[7] Validation-accuracy=0.961400
INFO:root:Epoch[8] Batch [200]  Speed: 17870.55 samples/sec  Train-accuracy=0.972550
INFO:root:Epoch[8] Batch [400]  Speed: 11526.75 samples/sec  Train-accuracy=0.971600
INFO:root:Epoch[8] Batch [600]
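Once training completes, the fitted model can be scored on the validation set and used for prediction. A brief sketch with the same FeedForward API (the calls mirror how the original tutorial continues, but treat this as illustrative):

# Accuracy on the whole validation set
print('Validation accuracy: %f' % model.score(val_iter))

# Predict class probabilities for a single image, then take the most likely digit
prob = model.predict(to4d(val_img[0:1]))   # shape (1, 10), one probability per class
print('Predicted digit: %d' % prob.argmax())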