Caffe MNIST example -- lenet_train_test.prototxt network configuration in detail


MNIST example

1. Download the data

Run the ./data/mnist/get_mnist.sh script from the Caffe root directory. The script downloads the MNIST sample set and unzips it into four files: train-images-idx3-ubyte, train-labels-idx1-ubyte, t10k-images-idx3-ubyte and t10k-labels-idx1-ubyte.

2. Generate the LMDB files

After the sample set has been extracted, run ./examples/mnist/create_mnist.sh. The script uses the convert_mnist_data.bin tool under caffe-master/build/examples/mnist/ to convert the MNIST data into Caffe's LMDB format, and places the resulting mnist_train_lmdb and mnist_test_lmdb directories under caffe-master/examples/mnist/.
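To confirm that the conversion worked, you can read one record back from the LMDB in Python. This is a minimal sketch that is not part of the original tutorial; it assumes the lmdb Python package and pycaffe are installed and that it is run from the Caffe root directory.

import lmdb
import caffe
from caffe.proto import caffe_pb2

# Open the training LMDB produced by create_mnist.sh (read-only).
env = lmdb.open('examples/mnist/mnist_train_lmdb', readonly=True)
with env.begin() as txn:
    # The first value is a serialized caffe Datum: image bytes plus a label.
    key, value = next(txn.cursor().iternext())
    datum = caffe_pb2.Datum()
    datum.ParseFromString(value)
    # Convert the Datum to a (channels, height, width) numpy array.
    image = caffe.io.datum_to_array(datum)
    print('shape = %s, label = %d' % (image.shape, datum.label))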

3. Network Configuration

The LeNet network is defined in the ./examples/mnist/lenet_train_test.prototxt file.

name: "LeNet"
layer {
  name: "mnist"          # name of this data layer
  type: "Data"           # the input layer type is Data
  top: "data"            # this layer produces two blobs: data and label
  top: "label"
  include { phase: TRAIN }   # used only in the training phase
  transform_param {
    scale: 0.00390625    # scale the input pixels to [0, 1]: 1/256 = 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_train_lmdb"   # read data from mnist_train_lmdb
    batch_size: 64       # batch size 64: 64 samples per training iteration
    backend: LMDB
  }
}
layer {
  name: "mnist"          # the same data layer for the test phase
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TEST }    # used only in the test phase
  transform_param {
    scale: 0.00390625    # scale the input pixels to [0, 1]
  }
  data_param {
    source: "examples/mnist/mnist_test_lmdb"    # read data from mnist_test_lmdb
    batch_size: 100      # batch size 100: 100 samples per test iteration
    backend: LMDB
  }
}
layer {
  name: "conv1"          # first convolution layer
  type: "Convolution"
  bottom: "data"         # takes the data blob and produces the conv1 blob
  top: "conv1"
  param { lr_mult: 1 }   # learning-rate multiplier for the weights W
  param { lr_mult: 2 }   # learning-rate multiplier for the bias b
  convolution_param {
    num_output: 20       # 20 output feature maps
    kernel_size: 5       # 5x5 convolution kernel
    stride: 1            # stride 1
    weight_filler {
      type: "xavier"     # Xavier initialization: scale chosen automatically from the number of input/output neurons
    }
    bias_filler {
      type: "constant"   # bias initialized to a constant (0 by default)
    }
  }
}
layer {
  name: "pool1"          # first pooling layer
  type: "Pooling"
  bottom: "conv1"        # takes conv1 and produces pool1
  top: "pool1"
  pooling_param {
    pool: MAX            # max pooling
    kernel_size: 2       # 2x2 pooling kernel
    stride: 2            # stride 2
  }
}
layer {
  name: "conv2"          # second convolution layer: same as conv1 but with 50 output feature maps
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
layer {
  name: "pool2"          # second pooling layer, identical to pool1
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"            # first fully connected layer
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  inner_product_param {
    num_output: 500      # 500 output nodes
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
layer {
  name: "relu1"          # ReLU layer, applied in place on ip1
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"            # second fully connected layer
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  inner_product_param {
    num_output: 10       # 10 output units, one per digit class
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
layer {
  name: "accuracy"       # reports classification accuracy, test phase only
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include { phase: TEST }
}
layer {
  name: "loss"           # the SoftmaxWithLoss layer combines softmax and multinomial logistic loss
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
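As a quick sanity check of the configuration above, pycaffe can load the definition and print the shape of every blob, which shows how the 28x28 input shrinks through the convolution and pooling layers. This is a small sketch that is not part of the original tutorial; it assumes pycaffe is importable and the Caffe root is the working directory.

import caffe

# Load the network definition only (no trained weights) in TEST phase.
caffe.set_mode_cpu()
net = caffe.Net('examples/mnist/lenet_train_test.prototxt', caffe.TEST)

# Print each blob name with its (batch, channels, height, width) shape.
for name, blob in net.blobs.items():
    print('%-10s %s' % (name, blob.data.shape))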
4. Training Network

Run ./examples/mnist/train_lenet.sh. The script simply invokes the caffe tool with the solver defined in lenet_solver.prototxt, which is what actually drives the training; its contents are shown below.

# The train/test net protocol buffer definition
net: "examples/mnist/lenet_train_test.prototxt"
# test_iter specifies how many forward passes the test carries out.
# With a test batch_size of 100 and 100 test iterations, the full
# 10,000 test images are covered.
test_iter: 100
# Carry out testing every 500 training iterations.
test_interval: 500
# The base learning rate, momentum and weight decay of the network.
base_lr: 0.01
momentum: 0.9
weight_decay: 0.0005
# The learning rate policy: "inv" decreases the learning rate over time
# (as opposed to a fixed or step-wise schedule).
lr_policy: "inv"
gamma: 0.0001
power: 0.75
# Display the loss every 100 iterations.
display: 100
# The maximum number of training iterations.
max_iter: 10000
# Snapshot intermediate results every 5000 iterations.
snapshot: 5000
snapshot_prefix: "examples/mnist/lenet"
# Solver mode: CPU or GPU; this example trains on the CPU.
solver_mode: CPU
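Training can also be driven from Python through the solver interface instead of train_lenet.sh. The following is a minimal sketch under the same assumptions as the earlier snippet (pycaffe importable, Caffe root as working directory); it is not part of the original tutorial.

import caffe

# Build the SGD solver from the solver definition shown above.
caffe.set_mode_cpu()
solver = caffe.SGDSolver('examples/mnist/lenet_solver.prototxt')

# Either run everything up to max_iter in one call with solver.solve(),
# or step through a few iterations and watch the training loss blob.
for _ in range(5):
    solver.step(100)
    print('loss = %f' % solver.net.blobs['loss'].data)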

After training finishes, four files are generated under examples/mnist/: lenet_iter_5000.caffemodel and lenet_iter_10000.caffemodel (the snapshotted weights) together with lenet_iter_5000.solverstate and lenet_iter_10000.solverstate (the corresponding solver states).

5. Test the network

Run:

./build/tools/caffe.bin test -model=examples/mnist/lenet_train_test.prototxt -weights=examples/mnist/lenet_iter_10000.caffemodel

test: tests a trained model instead of training one. The other sub-commands of the caffe tool are train, time and device_query.

-model=xxx: specifies the model prototxt file, the text file that describes the network structure and the data sources.

-weights=xxx: specifies the trained weights file (the .caffemodel produced in step 4).

The test output reports an average accuracy of about 98% on the test data.
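The same figure can be reproduced from Python by averaging the accuracy blob over 100 test batches (100 batches of 100 images covers the full 10,000-image test set). This is a sketch under the same assumptions as the earlier snippets, not part of the original tutorial.

import caffe

# Load the train/test definition together with the trained weights.
caffe.set_mode_cpu()
net = caffe.Net('examples/mnist/lenet_train_test.prototxt',
                'examples/mnist/lenet_iter_10000.caffemodel',
                caffe.TEST)

# Each forward pass consumes one test batch and reports its accuracy.
accuracy = 0.0
for _ in range(100):
    accuracy += net.forward()['accuracy']
print('average accuracy: %.4f' % (accuracy / 100))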

To test your own handwritten digit images against the trained MNIST model, each picture must meet the following conditions (a preprocessing sketch follows the list):
    • It must be an 8-bit (256-level) grayscale image.
    • The digit must be white on a black background.
    • The image must be 28*28 pixels.
    • The digit should be centered, with not too much blank space above or below.
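The snippet below is one way to bring an ordinary photo of a digit into that format. It is a hypothetical helper that is not part of the original tutorial; it assumes Pillow (PIL) and NumPy are installed, and the file names are placeholders.

import numpy as np
from PIL import Image

def prepare_digit(in_path, out_path):
    # Convert to 8-bit grayscale and resize to 28x28 pixels.
    img = Image.open(in_path).convert('L').resize((28, 28))
    pixels = np.array(img)
    # Handwriting is usually dark ink on light paper; invert in that case
    # so the digit ends up white on a black background, as MNIST expects.
    if pixels.mean() > 127:
        pixels = 255 - pixels
    Image.fromarray(pixels.astype(np.uint8)).save(out_path)

# Placeholder file names; test.bmp is what the recognition script below loads.
prepare_digit('my_digit.jpg', 'test.bmp')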
Test picture

Handwritten digit recognition script
import os
import sys
import numpy as np
import matplotlib.pyplot as plt

# Make pycaffe importable; adjust caffe_root to your own installation.
caffe_root = '/home/lynn/caffe/'
sys.path.insert(0, caffe_root + 'python')
import caffe

MODEL_FILE = '/home/lynn/caffe/examples/mnist/lenet.prototxt'
PRETRAINED = '/home/lynn/caffe/examples/mnist/lenet_iter_10000.caffemodel'
IMAGE_FILE = '/home/lynn/test.bmp'

caffe.set_mode_cpu()

# Load the test image as a single-channel (grayscale) array.
input_image = caffe.io.load_image(IMAGE_FILE, color=False)
# print(input_image)

# Build a classifier from the deploy prototxt and the trained weights,
# then predict on the single image without oversampling (no crops/mirrors).
net = caffe.Classifier(MODEL_FILE, PRETRAINED)
prediction = net.predict([input_image], oversample=False)
print('predicted class: %d' % prediction[0].argmax())
Test results


