The prototxt of the LeNet network structure


LeNet is the first example used for learning Caffe; it comes from the Caffe official website: http://caffe.berkeleyvision.org/gathered/examples/mnist.html

The interface part is written in Python, so just running the example does not show the C++ code.

1. Following the path, we first look at the top-level script

cd $CAFFE_ROOT
./examples/mnist/train_lenet.sh

2. Opening it, we see just two lines

#!/usr/bin/env sh
./build/tools/caffe train --solver=examples/mnist/lenet_solver.prototxt

(The --solver argument points to the dependent configuration file.)

Here I call lenet_solver.prototxt the dependency configuration file; the key file is this solver.prototxt.
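Since the interface is Python, the same training run can also be launched from pycaffe instead of the shell script. A minimal sketch, assuming pycaffe is built and importable and that the working directory is $CAFFE_ROOT:

import caffe

caffe.set_mode_gpu()   # matches solver_mode: GPU in the solver file; use caffe.set_mode_cpu() otherwise
solver = caffe.SGDSolver('examples/mnist/lenet_solver.prototxt')
solver.solve()         # runs the whole training loop up to max_iter, snapshotting along the way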

3. Then open this dependency configuration file (lenet_solver.prototxt)

# The train/test net protocol buffer definition
net: "examples/mnist/lenet_train_test.prototxt"   # location of the network configuration file
# test_iter specifies how many forward passes the test should carry out.
# In the case of MNIST, we have test batch size 100 and 100 test iterations,
# covering the full 10,000 testing images.
test_iter: 100          # 100 batches of test samples take part in each forward test pass
# Carry out testing every 500 training iterations.
test_interval: 500      # test once every 500 training iterations
# The base learning rate, momentum and the weight decay of the network.
base_lr: 0.01           # base learning rate
momentum: 0.9           # momentum
weight_decay: 0.0005    # weight decay
# The learning rate policy
lr_policy: "inv"        # inv: returns base_lr * (1 + gamma * iter) ^ (-power)
gamma: 0.0001
power: 0.75
# Display every 100 iterations
display: 100            # print results every 100 iterations
# The maximum number of iterations
max_iter: 10000         # maximum number of iterations
# Snapshot intermediate results
snapshot: 5000          # every 5,000 iterations save a temporary model, e.g. lenet_iter_5000.caffemodel
snapshot_prefix: "examples/mnist/lenet"
# Solver mode: CPU or GPU
solver_mode: GPU        # CPU/GPU switch
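As a sanity check on these numbers: test_iter * test batch size = 100 * 100 = 10,000, exactly the size of the MNIST test set, and the "inv" policy decays the learning rate as base_lr * (1 + gamma * iter) ^ (-power). A small Python sketch of that arithmetic (plain math, no Caffe required):

base_lr, gamma, power = 0.01, 0.0001, 0.75

def inv_lr(iteration):
    # learning rate under the "inv" policy at a given iteration
    return base_lr * (1 + gamma * iteration) ** (-power)

for it in (0, 5000, 10000):
    print(it, round(inv_lr(it), 6))
# prints roughly 0.01, 0.007377 and 0.005946: the rate shrinks slowly over the 10,000 iterations

print(100 * 100)   # test_iter * test batch_size = 10,000 test images covered per test phase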

It points to lenet_train_test.prototxt (which I call the network configuration file; it stores the network structure).

4. Next we open this network configuration file to see the network structure

Name: "LeNet" Network name layer {name: "mnist" This layer name type: "Data" layer Type top: "Data" next layer Interface Top: "label" next layer interface include {Phase:trai N} transform_param {scale:0.00390625#1/256, preprocessing such as minus mean, dimension transform, random cut, mirror etc} data_param {Source: "Examples/mnist/mnist_ Train_lmdb "Training Data Location BATCH_SIZE:64 Training sample number Backend:lmdb read in training data format, default Leveldb}}layer {name:" mnist "type:" Data "t OP: "Data" Top: "label" include {phase:test} transform_param {scale:0.00390625} data_param {Source: "Examples/mnist/mnist_test_lmdb" batch_size:100 a test using 100 data backend:lmdb}}layer {name: "CONV1" type: "Convolu tion "convolution layer bottom:" Data "on the first layer of" data "Top:" CONV1 "next Layer Interface" Conv1 "param {lr_mult:1 (Weights learning rate is the same as the global)} param {Lr_mu Lt:2 (biases's learning rate is twice times global)} convolution_param {num_output:20 convolution core 20 kernel_size:5 convolution core size 5x5 stride:1 Step 1 weigh T_filler {type: "Xavier" (random initialization weights and deviations)} Bias_filler {type: "constant" bias with 0 initialization}}}layer {name: "P Ool1 "type:" Pooling "poolingLayer bottom: "conv1" Upper "conv1" Top: "pool1" Downlevel Interface "Pool1" Pooling_param {pool:max pooling function with max Kernel_size:2 pooled kernel function size 2x2 Stride:2 Step 2}}layer {name: "Conv2" type: "Convolution" bottom: "pool1" Top: "Conv2" param {LR_MULT:1} para m {Lr_mult:2} convolution_param {num_output:50 convolution core 50 x kernel_size:5 stride:1 Weight_filler {t   ype: "Xavier"} bias_filler {type: "constant"}}}layer {name: "pool2" type: "Pooling" bottom: "Conv2" Top: "Pool2" Pooling_param {pool:max kernel_size:2 stride:2}}layer {name: "ip1" type: "Innerproduct" full Connection layer Bottom: "pool2" Upper Connection "pool2" Top: "ip1" "Downlevel output Interface ip1" param {lr_mult:1} param {Lr_mult:2} inner_product_  param {num_output:500 Output qty Weight_filler {type: ' Xavier '} bias_filler {type: ' constant '} }}layer {name: "RELU1" type: "ReLU" activation function bottom: "ip1" Top: "Ip1" (this place is still ip1, the bottom is the same as the top floor to reduce expenses, the next layer of the full connection layer input is also ip1)}layer {NA Me: "IP2" type: "InnerpRoduct "bottom:" ip1 "Top:" ip2 "param {lr_mult:1} param {Lr_mult:2} inner_product_param {NUM_OUTPU T:10 output of 10 Weight_filler {type: "Xavier"} bias_filler {type: "constant"}}}layer {name: "AC    Curacy "type:" Accuracy "bottom:" ip2 "upper connection ip2 Full connection layer bottom:" label "upper connection label Layer top:" Accuracy "output interface for accuracy include { Phase:test}}layer {name: "Loss" type: "Softmaxwithloss" loss function bottom: "ip2" bottom: "Label" Top: "Loss"}

Actually I am being lazy here: the network file I originally studied is not this one but another, so below I paste that network file with my detailed notes.

Name: "LeNet" (the name of the network) layer {name: "Data" type: "Input" (layer type, input) Top: "Data" (the layer that imports it is not bottom, because it is the first layer) Input_param {shape : {dim:64 dim:1 dim:28 dim:28}} (64 images in a batch, 28*28 size)} Read this batch of data dimensions: 1 28layer {name: "CONV1" type: "Convolution" (convolution class Bottom: "Data" (the previous layer is called data) Top: "Conv1" (the next layer is called conv1) param {lr_mult:1 (Weights learning rate is the same as the global)} param {Lr_mult : 2 (biases's learning rate is twice times the global)} convolution_param {(convolution operation parameter setting) num_output:20 (convolution output Quantity 20, composed of 20 feature maps feature map) kernel_size:5 (The size of the convolution kernel is 5*5) stride:1 (convolution operation Step) Weight_filler {type: "Xavier" (random initialization weights and deviations)} Bias_filler {type: "Co Nstant "(bias using 0 initialization)}} (after convolution, the data becomes (28-5+1) * (28-5+1), 20 features)} This batch of data dimensions after the convolution: 24layer {name:" pool1 "type:" Pooli Ng "(lower sample type layer) Bottom:" Conv1 "Top:" Pool1 "Pooling_param {pool:max (bottom sampling, max) kernel_size:2 (under sample kernel function size) stri    De:2 (STEP)}} This batch of data dimensions after sampling: 12layer {name: "Conv2" type: "Convolution" bottom: "pool1" Top: "Conv2" param { Lr_mult:1} param {   Lr_mult:2} convolution_param {num_output:50 (50 convolution cores) Kernel_size:5 stride:1 weight_filler {typ E: "Xavier"} bias_filler {type: "Constant"}}} After convolution this batch of data dimensions: 8 8layer {name: "pool2" type: "Pooling "Bottom:" conv2 "Top:" Pool2 "Pooling_param {Pool:max kernel_size:2 Stride:2}} After sampling this batch of data dimensions: 4 4laye  R {Name: "ip1" type: "Innerproduct" (full connection type layer) Bottom: "Pool2" Top: "ip1" param {lr_mult:1} param {lr_mult: 2} inner_product_param {(Full connection layer parameter setting) num_output:500 (output is $) Weight_filler {type: "Xavier"} Bias_fill ER {type: "Constant"}} (4*4 data is obtained 1*1 data through 4*4 convolution)} after the full join layer this data dimension: 1 1layer {name: "RELU1" type: "ReLU" (excitation Live function type layer) Bottom: "ip1" Top: "Ip1" (this place or ip1, the bottom layer and the top level of the same reduce expenses, the next layer of the full connection layer input is also ip1)} through the Relu layer after the data dimension: 64 500 1 1 (do not change) layer {name: "IP2" type: "Innerproduct" bottom: "ip1" Top: "ip2" param {lr_mult:1} param {Lr_mult:2} inner_product_ param {num_output:10 (direct output, 0-9, 10 digits) Weight_filler {type: "Xavier"} bias_filler {type: "Constant"}} (Classification of data The judgment is completed in this layer)} after the full join layer this data dimension: 1 1layer {name: "Prob" type: "Softmax" (loss function) Bottom: "Ip2" Top: "Prob" (First data input is date , write a label here)}

Note that the activation layer's input and output are the same blob (computed in place) to conserve resources.

That is all for now, since my focus is on images: next I will look at how to convert JPGs into LMDB, and then write my own network to run the code for my paper. If anything above is incorrect, please correct me, thanks.
