Caffe Learning Notes: prototxt File Definition and Reading

In Caffe, the model is defined in a .prototxt file, which describes the structure of each layer.

Define input:

input: "data"
input_shape {
  dim: 1
  dim: 3
  dim: 900
  dim: 900
}

That is, this defines an input blob named data with batch_size=1, num_channels=3, input_height=900 and input_width=900.
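
Once the network has been constructed in Python (shown later in these notes), these four dimensions can also be changed at runtime instead of editing the prototxt. A minimal sketch, assuming a loaded net and a hypothetical new input size of 600×600:

net.blobs['data'].reshape(1, 3, 600, 600)  # batch, channels, height, width
net.reshape()  # propagate the new shape through all layers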

Define a network layer, taking a convolution layer as an example:

layer {
  name: "conv1_1"
  type: "Convolution"
  bottom: "data"
  top: "conv1_1"
  convolution_param {
    num_output: 64
    kernel_size: 3
  }
}

The name of a layer can be chosen freely. type gives the kind of layer: a ReLU layer has type: "ReLU", a pooling layer has type: "Pooling" (see the sketch after the convolution example below), and this is a convolution layer, so type: "Convolution". bottom names the layer's input, which here is the input blob data, and top names the layer's output, here conv1_1. The *_param block defines the parameter settings for that layer, for example

convolution_param {
  num_output: 64
  kernel_size: 3
}

indicates that the convolution kernels of this layer are 3×3 and that the number of output channels is 64, i.e. the layer has 64 convolution kernels.
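
For comparison, the other layer types mentioned above are written in the same way. A minimal sketch of a ReLU layer and a max-pooling layer (the names relu1_1 and pool1 and the pooling settings are illustrative, not taken from a specific model):

layer {
  name: "relu1_1"
  type: "ReLU"
  bottom: "conv1_1"
  top: "conv1_1"
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1_1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}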

In Python, a Caffe model file is read through the caffe.Net() function:

import caffe
net = caffe.Net(model_path, caffe.TEST)  # load only the network definition

To load a model together with weights that have already been trained:

net = caffe.Net(model_path, pretrained_path, caffe.TEST)
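
caffe.TEST selects the test (deploy) phase, while caffe.TRAIN would instantiate the training-phase layers instead. It is also common to choose the compute mode before constructing the net; a minimal sketch, assuming caffe has been imported and the two paths are defined as above:

caffe.set_mode_cpu()    # run on the CPU
# caffe.set_device(0)   # or pick GPU 0 ...
# caffe.set_mode_gpu()  # ... and run on the GPU
net = caffe.Net(model_path, pretrained_path, caffe.TEST)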

In Caffe, the outputs of the network layers, as well as the input images, are stored as blobs. This data can be accessed through net.blobs; for example, to get the input blob:

net.blobs['data']
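
The data field of a blob is a writable NumPy array, so input can be written into it directly before running the network. A minimal sketch, assuming img is a preprocessed NumPy array of shape (3, 900, 900):

net.blobs['data'].data[0, ...] = img  # fill the single batch entry
output = net.forward()                # dict mapping output blob names to their data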

Each blob holds the data, its gradient (diff), its shape, and the total number of elements (count). For example, to get the shape of the input data:

input_dims = net.blobs['data'].shape
batch_size, num_channels, input_height, input_width = input_dims

This gives batch_size=1, num_channels=3, input_height=900 and input_width=900, which is exactly the input shape we defined in the model file.
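
The same attribute can be read for every blob in the network, which is a quick way to see how the data shape changes from layer to layer; a minimal sketch:

for name, blob in net.blobs.items():
    print(name, blob.data.shape)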

The learnable parameters of each layer, such as the convolution weights w and biases b, are accessed through net.params:

params = net.params

For example, to obtain the convolution weights w and biases b of the conv1_1 layer:

w1 = net.params['conv1_1'][0].data
b1 = net.params['conv1_1'][1].data


The convolution kernel size in conv1_1 is 3×3, the number of input channels is 3, and the number of output channels is 64, so the weight tensor w1 has shape 64×3×3×3 (and the bias vector b1 has 64 entries, one per output channel).
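
To inspect the parameter shapes of every layer in the same way, net.params can be iterated just like net.blobs; a minimal sketch:

for name, layer_params in net.params.items():
    print(name, [p.data.shape for p in layer_params])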

The complete code for the above experiment is as follows:

import sys
caffe_root = '/home/program/caffe'
sys.path.insert(0, caffe_root + '/python')  # make pycaffe importable
import caffe

model_path = 'models/dilation8_pascal_voc_deploy.prototxt'
pretrained_path = 'pretrained/dilation8_pascal_voc.caffemodel'
net = caffe.Net(model_path, pretrained_path, caffe.TEST)  # load definition + weights

blobs = net.blobs
input_dims = blobs['data'].shape
batch_size, num_channels, input_height, input_width = input_dims

params = net.params
w1 = net.params['ct_conv5_1'][0].data  # convolution weights of layer ct_conv5_1
b1 = net.params['ct_conv5_1'][1].data  # biases of layer ct_conv5_1

Here dilation8_pascal_voc_deploy.prototxt is the model definition file, and dilation8_pascal_voc.caffemodel is the corresponding trained model.
