Caffe in Practice: How to Write Your Own Data Layer (Taking the Deep Spatial Net as an Example)

I. Preface

To write your own layer, you first have to define the parameters of that layer in caffe.proto so that they can later be configured in the prototxt file, and you have to declare in caffe.proto that your layer's parameter message is an optional field of LayerParameter. You then add your own .hpp header file under Caffe's include directory and your own .cpp implementation file under the layers directory of Caffe's src tree (a rough sketch of these two files follows). This article describes how to write your own layer, taking data_heatmap.cpp and data_heatmap.hpp from https://github.com/tpfister/caffe-heatmap as the example.
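For orientation, here is a minimal, hedged sketch of what the header of such a layer might contain. The include paths, base class, and method names follow the usual Caffe 1.x C++ layer pattern and are only illustrative; the actual caffe-heatmap code is based on an older Caffe snapshot and organizes these pieces somewhat differently.

// include/caffe/layers/data_heatmap.hpp -- sketch only, not the repo's actual header
#ifndef CAFFE_DATA_HEATMAP_HPP_
#define CAFFE_DATA_HEATMAP_HPP_

#include <vector>

#include "caffe/blob.hpp"
#include "caffe/layer.hpp"
#include "caffe/layers/base_data_layer.hpp"
#include "caffe/proto/caffe.pb.h"

namespace caffe {

// Data layer that loads images and produces heatmap regression targets.
template <typename Dtype>
class DataHeatmapLayer : public BasePrefetchingDataLayer<Dtype> {
 public:
  explicit DataHeatmapLayer(const LayerParameter& param)
      : BasePrefetchingDataLayer<Dtype>(param) {}
  virtual void DataLayerSetUp(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);
  // The type string must match the name used in the prototxt
  // and in UpgradeV1LayerType below.
  virtual inline const char* type() const { return "DataHeatmap"; }

 protected:
  // Fills one prefetched batch with images and heatmap labels.
  virtual void load_batch(Batch<Dtype>* batch);
};

}  // namespace caffe

#endif  // CAFFE_DATA_HEATMAP_HPP_

The matching data_heatmap.cpp would implement these methods and would typically end with INSTANTIATE_CLASS(DataHeatmapLayer); and REGISTER_LAYER_CLASS(DataHeatmap); so that Caffe's layer factory can create the layer from the "DataHeatmap" type string.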
II. Specific Steps

(1) First, declare in caffe.proto that the parameter message for your layer is an optional field of LayerParameter. For example, add heatmap_data_param at the marked position below:

// Layer type-specific parameters.
//
// Note: certain layers may have more than one computational engine
// for their implementation. These layers include an engine type and
// engine parameter for selecting the implementation.
// The default for the engine is set by the engine switch at compile-time.
optional AccuracyParameter accuracy_param = 102;
optional ArgMaxParameter argmax_param = 103;
optional ConcatParameter concat_param = 104;
optional ContrastiveLossParameter contrastive_loss_param = 105;
optional ConvolutionParameter convolution_param = 106;
optional DataParameter data_param = 107;
optional DropoutParameter dropout_param = 108;
optional DummyDataParameter dummy_data_param = 109;
optional EltwiseParameter eltwise_param = 110;
optional EmbedParameter embed_param = 137;
optional ExpParameter exp_param = 111;
optional FlattenParameter flatten_param = 135;
optional HeatmapDataParameter heatmap_data_param = 140;  // the parameters of our own layer
optional HDF5DataParameter hdf5_data_param = 112;
optional HDF5OutputParameter hdf5_output_param = 113;
optional HingeLossParameter hinge_loss_param = 114;
optional ImageDataParameter image_data_param = 115;
optional InfogainLossParameter infogain_loss_param = 116;
optional InnerProductParameter inner_product_param = 117;
optional LogParameter log_param = 134;
optional LRNParameter lrn_param = 118;
optional MemoryDataParameter memory_data_param = 119;
optional MVNParameter mvn_param = 120;
optional PoolingParameter pooling_param = 121;
optional PowerParameter power_param = 122;
optional PReLUParameter prelu_param = 131;
optional PythonParameter python_param = 130;
optional ReductionParameter reduction_param = 136;
optional ReLUParameter relu_param = 123;
optional ReshapeParameter reshape_param = 133;
optional SigmoidParameter sigmoid_param = 124;
optional SoftmaxParameter softmax_param = 125;
optional SPPParameter spp_param = 132;
optional SliceParameter slice_param = 126;
optional TanHParameter tanh_param = 127;
optional ThresholdParameter threshold_param = 128;
optional TileParameter tile_param = 138;
optional WindowDataParameter window_data_param = 129;
}
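Once this field exists, the new parameters can be set from a prototxt file just like any built-in layer's. A hedged example of what such a layer definition might look like, assuming the layer registers itself under the type string "DataHeatmap"; all paths and values here are illustrative, not taken from the repository's released configs:

layer {
  name: "data"
  type: "DataHeatmap"            # must match the registered type string
  top: "data"
  top: "label"
  heatmap_data_param {
    source: "train_annotations.txt"   # illustrative annotation list
    root_img_dir: "images/"           # illustrative image directory
    batchsize: 14
    cropsize: 248
    outsize: 256
    multfact: 1
    random_crop: true
    sample_per_cluster: false
  }
}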
Because our layer is also defined under the old V1LayerParameter format, we need to add the following lines to upgrade_proto.cpp under src/caffe/util so that models that have already been trained with the old format can be converted:
const char* UpgradeV1LayerType(const V1LayerParameter_LayerType type) {
  switch (type) {
  case V1LayerParameter_LayerType_NONE:
    return "";
  case V1LayerParameter_LayerType_ABSVAL:
    return "AbsVal";
  case V1LayerParameter_LayerType_ACCURACY:
    return "Accuracy";
  case V1LayerParameter_LayerType_ARGMAX:
    return "ArgMax";
  case V1LayerParameter_LayerType_BNLL:
    return "BNLL";
  case V1LayerParameter_LayerType_CONCAT:
    return "Concat";
  case V1LayerParameter_LayerType_CONTRASTIVE_LOSS:
    return "ContrastiveLoss";
  case V1LayerParameter_LayerType_CONVOLUTION:
    return "Convolution";
  case V1LayerParameter_LayerType_DECONVOLUTION:
    return "Deconvolution";
  case V1LayerParameter_LayerType_DATA:
    return "Data";
  case V1LayerParameter_LayerType_DATA_HEATMAP:  // the input data layer we added ourselves
    return "DataHeatmap";
  case V1LayerParameter_LayerType_DROPOUT:
    return "Dropout";
  case V1LayerParameter_LayerType_DUMMY_DATA:
    return "DummyData";
  case V1LayerParameter_LayerType_EUCLIDEAN_LOSS:
    return "EuclideanLoss";
  case V1LayerParameter_LayerType_EUCLIDEAN_LOSS_HEATMAP:  // the loss layer we added ourselves
    return "EuclideanLossHeatmap";
  case V1LayerParameter_LayerType_ELTWISE:
    return "Eltwise";
  case V1LayerParameter_LayerType_EXP:
    return "Exp";
  case V1LayerParameter_LayerType_FLATTEN:
    return "Flatten";
  case V1LayerParameter_LayerType_HDF5_DATA:
    return "HDF5Data";
  case V1LayerParameter_LayerType_HDF5_OUTPUT:
    return "HDF5Output";
  case V1LayerParameter_LayerType_HINGE_LOSS:
    return "HingeLoss";
  case V1LayerParameter_LayerType_IM2COL:
    return "Im2col";
  case V1LayerParameter_LayerType_IMAGE_DATA:
    return "ImageData";
  case V1LayerParameter_LayerType_INFOGAIN_LOSS:
    return "InfogainLoss";
  case V1LayerParameter_LayerType_INNER_PRODUCT:
    return "InnerProduct";
  case V1LayerParameter_LayerType_LRN:
    return "LRN";
  case V1LayerParameter_LayerType_MEMORY_DATA:
    return "MemoryData";
  case V1LayerParameter_LayerType_MULTINOMIAL_LOGISTIC_LOSS:
    return "MultinomialLogisticLoss";
  case V1LayerParameter_LayerType_MVN:
    return "MVN";
  case V1LayerParameter_LayerType_POOLING:
    return "Pooling";
  case V1LayerParameter_LayerType_POWER:
    return "Power";
  case V1LayerParameter_LayerType_RELU:
    return "ReLU";
  case V1LayerParameter_LayerType_SIGMOID:
    return "Sigmoid";
  case V1LayerParameter_LayerType_SIGMOID_CROSS_ENTROPY_LOSS:
    return "SigmoidCrossEntropyLoss";
  case V1LayerParameter_LayerType_SILENCE:
    return "Silence";
  case V1LayerParameter_LayerType_SOFTMAX:
    return "Softmax";
  case V1LayerParameter_LayerType_SOFTMAX_LOSS:
    return "SoftmaxWithLoss";
  case V1LayerParameter_LayerType_SPLIT:
    return "Split";
  case V1LayerParameter_LayerType_SLICE:
    return "Slice";
  case V1LayerParameter_LayerType_TANH:
    return "TanH";
  case V1LayerParameter_LayerType_WINDOW_DATA:
    return "WindowData";
  case V1LayerParameter_LayerType_THRESHOLD:
    return "Threshold";
  default:
    LOG(FATAL) << "Unknown V1LayerParameter layer type: " << type;
    return "";
  }
}
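For those two extra case labels to compile, the DATA_HEATMAP and EUCLIDEAN_LOSS_HEATMAP values must also exist in the LayerType enum inside V1LayerParameter in caffe.proto. A hedged sketch of that addition follows; the numeric values here are purely illustrative, so use whatever unused values the repository actually assigns:

// inside message V1LayerParameter, enum LayerType (sketch; enum numbers are illustrative)
enum LayerType {
  NONE = 0;
  // ... existing V1 layer types ...
  DATA_HEATMAP = 44;            // our own input data layer
  EUCLIDEAN_LOSS_HEATMAP = 45;  // our own loss layer
}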


(2) Then add the parameter message for your own layer at the following position in caffe.proto:
// VGG heatmap params -- the parameter message for our own layer
message HeatmapDataParameter {
  optional bool segmentation = 1000 [default = false];
  optional uint32 multfact = 1001 [default = 1];
  optional uint32 num_channels = 1002 [default = 3];
  optional uint32 batchsize = 1003;
  optional string root_img_dir = 1004;
  optional bool random_crop = 1005;          // image augmentation type
  optional bool sample_per_cluster = 1006;   // image sampling type
  optional string labelinds = 1007 [default = ''];  // if specified, only use these regression variables
  optional string source = 1008;
  optional string meanfile = 1009;
  optional string crop_meanfile = 1010;
  optional uint32 cropsize = 1011 [default = 0];
  optional uint32 outsize = 1012 [default = 0];
  optional float scale = 1013 [default = 1];
  optional uint32 label_width = 1014 [default = 1];
  optional uint32 label_height = 1015 [default = 1];
  optional bool dont_flip_first = 1016 [default = true];
  optional float angle_max = 1017 [default = 0];
  optional bool flip_joint_labels = 1018 [default = true];
}

There are also parameters for visualisation and testing (not shown here). Also note the comment near the top of LayerParameter in caffe.proto, which reminds you to update the next available ID when you add a new LayerParameter field.
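After caffe.proto is modified and the protobuf sources are regenerated (this happens automatically when Caffe is rebuilt), the layer implementation can read these values through the accessors that protoc generates from the field names. A minimal hedged sketch, assuming the setup code lives in DataLayerSetUp as in a typical Caffe data layer:

// Sketch only: reading HeatmapDataParameter values inside the layer's setup code.
template <typename Dtype>
void DataHeatmapLayer<Dtype>::DataLayerSetUp(
    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {
  // Accessor names are generated by protoc directly from the proto field names.
  const HeatmapDataParameter& heatmap_param =
      this->layer_param_.heatmap_data_param();
  const int batchsize = heatmap_param.batchsize();
  const int cropsize  = heatmap_param.cropsize();
  const int outsize   = heatmap_param.outsize();
  const std::string& source = heatmap_param.source();
  const std::string& root_img_dir = heatmap_param.root_img_dir();
  CHECK_GT(batchsize, 0) << "batchsize must be positive";
  // ... open the annotation file named by `source`, load image paths relative
  //     to `root_img_dir`, and reshape the top blobs (e.g. data as
  //     batchsize x num_channels x cropsize x cropsize, label as
  //     batchsize x num_joints x outsize x outsize) ...
}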
