Caffe source code modification: extract features of any image




Caffe is not yet perfect in this respect: the input image data must come from the path specified in the prototxt. However, we often have the following requirement: once training has produced a model file, we want to use that model to extract features from, or predict the class of, a single image. Having to hard-code a path in the prototxt is inconvenient. What we need is a tool that takes an image path on the command line, after which Caffe reads that image and runs one forward propagation.


So I made such a tool to extract the features of any image.

The usage of this tool is as follows:


extract_one_feature.bin ./model/caffe_reference_imagenet_model ./examples/_temp/imagenet_val.prototxt fc7 ./examples/_temp/features "/media/G/imageset/clothing/sweater/knit shirt_426.jpg" CPU

Parameter 1: ./model/caffe_reference_imagenet_model, the model file obtained after training.

Parameter 2: ./examples/_temp/imagenet_val.prototxt, the network configuration file.

Parameter 3: fc7, the name of the blob whose features are extracted.

Parameter 4: ./examples/_temp/features, the file the image features are written to.

Parameter 5: the path of the image.

Parameter 6: GPU or CPU mode.


(In fact, I can think of an even better tool: run the executable in a listening mode, pass image paths to the process through some channel, and have it run the extraction whenever it receives a task.

That way memory would not have to be allocated every time features are extracted from a single image. (*^__^*) ......)
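
A minimal sketch of that listening-mode idea, assuming the MyImageDataLayer::setImgPath() interface introduced later in this post (this program is illustrative and not part of the modification itself): it reads one image path per line from stdin and reuses the already-loaded net, so the model is loaded only once.

// Hypothetical sketch: keep the net in memory and read image paths from stdin,
// so the model is loaded once and each request costs only one forward pass.
// Relies on the MyImageDataLayer::setImgPath() interface defined below.
#include <iostream>
#include <string>
#include <vector>

#include "caffe/common.hpp"
#include "caffe/net.hpp"
#include "caffe/vision_layers.hpp"

using namespace caffe;  // NOLINT(build/namespaces)

int main(int argc, char** argv) {
  // argv[1]: network prototxt, argv[2]: trained model, argv[3]: blob name
  Caffe::set_mode(Caffe::CPU);
  Caffe::set_phase(Caffe::TEST);
  Net<float> net(argv[1]);                    // build the net once
  net.CopyTrainedLayersFrom(argv[2]);         // load the weights once
  MyImageDataLayer<float>* data_layer =
      (MyImageDataLayer<float>*)net.layer_by_name("data").get();

  std::string img_path;
  std::vector<Blob<float>*> input_vec;
  while (std::getline(std::cin, img_path)) {  // one image path per line
    data_layer->setImgPath(img_path, 1);      // point the data layer at it
    net.Forward(input_vec);                   // one forward propagation
    const shared_ptr<Blob<float> > feature = net.blob_by_name(argv[3]);
    const float* data = feature->cpu_data();
    for (int d = 0; d < feature->count(); ++d) std::cout << data[d] << " ";
    std::cout << std::endl;
  }
  return 0;
}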


The following is a preliminary modification; you can adapt it to your own needs.


Extract_one_feature.cpp (modified from extract_features.cpp in the Caffe source):

#include <stdio.h>  // for snprintf
#include <string>
#include <vector>
#include <iostream>
#include <fstream>

#include "boost/algorithm/string.hpp"
#include "google/protobuf/text_format.h"
#include "leveldb/db.h"
#include "leveldb/write_batch.h"
#include "caffe/blob.hpp"
#include "caffe/common.hpp"
#include "caffe/net.hpp"
#include "caffe/proto/caffe.pb.h"
#include "caffe/util/io.hpp"
#include "caffe/vision_layers.hpp"

using namespace caffe;  // NOLINT(build/namespaces)

template<typename Dtype>
int feature_extraction_pipeline(int argc, char** argv);

int main(int argc, char** argv) {
  return feature_extraction_pipeline<float>(argc, argv);
  // return feature_extraction_pipeline<double>(argc, argv);
}

template<typename Dtype>
class WriteDb {
 public:
  void open(string dbname) {
    db.open(dbname.c_str());
  }
  void write(const Dtype& data) {
    db << data;
  }
  void write(const string& str) {
    db << str;
  }
  virtual ~WriteDb() {
    db.close();
  }
 private:
  std::ofstream db;
};

template<typename Dtype>
int feature_extraction_pipeline(int argc, char** argv) {
  ::google::InitGoogleLogging(argv[0]);
  const int num_required_args = 6;
  if (argc < num_required_args) {
    LOG(ERROR) <<
    "This program takes in a trained network and an input data layer, and then"
    " extracts features of the input data produced by the net.\n"
    "Usage: extract_one_feature  pretrained_net_param"
    "  feature_extraction_proto_file  extract_feature_blob_name1[,name2,...]"
    "  save_feature_leveldb_name1[,name2,...]  img_path  [CPU/GPU]  [DEVICE_ID=0]\n"
    "Note: you can extract multiple features in one pass by specifying"
    " multiple feature blob names and leveldb names separated by ','."
    " The names cannot contain white space characters and the number of blobs"
    " and leveldbs must be equal.";
    return 1;
  }
  int arg_pos = num_required_args;
  if (argc > arg_pos && strcmp(argv[arg_pos], "GPU") == 0) {
    LOG(ERROR) << "Using GPU";
    uint device_id = 0;
    if (argc > arg_pos + 1) {
      device_id = atoi(argv[arg_pos + 1]);
      CHECK_GE(device_id, 0);
    }
    LOG(ERROR) << "Using Device_id=" << device_id;
    Caffe::SetDevice(device_id);
    Caffe::set_mode(Caffe::GPU);
  } else {
    LOG(ERROR) << "Using CPU";
    Caffe::set_mode(Caffe::CPU);
  }
  Caffe::set_phase(Caffe::TEST);

  arg_pos = 0;  // the name of the executable
  string pretrained_binary_proto(argv[++arg_pos]);   // trained model parameter file
  string feature_extraction_proto(argv[++arg_pos]);  // network configuration file
  shared_ptr<Net<Dtype> > feature_extraction_net(
      new Net<Dtype>(feature_extraction_proto));
  // load the network parameters into memory
  feature_extraction_net->CopyTrainedLayersFrom(pretrained_binary_proto);

  string extract_feature_blob_names(argv[++arg_pos]);
  vector<string> blob_names;  // names of the blobs to extract; several are allowed
  boost::split(blob_names, extract_feature_blob_names, boost::is_any_of(","));

  string save_feature_leveldb_names(argv[++arg_pos]);
  vector<string> leveldb_names;  // the "leveldb" is rewritten here as a plain file:
  // the output is not real leveldb data but a custom text format
  boost::split(leveldb_names, save_feature_leveldb_names, boost::is_any_of(","));
  CHECK_EQ(blob_names.size(), leveldb_names.size()) <<
      " the number of blob names and leveldb names must be equal";
  size_t num_features = blob_names.size();

  for (size_t i = 0; i < num_features; i++) {
    // check that the blob name exists in the network
    CHECK(feature_extraction_net->has_blob(blob_names[i]))
        << "Unknown feature blob name " << blob_names[i]
        << " in the network " << feature_extraction_proto;
  }

  vector<shared_ptr<WriteDb<Dtype> > > feature_dbs;
  for (size_t i = 0; i < num_features; ++i) {  // open the output files for writing
    LOG(INFO) << "Opening db " << leveldb_names[i];
    WriteDb<Dtype>* db = new WriteDb<Dtype>();
    db->open(leveldb_names[i]);
    feature_dbs.push_back(shared_ptr<WriteDb<Dtype> >(db));
  }

  LOG(ERROR) << "Extracting features";
  // obtain the first (data) layer and cast it to MyImageDataLayer
  const shared_ptr<Layer<Dtype> > layer =
      feature_extraction_net->layer_by_name("data");
  MyImageDataLayer<Dtype>* my_layer = (MyImageDataLayer<Dtype>*)layer.get();
  // set the image path, e.g. "/media/G/imageset/clothing/sweater/knit shirt_1.jpg"
  my_layer->setImgPath(argv[++arg_pos], 1);

  vector<Blob<float>*> input_vec;
  vector<int> image_indices(num_features, 0);
  // total number of iterations; the number of images per iteration is the
  // batch_size given in the prototxt (fixed to 1 here)
  int num_mini_batches = 1;  // atoi(argv[++arg_pos]);
  for (int batch_index = 0; batch_index < num_mini_batches; ++batch_index) {
    feature_extraction_net->Forward(input_vec);  // one forward propagation
    for (int i = 0; i < num_features; ++i) {  // features of each requested blob
      const shared_ptr<Blob<Dtype> > feature_blob =
          feature_extraction_net->blob_by_name(blob_names[i]);
      int batch_size = feature_blob->num();
      int dim_features = feature_blob->count() / batch_size;
      Dtype* feature_blob_data;
      for (int n = 0; n < batch_size; ++n) {
        feature_blob_data = feature_blob->mutable_cpu_data() +
            feature_blob->offset(n);
        feature_dbs[i]->write("3");
        for (int d = 0; d < dim_features; ++d) {
          feature_dbs[i]->write((Dtype)(d + 1));
          feature_dbs[i]->write(":");
          feature_dbs[i]->write(feature_blob_data[d]);
          feature_dbs[i]->write(" ");
        }
        feature_dbs[i]->write("\n");
      }  // for (int n = 0; n < batch_size; ++n)
    }  // for (int i = 0; i < num_features; ++i)
  }  // for (int batch_index = 0; batch_index < num_mini_batches; ++batch_index)

  LOG(ERROR) << "Successfully extracted the features!";
  return 0;
}
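
Each line that the WriteDb loop above writes is roughly libsvm-style: a leading label ("3") followed by 1-based index:value pairs separated by spaces. (As written, nothing separates the label from the first index, so you may want to write an extra " " after the label.) A minimal sketch of reading such a file back, assuming space-separated tokens; this reader is illustrative and not part of the original tool:

// Hypothetical reader for the feature file produced above, assuming each line
// is "label idx:val idx:val ..." with spaces between tokens.
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

std::vector<float> ParseFeatureLine(const std::string& line) {
  std::istringstream iss(line);
  std::string token;
  iss >> token;                   // skip the leading label
  std::vector<float> feature;
  while (iss >> token) {          // tokens look like "7:0.1234"
    size_t colon = token.find(':');
    if (colon == std::string::npos) continue;
    feature.push_back(atof(token.substr(colon + 1).c_str()));
  }
  return feature;
}

int main() {
  std::ifstream in("./examples/_temp/features");  // output file of the tool
  std::string line;
  while (std::getline(in, line)) {
    std::vector<float> feature = ParseFeatureLine(line);
    std::cout << "read a feature of dimension " << feature.size() << std::endl;
  }
  return 0;
}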

My_data_layer.cpp (written with reference to image_data_layer.cpp):

#include <fstream>  // NOLINT(readability/streams)
#include <iostream>  // NOLINT(readability/streams)
#include <string>
#include <utility>
#include <vector>

#include "caffe/layer.hpp"
#include "caffe/util/io.hpp"
#include "caffe/util/math_functions.hpp"
#include "caffe/util/rng.hpp"
#include "caffe/vision_layers.hpp"

namespace caffe {

template <typename Dtype>
MyImageDataLayer<Dtype>::~MyImageDataLayer<Dtype>() {
}

template <typename Dtype>
void MyImageDataLayer<Dtype>::setImgPath(string path, int label) {
  lines_.clear();
  lines_.push_back(std::make_pair(path, label));
}

template <typename Dtype>
void MyImageDataLayer<Dtype>::SetUp(const vector<Blob<Dtype>*>& bottom,
      vector<Blob<Dtype>*>* top) {
  Layer<Dtype>::SetUp(bottom, top);
  const int new_height = this->layer_param_.image_data_param().new_height();
  const int new_width = this->layer_param_.image_data_param().new_width();
  CHECK((new_height == 0 && new_width == 0) ||
      (new_height > 0 && new_width > 0)) << "Current implementation requires "
      "new_height and new_width to be set at the same time.";
  /* An image is needed below to initialize the top blob, so one image must
   * exist on disk. Either
   *   1. read an image path from the source file given in the prototxt, or
   *   2. hard-code the path of the image used for initialization.
   */
  /* 1 */
  /*
  const string& source = this->layer_param_.image_data_param().source();
  LOG(INFO) << "Opening file " << source;
  std::ifstream infile(source.c_str());
  string filename;
  int label;
  while (infile >> filename >> label) {
    lines_.push_back(std::make_pair(filename, label));
  }
  */
  /* 2 */
  lines_.push_back(std::make_pair("/home/linger/init.jpg", 1));
  // either option 1 or option 2 above works

  lines_id_ = 0;
  // Read an image, and use it to initialize the top blob.
  Datum datum;
  CHECK(ReadImageToDatum(lines_[lines_id_].first, lines_[lines_id_].second,
                         new_height, new_width, &datum));
  // image
  const int crop_size = this->layer_param_.image_data_param().crop_size();
  const int batch_size = 1;  // this->layer_param_.image_data_param().batch_size();
  const string& mean_file = this->layer_param_.image_data_param().mean_file();
  if (crop_size > 0) {
    (*top)[0]->Reshape(batch_size, datum.channels(), crop_size, crop_size);
    prefetch_data_.Reshape(batch_size, datum.channels(), crop_size, crop_size);
  } else {
    (*top)[0]->Reshape(batch_size, datum.channels(), datum.height(),
                       datum.width());
    prefetch_data_.Reshape(batch_size, datum.channels(), datum.height(),
                           datum.width());
  }
  LOG(INFO) << "output data size: " << (*top)[0]->num() << ","
      << (*top)[0]->channels() << "," << (*top)[0]->height() << ","
      << (*top)[0]->width();
  // label
  (*top)[1]->Reshape(batch_size, 1, 1, 1);
  prefetch_label_.Reshape(batch_size, 1, 1, 1);
  // datum size
  datum_channels_ = datum.channels();
  datum_height_ = datum.height();
  datum_width_ = datum.width();
  datum_size_ = datum.channels() * datum.height() * datum.width();
  CHECK_GT(datum_height_, crop_size);
  CHECK_GT(datum_width_, crop_size);
  // check if we want to have mean
  if (this->layer_param_.image_data_param().has_mean_file()) {
    BlobProto blob_proto;
    LOG(INFO) << "Loading mean file from " << mean_file;
    ReadProtoFromBinaryFile(mean_file.c_str(), &blob_proto);
    data_mean_.FromProto(blob_proto);
    CHECK_EQ(data_mean_.num(), 1);
    CHECK_EQ(data_mean_.channels(), datum_channels_);
    CHECK_EQ(data_mean_.height(), datum_height_);
    CHECK_EQ(data_mean_.width(), datum_width_);
  } else {
    // Simply initialize an all-empty mean.
    data_mean_.Reshape(1, datum_channels_, datum_height_, datum_width_);
  }
  // Now, start the prefetch thread. Before calling prefetch, we make two
  // cpu_data calls so that the prefetch thread does not accidentally make
  // simultaneous cudaMalloc calls when the main thread is running. In some
  // GPUs this seems to cause failures if we do not so.
  prefetch_data_.mutable_cpu_data();
  prefetch_label_.mutable_cpu_data();
  data_mean_.cpu_data();
}

// ---------------------------- the following reads the image data
template <typename Dtype>
void MyImageDataLayer<Dtype>::fetchData() {
  Datum datum;
  CHECK(prefetch_data_.count());
  Dtype* top_data = prefetch_data_.mutable_cpu_data();
  Dtype* top_label = prefetch_label_.mutable_cpu_data();
  ImageDataParameter image_data_param = this->layer_param_.image_data_param();
  // image_data_layer related parameters
  const Dtype scale = image_data_param.scale();
  const int batch_size = 1;  // image_data_param.batch_size(); only one image is needed here
  const int crop_size = image_data_param.crop_size();
  const bool mirror = image_data_param.mirror();
  const int new_height = image_data_param.new_height();
  const int new_width = image_data_param.new_width();

  if (mirror && crop_size == 0) {
    LOG(FATAL) << "Current implementation requires mirror and crop_size to be "
        << "set at the same time.";
  }
  // datum scales
  const int channels = datum_channels_;
  const int height = datum_height_;
  const int width = datum_width_;
  const int size = datum_size_;
  const int lines_size = lines_.size();
  const Dtype* mean = data_mean_.cpu_data();
  for (int item_id = 0; item_id < batch_size; ++item_id) {
    // read an image and get a blob
    CHECK_GT(lines_size, lines_id_);
    if (!ReadImageToDatum(lines_[lines_id_].first, lines_[lines_id_].second,
                          new_height, new_width, &datum)) {
      continue;
    }
    const string& data = datum.data();
    if (crop_size) {
      CHECK(data.size()) << "Image cropping only support uint8 data";
      int h_off, w_off;
      // We only do random crop when we do training.
      h_off = (height - crop_size) / 2;
      w_off = (width - crop_size) / 2;
      // Normal copy: read the cropped image data into top_data
      for (int c = 0; c < channels; ++c) {
        for (int h = 0; h < crop_size; ++h) {
          for (int w = 0; w < crop_size; ++w) {
            int top_index = ((item_id * channels + c) * crop_size + h)
                * crop_size + w;
            int data_index = (c * height + h + h_off) * width + w + w_off;
            Dtype datum_element =
                static_cast<Dtype>(static_cast<uint8_t>(data[data_index]));
            top_data[top_index] = (datum_element - mean[data_index]) * scale;
          }
        }
      }
    } else {
      // Just copy the whole data: read the image data into top_data
      if (data.size()) {
        for (int j = 0; j < size; ++j) {
          Dtype datum_element =
              static_cast<Dtype>(static_cast<uint8_t>(data[j]));
          top_data[item_id * size + j] = (datum_element - mean[j]) * scale;
        }
      } else {
        for (int j = 0; j < size; ++j) {
          top_data[item_id * size + j] =
              (datum.float_data(j) - mean[j]) * scale;
        }
      }
    }
    top_label[item_id] = datum.label();  // read the label of the image
  }
}

template <typename Dtype>
Dtype MyImageDataLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,
      vector<Blob<Dtype>*>* top) {
  // update the input
  fetchData();
  // copy the data
  caffe_copy(prefetch_data_.count(), prefetch_data_.cpu_data(),
             (*top)[0]->mutable_cpu_data());
  caffe_copy(prefetch_label_.count(), prefetch_label_.cpu_data(),
             (*top)[1]->mutable_cpu_data());
  return Dtype(0.);
}

#ifdef CPU_ONLY
STUB_GPU_FORWARD(ImageDataLayer, Forward);
#endif

INSTANTIATE_CLASS(MyImageDataLayer);
}  // namespace caffe


Add the following code in data_layers.hpp, referring to how ImageDataLayer is written:

template <typename Dtype>
class MyImageDataLayer : public Layer<Dtype> {
 public:
  explicit MyImageDataLayer(const LayerParameter& param)
      : Layer<Dtype>(param) {}
  virtual ~MyImageDataLayer();
  virtual void SetUp(const vector<Blob<Dtype>*>& bottom,
      vector<Blob<Dtype>*>* top);
  virtual inline LayerParameter_LayerType type() const {
    return LayerParameter_LayerType_MY_IMAGE_DATA;
  }
  virtual inline int ExactNumBottomBlobs() const { return 0; }
  virtual inline int ExactNumTopBlobs() const { return 2; }

  void fetchData();
  void setImgPath(string path, int label);

 protected:
  virtual Dtype Forward_cpu(const vector<Blob<Dtype>*>& bottom,
      vector<Blob<Dtype>*>* top);
  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down, vector<Blob<Dtype>*>* bottom) {}

  vector<std::pair<std::string, int> > lines_;
  int lines_id_;
  int datum_channels_;
  int datum_height_;
  int datum_width_;
  int datum_size_;
  Blob<Dtype> prefetch_data_;
  Blob<Dtype> prefetch_label_;
  Blob<Dtype> data_mean_;
  Caffe::Phase phase_;
};
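
Depending on the Caffe version, the layer factory may also need to know about the new type. In Caffe versions of this era, src/caffe/layer_factory.cpp contains a GetLayer() function that switches on LayerParameter_LayerType; if yours does, a case along these lines (a hypothetical sketch, not part of the original post) would go next to the IMAGE_DATA case:

// Hypothetical addition to the GetLayer() switch in src/caffe/layer_factory.cpp,
// assuming this Caffe version dispatches on LayerParameter_LayerType there.
case LayerParameter_LayerType_MY_IMAGE_DATA:
  return new MyImageDataLayer<Dtype>(param);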


Modify caffe.proto and add the following at the appropriate positions, mirroring how image_data is declared there.


MY_IMAGE_DATA = 36;  // added to the LayerType enum inside LayerParameter


optional MyImageDataParameter my_image_data_param = 36;  // added alongside the other *_param fields in LayerParameter


// Message that stores parameters used by MyImageDataLayer
message MyImageDataParameter {
  // Specify the data source.
  optional string source = 1;
  // For data pre-processing, we can do simple scaling and subtracting the
  // data mean, if provided. Note that the mean subtraction is always carried
  // out before scaling.
  optional float scale = 2 [default = 1];
  optional string mean_file = 3;
  // Specify the batch size.
  optional uint32 batch_size = 4;
  // Specify if we would like to randomly crop an image.
  optional uint32 crop_size = 5 [default = 0];
  // Specify if we want to randomly mirror data.
  optional bool mirror = 6 [default = false];
  // The rand_skip variable is for the data layer to skip a few data points
  // to avoid all asynchronous SGD clients starting at the same point. The skip
  // point would be set as rand_skip * rand(0,1). Note that rand_skip should not
  // be larger than the number of keys in the leveldb.
  optional uint32 rand_skip = 7 [default = 0];
  // Whether or not ImageLayer should shuffle the list of files at every epoch.
  optional bool shuffle = 8 [default = false];
  // It will also resize images if new_height or new_width are not zero.
  optional uint32 new_height = 9 [default = 0];
  optional uint32 new_width = 10 [default = 0];
}


These additions go in different places in caffe.proto; for each one, put it next to the corresponding image_data definition.
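
For reference, a hypothetical layer definition in the network prototxt (the values are placeholders). Note that the layer implementation above reads its settings through image_data_param(), not the new my_image_data_param, and that source and batch_size are effectively ignored because the image path is set at run time and the batch size is fixed to 1:

layers {
  name: "data"
  type: MY_IMAGE_DATA
  top: "data"
  top: "label"
  image_data_param {
    mean_file: "./data/ilsvrc12/imagenet_mean.binaryproto"
    crop_size: 227
    new_height: 256
    new_width: 256
  }
}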



Linger

Link: http://blog.csdn.net/lingerlanlan/article/details/39400375


