Compiling Mini-caffe and testing an MNIST model trained with BVLC Caffe


Mini-caffe is a stripped-down, runnable version of Caffe that supports only the forward pass; it is efficient and has a small footprint, which makes it well suited for online inference. Note, however, that if you have implemented a custom (non-official) Caffe layer yourself, you also need to implement the corresponding forward computation in Mini-caffe.

This article compiles the Mini-caffe project with VS2015, then uses the model and deploy prototxt produced by training Caffe on the MNIST data to classify a test image.

1. Compilation preparation

Download the Mini-caffe project from https://github.com/luoyetx/mini-caffe and unzip it to a directory of your choice.

In addition, compiling Mini-caffe requires the protobuf library, which also needs to be downloaded and built yourself; it is available at https://github.com/google/protobuf (downloading a release version is recommended).

In my case, Mini-caffe is stored under E:/Program Files/tools/mini-caffe-master, and the protobuf archive is extracted to mini-caffe-master\3rdparty\src\protobuf.
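For reference, the resulting directory layout looks roughly like this (only the folders used in the steps below are shown; the exact tree in the repository may differ slightly):

mini-caffe-master\
    3rdparty\
        bin\                  (protoc.exe is copied here later)
        include\              (protobuf headers are copied here later)
        lib\                  (libprotobuf.lib / libprotobufd.lib go here later)
        src\
            protobuf\         (extracted protobuf sources)
    src\
        proto\caffe.proto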

2. Compile protobuf

Open the CMake GUI and set the source directory to E:/Program Files/tools/mini-caffe-master/3rdparty/src/protobuf/cmake and the build directory to E:/Program Files/tools/mini-caffe-master/3rdparty/src/protobuf/cmake/build.

Click Configure and select the Visual Studio 14 2015 Win64 generator, uncheck protobuf_BUILD_TESTS and protobuf_MSVC_STATIC_RUNTIME, click Configure again, then click Generate; the protobuf.sln solution is generated in the build directory.

You can also use the CMake command:

cd 3rdparty/src/protobuf/cmake
mkdir build
cd build
cmake .. -Dprotobuf_BUILD_TESTS=OFF -Dprotobuf_MSVC_STATIC_RUNTIME=OFF -G "Visual Studio 14 2015 Win64"

Open protobuf.sln and build both the Debug and Release configurations; the resulting libraries end up in the protobuf/cmake/build/Debug and protobuf/cmake/build/Release folders.
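If you prefer the command line, the two configurations can also be built with msbuild from a VS2015 developer prompt; a minimal sketch, assuming protobuf.sln is in the current build directory:

msbuild protobuf.sln /p:Configuration=Debug /p:Platform=x64 /m
msbuild protobuf.sln /p:Configuration=Release /p:Platform=x64 /m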

With these two versions of the library in place, you can compile Mini-caffe.

3. Compile Mini-caffe

There are two preparatory steps before compiling Mini-caffe.

(1) Copy the dependency files to the expected locations (a command sketch follows this list). The rough process is:

Copy the google folder under 3rdparty\src\protobuf\src to mini-caffe-master\3rdparty\include\ (in fact only the .h files are needed);

Copy protobuf\cmake\build\Debug\libprotobufd.lib and protobuf\cmake\build\Release\libprotobuf.lib to 3rdparty\lib\;

Copy protobuf\cmake\build\Release\protoc.exe to 3rdparty\bin\;
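A minimal batch sketch of what this copy step amounts to, run from the mini-caffe-master root (the paths simply follow the layout described above):

xcopy /E /I 3rdparty\src\protobuf\src\google 3rdparty\include\google
copy 3rdparty\src\protobuf\cmake\build\Debug\libprotobufd.lib 3rdparty\lib\
copy 3rdparty\src\protobuf\cmake\build\Release\libprotobuf.lib 3rdparty\lib\
copy 3rdparty\src\protobuf\cmake\build\Release\protoc.exe 3rdparty\bin\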

(2) Generate caffe.pb.h and caffe.pb.cc

Using the protoc.exe copied in the previous step, run the following command from the mini-caffe-master root:

"./3rdparty/bin/protoc"-i= "./src/proto"--cpp_out= "./src/proto" "./src/proto/caffe.proto"

Mini-caffe already ships scripts for both of these steps: in the mini-caffe-master directory you can simply run the two batch files, copydeps.bat and generatepb.bat.

After both preparatory steps are done, you can use CMake, either from the command line or via the GUI, to generate mini-caffe.sln. (You can also choose whether to build with GPU support; see the note after the commands below.)

mkdir build
cd build
cmake .. -G "Visual Studio 14 2015 Win64"

Similarly, building the solution produces Debug and Release versions of caffe.lib and caffe.dll, which are used for the test below.

4. Test the MNIST model

In an earlier post I used the official classification.exe to recognize handwritten MNIST digit images. Rewriting that tool reduced its footprint slightly, but the full network still carries a lot that can be stripped: the data kept for the backward pass, layers that are never used at test time, and so on. Mini-caffe, by contrast, contains only the forward computation, removes the unneeded parts from the Net class, and frees all intermediate data after the forward pass, which greatly reduces memory consumption and also speeds up computation.

#include <caffe/caffe.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <algorithm>
#include <chrono>
#include <cstdlib>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

/*! \brief Simple timer based on std::chrono (cv::getTickCount is used below instead) */
class Timer {
  using Clock = std::chrono::high_resolution_clock;
 public:
  /*! \brief start or restart the timer */
  inline void Tic() { start_ = Clock::now(); }
  /*! \brief stop the timer */
  inline void Toc() { end_ = Clock::now(); }
  /*! \brief return elapsed time in ms */
  inline double Elapsed() {
    auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(end_ - start_);
    return static_cast<double>(duration.count());
  }
 private:
  Clock::time_point start_, end_;
};

using namespace caffe;  // NOLINT(build/namespaces)
using std::string;

// Sort pairs by score, descending.
static bool PairCompare(const std::pair<float, int>& lhs,
                        const std::pair<float, int>& rhs) {
  return lhs.first > rhs.first;
}

int main() {
  string model_file   = R"(E:\ProgramData\caffe-windows\data\mnist\windows\lenet.prototxt)";
  string trained_file = R"(E:\ProgramData\caffe-windows\data\mnist\windows\snapshot_lenet_mean\_iter_10000.caffemodel)";
  string mean_file    = R"(E:\ProgramData\caffe-windows\data\mnist\windows\mean.binaryproto)";
  string label_file   = R"(E:\ProgramData\caffe-windows\data\mnist\windows\synset_words.txt)";  // not used below
  string file         = R"(E:\ProgramData\caffe-windows\data\mnist\windows\3.bmp)";

  // Read the network. Its data type is float, so the cv::Mat type used later should be CV_32F.
  if (caffe::GPUAvailable())
    caffe::SetMode(caffe::GPU, 0);  // if a GPU is available, switch to GPU mode
  shared_ptr<caffe::Net> net_(new caffe::Net(model_file));
  net_->CopyTrainedLayersFrom(trained_file);

  cv::Size input_geometry_;
  int num_channels_;
  cv::Mat mean_;
  std::vector<string> labels_ = { "zero", "one", "two", "three", "four",
                                  "five", "six", "seven", "eight", "nine" };

  // Input layer: it can only be obtained through blob_by_name, there is no input_blobs() function.
  // Note the lowercase "data", even though the input layer name in lenet.prototxt is "Data".
  shared_ptr<caffe::Blob> input_layer = net_->blob_by_name("data");
  input_geometry_ = cv::Size(input_layer->width(), input_layer->height());
  num_channels_ = input_layer->channels();
  std::cout << "input shape: "    << input_layer->shape_string() << std::endl;
  std::cout << "input size: "     << input_geometry_ << std::endl;
  std::cout << "input channels: " << num_channels_ << std::endl;

  // Read the mean image data from the binaryproto file.
  // The mean blob is 1*1*28*28, so it can be wrapped directly as a single-channel cv::Mat
  // (no need to split it per channel as in the classification.cpp example).
  shared_ptr<Blob> mean_blob = ReadBlobFromFile(mean_file);
  float* data = mean_blob->mutable_cpu_data();
  cv::Mat mean(mean_blob->height(), mean_blob->width(), CV_32FC1, data);
  cv::Scalar channel_mean = cv::mean(mean);
  // Mean image: every pixel is the average intensity of the mean image.
  mean_ = cv::Mat(input_geometry_, mean.type(), channel_mean);

  // Read the test image and write its data into the network input blob.
  cv::Mat img = cv::imread(file, -1);
  if (!img.data) {
    std::cout << "unable to decode image.\nquit." << std::endl;
    system("pause");
    return -1;
  }
  // Make the input match the single-channel 1*1*28*28 blob.
  if (img.channels() == 3 && num_channels_ == 1)
    cv::cvtColor(img, img, cv::COLOR_BGR2GRAY);
  else if (img.channels() == 4 && num_channels_ == 1)
    cv::cvtColor(img, img, cv::COLOR_BGRA2GRAY);
  else if (img.channels() == 4 && num_channels_ == 3)
    cv::cvtColor(img, img, cv::COLOR_BGRA2BGR);
  else if (img.channels() == 1 && num_channels_ == 3)
    cv::cvtColor(img, img, cv::COLOR_GRAY2BGR);
  if (img.size() != input_geometry_)
    cv::resize(img, img, input_geometry_);
  img.convertTo(img, CV_32FC1);

  // input_layer->Reshape({1, 1, 28, 28});  // not needed, the shape is already correct
  float* datainput = input_layer->mutable_cpu_data();
  // Subtract the mean and copy the result into the data blob. A cv::Mat header wrapping the
  // blob memory is used here; memcpy or cv::Mat::copyTo after "img -= mean_" would also work.
  cv::Mat dataimg(input_geometry_, CV_32FC1, datainput);
  dataimg = img - mean_;

  // After Forward() only the blobs of the first and last layers are kept,
  // i.e. blob_by_name can only be called with "data" and "prob".
  auto t1 = cv::getTickCount();
  net_->Forward();
  auto t2 = cv::getTickCount();
  std::cout << "forward time: " << (t2 - t1) / cv::getTickFrequency() * 1000 << " ms" << std::endl;

  shared_ptr<caffe::Blob> output_layer = net_->blob_by_name("prob");  // softmax output
  std::cout << "output layer shape: " << output_layer->shape_string() << std::endl;
  const float* begin = output_layer->cpu_data();
  // The output blob is 2-D with shape (1, 10), so shape(1) is used instead of channels();
  // for an N*C*H*W blob, channels() would give the same value.
  const float* end = begin + output_layer->shape(1);
  std::vector<float> outres(begin, end);

  // Sort the output scores and print the top N predictions.
  const int N = 5;
  std::vector<std::pair<float, int> > pairs;
  for (size_t i = 0; i < outres.size(); ++i)
    pairs.push_back(std::make_pair(outres[i], static_cast<int>(i)));
  std::partial_sort(pairs.begin(), pairs.begin() + N, pairs.end(), PairCompare);

  std::cout << "=========== prediction ============" << std::endl;
  for (int i = 0; i < N; ++i) {
    int idx = pairs[i].second;
    std::cout << std::fixed << outres[idx] << " - " << labels_[idx] << std::endl;
  }
  system("pause");
  return 0;
}

The mean file was specified when the model was trained, so it is best to subtract the mean at test time as well; this makes the test results more accurate.

The test image (enlarged), the result with the mean subtracted, and the result without the mean subtracted are shown in turn in the following figure:

Attachment: the 64-bit Mini-caffe libraries compiled with VS2015 can be downloaded from http://download.csdn.net/download/wanggao_1990/10000792


5. Errors that may be encountered

When using the cuDNN version you may run into errors. For example, with cuDNN 6.0 you will get an error like "caffe cudnnSetConvolution2dDescriptor error: too few arguments in function call ...". Checking cuDNN shows that the function now requires an additional parameter of type dataType<Dtype>::type. If you want to use cuDNN 6, you need to modify every call site accordingly, in both the header and the source files; using cuDNN 5.1 is recommended instead.
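For reference, the cause of that error is that cuDNN 6 added a compute-type argument to cudnnSetConvolution2dDescriptor, so a call written against cuDNN 5.x passes too few arguments. A hedged sketch of the change (the variable names are illustrative, not taken from the Mini-caffe sources):

// cuDNN 5.x style call:
// cudnnSetConvolution2dDescriptor(conv_desc, pad_h, pad_w, stride_h, stride_w,
//                                 1, 1, CUDNN_CROSS_CORRELATION);
// cuDNN 6 requires the compute data type as an extra final argument:
cudnnSetConvolution2dDescriptor(conv_desc, pad_h, pad_w, stride_h, stride_w,
                                1, 1, CUDNN_CROSS_CORRELATION, CUDNN_DATA_FLOAT);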
