Using the tiny-cnn open source library (MNIST)

tiny-cnn is an open source CNN library released under the BSD 3-Clause license. The author keeps it actively updated, which makes it a good codebase for learning CNNs in more depth. The following describes how to compile and use tiny-cnn with Visual Studio 2013 on Windows 7 64-bit.

1. Download the source code from https://github.com/nyanp/tiny-cnn:

$ git clone https://github.com/nyanp/tiny-cnn.git

The version used here is commit 77d80a8, updated on 2016-01-22.
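To reproduce that exact revision, the working tree can be pinned to the commit after cloning (assuming the commit is still reachable in the repository's history):

$ cd tiny-cnn
$ git checkout 77d80a8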

2. The source tree already contains a VS2013 solution, vc/tiny-cnn.sln, which defaults to the Win32 platform, and examples/main.cpp requires OpenCV support. Here a new x64 console project named TINY-CNN is created instead;
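Because the example code below depends on OpenCV for loading the test images, it can be worth verifying that the new x64 project finds the OpenCV headers and libraries before adding the tiny-cnn sources. A minimal sketch, assuming OpenCV's include and library paths have already been added to the project settings (test.png is just a placeholder file name):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // Try to load any image: if this compiles, links and runs, the x64 OpenCV setup is usable.
    cv::Mat img = cv::imread("test.png", cv::IMREAD_GRAYSCALE);
    std::cout << (img.empty() ? "OpenCV linked; image not found" : "OpenCV OK") << std::endl;
    return 0;
}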

3. Following the layout of the source project, add the corresponding .h files to the new console project, and add a new Test_tiny-cnn.cpp file;

4. Copy the code from test.cpp and train.cpp under examples/mnist into Test_tiny-cnn.cpp, giving the following file:

#include <iostream>
#include <fstream>
#include <string>
#include <vector>
#include <algorithm>
#include <cmath>
#include <tiny_cnn/tiny_cnn.h>
#include <opencv2/opencv.hpp>

using namespace tiny_cnn;
using namespace tiny_cnn::activation;

// rescale output to 0-100
template <typename Activation>
double rescale(double x)
{
    Activation a;
    return 100.0 * (x - a.scale().first) / (a.scale().second - a.scale().first);
}

void construct_net(network<mse, adagrad>& nn);
void train_lenet(std::string data_dir_path);
// convert tiny_cnn::image to cv::Mat and resize
cv::Mat image2mat(image<>& img);
void convert_image(const std::string& imagefilename, double minv, double maxv, int w, int h, vec_t& data);
void recognize(const std::string& dictionary, const std::string& filename, int target);

int main()
{
    // train
    std::string data_path = "D:/Download/MNIST";
    train_lenet(data_path);

    // test
    std::string model_path = "D:/Download/MNIST/LeNet-weights";
    std::string image_path = "D:/Download/MNIST/";
    int target[10] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };

    for (int i = 0; i < 10; i++) {
        char ch[15];
        sprintf(ch, "%d", i);
        std::string str = std::string(ch);
        str += ".png";
        str = image_path + str;

        recognize(model_path, str, target[i]);
    }

    std::cout << "ok!" << std::endl;
    return 0;
}

void train_lenet(std::string data_dir_path)
{
    // specify loss-function and learning strategy
    network<mse, adagrad> nn;

    construct_net(nn);

    std::cout << "load models..." << std::endl;

    // load MNIST dataset
    std::vector<label_t> train_labels, test_labels;
    std::vector<vec_t> train_images, test_images;

    parse_mnist_labels(data_dir_path + "/train-labels.idx1-ubyte", &train_labels);
    parse_mnist_images(data_dir_path + "/train-images.idx3-ubyte", &train_images, -1.0, 1.0, 2, 2);
    parse_mnist_labels(data_dir_path + "/t10k-labels.idx1-ubyte", &test_labels);
    parse_mnist_images(data_dir_path + "/t10k-images.idx3-ubyte", &test_images, -1.0, 1.0, 2, 2);

    std::cout << "start training" << std::endl;

    progress_display disp(train_images.size());
    timer t;
    int minibatch_size = 10;
    int num_epochs = 30;

    nn.optimizer().alpha *= std::sqrt(minibatch_size);

    // create callbacks
    auto on_enumerate_epoch = [&]() {
        std::cout << t.elapsed() << "s elapsed." << std::endl;
        tiny_cnn::result res = nn.test(test_images, test_labels);
        std::cout << res.num_success << "/" << res.num_total << std::endl;

        disp.restart(train_images.size());
        t.restart();
    };

    auto on_enumerate_minibatch = [&]() {
        disp += minibatch_size;
    };

    // training
    nn.train(train_images, train_labels, minibatch_size, num_epochs,
             on_enumerate_minibatch, on_enumerate_epoch);

    std::cout << "end training." << std::endl;

    // test and show results
    nn.test(test_images, test_labels).print_detail(std::cout);

    // save networks
    std::ofstream ofs("D:/Download/MNIST/LeNet-weights");
    ofs << nn;
}

void construct_net(network<mse, adagrad>& nn)
{
    // connection table [Y.LeCun, 1998 Table.1]
#define O true
#define X false
    static const bool tbl[] = {
        O, X, X, X, O, O, O, X, X, O, O, O, O, X, O, O,
        O, O, X, X, X, O, O, O, X, X, O, O, O, O, X, O,
        O, O, O, X, X, X, O, O, O, X, X, O, X, O, O, O,
        X, O, O, O, X, X, O, O, O, O, X, X, O, X, O, O,
        X, X, O, O, O, X, X, O, O, O, O, X, O, O, X, O,
        X, X, X, O, O, O, X, X, O, O, O, O, X, O, O, O
    };
#undef O
#undef X

    // construct nets
    nn << convolutional_layer<tan_h>(32, 32, 5, 1, 6)   // C1, 1@32x32-in, 6@28x28-out
       << average_pooling_layer<tan_h>(28, 28, 6, 2)    // S2, 6@28x28-in, 6@14x14-out
       << convolutional_layer<tan_h>(14, 14, 5, 6, 16,
              connection_table(tbl, 6, 16))             // C3, 6@14x14-in, 16@10x10-out
       << average_pooling_layer<tan_h>(10, 10, 16, 2)   // S4, 16@10x10-in, 16@5x5-out
       << convolutional_layer<tan_h>(5, 5, 5, 16, 120)  // C5, 16@5x5-in, 120@1x1-out
       << fully_connected_layer<tan_h>(120, 10);        // F6, 120-in, 10-out
}

void recognize(const std::string& dictionary, const std::string& filename, int target)
{
    network<mse, adagrad> nn;

    construct_net(nn);

    // load nets
    std::ifstream ifs(dictionary.c_str());
    ifs >> nn;

    // convert image file to vec_t
    vec_t data;
    convert_image(filename, -1.0, 1.0, 32, 32, data);

    // recognize
    auto res = nn.predict(data);
    std::vector<std::pair<double, int> > scores;

    // sort & print top-3
    for (int i = 0; i < 10; i++)
        scores.emplace_back(rescale<tan_h>(res[i]), i);

    std::sort(scores.begin(), scores.end(), std::greater<std::pair<double, int>>());

    for (int i = 0; i < 3; i++)
        std::cout << scores[i].second << "," << scores[i].first << std::endl;

    std::cout << "the actual digit is: " << scores[0].second
              << ", correct digit is: " << target << std::endl;

    // visualize outputs of each layer
    //for (size_t i = 0; i < nn.depth(); i++) {
    //    auto out_img = nn[i]->output_to_image();
    //    cv::imshow("layer:" + std::to_string(i), image2mat(out_img));
    //}
    //// visualize filter shape of first convolutional layer
    //auto weight = nn.at<convolutional_layer<tan_h>>(0).weight_to_image();
    //cv::imshow("weights:", image2mat(weight));
    //cv::waitKey(0);
}

// convert tiny_cnn::image to cv::Mat and resize
cv::Mat image2mat(image<>& img)
{
    cv::Mat ori(img.height(), img.width(), CV_8U, &img.at(0, 0));
    cv::Mat resized;
    cv::resize(ori, resized, cv::Size(), 3, 3, cv::INTER_AREA);
    return resized;
}

void convert_image(const std::string& imagefilename,
                   double minv, double maxv,
                   int w, int h,
                   vec_t& data)
{
    auto img = cv::imread(imagefilename, cv::IMREAD_GRAYSCALE);
    if (img.data == nullptr) return; // cannot open, or it's not an image

    cv::Mat_<uint8_t> resized;
    cv::resize(img, resized, cv::Size(w, h));

    // MNIST dataset is "white on black", so negation is required
    std::transform(resized.begin(), resized.end(), std::back_inserter(data),
                   [=](uint8_t c) { return (255 - c) * (maxv - minv) / 255.0 + minv; });
}
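Once a first run has produced the LeNet-weights file, retraining on every launch is unnecessary. Below is a minimal sketch of how main could be reduced to the recognition step only (replacing the main shown above), assuming the saved weights and the ten digit images already exist under D:/Download/MNIST:

// Test-only entry point: skips train_lenet() and reuses the previously saved weights.
int main()
{
    std::string model_path = "D:/Download/MNIST/LeNet-weights";
    std::string image_path = "D:/Download/MNIST/";

    for (int i = 0; i < 10; i++)
        recognize(model_path, image_path + std::to_string(i) + ".png", i);

    std::cout << "ok!" << std::endl;
    return 0;
}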

5. A few errors are reported at compile time; the workarounds are as follows (a code sketch of both fixes appears after this list):

(1) Error C4996. Workaround: add the macro _SCL_SECURE_NO_WARNINGS to the project's Preprocessor Definitions;

(2) Error C2668 (ambiguous call to overloaded function) on a call to the for_ function. Workaround: explicitly cast the third argument of the for_ call to size_t;
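For reference, the C4996 suppression can also be applied in code instead of in the project properties. The sketch below shows that variant, together with a hypothetical illustration of the C2668 cast; the actual for_ call site and the argument names (parallelize, out_size) come from the compiler's error message in a given build, not from the library's documentation:

// (1) Alternative to the Preprocessor Definitions setting: define the macro
//     before any standard or tiny-cnn header is included.
#define _SCL_SECURE_NO_WARNINGS
#include <tiny_cnn/tiny_cnn.h>

// (2) Shape of the C2668 fix (hypothetical call site): make the third argument
//     of for_ unambiguously size_t with an explicit cast, e.g.
//     for_(parallelize, 0, static_cast<size_t>(out_size), ...);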

6. Run the program; the training results are shown below:


7. To test the generated model, use a paint tool to draw one image for each digit, 10 images in total, for example:


Load the model produced during training and run recognition on the 10 images; in the results, the digits 6 and 9 are misclassified as 5 and 1:

