SVM Multi-class Classification with OpenCV 3.0 (Source Code)


For the theory behind SVM, see the introductory post "Some Summarization and Cognition of SVM".

I used to think that multi-class classification with SVM simply meant combining several binary SVM classifiers, in a structure resembling a binary tree, as follows:

That is, all samples are used as input; for the first classifier svm_1, the positive samples are those belonging to class 1 and the negative samples are all the remaining samples. This is the so-called one-vs-rest method. Although training is relatively fast in principle, it brings a series of problems:

1. A sample may be labeled positive by more than one classifier (two of them, say, or in an extreme case all of them). Put simply, when more than one classifier claims the sample belongs to its class, a conflict occurs. Conversely, if no classifier claims the sample at all, it also cannot be classified.

2. Once any one classifier makes a mistake, the final result is misclassified; the error cannot be recovered downstream.

3. Dataset imbalance. This problem affects classifier training the most: when training each classifier, only one class supplies the positive samples and all remaining classes supply the negative samples, so the numbers of positive and negative samples are severely imbalanced.

Its drawbacks clearly outweigh its advantages, so this approach is undesirable in practical applications.

The second method: one-vs-one

In practice, the model obtained with the first method is not very accurate, and its training time holds no real advantage either. So how does the one-vs-one method work? As the name implies, it trains a number of binary SVMs: one SVM for every pair of classes. For example, with 4 classes of samples, this scheme requires training 6 SVMs; extended to k classes, it needs k(k-1)/2 classifiers. But how does it classify?

Simply put, by voting. Every classifier makes a prediction on the sample, and the sample is assigned to the class that receives the most votes. The advantage is that every sample gets a prediction; there are no unclassifiable samples. The popular SVM toolkit LIBSVM, by Chih-Chung Chang and Chih-Jen Lin of National Taiwan University, uses this as its multi-class method.

The third method: DAG SVM

The shape of the structure is as follows:

In this scheme, we first ask the classifier 1v5 (meaning it answers "is it class 1 or class 5?"). If the answer is 5, we go left and then ask "is it 2 or 5?", and keep asking in this way. The benefit is that classification only requires calling 4 classifiers (for 5 classes), so it takes less time, and there is neither classification overlap nor unclassifiable samples.

For the DAG method, there are some tricks in choosing the root node (that is, which classifier participates first) that improve the overall accuracy. We always want the root node to make as few mistakes as possible, so the two classes compared first should preferably differ a great deal, so much that they are unlikely to be confused. Alternatively, we can always take the pairwise classifier with the highest accuracy as the root node. Or we can make each classifier output not only a class label but also something like a "confidence"; when a classifier is not very confident in its result, we do not just follow its output but also explore the neighboring path, and so on.

----------------------OpenCV3.0 SVM Classification code-------------

The following is SVM classification code for the OpenCV 3.0 + VS2013 platform (for environment setup, see my other post on configuring OpenCV 3.0 with VS2013 on Win7).

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include "opencv2/imgcodecs.hpp"
#include <opencv2/highgui.hpp>
#include <opencv2/ml.hpp>
#include <iostream>

using namespace std;
using namespace cv;
using namespace cv::ml;

int main(int, char**)
{
    // Data for visual representation
    int width = 512, height = 512;
    Mat image = Mat::zeros(height, width, CV_8UC3);

    // Set up training data
    //! [setup1]
    int labels[4] = { 1, -1, -1, 1 };
    float trainingData[4][2] = { { 1, 2 }, { -1, -10 }, { 1, -2 }, { 2, 1 } };
    //! [setup1]
    //! [setup2]
    Mat trainingDataMat(4, 2, CV_32FC1, trainingData);
    Mat labelsMat(4, 1, CV_32SC1, labels);
    //! [setup2]

    // Train the SVM
    //! [init]
    Ptr<SVM> svm = SVM::create();
    svm->setType(SVM::C_SVC);
    svm->setKernel(SVM::LINEAR);
    svm->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER, 100, 1e-6));
    //! [init]
    //! [train]
    svm->train(trainingDataMat, ROW_SAMPLE, labelsMat);
    //! [train]

    // Show the decision regions given by the SVM
    //! [show]
    Vec3b green(0, 255, 0), blue(255, 0, 0);
    for (int i = 0; i < image.rows; ++i)
        for (int j = 0; j < image.cols; ++j)
        {
            Mat sampleMat = (Mat_<float>(1, 2) << j, i);
            float response = svm->predict(sampleMat);

            if (response == 1)
                image.at<Vec3b>(i, j) = green;
            else if (response == -1)
                image.at<Vec3b>(i, j) = blue;
        }
    //! [show]

    // Show the training data
    //! [show_data]
    int thickness = -1;
    int lineType = 8;
    circle(image, Point(1, 2), 5, Scalar(0, 0, 0), thickness, lineType);
    circle(image, Point(-1, -10), 5, Scalar(255, 255, 255), thickness, lineType);
    circle(image, Point(1, -2), 5, Scalar(255, 255, 255), thickness, lineType);
    circle(image, Point(2, 1), 5, Scalar(255, 255, 255), thickness, lineType);
    //! [show_data]

    // Show support vectors
    //! [show_vectors]
    thickness = 2;
    lineType = 8;
    Mat sv = svm->getSupportVectors();
    for (int i = 0; i < sv.rows; ++i)
    {
        const float* v = sv.ptr<float>(i);
        circle(image, Point((int)v[0], (int)v[1]), 6, Scalar(128, 128, 128), thickness, lineType);
    }
    //! [show_vectors]

    // Predict a test sample
    Mat res;
    float testData[1][2] = { { 1, -11 } };
    Mat query(1, 2, CV_32FC1, testData);
    svm->predict(query, res);
    cout << res;

    imwrite("result.png", image);        // save the image
    imshow("SVM Simple Example", image); // show it to the user
    waitKey(0);
}

It should be noted that in OpenCV the multi-class SVM strategy is an internal, hidden detail: when svm->train() is called, you can pass multi-class samples and their labels directly, with no extra parameters.


Original address: http://blog.csdn.net/coder_oyang/article/details/47301647


Multi-class classifiers are frequently needed in object recognition, and SVM offers a relatively mature and straightforward approach. In general, using SVM as a multi-class classifier follows one of the following ideas:

One-vs-all (one-vs-rest). Train n SVMs, each using one target class as the positive samples and all remaining samples as negative samples. This was introduced in Andrew Ng's machine learning course.
Disadvantage: because the training set has a 1:n imbalance, there is a large bias, which makes it not particularly practical.

One-vs-one. During training, an SVM is trained between every pair of classes, so n classes require n(n-1)/2 SVMs. At prediction time, voting is used to classify an unknown sample. This is the method LIBSVM uses.
Disadvantage: when there are many classes, the n(n-1)/2 support vector machines make the computational cost large.

Hierarchical support vector machines. All categories are first divided into two subclasses, and each subclass is further divided in two, until every subclass contains a single category, forming a tree. For details see: Liu Zhigang, Li Deren, Qin Qianqing, et al. The generalization of support vector machines in multi-class classification problems [J]. 2004.

DAG-SVMs. The decision directed acyclic graph (DDAG) proposed by Platt addresses the misclassification and rejection phenomena that exist in one-vs-one SVMs. Please refer to the paper for a simple example.

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include "opencv2/imgcodecs.hpp"
#include <opencv2/highgui.hpp>
#include <opencv2/ml.hpp>
#include <iostream>
#include <ctime>

using namespace cv;
using namespace cv::ml;

Vec3b getRandomColor()
{
    RNG rng(clock());
    return Vec3b(rng.next() % 255, rng.next() % 255, rng.next() % 255);
}

int main(int, char**)
{
    // Data for visual representation
    int width = 512, height = 512;
    Mat image = Mat::zeros(height, width, CV_8UC3);

    // Set up training data: one sample per class, four classes
    int labels[4] = { 1, 2, 3, 4 };
    float trainingData[4][2] = { { 100, 10 }, { 10, 500 }, { 500, 10 }, { 500, 500 } };
    Mat trainingDataMat(4, 2, CV_32FC1, trainingData);
    Mat labelsMat(4, 1, CV_32SC1, labels);

    // Set up the SVM with a degree-1 polynomial kernel
    Ptr<SVM> svm = SVM::create();
    svm->setType(SVM::C_SVC);
    svm->setKernel(SVM::POLY);
    svm->setDegree(1.0);
    svm->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER, 100, 1e-6));

    // Train: trainAuto cross-validates the parameters, then train() refits
    Ptr<TrainData> autoTrainData = TrainData::create(trainingDataMat, ROW_SAMPLE, labelsMat);
    svm->trainAuto(autoTrainData);
    svm->train(trainingDataMat, ROW_SAMPLE, labelsMat);

    // Show the decision regions given by the SVM
    Vec3b green(0, 255, 0), blue(255, 0, 0), red(0, 0, 255), yellow(0, 255, 255);
    for (int i = 0; i < image.rows; ++i)
    {
        for (int j = 0; j < image.cols; ++j)
        {
            Mat sampleMat = (Mat_<float>(1, 2) << j, i);
            float response = svm->predict(sampleMat);
            double ratio = 0.5;
            if (response == 1)
                image.at<Vec3b>(i, j) = green * ratio;
            else if (response == 2)
                image.at<Vec3b>(i, j) = blue * ratio;
            else if (response == 3)
                image.at<Vec3b>(i, j) = red * ratio;
            else if (response == 4)
                image.at<Vec3b>(i, j) = yellow * ratio;
        }
    }

    // Show the training data
    int thickness = -1;
    int lineType = 8;
    circle(image, Point(100, 10), 5, Scalar(0, 255, 0), thickness, lineType);
    circle(image, Point(10, 500), 5, Scalar(255, 0, 0), thickness, lineType);
    circle(image, Point(500, 10), 5, Scalar(0, 0, 255), thickness, lineType);
    circle(image, Point(500, 500), 5, Scalar(0, 255, 255), thickness, lineType);

    // Show support vectors
    thickness = 2;
    lineType = 8;
    Mat sv = svm->getSupportVectors();
    std::cout << sv << std::endl;
    for (int i = 0; i < sv.rows; ++i)
    {
        const float* v = sv.ptr<float>(i);
        circle(image, Point((int)v[0], (int)v[1]), 6, CV_RGB(128, 128, 128), 2);
    }

    imwrite("result.png", image);        // save the image
    imshow("SVM Simple Example", image); // show it to the user
    waitKey(0);
}

Effect:

Note: whether you tune the RBF kernel by hand or rely on trainAuto, parameter selection is important; if the result is poor, keep experimenting.

Original address: http://blog.csdn.net/heroacool/article/details/50997024
