Learning OpenCV: BoW feature-extraction functions (feature points)


From: http://www.xuebuyuan.com/582331.html

A simple method for classification using feature points:

I. Training

1. Extract SIFT features from the positive/negative training samples; the number of features extracted varies per image (each SIFT descriptor is 128-dimensional).

2. Use a clustering method (e.g. k-means) to quantize the variable number of features into a fixed number of clusters (for example, 10). These clusters are the "words" of the BoW (Bag of Words) vocabulary.

(This article mainly covers the steps above.)

3. Normalize, producing a histogram over these 10 words, e.g. [0.1, 0.2, 0.7, 0, ..., 0].

4. Feed each image's 10-word histogram, together with its (manually tagged) label (+/-), into SVM training as the feature instance.

II. Prediction

1. Extract features from test_img (e.g. 137 of them).

2. Compute the distance from each feature to the 10 cluster centres (e.g. Euclidean distance in 128 dimensions) and assign the feature to the nearest word.

3. Normalize, producing a histogram over the 10 words, e.g. [0, 0.2, 0.2, 0.6, 0, ..., 0].

4. Call svm_predict to get the result.

Feature clustering with OpenCV's BoW classes

First, the relevant OpenCV BoW functions are described here.

The main interfaces are:

1. Feature point detection

Ptr<FeatureDetector> FeatureDetector::create(const string& detectorType)

Using this interface, different feature detectors were tested: the same image was detected before and after horizontal flipping.

The coordinate type of the detected keypoints is pt: int or float (a property of KeyPoint),

and the keypoint counts on the two images were num1 and num2:

"FAST" –fastfeaturedetector pt:int (num1:615 num2:618)
"STAR" –starfeaturedetector pt:int (num1:43 num2:42)
"SIFT" –sift (nonfree module) pt:float (num1:155 num2:135)//must be initialized with Initmodule_nonfree ()
"SURF" –surf (nonfree module) pt:float (num1:344 num2:342)
Ibid.
"ORB" –orb pt:float (num1:496 num2:497)
"Mser" –mser pt:float (num1:51 num2:45)
"GFTT" –goodfeaturestotrackdetector pt:int (num1:744 num2:771)
"HARRIS" –goodfeaturestotrackdetector with Harris detector enabled Pt:float (num1:162 num2:160)
"Dense" –densefeaturedetector pt:int (num1:3350 num2:3350)

2. Feature descriptor extraction

Ptr<DescriptorExtractor> DescriptorExtractor::create(const string& descriptorExtractorType)


3. Descriptor matching

Ptr<DescriptorMatcher> descriptorMatcher = DescriptorMatcher::create(const string& descriptorMatcherType)

descriptorMatcherType – the descriptor matcher type. The following matcher types are supported:
BruteForce (it uses L2)
BruteForce-L1
BruteForce-Hamming
BruteForce-Hamming(2)
FlannBased

Example: Ptr<DescriptorMatcher> descriptorMatcher = DescriptorMatcher::create("BruteForce");

4. class BOWTrainer

Class BOWKMeansTrainer (derived from BOWTrainer) trains using the k-means algorithm:

BOWKMeansTrainer::BOWKMeansTrainer(int clusterCount, const TermCriteria& termcrit=TermCriteria(), int attempts=3, int flags=KMEANS_PP_CENTERS)

The parameters are the same as those of kmeans().

Code implementation:

1. Draw the feature points.

2. Cluster the feature points with k-means; each cluster represents a category.

 

#include "opencv2/highgui/highgui.hpp" #include "opencv2/calib3d/calib3d.hpp" #include "opencv2/imgproc/imgproc.hpp "#include" opencv2/features2d/features2d.hpp "#include" opencv2/nonfree/nonfree.hpp "#include <iostream>using namespace cv;using namespace std; #define CLUSTERNUM 10void drawandmatchkeypoints (const mat& img1,const mat& IMG2 , const vector<keypoint>& keypoints1,const vector<keypoint>& keypoints2,const Mat& Descriptors1,const mat& Descriptors2) {Mat keyp1,keyp2;drawkeypoints (Img1,keypoints1,keyp1,scalar::all (-1), 0); Drawkeypoints (Img2,keypoints2,keyp2,scalar::all ( -1), 0);p Uttext (keyP1, "drawkeypoints", Cvpoint (10,30), FONT_ Hershey_simplex, 1, scalar:: All ( -1));p Uttext (keyP2, "drawkeypoints", Cvpoint (10,30), Font_hershey_simplex, 1, scalar: : All ( -1)); Imshow ("Img1 keypoints", keyP1); Imshow ("Img2 keypoints", keyP2); ptr<descriptormatcher> Descriptormatcher = descriptormatcher::create ("Bruteforce");vector<DMatch> Matches;descriptormatCher->match (Descriptors1, Descriptors2, matches); Mat Show;drawmatches (Img1,keypoints1,img2,keypoints2,matches,show,scalar::all ( -1), Cv_rgb (255,255,255), Mat (), 4);  Puttext (Show, "Drawmatchkeypoints", Cvpoint (10,30), Font_hershey_simplex, 1, Scalar:: All (-1)); Imshow ("Match", show);} Test Opencv:class bowtrainervoid bowkeams (const mat& img, const vector<keypoint>& keypoints, const MAT & Descriptors, mat& centers) {//bow Kmeans algorithm clustering; Bowkmeanstrainer Bowk (Clusternum, Cvtermcriteria (Cv_termcrit_eps + Cv_termcrit_iter, ten, 0.1), 3,2); centers = Bowk.cluster (descriptors);cout<<endl<< "< cluster num:" <<centers.rows<< ">" << Endl ptr<descriptormatcher> Descriptormatcher = descriptormatcher::create ("Bruteforce");vector<DMatch> Matches;descriptormatcher->match (descriptors,centers,matches);//const mat& queryDescriptors, const Mat& Traindescriptors The first parameter is the node to be classified, the second parameter is the cluster center; Mat Democluster;img.copyto 
(Democluster);//For each class of KeypoiNT defines a color scalar color[]={cv_rgb (255,255,255), Cv_rgb (255,0,0), Cv_rgb (0,255,0), Cv_rgb (0,0,255), Cv_rgb (255,255,0), C V_rgb (255,0,255), Cv_rgb (0,255,255), Cv_rgb (123,123,0), Cv_rgb (0,123,123), Cv_rgb (123,0,123)};for (vector<DMatch >::iterator Iter=matches.begin (); Iter!=matches.end (); iter++) {cout<< "< descriptorsidx:" <<iter- >queryIdx<< "CENTERSIDX:" <<iter->trainIdx<< "Distincs:" <<iter->distance<< ">" <<endl; Point center= keypoints[iter->queryidx].pt;circle (democluster,center,2,color[iter->trainidx],-1);} Puttext (Democluster, "keypoints Clustering: a color represents a type", Cvpoint (10,30), Font_hershey_simplex, 1, Scalar:: All (-1)); Imshow ("Keypoints clusrtering", Democluster);} int main () {cv::initmodule_nonfree ();//Sift/surf Create must be preceded by initmodule_<modulename> (); cout << < Creating detector, descriptor extractor and descriptor matcher ... "; ptr<featuredetector> detector = featuredetector::create ("SIFT"); ptr<descriptorextractor> descriptorextractor = descriptorextractor::create ("SIFT"); ptr<descriptormatcher> Descriptormatcher = descriptormatcher::create ("Bruteforce"); cout << ">" < < Endl;if (Detector.empty () | | Descriptorextractor.empty ()) {cout << "Can not create detector or descriptor EXSTR Actor or descriptor Matcher of given types "<< endl;return-1;} cout << Endl << < Reading images ... << Endl; Mat IMG1 = Imread ("d:/demo0.jpg"); Mat Img2 = Imread ("d:/demo1.jpg");cout<<endl<< ">" <<endl;//detect keypoints;cout << Endl << "< extracting keypoints from images ..." << endl;vector<keypoint> keypoints1,keypoints2; Detector->detect (IMG1, Keypoints1);d etector->detect (Img2, keypoints2); cout << "IMG1:" << Keypoints1.size () << "points Img2:" <<keypoints2.size () << "points" << Endl << ">" &lt ;< Endl;//compute descriptors for KeypoiNts;cout << "< Computing descriptors for keypoints from images ..." 
<< Endl; Mat Descriptors1,descriptors2;descriptorextractor->compute (IMG1, Keypoints1, descriptors1); Descriptorextractor->compute (Img2, Keypoints2, descriptors2);cout<<endl<< "< Descriptoers Size:" <<descriptors2.size () << ">" <<endl;cout<<endl<< "Descriptor's col:" << descriptors2.cols<<endl<< "Descriptor's Row:" <<descriptors2.rows<<endl;cout << "> The process of matching the << Endl;//draw and match Img1,img2 keypoints//is to match;drawandmatchkeypoints the descriptors of the feature points (IMG1,IMG2, KEYPOINTS1,KEYPOINTS2,DESCRIPTORS1,DESCRIPTORS2); Mat center;//extracts feature points to img1, and clusters//tests Opencv:class bowtrainerbowkeams (img1,keypoints1,descriptors1,center); Waitkey ();}

Implementing drawKeypoints via Qt:

void QT_Test1::on_drawKeypoints_clicked()
{
    // initModule_nonfree();
    Ptr<FeatureDetector> detector = FeatureDetector::create("FAST");
    vector<KeyPoint> keypoints;
    detector->detect(src, keypoints);
    Mat drawKeyP;
    drawKeypoints(src, keypoints, drawKeyP, Scalar::all(-1), 0);
    putText(drawKeyP, "drawKeypoints", cvPoint(10, 30), FONT_HERSHEY_SIMPLEX, 0.5, Scalar::all(255));
    cvtColor(drawKeyP, image, CV_RGB2RGBA);
    QImage img = QImage((const unsigned char*)(image.data), image.cols, image.rows,
                        QImage::Format_RGB32);
    QLabel *label = new QLabel(this);
    label->move(50, 50);  // position of the image in the window;
    label->setPixmap(QPixmap::fromImage(img));
    label->resize(label->pixmap()->size());
    label->show();
}

Since initModule_nonfree() always errors out, I am unable to extract SIFT and SURF feature points.

Clustering cannot be achieved either, because running the BoW k-means trainer, BOWKMeansTrainer bowK(CLUSTER_NUM, cvTermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 10, 0.1), 3, 2);, always fails. I don't yet know how to solve this ~~~~~ (>_<) ~ ~ ~ and need to keep learning.

