Learning the SIFT and SURF operators in OpenCV features2d for feature extraction and matching

Source: Internet
Author: User
This article briefly introduces how to use the SURF and SIFT operators to detect feature points. Based on the detected points, you can then extract feature descriptors and use a matching function to match them. The concrete implementation is to first use SurfFeatureDetector to detect the feature points, then use SurfDescriptorExtractor to compute the feature vectors, and finally use BruteForceMatcher or FlannBasedMatcher for matching.

Overview

The previous article described how to detect feature points with the SIFT and SURF operators. Building on that detection, you can use the SIFT and SURF operators to extract feature descriptors and use a matching function to match them. The concrete implementation is to first use SurfFeatureDetector to detect the feature points, then use SurfDescriptorExtractor to compute the feature vectors of the feature points, and finally use BruteForceMatcher or FlannBasedMatcher (the difference between the two is exhaustive search versus approximate nearest-neighbor search) for feature point matching.

The experiment environment is OpenCV 2.4.0 + VS2008 + Win7. Note that in OpenCV 2.4.x, SurfFeatureDetector is declared in opencv2/nonfree/features2d.hpp, BruteForceMatcher in opencv2/legacy/legacy.hpp, and FlannBasedMatcher in opencv2/features2d/features2d.hpp.

BruteForce matching

First, use the BruteForceMatcher brute-force matching method. The code is as follows:

/**
 * @ Use the SURF operator to detect feature points, extract their features,
 *   and match the feature points with the BruteForce matching method
 * @ SurfFeatureDetector + SurfDescriptorExtractor + BruteForceMatcher
 * @ author holybin
 */
#include <iostream>
#include <vector>
#include "opencv2/core/core.hpp"
#include "opencv2/nonfree/features2d.hpp"      // SurfFeatureDetector is in this header file
#include "opencv2/legacy/legacy.hpp"           // BruteForceMatcher is actually in this header file
//#include "opencv2/features2d/features2d.hpp" // FlannBasedMatcher is in this header file
#include "opencv2/highgui/highgui.hpp"
using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
    Mat src_1 = imread("D:\\opencv_pic\\cat3d120.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    Mat src_2 = imread("D:\\opencv_pic\\cat0.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    if (!src_1.data || !src_2.data)
    {
        cout << "-- (!) Error reading images" << endl;
        return -1;
    }
    // -- Step 1: detect keypoints with the SURF operator
    int minHessian = 400;
    SurfFeatureDetector detector(minHessian);
    vector<KeyPoint> keypoints_1, keypoints_2;
    detector.detect(src_1, keypoints_1);
    detector.detect(src_2, keypoints_2);
    cout << "img1 -- number of keypoints: " << keypoints_1.size() << endl;
    cout << "img2 -- number of keypoints: " << keypoints_2.size() << endl;
    // -- Step 2: compute the feature vectors (descriptors) of the keypoints
    SurfDescriptorExtractor extractor;
    Mat descriptors_1, descriptors_2;
    extractor.compute(src_1, keypoints_1, descriptors_1);
    extractor.compute(src_2, keypoints_2, descriptors_2);
    // -- Step 3: match the descriptors by brute force (L2 distance)
    BruteForceMatcher< L2<float> > matcher;
    vector<DMatch> matches;
    matcher.match(descriptors_1, descriptors_2, matches);
    cout << "number of matches: " << matches.size() << endl;
    // -- Draw the matching result
    Mat matchImg;
    drawMatches(src_1, keypoints_1, src_2, keypoints_2, matches, matchImg);
    imshow("matching result", matchImg);
    waitKey(0);
    return 0;
}

Experiment results:



FLANN Matching Method

The results of brute-force matching are not very good. The following uses FlannBasedMatcher to perform feature matching and keeps only the good matches. The code is as follows:

/**
 * @ Use the SURF operator to detect feature points, extract their features,
 *   and match the feature points with the FLANN matching method
 * @ SurfFeatureDetector + SurfDescriptorExtractor + FlannBasedMatcher
 * @ author holybin
 */
#include <iostream>
#include <vector>
#include "opencv2/core/core.hpp"
#include "opencv2/nonfree/features2d.hpp"    // SurfFeatureDetector is actually in this header file
//#include "opencv2/legacy/legacy.hpp"       // BruteForceMatcher is in this header file
#include "opencv2/features2d/features2d.hpp" // FlannBasedMatcher is in this header file
#include "opencv2/highgui/highgui.hpp"
using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
    Mat src_1 = imread("D:\\opencv_pic\\cat3d120.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    Mat src_2 = imread("D:\\opencv_pic\\cat0.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    if (!src_1.data || !src_2.data)
    {
        cout << "-- (!) Error reading images" << endl;
        return -1;
    }
    // -- Step 1: detect keypoints with the SURF operator
    int minHessian = 400;
    SurfFeatureDetector detector(minHessian);
    vector<KeyPoint> keypoints_1, keypoints_2;
    detector.detect(src_1, keypoints_1);
    detector.detect(src_2, keypoints_2);
    cout << "img1 -- number of keypoints: " << keypoints_1.size() << endl;
    cout << "img2 -- number of keypoints: " << keypoints_2.size() << endl;
    // -- Step 2: compute the feature vectors (descriptors) of the keypoints
    SurfDescriptorExtractor extractor;
    Mat descriptors_1, descriptors_2;
    extractor.compute(src_1, keypoints_1, descriptors_1);
    extractor.compute(src_2, keypoints_2, descriptors_2);
    // -- Step 3: match the descriptors with FLANN
    FlannBasedMatcher matcher;
    vector<DMatch> allMatches;
    matcher.match(descriptors_1, descriptors_2, allMatches);
    cout << "number of matches before filtering: " << allMatches.size() << endl;
    // -- Compute the maximum and minimum distances over all matches
    double maxDist = 0, minDist = 100;
    for (int i = 0; i < descriptors_1.rows; i++)
    {
        double dist = allMatches[i].distance;
        if (dist < minDist) minDist = dist;
        if (dist > maxDist) maxDist = dist;
    }
    printf("max dist: %f\n", maxDist);
    printf("min dist: %f\n", minDist);
    // -- Filter the matches and keep only the good ones
    //    (the criterion used here: distance < 2 * minDist)
    vector<DMatch> goodMatches;
    for (int i = 0; i < descriptors_1.rows; i++)
    {
        if (allMatches[i].distance < 2 * minDist)
            goodMatches.push_back(allMatches[i]);
    }
    cout << "number of matches after filtering: " << goodMatches.size() << endl;
    // -- Draw only the good matches
    Mat matchImg;
    drawMatches(src_1, keypoints_1, src_2, keypoints_2, goodMatches, matchImg,
                Scalar::all(-1), Scalar::all(-1), vector<char>(),
                DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS); // do not display unmatched points
    imshow("matching result", matchImg);
    // -- Output the correspondence of the matched points
    for (int i = 0; i < (int)goodMatches.size(); i++)
        printf("good match %d: keypoints_1[%d] -- keypoints_2[%d]\n",
               i, goodMatches[i].queryIdx, goodMatches[i].trainIdx);
    waitKey(0);
    return 0;
}

Experiment results:



From the results of the second experiment, we can see that the number of matches was reduced from 49 to 33 after filtering, and the matching accuracy increased. Of course, you can also run the two matching experiments above with the SIFT operator: simply replace SurfFeatureDetector with SiftFeatureDetector and SurfDescriptorExtractor with SiftDescriptorExtractor.
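As a minimal sketch of that swap (assuming the same OpenCV 2.4.x nonfree module as in the experiments above; the helper function name is chosen here for illustration), the SIFT variant of the detection and extraction steps would look like:

```cpp
// Sketch: SIFT variant of Step 1 (detection) and Step 2 (extraction).
// Assumes OpenCV 2.4.x with the nonfree module built in.
#include <vector>
#include "opencv2/core/core.hpp"
#include "opencv2/nonfree/features2d.hpp" // SiftFeatureDetector, SiftDescriptorExtractor

using namespace cv;
using namespace std;

// Hypothetical helper: detect SIFT keypoints and compute their descriptors.
void detectAndExtractSift(const Mat& src, vector<KeyPoint>& keypoints, Mat& descriptors)
{
    SiftFeatureDetector detector;      // replaces SurfFeatureDetector
    SiftDescriptorExtractor extractor; // replaces SurfDescriptorExtractor
    detector.detect(src, keypoints);
    extractor.compute(src, keypoints, descriptors);
}
```

The matching step (Step 3) is unchanged, since both SIFT and SURF descriptors are float vectors that BruteForceMatcher and FlannBasedMatcher can consume directly.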


Expansion

Based on the FLANN matching method, we can go further and use a perspective transform and spatial mapping to locate a known object (target detection). Specifically, the findHomography function is used to find the perspective transform corresponding to the matched keypoints, and the perspectiveTransform function is then used to map the object's corner points into the scene. For details, refer to this article: feature2D learning in OpenCV -- SIFT and SURF algorithms for target detection.
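The step above can be sketched as follows (a sketch only, assuming the keypoints_1, keypoints_2, and goodMatches variables from the FLANN example above; the function name and parameters are chosen here for illustration):

```cpp
// Sketch: locating a known object with findHomography + perspectiveTransform.
// Assumes keypoints_1/keypoints_2 and goodMatches as produced by the FLANN example.
#include <vector>
#include "opencv2/core/core.hpp"
#include "opencv2/calib3d/calib3d.hpp"       // findHomography
#include "opencv2/features2d/features2d.hpp" // KeyPoint, DMatch

using namespace cv;
using namespace std;

// Hypothetical helper: estimate the homography from the good matches and
// map the object image's four corners into the scene image.
Mat locateObject(const vector<KeyPoint>& keypoints_1,   // object image keypoints
                 const vector<KeyPoint>& keypoints_2,   // scene image keypoints
                 const vector<DMatch>& goodMatches,
                 const Size& objSize,                   // size of the object image
                 vector<Point2f>& sceneCorners)         // output: object corners in the scene
{
    // -- Collect the matched point coordinates from both images
    vector<Point2f> obj, scene;
    for (size_t i = 0; i < goodMatches.size(); i++)
    {
        obj.push_back(keypoints_1[goodMatches[i].queryIdx].pt);
        scene.push_back(keypoints_2[goodMatches[i].trainIdx].pt);
    }
    // -- Estimate the perspective transform, rejecting outliers with RANSAC
    Mat H = findHomography(obj, scene, CV_RANSAC);
    // -- Map the object's corner points into the scene
    vector<Point2f> objCorners(4);
    objCorners[0] = Point2f(0, 0);
    objCorners[1] = Point2f((float)objSize.width, 0);
    objCorners[2] = Point2f((float)objSize.width, (float)objSize.height);
    objCorners[3] = Point2f(0, (float)objSize.height);
    perspectiveTransform(objCorners, sceneCorners, H);
    return H;
}
```

Drawing lines between the four returned sceneCorners (e.g. with cv::line) then outlines the detected object in the scene image.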
