SURF (Speeded-Up Robust Features) is a highly efficient variant of the well-known scale-invariant feature detector SIFT (Scale-Invariant Feature Transform). Both algorithms assign a position and a scale to each detected feature; the scale value can be used to define the size of the analysis window around the feature point, making each feature point distinctive. This post uses the SURF algorithm to extract feature point descriptors from two images, calls OpenCV's matching functions to match them, and finally visualizes the result. The development platform is Qt 5.3.2 + OpenCV 2.4.9. The image-matching steps are as follows:
Step one: read in the two images and detect SURF features through OpenCV's cv::FeatureDetector interface. During debugging, changing the threshold value yields different detection results:
// Two vectors to hold the detected keypoints
std::vector<cv::KeyPoint> keypoint1;
std::vector<cv::KeyPoint> keypoint2;

// Construct the SURF feature detector (3000 is the threshold)
cv::SurfFeatureDetector surf(3000);

// Detect the SURF features in each image
surf.detect(image1, keypoint1);
surf.detect(image2, keypoint2);
Step two: OpenCV 2.0 introduced a generic class for extracting different kinds of feature point descriptors. Here we construct a SURF descriptor extractor. The result of the computation is a matrix with as many rows as there are elements in the keypoint vector; each row is an n-dimensional descriptor vector. In the SURF algorithm the default descriptor dimension is 64, and the descriptor characterizes the intensity pattern around the feature point. The more similar two feature points are, the closer their descriptor vectors are, which is what makes these descriptors useful for image matching:
// Construct the SURF descriptor extractor
cv::SurfDescriptorExtractor surfDesc;

// Extract the SURF descriptors of both images
cv::Mat descriptor1, descriptor2;
surfDesc.compute(image1, keypoint1, descriptor1);
surfDesc.compute(image2, keypoint2, descriptor2);
After extracting the descriptors of both images, they need to be compared (matched). We can construct a matcher with OpenCV's cv::BruteForceMatcher class. cv::BruteForceMatcher is a subclass of cv::DescriptorMatcher, which defines a common interface for different matching strategies; matching returns a vector of cv::DMatch, each element representing a pair of matched descriptors. (For more on cv::BruteForceMatcher see: http://blog.csdn.net/panda1234lee/article/details/11094483?utm_source=tuicool)
Step three: from the many candidate matches, keep only the 25 with the best score (i.e. smallest distance). This is achieved with std::nth_element.
void nth_element(_RandomAccessIterator _first, _RandomAccessIterator _nth, _RandomAccessIterator _last)
This function partially sorts the elements in the range [_first, _last) so that _nth becomes a partition point: every element before _nth is smaller (or larger, depending on the comparator) than the element at _nth, and every element after it is larger (or smaller). It is therefore well suited to picking out the top n smallest (or largest) elements without fully sorting the sequence.
The final step is to visualize the matching result. OpenCV provides a drawing function that produces a single image in which the two input images are stitched together and matched points are connected by straight lines:
// Visualize the matching result
cv::Mat imageMatches;
cv::drawMatches(image1, keypoint1,  // first image and its keypoints
                image2, keypoint2,  // second image and its keypoints
                matches,            // the matches
                imageMatches,       // the output image
                cv::Scalar(128,128,128)); // color of the connecting lines
Note that the SIFT and SURF classes live in OpenCV's nonfree module rather than in features2d, and cv::BruteForceMatcher lives in the legacy module, so the program needs to include the corresponding header files:
#include <opencv2/legacy/legacy.hpp>
#include <opencv2/nonfree/nonfree.hpp>
The complete code is as follows:
#include <QCoreApplication>
#include <QDebug>
#include <algorithm>
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/legacy/legacy.hpp>
#include <opencv2/nonfree/nonfree.hpp>

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    // Read the two images to match
    cv::Mat image1 = cv::imread("c:/fig12.18(A1).jpg", 0);
    cv::Mat image2 = cv::imread("c:/fig12.18(A2).jpg", 0);
    if (!image1.data || !image2.data)
        qDebug() << "error!";

    cv::namedWindow("Right Image");
    cv::imshow("Right Image", image1);
    cv::namedWindow("Left Image");
    cv::imshow("Left Image", image2);

    // Vectors to hold the detected keypoints
    std::vector<cv::KeyPoint> keypoint1;
    std::vector<cv::KeyPoint> keypoint2;

    // Construct the SURF feature detector (3000 is the threshold)
    cv::SurfFeatureDetector surf(3000);

    // Detect the SURF features in each image
    surf.detect(image1, keypoint1);
    surf.detect(image2, keypoint2);

    // Show the two images with detailed keypoint information drawn
    cv::Mat imageSURF;
    cv::drawKeypoints(image1, keypoint1, imageSURF,
                      cv::Scalar(255,255,255),
                      cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
    cv::namedWindow("Right SURF Features");
    cv::imshow("Right SURF Features", imageSURF);
    cv::drawKeypoints(image2, keypoint2, imageSURF,
                      cv::Scalar(255,255,255),
                      cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
    cv::namedWindow("Left SURF Features");
    cv::imshow("Left SURF Features", imageSURF);

    // Construct the SURF descriptor extractor
    cv::SurfDescriptorExtractor surfDesc;

    // Extract the SURF descriptors of both images
    cv::Mat descriptor1, descriptor2;
    surfDesc.compute(image1, keypoint1, descriptor1);
    surfDesc.compute(image2, keypoint2, descriptor2);

    // Construct the matcher
    cv::BruteForceMatcher< cv::L2<float> > matcher;

    // Match the descriptors of the two images, keeping only the 25 best matches
    std::vector<cv::DMatch> matches;
    matcher.match(descriptor1, descriptor2, matches);
    std::nth_element(matches.begin(),      // initial position
                     matches.begin() + 24, // position of the sorted element
                     matches.end());       // end position
    // Remove all elements after the 25th
    matches.erase(matches.begin() + 25, matches.end());

    // Visualize the matching result
    cv::Mat imageMatches;
    cv::drawMatches(image1, keypoint1,  // first image and its keypoints
                    image2, keypoint2,  // second image and its keypoints
                    matches,            // the matches
                    imageMatches,       // the output image
                    cv::Scalar(128,128,128)); // color of the connecting lines
    cv::namedWindow("Matches"); //, CV_WINDOW_NORMAL);
    cv::imshow("Matches", imageMatches);

    return a.exec();
}
Result one: because the edges of the plane in the original images are jagged, only the corner points are considered; the matching effect is good:
Result two: no rotation or deformation is involved, but one image is a scaled version of the other; the matching result is naturally very good:
Result three: matching two images taken from different angles. Some feature points are matched incorrectly, but the overall effect is good; during debugging the parameters can also be tuned to obtain a better match.
References:
http://blog.sina.com.cn/s/blog_a98e39a201017pgn.html
http://www.cnblogs.com/tornadomeet/archive/2012/08/17/2644903.html (an introduction to the theory of the SURF algorithm)
http://blog.csdn.net/liyuefeilong/article/details/44166069
OpenCV2 Study Notes (13): Using Surf to match feature points of different images