I have spent the last few days staring at the great Lowe's SIFT paper until my eyes blur and my limbs cramp. I really can't stare at it any longer, so let's move on to something practical. We know that OpenCV ships with a library for SIFT feature detection and matching, which lets us use the algorithm almost without thinking. Actually using it, however, is not quite that simple. Below, a typical OpenCV-based SIFT feature point extraction and matching routine is analyzed, with a detailed description of how to use the SIFT algorithm in OpenCV.
The approximate process of extracting and matching SIFT feature points under OpenCV is as follows:
Read the images → detect feature points (position, angle, octave) → extract feature point descriptors (128-dimensional, i.e. 16 × 8, feature vectors) → match → display
Among these stages, feature point extraction consists of two main steps, which were discussed in the previous posts. A detailed analysis follows below.
1. Use OpenCV's built-in functions to read the two images.
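A minimal sketch of this first step (the file names img.jpg and img2.jpg are simply the ones used in the routine attached at the end; substitute your own):

#include <cstdio>
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    // imread returns an empty Mat when the file cannot be read,
    // so always check before going on to feature detection.
    Mat img  = imread("img.jpg");
    Mat img2 = imread("img2.jpg");
    if (img.empty() || img2.empty())
    {
        std::fprintf(stderr, "Can not load image\n");
        return -1;
    }
    // ... feature detection and matching go here (see the following steps)
    return 0;
}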
2. Generate a SiftFeatureDetector object. This object is the SIFT feature detector; use it to detect the SIFT feature points in the image, which are stored in a vector of KeyPoint. It is worth saying something about the KeyPoint data structure here. It involves quite a lot of content; for the specific analysis see the post on the OpenCV KeyPoint data structure, where I worked through it in detail (don't hit me...). In short, the most important point is:
KeyPoint only stores some basic information about the feature points detected by the OpenCV SIFT library; the feature vectors extracted by SIFT are not actually kept in it. The feature vectors are extracted by SiftDescriptorExtractor, and the results are placed in a Mat data structure. It is this data structure that really holds the feature vector corresponding to each feature point. A detailed description of the object generated by SiftDescriptorExtractor follows below.
Precisely because I did not understand this point, I wasted an entire morning. Cry!
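Continuing the sketch from step 1 inside the same main(), and assuming OpenCV 2.x built with the nonfree module (the same setup as the routine at the end), the detection step looks roughly like this:

// Additional headers at the top of the file:
//   #include <opencv2/features2d/features2d.hpp>
//   #include <opencv2/nonfree/nonfree.hpp>   // SIFT lives in the nonfree module in OpenCV 2.x

// Detect SIFT feature points in both images; each KeyPoint holds pt (position),
// angle, octave, response and so on, but NOT the 128-dimensional descriptor.
SiftFeatureDetector siftdtc;
std::vector<KeyPoint> kp1, kp2;
siftdtc.detect(img,  kp1);
siftdtc.detect(img2, kp2);

// Visualize the keypoints detected on the first image.
Mat outimg1;
drawKeypoints(img, kp1, outimg1);
imshow("image1 keypoints", outimg1);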
3. Extract the feature vector for every KeyPoint:
The KeyPoint obtained so far only gives us the position, orientation and other basic information of each key point; it does not contain the feature vector itself. Extracting the feature vectors is the job of SiftDescriptorExtractor: once a SiftDescriptorExtractor object has been created, it goes through the feature points generated in the previous step and computes the 128-dimensional feature vector corresponding to each of them. For a simple analysis of the SIFT feature vector extraction work done by SiftDescriptorExtractor in OpenCV, see the earlier post. After this step, the feature vectors of all the KeyPoints are saved, as the descriptors, in a Mat data structure.
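Continuing the same sketch, the descriptor extraction step might look like this (descriptor1 ends up as a Mat with one row per keypoint and 128 columns):

// Compute the 128-dimensional SIFT descriptor for every detected keypoint.
// The result is a Mat of size (number of keypoints) x 128, of type CV_32F.
SiftDescriptorExtractor extractor;
Mat descriptor1, descriptor2;
extractor.compute(img,  kp1, descriptor1);
extractor.compute(img2, kp2, descriptor2);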
4. Match the feature vectors of the two images to get the matching result.
Once the feature vectors of the two images have been extracted, we can use a BruteForceMatcher object to match the descriptors of the two images and store the result in matches. I have not yet looked at the specific matching method; I will fill that in after a while.
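And the matching step, again continuing the sketch (BruteForceMatcher comes from the legacy module in OpenCV 2.4; BFMatcher is the newer equivalent):

// Match every descriptor in image 1 to its nearest neighbour in image 2
// (L2 distance), then draw the resulting correspondences.
BruteForceMatcher< L2<float> > matcher;   // requires #include <opencv2/legacy/legacy.hpp>
std::vector<DMatch> matches;
matcher.match(descriptor1, descriptor2, matches);

Mat img_matches;
drawMatches(img, kp1, img2, kp2, matches, img_matches);
imshow("matches", img_matches);
waitKey();                                // wait for a key press before the windows close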
At this point the SIFT pipeline, from feature point detection to the final matching, is complete. Although I do not yet really understand the matching part, I now have a reasonable understanding of how to extract SIFT features with OpenCV, and can start on the next step of the work.
Attached: a routine that uses the OpenCV SIFT library to do image matching.
// opencv_empty_proj.cpp : Defines the entry point of the console application.
#include "stdafx.h"
#include <opencv2/opencv.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/nonfree.hpp>
#include <opencv2/legacy/legacy.hpp>
#include <vector>
using namespace std;
using namespace cv;

int _tmain(int argc, _TCHAR* argv[])
{
    const char* imagename = "img.jpg";
    // Read the images from file
    Mat img = imread(imagename);
    Mat img2 = imread("img2.jpg");
    // If reading either image failed
    if (img.empty())
    {
        fprintf(stderr, "Can not load image %s\n", imagename);
        return -1;
    }
    if (img2.empty())
    {
        fprintf(stderr, "Can not load image %s\n", imagename);
        return -1;
    }

    // Display the input images
    imshow("image before", img);
    imshow("image2 before", img2);

    // SIFT feature detection
    SiftFeatureDetector siftdtc;
    vector<KeyPoint> kp1, kp2;
    siftdtc.detect(img, kp1);
    Mat outimg1;
    drawKeypoints(img, kp1, outimg1);
    imshow("image1 keypoints", outimg1);

    // Print the basic information stored in each KeyPoint
    vector<KeyPoint>::iterator itvc;
    for (itvc = kp1.begin(); itvc != kp1.end(); itvc++)
    {
        cout << "angle:" << itvc->angle << "\t" << itvc->class_id << "\t"
             << itvc->octave << "\t" << itvc->pt << "\t" << itvc->response << endl;
    }

    siftdtc.detect(img2, kp2);
    Mat outimg2;
    drawKeypoints(img2, kp2, outimg2);
    imshow("image2 keypoints", outimg2);

    // Extract the 128-dimensional descriptors for the detected keypoints
    SiftDescriptorExtractor extractor;
    Mat descriptor1, descriptor2;
    extractor.compute(img, kp1, descriptor1);
    extractor.compute(img2, kp2, descriptor2);
    imshow("desc", descriptor1);
    cout << endl << descriptor1 << endl;

    // Brute-force matching of the two descriptor sets
    BruteForceMatcher< L2<float> > matcher;
    vector<DMatch> matches;
    Mat img_matches;
    matcher.match(descriptor1, descriptor2, matches);
    drawMatches(img, kp1, img2, kp2, matches, img_matches);
    imshow("matches", img_matches);

    // Wait for a key press; any key returns from waitKey
    waitKey();
    return 0;
}
Notes on using the SIFT algorithm in OpenCV