OpenCV Getting Started: Extracting SIFT Feature Vectors
To ensure rotation invariance, SIFT centers a coordinate frame on each keypoint and aligns its axes with the keypoint's dominant orientation. A keypoint is not examined in isolation; a neighborhood around it is used. Within that neighborhood, each pixel contributes its gradient: the gradient direction selects an orientation, and the gradient magnitude gives the contribution's weight. For each 4x4 block of pixels, a gradient-orientation histogram with 8 direction bins is computed, and the accumulated value in each bin forms a "seed point". David G. Lowe recommends describing each keypoint with 4x4 = 16 seed points, each carrying 8 orientation values, so one keypoint yields 16x8 = 128 values, forming a 128-dimensional SIFT feature vector. The following uses OpenCV to extract the feature vectors of an image.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/nonfree/nonfree.hpp>     // SIFT lives in the nonfree module in OpenCV 2.4
#include <opencv2/nonfree/features2d.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main(int argc, const char *argv[])
{
    cv::initModule_nonfree(); // register SIFT with the Algorithm factory
    const cv::Mat input = cv::imread("input.jpg", 0); // load as grayscale
    cv::Mat descriptors;
    cv::Ptr<cv::DescriptorExtractor> extractor = cv::DescriptorExtractor::create("SIFT");
    cv::SiftFeatureDetector detector;
    vector<cv::KeyPoint> keypoints;
    detector.detect(input, keypoints);
    extractor->compute(input, keypoints, descriptors);
    cout << descriptors.rows << ":" << descriptors.cols << endl;
    // cout << descriptors << endl; // too many values to print
    return 0;
}
From the output we can see that 266 keypoints were detected in total, each described by a 128-dimensional vector; the full descriptor matrix is too large to print usefully.