Recognition Algorithm Overview:
SIFT/SURF operate on grayscale images.
1. First, build an image pyramid to form a three-dimensional (x, y, scale) image space. Use the Hessian matrix to obtain the local response of each layer, then perform non-maximum suppression (NMS) against the 26 neighbors of each extreme point in its 3x3x3 scale-space cube; this yields rough feature points. The precise position and layer (scale) of each feature point is then obtained by quadratic interpolation, which is what gives the feature its scale invariance.
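The 26-neighbor comparison described above can be sketched in a few lines of numpy. This is a simplified illustration only (toy response stack, no interpolation step), not the library implementation:

```python
import numpy as np

def is_extremum(dog, s, r, c):
    """Return True if dog[s, r, c] is an extremum among the 26 neighbors
    of its 3x3x3 scale-space cube (simplified; ties count as extrema)."""
    val = dog[s, r, c]
    cube = dog[s - 1:s + 2, r - 1:r + 2, c - 1:c + 2]
    if val > 0:
        return val >= cube.max()
    return val <= cube.min()

# toy three-layer response stack with a single positive peak at (1, 2, 2)
dog = np.zeros((3, 5, 5))
dog[1, 2, 2] = 1.0
print(is_extremum(dog, 1, 2, 2))  # True
```

A real detector would additionally refine the surviving points by fitting a quadratic to the responses, as the text describes.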
2
Today we start on the code. Part 1: SIFT feature extraction.
img1_feat = cvCloneImage( img1 );   // deep-copy image 1, used for drawing its feature points
img2_feat = cvCloneImage( img2 );   // deep-copy image 2, used for drawing its feature points
// by default, SIFT feature points are extracted in the Lowe format
// extract and display the feature points of the first picture
n1 = sift_features( img1, &feat1 );
Reprint: please cite the source: http://blog.csdn.net/luoshixian099/article/details/47606159
The previous article introduced the SIFT principle and analyzed the C source code: we obtain a series of feature points, each corresponding to a 128-dimensional vector. Given two images whose feature points have already been extracted, all that remains is to match similar feature points. There are two basic ways of measuring similarity
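One common concrete choice of similarity measure is the Euclidean distance between the 128-dimensional descriptors, combined with Lowe's ratio test (nearest vs. second-nearest neighbor). A minimal numpy sketch with toy data; the 0.8 ratio and all arrays are illustrative assumptions:

```python
import numpy as np

def match_ratio(des1, des2, ratio=0.8):
    """Match each descriptor in des1 to des2 by Euclidean distance,
    keeping a match only if nearest < ratio * second-nearest (Lowe's test)."""
    matches = []
    for i, d in enumerate(des1):
        dists = np.linalg.norm(des2 - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, j1))
    return matches

rng = np.random.default_rng(0)
des2 = rng.normal(size=(10, 128))                     # toy "image 2" descriptors
des1 = des2[:3] + 0.01 * rng.normal(size=(3, 128))    # near-duplicates of rows 0..2
print(match_ratio(des1, des2))
```

Because each row of des1 is a slightly perturbed copy of a row of des2, the three matches survive the ratio test, while unrelated random descriptors would not.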
Index of articles in the "SIFT principle and source code analysis" series. To achieve rotation invariance of the image features, a dominant orientation must be assigned to each feature point based on the local image structure around the detected keypoint. This is the calcOrientationHist() call seen inside the findScaleSpaceExtrema() function:
// compute the gradient-orientation histogram
float omax = calcOrientationHist( gauss_pyr[o*(nOctaveLayers + 3) + layer], poin
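As a rough illustration of what calcOrientationHist computes, here is a simplified numpy version of a gradient-orientation histogram (36 bins of 10 degrees; no Gaussian weighting, smoothing, or peak interpolation, unlike the real OpenCV function):

```python
import numpy as np

def orientation_hist(patch, n_bins=36):
    """Simplified gradient-orientation histogram for a grayscale patch:
    accumulate gradient magnitude into 10-degree orientation bins."""
    dx = patch[1:-1, 2:] - patch[1:-1, :-2]     # central differences in x
    dy = patch[2:, 1:-1] - patch[:-2, 1:-1]     # central differences in y
    mag = np.hypot(dx, dy)
    ang = np.degrees(np.arctan2(dy, dx)) % 360.0
    hist = np.zeros(n_bins)
    bins = (ang / (360.0 / n_bins)).astype(int) % n_bins
    np.add.at(hist, bins.ravel(), mag.ravel())
    return hist

# a horizontal intensity ramp: every gradient points along +x, so bin 0 dominates
patch = np.tile(np.arange(8.0), (8, 1))
h = orientation_hist(patch)
print(np.argmax(h))  # 0
```

The dominant orientation is then taken from the highest peak of this histogram (omax in the fragment above).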
"SIFT principle and source analysis" series article index: http://www.cnblogs.com/tianyalu/p/5467813.html
Scale-space theory: objects in nature appear in different forms depending on the observation scale. For example, we describe buildings in "meters" but observe molecules and atoms in "nanometers". A more vivid example is Google Maps: sliding the mouse wheel changes the scale at which you observe the map, and the map drawn at each scale is different
Http://blog.csdn.net/txdb/archive/2009/07/15/4350631.aspx
I first came into contact with SIFT a year ago. At the time I was hurrying to finish my graduation thesis and felt SIFT was too difficult to understand, so I switched to neural networks.
Looking back now, SIFT material is still just as scarce. Fortunately, there is code, and the
In general, pattern recognition is divided into two steps: train and predict. After a careful reading of the siftDemoV4 code, the two stages are explained as follows (the "feature" mentioned below refers to the SIFT feature):
I. Train
1. Extract features from the positive/negative samples. The number of SIFT features extracted from each image is not fixed (each feature is 128-dimensional).
2. Use the clustering
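Step 2 presumably clusters the variable-length SIFT feature sets into a fixed-length bag-of-words histogram per image. A hedged numpy sketch of the quantization step, with a toy pre-computed vocabulary standing in for the k-means centers (all names and data are hypothetical):

```python
import numpy as np

def build_bow_hist(features, vocab):
    """Quantize each 128-d feature to its nearest visual word and
    return a normalized word-frequency histogram of fixed length."""
    d = np.linalg.norm(features[:, None, :] - vocab[None, :, :], axis=2)
    words = d.argmin(axis=1)                       # nearest center per feature
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
vocab = rng.normal(size=(5, 128))                  # pretend k-means centers (k=5)
feats = vocab[[0, 0, 3]] + 0.01 * rng.normal(size=(3, 128))
print(build_bow_hist(feats, vocab))
```

The fixed-length histogram can then be fed to any ordinary classifier, which resolves the "number of features per image is not fixed" problem from step 1.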
Some functions changed in OpenCV 3.x:
1. SIFT: you can query with help(cv2.xfeatures2d)
2. drawKeypoints: also query with the help() method
In OpenCV 3.x, SIFT, SURF and other unstable algorithm functions were moved into the contrib module. https://www.lfd.uci.edu/~gohlke/pythonlibs/#
Python 3.6.2 | Anaconda
import cv2
import numpy as np
# read image
img = cv2.imread(r'test
OpenCV getting started: extracting SIFT feature vectors. To ensure rotation invariance, the axes are established along the keypoint's dominant direction, centered on the keypoint; a single keypoint is not examined in isolation, but together with a neighborhood. Each cell in the neighborhood represents a pixel gradient: its direction is the gradient direction and its length is the gradient magnitude, and a gradient-orientation histogram over 8 directions is computed on e
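The neighborhood description above is what yields the standard 4x4 cells x 8 orientations = 128-dimensional SIFT vector. A schematic numpy sketch of that assembly (no Gaussian weighting, trilinear interpolation, or clipping, unlike real SIFT):

```python
import numpy as np

def sift_like_descriptor(patch):
    """Schematic 128-d descriptor: split a 16x16 patch into 4x4 cells and
    accumulate an 8-bin orientation histogram per cell, then normalize."""
    dx = np.gradient(patch, axis=1)
    dy = np.gradient(patch, axis=0)
    mag = np.hypot(dx, dy)
    ang = np.degrees(np.arctan2(dy, dx)) % 360.0
    bins = (ang / 45.0).astype(int) % 8            # 8 directions of 45 degrees
    desc = np.zeros((4, 4, 8))
    for i in range(16):
        for j in range(16):
            desc[i // 4, j // 4, bins[i, j]] += mag[i, j]
    v = desc.ravel()                               # 4 * 4 * 8 = 128 dimensions
    return v / (np.linalg.norm(v) + 1e-12)

patch = np.random.default_rng(2).normal(size=(16, 16))
d = sift_like_descriptor(patch)
print(d.shape)  # (128,)
```

Normalizing the final vector is what gives the descriptor its robustness to brightness changes.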
1. Locality: SIFT features are local features of the image, invariant to rotation, scale change, and brightness change, and stable to a degree under viewpoint change, affine transformation, and noise.
2. Distinctiveness: the features are information-rich and suitable for fast, accurate matching against massive feature databases.
3. Quantity: even a few objects can produce a large number of SIFT features.
4. High
First, the two input images: one is the logo mark, the other is the picture to be detected. The code is as follows:
# coding=utf-8
import cv2
import scipy as sp
img1 = cv2.imread('x1.jpg', 0)  # queryImage
img2 = cv2.imread('x2.jpg', 0)  # trainImage
# initiate SIFT detector
sift = cv2.SIFT()
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
# FLANN
Original: http://blog.csdn.net/abcjennifer/article/details/7639681
SIFT (Scale-Invariant Feature Transform) is an algorithm for detecting local features. It finds feature points (interest points, or corner points) in an image, describes them by their associated scale and orientation to obtain features, and matches image feature points with good results. The detailed analysis is as follows:
Algorithm Description
If you are given two pictures whose content partly overlaps and you are asked to match them, what do you do? For example: first find the SIFT points in each picture (scale-space extremum detection, Gaussian blur, keypoint localization, keypoint orientation assignment, keypoint description); then match the feature points of the two pictures using a kd-tree with the BBF (best-bin-first) priority algorithm; finally, eliminate incorrect matches with RANSAC (random sample consensus)
OpenCV getting started: extracting SIFT keypoints. In content-based image retrieval, the local invariant features of an image stand in contrast to global features: local features describe the image far more powerfully, which makes them important, and among the many grayscale-based local feature extraction algorithms SIFT gives the best results. For the underlying principle, see Lowe's paper. Below we use OpenCV
Some functions in OpenCV 3.1 have changed:
1. SIFT: you can query with help(cv2.xfeatures2d)
2. drawKeypoints: also query with the help() method
import cv2
import numpy as np

# read image
img = cv2.imread('test.jpg', cv2.IMREAD_COLOR)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv2.imshow('origin', img)

# SIFT
sift = cv2.xfeatures2d.SIFT_create()
keypoints = sift.detect(gray, None)
# kp, des = sift.detectAndCompute(gray, None)  # des is the descriptor, used for matching
So, you have used the VLFeat SIFT successfully in Matlab, but you need to use the library with C++ and you can't find the function reference nor a tutorial? Then I have been there, done that, and am sharing the code for integrating VLFeat's SIFT with OpenCV.
Actually I harvested the code below from the 'toolbox' folder of VLFeat, from the source of the .mex files. With minor modification the 'vl_sift' fun
In image search, the SIFT algorithm is used as a basic feature. Some problems I have personally run into are recorded here:
1. The dominant-orientation assignment depends too heavily on the gradient directions of a few pixels around the keypoint, which can make the estimated dominant orientation inaccurate. However, the feature vector and the matching both depend heavily on the dominant orientation: any deviation noticeably degrades the results.
2
The steps for extracting SIFT features in OpenCV:
1. Use SiftFeatureDetector's detect method to detect the features into a vector, and use drawKeypoints to mark them on the image.
2. Use SiftDescriptorExtractor's compute method to extract the feature descriptors; the descriptors form a matrix.
3. Match the descriptors with a matcher; the matching results are stored in a vector of DMatch.
4. Set a distance threshold so that only matches whose descriptor distance is less
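Steps 3 and 4 above can be sketched with a brute-force distance matrix and a distance threshold in plain numpy, standing in for OpenCV's matcher classes (toy random descriptors; all data and the 0.5 threshold are hypothetical):

```python
import numpy as np

def threshold_matches(des1, des2, max_dist=0.5):
    """Brute-force matching: nearest neighbor by Euclidean distance,
    kept only if the distance is below max_dist (cf. the DMatch filtering step)."""
    d = np.linalg.norm(des1[:, None, :] - des2[None, :, :], axis=2)
    nn = d.argmin(axis=1)
    return [(i, j, d[i, j]) for i, j in enumerate(nn) if d[i, j] < max_dist]

rng = np.random.default_rng(3)
des2 = rng.normal(size=(6, 128))
# row 0 of des1 is a near-copy of des2[4]; row 1 is unrelated noise
des1 = np.vstack([des2[4] + 0.001, rng.normal(size=(1, 128))])
good = threshold_matches(des1, des2)
print([(i, j) for i, j, _ in good])
```

Only the near-copy survives the threshold; the unrelated descriptor's nearest neighbor is far too distant, which is exactly the filtering the distance threshold in step 4 performs.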
Gaussian blur is one of many blurring algorithms. Blurring means smoothing the image and suppressing the differences between pixels; the approach that comes to mind first is mean smoothing.
1. Mean blur
Mean blur replaces each pixel with the average of the pixels around it. For example, the pixel matrix
|1|1|1|
|1|2|1|
|1|1|1|
becomes, after mean blurring,
|1|1|1|
|1|1|1|
|1|1|1|
This smooths away the differences between pixels, but it is obviously flawed, because the farther away the pixel has
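In contrast to the uniform mean kernel above, a Gaussian kernel weights neighbors by their distance from the center. A small numpy sketch of building a normalized Gaussian kernel (the size and sigma are arbitrary example values):

```python
import numpy as np

def gaussian_kernel(size=3, sigma=1.0):
    """2-D Gaussian kernel: weights fall off with distance from the center,
    unlike the uniform mean-blur kernel; normalized so the weights sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

k = gaussian_kernel(3, 0.8)
print(np.round(k, 3))
```

Convolving the image with this kernel gives Gaussian blur: nearby pixels contribute the most, distant pixels the least, which fixes exactly the flaw of mean blur noted above.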