(Reproduced) Using the SIFT and RANSAC algorithms (OpenCV framework) for object detection and positioning, and to find the transformation matrix (findFundamentalMat vs. findHomography)

Source: Internet
Author: User

The goal of this article is to use the SIFT and RANSAC algorithms to obtain correct matches between feature points and to find the transformation matrix, and then to use the transformation matrix to locate the boundary of the object. (The article contains some source code; I have also uploaded the whole project, please click here.)

SIFT is currently recognized as the most effective feature point detection algorithm. I will not say much about the algorithm itself, since there is plenty of material online; here are two links, one a translation of the original SIFT paper and one a detailed explanation of the SIFT algorithm:

SIFT algorithm translation

SIFT algorithm detailed explanation

The entire task can be restated as follows: given two input images, a template image and a test image, the goal is to detect the object from the template image in the test image and to show its exact location and size. In the test image, the object's true position and size have already been marked with a white box.

First, the SIFT algorithm is used to extract feature points from both images; the results are shown below. (The SIFT feature extraction code used here is the one provided in the "SIFT algorithm detailed explanation" link above.)
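
For reference, the following is a minimal sketch of SIFT extraction using the cv::SIFT class from OpenCV's modern C++ features2d API (available in OpenCV 4.4 and later). It is not the author's code from the linked article, and the file names are placeholders.

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // "template.jpg" and "test.jpg" are placeholder file names.
    cv::Mat tmpl = cv::imread("template.jpg", cv::IMREAD_GRAYSCALE);
    cv::Mat test = cv::imread("test.jpg", cv::IMREAD_GRAYSCALE);

    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();

    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;   // each row is a 128-dimensional SIFT descriptor
    sift->detectAndCompute(tmpl, cv::noArray(), kp1, desc1);
    sift->detectAndCompute(test, cv::noArray(), kp2, desc2);

    // Draw the detected keypoints on each image for inspection.
    cv::Mat vis1, vis2;
    cv::drawKeypoints(tmpl, kp1, vis1);
    cv::drawKeypoints(test, kp2, vis2);
    cv::imwrite("template_keypoints.jpg", vis1);
    cv::imwrite("test_keypoints.jpg", vis2);
    return 0;
}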

Then the feature points are matched. Following the idea of the original SIFT author, each feature point is described by a 128-dimensional vector. The Euclidean distance between vectors is computed and matching is done by nearest neighbor: if the nearest distance is less than 0.8 times the second-nearest distance, the match is accepted as correct; otherwise it is rejected. The result of this matching is shown below:
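
A minimal sketch of that ratio test, assuming the 128-dimensional descriptors have already been computed into the matrices desc1 and desc2 (placeholder names) and using cv::BFMatcher from the modern OpenCV C++ API rather than the author's own matching code:

#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::DMatch> ratioTestMatch(const cv::Mat& desc1, const cv::Mat& desc2)
{
    // SIFT descriptors are compared with the Euclidean (L2) distance.
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(desc1, desc2, knn, 2);   // two nearest neighbours per query point

    std::vector<cv::DMatch> good;
    for (const auto& m : knn)
    {
        // Keep the match only if the nearest distance is less than
        // 0.8 times the second-nearest distance.
        if (m.size() == 2 && m[0].distance < 0.8f * m[1].distance)
            good.push_back(m[0]);
    }
    return good;
}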

You can see that there are still a lot of wrong matches, so the RANSAC algorithm is used to try to eliminate them. The first attempt is to eliminate the wrong matches with the findFundamentalMat function in OpenCV:

findFundamentalMat returns a 3*3 matrix. At first I thought this matrix was the transformation matrix, i.e. that multiplying a point in the left image by it would give the corresponding point in the right image. But that is not true.

There is a misunderstanding here. The findFundamentalMat function really can use the RANSAC method to eliminate wrong matches, but, as its name suggests, it returns the fundamental matrix, and the fundamental matrix and the transformation matrix are two different concepts. The fundamental matrix describes the epipolar correspondence between pixels of the same three-dimensional scene seen in the two images: it maps a point in one image to a line in the other, not to a single corresponding point (to be honest, I still have not figured out what else the fundamental matrix returned by this function is good for). So if you only use this function, the experiment can get this far and no further.
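
To make the distinction concrete, here is a minimal sketch of how findFundamentalMat is typically used with RANSAC, assuming pts1 and pts2 are the matched point coordinates from the previous step (placeholder names). The returned matrix F only satisfies the epipolar constraint x2^T * F * x1 = 0, so it maps a point to a line in the other image, not to a point:

#include <opencv2/opencv.hpp>
#include <vector>

std::vector<uchar> filterWithFundamentalMat(const std::vector<cv::Point2f>& pts1,
                                            const std::vector<cv::Point2f>& pts2)
{
    std::vector<uchar> mask;
    cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC,
                                       3.0,    // max distance to the epipolar line, in pixels
                                       0.99,   // confidence level
                                       mask);
    // mask[i] == 1 marks match i as a RANSAC inlier; F itself cannot be used
    // to transform points from one image into the other.
    return mask;
}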

So, in order to get the transformation matrix, I later found that OpenCV also has a function findHomography. This function really does compute the transformation matrix, and its return value is the actual transformation (homography) matrix.

In fact, this problem troubled me for a long time. For eliminating wrong matches, most of what you find online goes through the findFundamentalMat function, so I took it for granted that its return value was a transformation matrix. There is relatively little online about findHomography, which makes it easy to mistakenly believe that findFundamentalMat computes the transformation matrix.

Trying the matrix returned by findHomography: in the template image the object has already been outlined with a green box. From the object's four boundary points and the transformation matrix, the four boundary points of the transformed object contour can be computed; connecting these boundary points gives the object's contour in the test image, as shown below (the green box is the pre-labeled outline of the template object, the white box is the pre-labeled outline in the test image, and the red box is the contour obtained by transforming the green box with the transformation matrix):
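
A minimal sketch of this findHomography step, assuming pts1/pts2 are the matched point coordinates and templateCorners holds the four pre-labeled boundary points of the green box (all names are placeholders for the values used in the original project):

#include <opencv2/opencv.hpp>
#include <vector>

void drawTransformedContour(cv::Mat& testImage,
                            const std::vector<cv::Point2f>& pts1,
                            const std::vector<cv::Point2f>& pts2,
                            const std::vector<cv::Point2f>& templateCorners)
{
    // H is the real 3*3 transformation (homography) matrix; RANSAC discards
    // the remaining wrong matches while estimating it.
    cv::Mat H = cv::findHomography(pts1, pts2, cv::RANSAC, 3.0);

    // Transform the four boundary points of the object in the template image.
    std::vector<cv::Point2f> testCorners;
    cv::perspectiveTransform(templateCorners, testCorners, H);

    // Connect the transformed boundary points to draw the red contour.
    for (size_t i = 0; i < testCorners.size(); ++i)
        cv::line(testImage,
                 cv::Point(testCorners[i]),
                 cv::Point(testCorners[(i + 1) % testCorners.size()]),
                 cv::Scalar(0, 0, 255), 2);
}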

As can be seen from the results, this time the result is correct.

The main code of the experiment is as follows (this is just the main code; the SIFT algorithm and some other helper functions are written in other files):

#include <math.h>
#include <time.h>
#include <windows.h>
#include <iostream>
#include <cv.h>
#include <highgui.h>   // assumption: the last include was cut off in the original text
using namespace std;

  
