Android Learning 9 --- OpenCV4Android org.opencv.features2d


     Whether for recognition, registration, or other applications, extracting image features is a key step. Feature extraction first finds the image's keypoints (such as corners and edge points), then describes those points with descriptors; the whole image can then be represented as a feature vector, which is used in the subsequent recognition stage.

    This pipeline has corresponding functions in both MATLAB and OpenCV. MATLAB is well encapsulated and simpler: pass in the image and parameters, call the function, and you get the features. OpenCV is slightly more involved, so we first walk through the process with an OpenCV C++ program (material for it is easier to find), then read the OpenCV4Android documentation to learn the API it provides, and finally detect keypoints on Android and draw them.

I. Understanding the process through an OpenCV C++ image registration program

The image registration program consists of: 1. defining the data structures that hold keypoints and keypoint descriptors; 2. choosing the feature to extract and detecting keypoints in both images; 3. computing descriptors for the keypoints; 4. computing the matches and displaying them.

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/nonfree/features2d.hpp>
#include <opencv2/legacy/legacy.hpp>
#include <algorithm>
#include <vector>

using namespace cv;
using namespace std;

int main()
{
    Mat image1 = imread("../b1.png");
    Mat image2 = imread("../b2.png");

    // Detect SURF keypoints
    vector<KeyPoint> keypoints1, keypoints2;
    SurfFeatureDetector detector(400);
    detector.detect(image1, keypoints1);
    detector.detect(image2, keypoints2);

    // Compute SURF descriptors
    SurfDescriptorExtractor surfDesc;
    Mat descriptors1, descriptors2;
    surfDesc.compute(image1, keypoints1, descriptors1);
    surfDesc.compute(image2, keypoints2, descriptors2);

    // Match the descriptors and keep only the 25 best matches
    BruteForceMatcher< L2<float> > matcher;
    vector<DMatch> matches;
    matcher.match(descriptors1, descriptors2, matches);
    std::nth_element(matches.begin(), matches.begin() + 24, matches.end());
    matches.erase(matches.begin() + 25, matches.end());

    // Draw and show the matches
    Mat imageMatches;
    drawMatches(image1, keypoints1, image2, keypoints2, matches, imageMatches, Scalar(255, 0, 0));
    namedWindow("matches");
    imshow("matches", imageMatches);
    waitKey();
    return 0;
}
 

 

II. Moving to OpenCV Java

    First, get to know the OpenCV Java feature-extraction API and its data structures. The feature-extraction API lives in the package org.opencv.features2d and mainly consists of the following classes.

1. DescriptorExtractor

    An abstract class for computing descriptors of keypoints.

    It is used mainly through two methods:

1.1 create

    Creates a descriptor extractor of the given type.

    Usage :public static DescriptorExtractor create(int extractorType)

    ExtractorType:

· "SIFT" -- "SIFT"

· "SURF" -- "SURF"

· "BRIEF" -- "BriefDescriptorExtractor"

· "BRISK" -- "BRISK"

· "ORB" -- "ORB"

· "FREAK" -- "FREAK"

   Example:

DescriptorExtractor descriptor=DescriptorExtractor.create(DescriptorExtractor.SIFT);

1.2 compute

    Computes descriptors for a set of keypoints.

Usage:

public void compute(java.util.List<Mat> images, java.util.List<MatOfKeyPoint> keypoints, java.util.List<Mat> descriptors)

public void compute(Mat image, MatOfKeyPoint keypoints, Mat descriptors)

image – the input image.

keypoints – the input keypoints, obtained from a FeatureDetector.

descriptors – the computed descriptors.

Example: descriptor.compute(mRgba, keypoints, descriptors);
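For a slightly fuller picture, here is a minimal Java sketch of the create/compute pair, assuming the keypoints were already filled in by a FeatureDetector (covered next); the helper name, variable names, and the choice of ORB are illustrative assumptions, not from the original:

import org.opencv.core.Mat;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.DescriptorExtractor;

// Hypothetical helper: grayImage is an 8-bit grayscale Mat, keypoints come from FeatureDetector.detect.
static Mat computeDescriptors(Mat grayImage, MatOfKeyPoint keypoints) {
    Mat descriptors = new Mat();
    // ORB is used here because SIFT/SURF sit in the nonfree module; any supported type is used the same way.
    DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.ORB);
    extractor.compute(grayImage, keypoints, descriptors); // one row per keypoint
    return descriptors;
}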

2. FeatureDetector

    A class for detecting keypoints in a 2D image.

    It is also used mainly through two methods:

2.1 create

     Usage :public static FeatureDetector create(int detectorType)

DetectorType:

"FAST" -- "FastFeatureDetector"

"STAR" -- "StarFeatureDetector"

"SIFT" -- "SIFT" (nonfree module)

"SURF" -- "SURF" (nonfree module)

"ORB" -- "ORB"

"BRISK" -- "BRISK"

"MSER" -- "MSER"

"GFTT" -- "GoodFeaturesToTrackDetector"

"HARRIS" -- "GoodFeaturesToTrackDetector" with Harris detector enabled

"Dense" -- "DenseFeatureDetector"

"SimpleBlob" -- "SimpleBlobDetector"

    Example:FeatureDetector detector = FeatureDetector.create(FeatureDetector.MSER);

2.2 detect

   Usage:

public void detect(java.util.List<Mat> images, java.util.List<MatOfKeyPoint> keypoints, java.util.List<Mat> masks)

public void detect(Mat image, MatOfKeyPoint keypoints, Mat mask)

public void detect(Mat image, MatOfKeyPoint keypoints)

image – the input image.

keypoints – the detected keypoints.

mask – an optional mask specifying where to look for keypoints; it must be an 8-bit matrix with non-zero values in the region of interest.
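As a minimal Java sketch of the detect step (the helper name and the choice of ORB are assumptions, not part of the API being documented):

import org.opencv.core.Mat;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.FeatureDetector;

static MatOfKeyPoint detectKeypoints(Mat grayImage) {
    MatOfKeyPoint keypoints = new MatOfKeyPoint();
    FeatureDetector detector = FeatureDetector.create(FeatureDetector.ORB);
    detector.detect(grayImage, keypoints);            // detect over the whole image
    // detector.detect(grayImage, keypoints, mask);   // or restrict detection to the non-zero region of an 8-bit mask
    return keypoints;
}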

3. KeyPoint

    The data structure for the keypoints returned by detect.

    It consists of several constructors and the fields listed below; the constructors are not covered here, only the fields.

angle

public float angle

The computed orientation of the keypoint (in degrees); -1 if not applicable.

class_id

public int class_id

The object id; it can be used to cluster keypoints by the object they belong to.

octave

public int octave

The pyramid octave (layer) from which the keypoint was extracted.

pt

public Point pt

The coordinates of the keypoint.

response

public float response

The detector response (strength) of the keypoint; it can be used to rank or filter keypoints.

size

public float size

The diameter of the meaningful keypoint neighborhood.
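To see how these fields are read on the Java side, a small sketch (variable names are illustrative; note that KeyPoint lives in org.opencv.features2d in OpenCV 2.4.x and in org.opencv.core in later versions):

import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.KeyPoint; // org.opencv.core.KeyPoint in newer OpenCV versions

static void printKeypoints(MatOfKeyPoint keypoints) {
    for (KeyPoint kp : keypoints.toArray()) {
        System.out.println("pt=" + kp.pt           // coordinates
                + " size=" + kp.size               // diameter of the meaningful neighborhood
                + " angle=" + kp.angle             // orientation in degrees, -1 if not computed
                + " response=" + kp.response       // detector response (strength)
                + " octave=" + kp.octave);         // pyramid level it was detected on
    }
}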

4. DescriptorMatcher

    As with feature detection and extraction above, the typical flow is create followed by match.

4.1 create

   public static DescriptorMatcher create(int matcherType)

   Creates a descriptor matcher of the given type with default parameters.

matcherType – the matching method (algorithm); one of the following constants:

static int BRUTEFORCE

static int BRUTEFORCE_HAMMING

static int BRUTEFORCE_HAMMINGLUT

static int BRUTEFORCE_L1

static int BRUTEFORCE_SL2

static int FLANNBASED

4.2 Match

   match

public void match(Mat queryDescriptors, Mat trainDescriptors, MatOfDMatch matches)

Finds the best match for each descriptor from a query set.

Parameters:

queryDescriptors – the query set of descriptors.

trainDescriptors – the train (template) set of descriptors; this set is not added to the train descriptor collection stored in the class object.

matches – the resulting matches; at most one match per query descriptor, so their number is no larger than the number of query descriptors.
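A minimal Java sketch of create plus match, assuming both descriptor matrices come from the same binary descriptor such as ORB (the matcher type and helper name are assumptions):

import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.features2d.DescriptorMatcher;

static MatOfDMatch matchDescriptors(Mat queryDescriptors, Mat trainDescriptors) {
    // Hamming distance fits binary descriptors (ORB/BRIEF/BRISK); use BRUTEFORCE or FLANNBASED for float descriptors (SIFT/SURF).
    DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
    MatOfDMatch matches = new MatOfDMatch();
    matcher.match(queryDescriptors, trainDescriptors, matches);
    return matches; // at most one DMatch per query descriptor
}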

knnMatch

public void knnMatch(Mat queryDescriptors, Mat trainDescriptors, java.util.List<MatOfDMatch> matches, int k, Mat mask, boolean compactResult)

Finds the k best matches for each descriptor in the query set.
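knnMatch with k=2 is commonly paired with a ratio test to discard ambiguous matches; the sketch below follows that pattern (the 0.75 threshold is a conventional choice and an assumption here, as is the helper name):

import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.features2d.DMatch; // org.opencv.core.DMatch in newer OpenCV versions
import org.opencv.features2d.DescriptorMatcher;

static List<DMatch> ratioTestMatches(Mat queryDescriptors, Mat trainDescriptors) {
    DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
    List<MatOfDMatch> knn = new ArrayList<MatOfDMatch>();
    matcher.knnMatch(queryDescriptors, trainDescriptors, knn, 2); // two best candidates per query descriptor
    List<DMatch> good = new ArrayList<DMatch>();
    for (MatOfDMatch pair : knn) {
        DMatch[] m = pair.toArray();
        // Keep a match only when its best candidate is clearly better than the second best.
        if (m.length >= 2 && m[0].distance < 0.75f * m[1].distance) {
            good.add(m[0]);
        }
    }
    return good;
}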

radiusMatch

public void radiusMatch(Mat queryDescriptors, java.util.List<MatOfDMatch> matches, float maxDistance)

For each query descriptor, finds all training descriptors within the given distance.

5. DMatch

   The data structure that stores a single descriptor match.

float distance

The distance between the two matched descriptor vectors (Euclidean for float descriptors); the smaller the distance, the better the match.

int imgIdx

The index of the train image (when matching against multiple images).

int queryIdx

The index of this match's descriptor in the query image.

int trainIdx

The index of this match's descriptor in the train (template) image.
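These fields are what distance-based filtering relies on (compare the goodMatches loop in the C++ code of section III); here is a comparable Java sketch that keeps matches within a multiple of the smallest distance (the helper name and the factor are assumptions):

import java.util.ArrayList;
import java.util.List;
import org.opencv.core.MatOfDMatch;
import org.opencv.features2d.DMatch; // org.opencv.core.DMatch in newer OpenCV versions

static MatOfDMatch filterByDistance(MatOfDMatch matches, double factor) {
    double minDist = Double.MAX_VALUE;
    for (DMatch m : matches.toArray()) {
        if (m.distance < minDist) minDist = m.distance;
    }
    List<DMatch> good = new ArrayList<DMatch>();
    for (DMatch m : matches.toArray()) {
        if (m.distance <= factor * minDist) {
            good.add(m); // m.queryIdx / m.trainIdx index into the query / train keypoint lists
        }
    }
    MatOfDMatch filtered = new MatOfDMatch();
    filtered.fromList(good);
    return filtered;
}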

III. Complete C++ matching code

  From http://blog.csdn.net/masibuaa/article/details/8998601

 

 

 

 

#include "opencv2/highgui/highgui.hpp"#include "opencv2/imgproc/imgproc.hpp"#include "opencv2/nonfree/nonfree.hpp"#include "opencv2/nonfree/features2d.hpp"#include <iostream>#include <stdio.h>#include <stdlib.h>using namespace cv;using namespace std;int main(){initModule_nonfree();//初始化模組,使用SIFT或SURF時用到Ptr<FeatureDetector> detector = FeatureDetector::create( "SIFT" );//建立SIFT特徵檢測器Ptr<DescriptorExtractor> descriptor_extractor = DescriptorExtractor::create( "SIFT" );//建立特徵向量產生器Ptr<DescriptorMatcher> descriptor_matcher = DescriptorMatcher::create( "BruteForce" );//建立特徵匹配器if( detector.empty() || descriptor_extractor.empty() )cout<<"fail to create detector!";//讀入映像Mat img1 = imread("desk.jpg");Mat img2 = imread("desk_glue.jpg");//特徵點檢測double t = getTickCount();//當前滴答數vector<KeyPoint> keypoints1,keypoints2;detector->detect( img1, keypoints1 );//檢測img1中的SIFT特徵點,儲存到keypoints1中detector->detect( img2, keypoints2 );cout<<"映像1特徵點個數:"<<keypoints1.size()<<endl;cout<<"映像2特徵點個數:"<<keypoints2.size()<<endl;//根據特徵點計算特徵描述子矩陣,即特徵向量矩陣Mat descriptors1,descriptors2;descriptor_extractor->compute( img1, keypoints1, descriptors1 );descriptor_extractor->compute( img2, keypoints2, descriptors2 );t = ((double)getTickCount() - t)/getTickFrequency();cout<<"SIFT演算法用時:"<<t<<"秒"<<endl;cout<<"映像1特徵描述矩陣大小:"<<descriptors1.size()<<",特徵向量個數:"<<descriptors1.rows<<",維數:"<<descriptors1.cols<<endl;cout<<"映像2特徵描述矩陣大小:"<<descriptors2.size()<<",特徵向量個數:"<<descriptors2.rows<<",維數:"<<descriptors2.cols<<endl;//畫出特徵點Mat img_keypoints1,img_keypoints2;drawKeypoints(img1,keypoints1,img_keypoints1,Scalar::all(-1),0);drawKeypoints(img2,keypoints2,img_keypoints2,Scalar::all(-1),0);//imshow("Src1",img_keypoints1);//imshow("Src2",img_keypoints2);//特徵匹配vector<DMatch> matches;//匹配結果descriptor_matcher->match( descriptors1, descriptors2, matches );//匹配兩個映像的特徵矩陣cout<<"Match個數:"<<matches.size()<<endl;//計算匹配結果中距離的最大和最小值//距離是指兩個特徵向量間的歐式距離,表明兩個特徵的差異,值越小表明兩個特徵點越接近double max_dist = 0;double min_dist = 100;for(int i=0; i<matches.size(); i++){double dist = matches[i].distance;if(dist < min_dist) min_dist = dist;if(dist > max_dist) max_dist = dist;}cout<<"最大距離:"<<max_dist<<endl;cout<<"最小距離:"<<min_dist<<endl;//篩選出較好的匹配點vector<DMatch> goodMatches;for(int i=0; i<matches.size(); i++){if(matches[i].distance < 0.31 * max_dist){goodMatches.push_back(matches[i]);}}cout<<"goodMatch個數:"<<goodMatches.size()<<endl;//畫出匹配結果Mat img_matches;//紅色串連的是匹配的特徵點對,綠色是未匹配的特徵點drawMatches(img1,keypoints1,img2,keypoints2,goodMatches,img_matches,Scalar::all(-1)/*CV_RGB(255,0,0)*/,CV_RGB(0,255,0),Mat(),2);imshow("MatchSIFT",img_matches);waitKey(0);return 0;}
IV. Android implementation

To be updated.
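Until this part is written up, here is a rough sketch of what the keypoint-detection-and-drawing step could look like inside an OpenCV4Android camera callback; the CvCameraViewListener2 pattern, the choice of ORB, and all variable names are assumptions, not the final implementation:

import org.opencv.android.CameraBridgeViewBase.CvCameraViewFrame;
import org.opencv.core.Mat;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.core.Scalar;
import org.opencv.features2d.FeatureDetector;
import org.opencv.features2d.Features2d;
import org.opencv.imgproc.Imgproc;

public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    Mat gray = inputFrame.gray();
    MatOfKeyPoint keypoints = new MatOfKeyPoint();
    // Creating the detector every frame is wasteful; in real code create it once and reuse it.
    FeatureDetector detector = FeatureDetector.create(FeatureDetector.ORB);
    detector.detect(gray, keypoints);
    // drawKeypoints expects a 1- or 3-channel image, so convert the RGBA preview frame first.
    Mat rgb = new Mat();
    Imgproc.cvtColor(inputFrame.rgba(), rgb, Imgproc.COLOR_RGBA2RGB);
    Mat output = new Mat();
    Features2d.drawKeypoints(rgb, keypoints, output, new Scalar(0, 255, 0), 0); // green keypoints
    return output;
}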
