[Feature matching] Principle and source code analysis of the BRIEF feature descriptor

Related: Principle and source code analysis of FAST

Principle and source code analysis of Harris

Principle and source code analysis of SIFT

Principle and source code analysis of SURF

Please credit the source when reprinting: http://blog.csdn.net/luoshixian099/article/details/48338273

Traditional descriptors such as SIFT and SURF represent each feature point with a 128-dimensional (SIFT) or 64-dimensional (SURF) floating-point vector. Each dimension occupies 4 bytes, so a SIFT descriptor needs 128 × 4 = 512 bytes of memory per feature and a SURF descriptor needs 64 × 4 = 256 bytes. When memory is limited, such descriptors are clearly unsuitable. Computing them is also time-consuming. PCA-based dimensionality reduction was later proposed, but it does not reduce the cost of computing the descriptor in the first place.

To address these shortcomings, Michael Calonder et al. proposed the BRIEF descriptor (BRIEF: Binary Robust Independent Elementary Features). A BRIEF descriptor is a binary string (each bit is either 0 or 1), typically a few hundred bits long, and the algorithm that builds it is simple. Because the descriptor is a binary string, matching uses the Hamming distance: the minimum number of bit substitutions required to turn one string into the other. However, the BRIEF descriptor has no orientation, so large in-plane rotations severely degrade matching.

BRIEF only defines how to describe a feature point; the detection step must come from another method, such as SIFT or SURF. Pairing it with a fast detector best shows off BRIEF's main advantage: speed.

--------------------------------------------------------------------------------------------------

BRIEF builds the descriptor in three steps, producing an N-bit binary string that occupies N/8 bytes of memory:

1. Take an S × S patch neighborhood centered at the feature point p;

2. Randomly select N point pairs (2 × N points) in this neighborhood, apply Gaussian smoothing at each of the 2 × N points, and define a tau test that compares the smoothed gray values of each of the N pixel pairs: the test yields 1 if the first value is smaller than the second, and 0 otherwise;

3. The N binary test results from step 2 are concatenated into an N-bit string, which is the descriptor;

-----------------------------------------------------------------------------------------------------

Principle Analysis:

1. Before performing the tau test, the random points are Gaussian-smoothed. Compared with using a single pixel's gray value, smoothing makes the test less sensitive to noise and the descriptor more stable. The paper recommends a 9 × 9 kernel.

2. The paper evaluates five different strategies, G I–G V, for sampling the N point pairs (X, Y), and recommends G II:

G I: X and Y are drawn independently from the uniform distribution over (−S/2, S/2);

G II: X and Y are drawn independently from a Gaussian distribution centered on the patch;

G III: X is sampled first from a Gaussian centered on the patch, and Y is then sampled from a Gaussian centered on X;

G IV: the 2N points are sampled at random from the cells of a coarse polar grid over the patch;

G V: X is fixed at the patch center, and Y takes all possible positions on a coarse polar grid within the patch;

3. Matching compares two binary strings directly by their Hamming distance, defined as the number of bit positions in which the two strings differ (equivalently, the minimum number of bit substitutions needed to turn one string into the other). It can be computed with XOR and a population count, so it is much faster than the Euclidean distance.

With N = 128, each feature descriptor occupies 128/8 = 16 bytes of memory.



OpenCV source code analysis:

#include <stdio.h>
#include <iostream>
#include "cv.h"
#include "opencv2/highgui.hpp"
#include "opencv2/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/nonfree.hpp"
using namespace std;
using namespace cv;

int main(int argc, char** argv)
{
    Mat img_1 = imread("F:\\Picture\\book.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    Mat img_2 = imread("F:\\Picture\\book_2.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    if (!img_1.data || !img_2.data)
        return -1;

    //-- Step 1: Detect the keypoints using the SURF detector
    int minHessian = 400;
    SurfFeatureDetector detector(minHessian); // detect keypoints with SURF
    std::vector<KeyPoint> keypoints_1, keypoints_2;
    detector.detect(img_1, keypoints_1);
    detector.detect(img_2, keypoints_2);

    //-- Step 2: Calculate descriptors (feature vectors)
    BriefDescriptorExtractor extractor(64); // the parameter is the number of bytes, so the descriptor is 64 x 8 = 512 bits; see the analysis below
    Mat descriptors_1, descriptors_2;
    extractor.compute(img_1, keypoints_1, descriptors_1);
    extractor.compute(img_2, keypoints_2, descriptors_2);

    //-- Step 3: Match descriptor vectors with a brute-force matcher
    BFMatcher matcher(NORM_HAMMING); // match features by Hamming distance
    std::vector<DMatch> matches;
    matcher.match(descriptors_1, descriptors_2, matches);

    //-- Draw matches
    Mat img_matches;
    drawMatches(img_1, keypoints_1, img_2, keypoints_2, matches, img_matches);

    //-- Show detected matches
    imshow("Matches", img_matches);
    waitKey(0);
    return 0;
}

The BriefDescriptorExtractor class definition:

Note that the bytes parameter is the length of the descriptor in bytes, not in bits: the default of 32 bytes corresponds to a 32 × 8 = 256-bit descriptor.

/** BRIEF Descriptor */
class CV_EXPORTS BriefDescriptorExtractor : public DescriptorExtractor
{
public:
    static const int PATCH_SIZE = 48;   // size of the neighborhood patch
    static const int KERNEL_SIZE = 9;   // size of the smoothing (box) kernel

    // bytes is a length of descriptor in bytes. It can be equal 16, 32 or 64 bytes.
    BriefDescriptorExtractor(int bytes = 32); // 32 bytes by default, i.e. a 32 x 8 = 256-bit descriptor

    virtual void read(const FileNode&);
    virtual void write(FileStorage&) const;

    virtual int descriptorSize() const;
    virtual int descriptorType() const;

    /// @todo read and write for brief
    AlgorithmInfo* info() const;

protected:
    virtual void computeImpl(const Mat& image, vector<KeyPoint>& keypoints, Mat& descriptors) const; // computes the descriptors

    typedef void (*PixelTestFn)(const Mat&, const vector<KeyPoint>&, Mat&); // one test function per descriptor length

    int bytes_;             // number of bytes occupied by the descriptor
    PixelTestFn test_fn_;
};

The descriptor computation function:

void BriefDescriptorExtractor::computeImpl(const Mat& image, std::vector<KeyPoint>& keypoints, Mat& descriptors) const
{
    // Construct integral image for fast smoothing (box filter)
    Mat sum;

    Mat grayImage = image;
    if (image.type() != CV_8U) cvtColor(image, grayImage, CV_BGR2GRAY);

    ///TODO allow the user to pass in a precomputed integral image
    //if(image.type() == CV_32S)
    //  sum = image;
    //else
    integral(grayImage, sum, CV_32S); // compute the integral image

    // Remove keypoints very close to the border
    KeyPointsFilter::runByImageBorder(keypoints, image.size(), PATCH_SIZE/2 + KERNEL_SIZE/2); // drop keypoints whose patch would fall outside the image

    descriptors = Mat::zeros((int)keypoints.size(), bytes_, CV_8U);
    test_fn_(sum, keypoints, descriptors); // compute the descriptors
}

For the smoothing of the random points, OpenCV does not use the paper's Gaussian smoothing. Instead it replaces each point's gray value with the box-filtered sum over its neighborhood, computed from the integral image, which also suppresses noise:

inline int smoothedSum(const Mat& sum, const KeyPoint& pt, int y, int x)
{
    static const int HALF_KERNEL = BriefDescriptorExtractor::KERNEL_SIZE / 2;

    int img_y = (int)(pt.pt.y + 0.5) + y;
    int img_x = (int)(pt.pt.x + 0.5) + x;
    return   sum.at<int>(img_y + HALF_KERNEL + 1, img_x + HALF_KERNEL + 1)
           - sum.at<int>(img_y + HALF_KERNEL + 1, img_x - HALF_KERNEL)
           - sum.at<int>(img_y - HALF_KERNEL, img_x + HALF_KERNEL + 1)
           + sum.at<int>(img_y - HALF_KERNEL, img_x - HALF_KERNEL);
}
How the descriptor vector is formed (using a length of 16 bytes × 8 = 128 bits as the example):

Each element of the desc array occupies one byte. The source file is ...\modules\features2d\src\generated_16.i:

// Code generated with '$ scripts/generate_code.py src/test_pairs.txt 16'
#define SMOOTHED(y,x) smoothedSum(sum, pt, y, x)
    desc[0] = (uchar)(((SMOOTHED(-2, -1) < SMOOTHED(7, -1)) << 7) + ((SMOOTHED(-14, -1) < SMOOTHED(-3, 3)) << 6) + ((SMOOTHED(1, -2) < SMOOTHED(11, 2)) << 5) + ((SMOOTHED(1, 6) < SMOOTHED(-10, -7)) << 4) + ((SMOOTHED(13, 2) < SMOOTHED(-1, 0)) << 3) + ((SMOOTHED(-14, 5) < SMOOTHED(5, -3)) << 2) + ((SMOOTHED(-2, 8) < SMOOTHED(2, 4)) << 1) + ((SMOOTHED(-11, 8) < SMOOTHED(-15, 5)) << 0));
    desc[1] = (uchar)(((SMOOTHED(-6, -23) < SMOOTHED(8, -9)) << 7) + ((SMOOTHED(-12, 6) < SMOOTHED(-10, 8)) << 6) + ((SMOOTHED(-3, -1) < SMOOTHED(8, 1)) << 5) + ((SMOOTHED(3, 6) < SMOOTHED(5, 6)) << 4) + ((SMOOTHED(-7, -6) < SMOOTHED(5, -5)) << 3) + ((SMOOTHED(22, -2) < SMOOTHED(-11, -8)) << 2) + ((SMOOTHED(14, 7) < SMOOTHED(8, 5)) << 1) + ((SMOOTHED(-1, 14) < SMOOTHED(-5, -14)) << 0));
    desc[2] = (uchar)(((SMOOTHED(-14, 9) < SMOOTHED(2, 0)) << 7) + ((SMOOTHED(7, -3) < SMOOTHED(22, 6)) << 6) + ((SMOOTHED(-6, 6) < SMOOTHED(-8, -5)) << 5) + ((SMOOTHED(-5, 9) < SMOOTHED(7, -1)) << 4) + ((SMOOTHED(-3, -7) < SMOOTHED(-10, -18)) << 3) + ((SMOOTHED(4, -5) < SMOOTHED(0, 11)) << 2) + ((SMOOTHED(2, 3) < SMOOTHED(9, 10)) << 1) + ((SMOOTHED(-10, 3) < SMOOTHED(4, 9)) << 0));
    desc[3] = (uchar)(((SMOOTHED(0, 12) < SMOOTHED(-3, 19)) << 7) + ((SMOOTHED(1, 15) < SMOOTHED(-11, -5)) << 6) + ((SMOOTHED(14, -1) < SMOOTHED(7, 8)) << 5) + ((SMOOTHED(7, -23) < SMOOTHED(-5, 5)) << 4) + ((SMOOTHED(0, -6) < SMOOTHED(-10, 17)) << 3) + ((SMOOTHED(13, -4) < SMOOTHED(-3, -4)) << 2) + ((SMOOTHED(-12, 1) < SMOOTHED(-12, 2)) << 1) + ((SMOOTHED(0, 8) < SMOOTHED(3, 22)) << 0));
    desc[4] = (uchar)(((SMOOTHED(-13, 13) < SMOOTHED(3, -1)) << 7) + ((SMOOTHED(-16, 17) < SMOOTHED(6, 10)) << 6) + ((SMOOTHED(7, 15) < SMOOTHED(-5, 0)) << 5) + ((SMOOTHED(2, -12) < SMOOTHED(19, -2)) << 4) + ((SMOOTHED(3, -6) < SMOOTHED(-4, -15)) << 3) + ((SMOOTHED(8, 3) < SMOOTHED(0, 14)) << 2) + ((SMOOTHED(4, -11) < SMOOTHED(5, 5)) << 1) + ((SMOOTHED(11, -7) < SMOOTHED(7, 1)) << 0));
    desc[5] = (uchar)(((SMOOTHED(6, 12) < SMOOTHED(21, 3)) << 7) + ((SMOOTHED(-3, 2) < SMOOTHED(14, 1)) << 6) + ((SMOOTHED(5, 1) < SMOOTHED(-5, 11)) << 5) + ((SMOOTHED(3, -17) < SMOOTHED(-6, 2)) << 4) + ((SMOOTHED(6, 8) < SMOOTHED(5, -10)) << 3) + ((SMOOTHED(-14, -2) < SMOOTHED(0, 4)) << 2) + ((SMOOTHED(5, -7) < SMOOTHED(-6, 5)) << 1) + ((SMOOTHED(10, 4) < SMOOTHED(4, -7)) << 0));
    desc[6] = (uchar)(((SMOOTHED(22, 0) < SMOOTHED(7, -18)) << 7) + ((SMOOTHED(-1, -3) < SMOOTHED(0, 18)) << 6) + ((SMOOTHED(-4, 22) < SMOOTHED(-5, 3)) << 5) + ((SMOOTHED(1, -7) < SMOOTHED(2, -3)) << 4) + ((SMOOTHED(19, -20) < SMOOTHED(17, -2)) << 3) + ((SMOOTHED(3, -10) < SMOOTHED(-8, 24)) << 2) + ((SMOOTHED(-5, -14) < SMOOTHED(7, 5)) << 1) + ((SMOOTHED(-2, 12) < SMOOTHED(-4, -15)) << 0));
    desc[7] = (uchar)(((SMOOTHED(4, 12) < SMOOTHED(0, -19)) << 7) + ((SMOOTHED(20, 13) < SMOOTHED(3, 5)) << 6) + ((SMOOTHED(-8, -12) < SMOOTHED(5, 0)) << 5) + ((SMOOTHED(-5, 6) < SMOOTHED(-7, -11)) << 4) + ((SMOOTHED(6, -11) < SMOOTHED(-3, -22)) << 3) + ((SMOOTHED(15, 4) < SMOOTHED(10, 1)) << 2) + ((SMOOTHED(-7, -4) < SMOOTHED(15, -6)) << 1) + ((SMOOTHED(5, 10) < SMOOTHED(0, 24)) << 0));
    desc[8] = (uchar)(((SMOOTHED(3, 6) < SMOOTHED(22, -2)) << 7) + ((SMOOTHED(-13, 14) < SMOOTHED(4, -4)) << 6) + ((SMOOTHED(-13, 8) < SMOOTHED(-18, -22)) << 5) + ((SMOOTHED(-1, -1) < SMOOTHED(-7, 3)) << 4) + ((SMOOTHED(-19, -12) < SMOOTHED(4, 3)) << 3) + ((SMOOTHED(8, 10) < SMOOTHED(13, -2)) << 2) + ((SMOOTHED(-6, -1) < SMOOTHED(-6, -5)) << 1) + ((SMOOTHED(2, -21) < SMOOTHED(-3, 2)) << 0));
    desc[9] = (uchar)(((SMOOTHED(4, -7) < SMOOTHED(0, 16)) << 7) + ((SMOOTHED(-6, -5) < SMOOTHED(-12, -1)) << 6) + ((SMOOTHED(1, -1) < SMOOTHED(9, 18)) << 5) + ((SMOOTHED(-7, 10) < SMOOTHED(-11, 6)) << 4) + ((SMOOTHED(4, 3) < SMOOTHED(19, -7)) << 3) + ((SMOOTHED(-18, 5) < SMOOTHED(-4, 5)) << 2) + ((SMOOTHED(4, 0) < SMOOTHED(-20, 4)) << 1) + ((SMOOTHED(7, -11) < SMOOTHED(18, 12)) << 0));
    desc[10] = (uchar)(((SMOOTHED(-20, 17) < SMOOTHED(-18, 7)) << 7) + ((SMOOTHED(2, 15) < SMOOTHED(19, -11)) << 6) + ((SMOOTHED(-18, 6) < SMOOTHED(-7, 3)) << 5) + ((SMOOTHED(-4, 1) < SMOOTHED(-14, 13)) << 4) + ((SMOOTHED(17, 3) < SMOOTHED(2, -8)) << 3) + ((SMOOTHED(-7, 2) < SMOOTHED(1, 6)) << 2) + ((SMOOTHED(17, -9) < SMOOTHED(-2, 8)) << 1) + ((SMOOTHED(-8, -6) < SMOOTHED(-1, 12)) << 0));
    desc[11] = (uchar)(((SMOOTHED(-2, 4) < SMOOTHED(-1, 6)) << 7) + ((SMOOTHED(-2, 7) < SMOOTHED(6, 8)) << 6) + ((SMOOTHED(-8, -1) < SMOOTHED(-7, -9)) << 5) + ((SMOOTHED(8, -9) < SMOOTHED(15, 0)) << 4) + ((SMOOTHED(0, 22) < SMOOTHED(-4, -15)) << 3) + ((SMOOTHED(-14, -1) < SMOOTHED(3, -2)) << 2) + ((SMOOTHED(-7, -4) < SMOOTHED(17, -7)) << 1) + ((SMOOTHED(-8, -2) < SMOOTHED(9, -4)) << 0));
    desc[12] = (uchar)(((SMOOTHED(5, -7) < SMOOTHED(7, 7)) << 7) + ((SMOOTHED(-5, 13) < SMOOTHED(-8, 11)) << 6) + ((SMOOTHED(11, -4) < SMOOTHED(0, 8)) << 5) + ((SMOOTHED(5, -11) < SMOOTHED(-9, -6)) << 4) + ((SMOOTHED(2, -6) < SMOOTHED(3, -20)) << 3) + ((SMOOTHED(-6, 2) < SMOOTHED(6, 10)) << 2) + ((SMOOTHED(-6, -6) < SMOOTHED(-15, 7)) << 1) + ((SMOOTHED(-6, -3) < SMOOTHED(2, 1)) << 0));
    desc[13] = (uchar)(((SMOOTHED(11, 0) < SMOOTHED(-3, 2)) << 7) + ((SMOOTHED(7, -12) < SMOOTHED(14, 5)) << 6) + ((SMOOTHED(0, -7) < SMOOTHED(-1, -1)) << 5) + ((SMOOTHED(-16, 0) < SMOOTHED(6, 8)) << 4) + ((SMOOTHED(22, 11) < SMOOTHED(0, -3)) << 3) + ((SMOOTHED(19, 0) < SMOOTHED(5, -17)) << 2) + ((SMOOTHED(-23, -14) < SMOOTHED(-13, -19)) << 1) + ((SMOOTHED(-8, 10) < SMOOTHED(-11, -2)) << 0));
    desc[14] = (uchar)(((SMOOTHED(-11, 6) < SMOOTHED(-10, 13)) << 7) + ((SMOOTHED(1, -7) < SMOOTHED(14, 0)) << 6) + ((SMOOTHED(-12, 1) < SMOOTHED(-5, -5)) << 5) + ((SMOOTHED(4, 7) < SMOOTHED(8, -1)) << 4) + ((SMOOTHED(-1, -5) < SMOOTHED(15, 2)) << 3) + ((SMOOTHED(-3, -1) < SMOOTHED(7, -10)) << 2) + ((SMOOTHED(3, -6) < SMOOTHED(10, -18)) << 1) + ((SMOOTHED(-7, -13) < SMOOTHED(-13, 10)) << 0));
    desc[15] = (uchar)(((SMOOTHED(1, -1) < SMOOTHED(13, -10)) << 7) + ((SMOOTHED(-19, 14) < SMOOTHED(8, -14)) << 6) + ((SMOOTHED(-4, -13) < SMOOTHED(7, 1)) << 5) + ((SMOOTHED(1, -2) < SMOOTHED(12, -7)) << 4) + ((SMOOTHED(3, -5) < SMOOTHED(1, -5)) << 3) + ((SMOOTHED(-2, -2) < SMOOTHED(8, -10)) << 2) + ((SMOOTHED(2, 14) < SMOOTHED(8, 7)) << 1) + ((SMOOTHED(3, 9) < SMOOTHED(8, 2)) << 0));
#undef SMOOTHED

References:

Michael Calonder et al., "BRIEF: Binary Robust Independent Elementary Features", ECCV 2010.

http://www.cnblogs.com/ronny/p/4081362.html

Copyright Disclaimer: This article is an original article by the blogger and cannot be reproduced without the permission of the blogger.
