1 Overview
Point cloud registration is an unavoidable problem in any task that processes multiple point clouds. A key part of the registration process is choosing an appropriate algorithm that yields a relatively good registration for the specific point cloud data at hand.
The registration result is usually expressed as a matrix representing the rotation and translation of a rigid-body transformation, which encodes the positional relationship between the two point clouds: applying this transform to one point cloud (the source) aligns it with the other point cloud (the target).
Figure 1: Target point cloud
Figure 2: Source point cloud
Figure 3: Registration result (produced by the algorithm described in this article)
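For concreteness, here is a minimal sketch of applying such a result matrix to a source cloud, assuming PCL and an Eigen::Matrix4f result as used later in this article (the function name here is illustrative, not part of the original code):

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/common/transforms.h>

// Illustrative only: "matrix" would come from a registration algorithm
// such as the one described in this article.
void ApplyRegistration(const pcl::PointCloud<pcl::PointXYZRGB> &source,
                       const Eigen::Matrix4f &matrix,
                       pcl::PointCloud<pcl::PointXYZRGB> &aligned)
{
    // Apply the rigid-body transform to every point of the source cloud;
    // the output should overlap the target cloud.
    pcl::transformPointCloud(source, aligned, matrix);
}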
Registration algorithms fall into two categories. The first is iterative: it refines the registration result by stepwise approximation, and is typically used for precise registration when the two point clouds are already close to each other. The second determines pairs of corresponding points in the two point clouds (points that represent the same location in real space) and derives the positional relationship of the two clouds from those pairs; it is typically used for coarse registration when the initial positions differ greatly. The grayscale-feature-based registration algorithm in this article belongs to the second category.
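For contrast with the second category, here is a minimal sketch of the first, iterative category using PCL's IterativeClosestPoint (ICP is used here purely for illustration; the algorithm in this article does not use it):

#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

// Iterative registration sketch: refines the alignment of a source cloud onto
// a target cloud that is already roughly in place, by stepwise approximation.
void IcpRegister(const pcl::PointCloud<pcl::PointXYZ>::Ptr &source,
                 const pcl::PointCloud<pcl::PointXYZ>::Ptr &target,
                 Eigen::Matrix4f &matrix)
{
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(source);  // cloud to be moved
    icp.setInputTarget(target);  // reference cloud
    pcl::PointCloud<pcl::PointXYZ> aligned;
    icp.align(aligned);          // run the iterative optimization
    matrix = icp.getFinalTransformation();
}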
Most point cloud registration criteria are based mainly on three-dimensional shape features. For point clouds that lack distinctive 3D shape features, such registration usually does not achieve the desired effect. However, an RGB-D sensor also provides a two-dimensional color bitmap, so using 2D features to determine the positional relationship between point clouds is one way to solve this problem. This is the main principle behind the grayscale-feature-based registration algorithm in this article.
2 Source Code
For convenience of description, we use the function

void GetGrayScaleRegMatrix(const CloudPtr &cloud_target, const CloudPtr &cloud_source, Matrix &matrix);

to represent the entry point of the algorithm; the registration result is output through matrix. Here Cloud and CloudPtr are the point cloud class and its boost shared pointer in the Point Cloud Library (PCL), Point is a PCL point type, and Matrix is a matrix type (commonly Eigen::Matrix4f).
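The type aliases are not spelled out in the original post; one plausible set of definitions, assuming an organized RGB-D cloud, is:

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <Eigen/Core>

typedef pcl::PointXYZRGB       Point;     // PCL point type with color
typedef pcl::PointCloud<Point> Cloud;     // point cloud class
typedef Cloud::Ptr             CloudPtr;  // boost shared pointer to a cloud
typedef Eigen::Matrix4f        Matrix;    // 4x4 transformation matrix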
P.S. The function returns void because the Eigen linear algebra library strongly recommends against passing its types by value. Not only is pass-by-value discouraged in C++ generally, but with Eigen it can cause fatal runtime errors due to the compiler's data-alignment requirements. The calling code should therefore ensure that the result matrix has been allocated beforehand. (Reference: http://eigen.tuxfamily.org/dox-devel/TopicUnalignedArrayAssert.html)
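A short sketch of the calling pattern this implies (cloud_target and cloud_source stand for clouds loaded elsewhere):

Matrix matrix;  // allocated by the caller, as required above
GetGrayScaleRegMatrix(cloud_target, cloud_source, matrix);

// The source cloud can then be brought into the target's frame:
Cloud aligned;
pcl::transformPointCloud(*cloud_source, aligned, matrix);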
#include <vector>
#include <opencv2/opencv.hpp>             // OpenCV 2.x C and C++ APIs
#include <opencv2/nonfree/features2d.hpp> // SURF lives in the nonfree module
#include <pcl/correspondence.h>
#include <pcl/registration/transformation_estimation_lm.h>

// SURF_THRESHOLD is a constant limiting the feature-extraction threshold;
// MAX_REMOVE_THRESHOLD is a constant limiting the maximum distance between
// corresponding points. See the code below. (The values are elided in the original.)
const double MAX_REMOVE_THRESHOLD = -;
const int SURF_THRESHOLD = -;

// Gets the two-dimensional bitmap corresponding to the point cloud.
IplImage* GetIplImage(const CloudPtr &input_cloud)
{
    // Create an image.
    IplImage *image = cvCreateImage(cvSize((*input_cloud).width, (*input_cloud).height), IPL_DEPTH_8U, 3);
    // Fill the image.
    cv::Scalar pixel;
    for (int i = 0; i < (*image).height; i++)
    {
        for (int j = 0; j < (*image).width; j++)
        {
            pixel.val[0] = (*input_cloud).points[i * (*image).width + j].b;
            pixel.val[1] = (*input_cloud).points[i * (*image).width + j].g;
            pixel.val[2] = (*input_cloud).points[i * (*image).width + j].r;
            cvSet2D(image, i, j, pixel);
        }
    }
    // Return the created image.
    return image;
}

// The main registration function.
void GetGrayScaleRegMatrix(const CloudPtr &cloud_target, const CloudPtr &cloud_source, Matrix &matrix)
{
    // Get the two-dimensional bitmaps corresponding to the original point clouds.
    // See the GetIplImage function.
    IplImage *oriImageTarget = GetIplImage(cloud_target);
    IplImage *oriImageSource = GetIplImage(cloud_source);

    // Convert the original images to grayscale for SURF feature extraction.
    IplImage *grayImageTarget = cvCreateImage(cvGetSize(oriImageTarget), IPL_DEPTH_8U, 1);
    cvCvtColor(oriImageTarget, grayImageTarget, CV_RGB2GRAY);
    IplImage *grayImageSource = cvCreateImage(cvGetSize(oriImageSource), IPL_DEPTH_8U, 1);
    cvCvtColor(oriImageSource, grayImageSource, CV_RGB2GRAY);

    // Extract SURF feature points into keyPointsTarget and keyPointsSource.
    // SURF_THRESHOLD is a constant limiting the feature-extraction threshold.
    cv::SurfFeatureDetector detector(SURF_THRESHOLD);
    std::vector<cv::KeyPoint> keyPointsTarget;
    detector.detect(grayImageTarget, keyPointsTarget);
    std::vector<cv::KeyPoint> keyPointsSource;
    detector.detect(grayImageSource, keyPointsSource);

    // Compute descriptors for the SURF feature points.
    cv::SurfDescriptorExtractor extractor;
    cv::Mat descriptorsSource, descriptorsTarget;
    extractor.compute(grayImageTarget, keyPointsTarget, descriptorsTarget);
    extractor.compute(grayImageSource, keyPointsSource, descriptorsSource);

    // Match the descriptors of the two clouds with the FLANN-based matcher.
    cv::FlannBasedMatcher matcher;
    std::vector<cv::DMatch> matches;
    matcher.match(descriptorsSource, descriptorsTarget, matches);

    // Build the lists of corresponding 2D points.
    std::vector<cv::Point2f> sourcePoints;
    std::vector<cv::Point2f> targetPoints;
    for (int i = 0; i < matches.size(); i++)
    {
        sourcePoints.push_back(keyPointsSource[matches[i].queryIdx].pt);
        targetPoints.push_back(keyPointsTarget[matches[i].trainIdx].pt);
    }

    // Remove erroneous correspondences with the RANSAC algorithm. Whether each
    // feature point pair is kept or discarded is stored in m_RansacStatus.
    std::vector<uchar> m_RansacStatus;
    cv::findHomography(sourcePoints, targetPoints, CV_RANSAC, 3.0, m_RansacStatus);

    // Create the correspondence list that PCL works with.
    pcl::Correspondences correspondences;
    int temp_x_target, temp_y_target, temp_index_target,
        temp_x_source, temp_y_source, temp_index_source;
    for (int i = 0; i < matches.size(); i++)
    {
        if (m_RansacStatus[i] == 1)  // The point pair was kept.
        {
            // Round the x, y coordinates and compute the point's index in the cloud.
            temp_x_target = (int)(targetPoints[i].x + 0.5);
            temp_y_target = (int)(targetPoints[i].y + 0.5);
            temp_index_target = temp_y_target * (*grayImageTarget).width + temp_x_target;
            temp_x_source = (int)(sourcePoints[i].x + 0.5);
            temp_y_source = (int)(sourcePoints[i].y + 0.5);
            temp_index_source = temp_y_source * (*grayImageSource).width + temp_x_source;

            // If there is no corresponding spatial point in the 3D cloud (no depth
            // information, so z is NaN and compares unequal to itself), skip the pair.
            if ((*cloud_source).points[temp_index_source].z != (*cloud_source).points[temp_index_source].z) continue;
            if ((*cloud_target).points[temp_index_target].z != (*cloud_target).points[temp_index_target].z) continue;

            // Store the correspondence.
            pcl::Correspondence correspondence;
            correspondence.index_query = temp_index_source;
            correspondence.index_match = temp_index_target;
            correspondences.push_back(correspondence);
        }
    }

    // Using the spatial structure of the point cloud, remove a correspondence if its
    // two endpoints are too far apart. MAX_REMOVE_THRESHOLD limits the maximum distance.
    for (int i = 0;;)
    {
        if (i == correspondences.size()) break;
        if (Utils::GetDistance((*cloud_target).points[correspondences[i].index_match],
                               (*cloud_source).points[correspondences[i].index_query]) > MAX_REMOVE_THRESHOLD)
        {
            correspondences.erase(correspondences.begin() + i);
        }
        else i++;
    }

    // Compute the transformation matrix from the correspondences.
    pcl::registration::TransformationEstimationLM<Point, Point> estimation;
    estimation.estimateRigidTransformation((*cloud_source), (*cloud_target), correspondences, matrix);
}
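The code above calls a helper Utils::GetDistance that is not included in the post; a minimal sketch of what it presumably computes, the Euclidean distance between two points, would be:

#include <cmath>

namespace Utils {
    // Presumed helper: Euclidean distance between two PCL points.
    inline double GetDistance(const Point &a, const Point &b)
    {
        double dx = a.x - b.x;
        double dy = a.y - b.y;
        double dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }
}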
3 Code Analysis
The code is accompanied by comments describing the implementation process. A detailed analysis of the code will be published in the future.
4 Points for Improvement
For the grayscale SURF feature points and their correspondences, both the selection of feature points and the determination of correspondences can be further optimized to avoid severely erroneous registration results.
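One common refinement, offered here as a hypothetical sketch rather than part of the original algorithm, is to discard matches whose descriptor distance is far above the best match found, before handing the rest to RANSAC:

#include <vector>
#include <limits>
#include <opencv2/features2d/features2d.hpp>

// Hypothetical refinement: keep only matches whose descriptor distance is
// within a small factor of the best (smallest) distance observed.
std::vector<cv::DMatch> FilterMatches(const std::vector<cv::DMatch> &matches)
{
    float min_dist = std::numeric_limits<float>::max();
    for (size_t i = 0; i < matches.size(); i++)
        if (matches[i].distance < min_dist) min_dist = matches[i].distance;

    std::vector<cv::DMatch> good;
    for (size_t i = 0; i < matches.size(); i++)
        if (matches[i].distance <= 3 * min_dist) good.push_back(matches[i]);
    return good;
}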
From an engineering point of view, the code above does not handle the extreme cases that may occur under special conditions.
5 References
OpenCV Feature Description: http://docs.opencv.org/doc/tutorials/features2d/feature_description/feature_description.html
OpenCV Study Notes (9): Feature2D: http://blog.csdn.net/yang_xian521/article/details/6901762
Flaris, all rights reserved.
Please credit the source when reposting.
2014-10-19, v1.0
PrimeSense 3D Reconstruction Development (2): A Registration Algorithm Based on Grayscale Features