Keypoint detection usually goes hand in hand with feature extraction. An important property of a keypoint detector is rotational invariance: the same keypoints should still be detected after the object is rotated. Honestly, though, I think this requirement is of limited value for robot vision. The point cloud a robot captures is never a complete object; no camera can see through things, so what the robot collects is just a thin shell of the surface facing it. The so-called feature points often lie on curved regions of that shell, so it is difficult to extract the same keypoints from different viewing angles. Imagine a person's face: the tip of the nose can be a keypoint, but what about a profile view? Part of the face falls into self-occlusion, and the resulting model may look completely different from before.
In other words, these keypoint detection algorithms are useful for objects that are relatively far away in the scene, where the effect of rotation is weakened by distance. Once the object is close, rotation often means capturing only one side of the model, and the keypoint detector may fail.
1. Rotation-invariant feature detection
ISS stands for Intrinsic Shape Signatures, and the first word, "intrinsic", is the one worth dwelling on. "Intrinsic" implies a scope: something defined from the inside, and we will pin down exactly what that scope is shortly. If you want to describe a local feature around a point while the object may move under global coordinates, a good approach is to build a local coordinate frame at that point, making sure the local frame rotates together with the object.
Method 1: Based on the covariance matrix
The idea behind the covariance matrix is actually very simple; it couples two steps together:
1. Subtract the coordinates of p_i from those of each surrounding point p_j: this generates many vectors p_i -> p_j, and ideally the normal at p_i should be perpendicular to all of them.
2. Use singular value decomposition to find the (approximate) null space of these vectors, i.e. fit the vector that is as perpendicular to them as possible, and take it as the normal estimate.
What is the covariance matrix, essentially? It is one of the steps of singular value decomposition: SVD effectively multiplies the matrix by its own transpose to obtain a symmetric matrix.
Of course, a benefit of the covariance formulation is that you can attach different weights to points at different distances.
Method 2: Based on homogeneous coordinates
1. Convert the point coordinates to homogeneous coordinates (x, y, z, 1).
2. Perform singular value decomposition on the matrix of these homogeneous coordinates.
3. The singular vector corresponding to the smallest singular value gives the equation of the fitted plane.
4. The coefficients of that equation give the direction of the normal.
Obviously, this method is simpler and cruder: it gives up the notion of weights in exchange for speed, and there is no subtraction to do. In fact, even the subtraction in Method 1 can be avoided by precomputing a lookup table of point-to-point vectors.
Note, however, that the PCL implementation does perform the subtraction iteratively.
Whichever method is used, we end up with three mutually perpendicular vectors: one is the normal direction, and the other two span the tangent plane, together forming a local coordinate frame at the point. Modeling in this local frame is what achieves rotation invariance of the point cloud features.
The idea of ISS feature point detection is also simple:
1. Build the local model using Method 1.
2. Use the relationships between the eigenvalues to measure how salient the point is.
Obviously, the eigenvalues here have geometric meaning: they correspond to the axis lengths of an ellipsoid whose shape abstractly summarizes the distribution of the neighboring points. If the neighbors are dense along some direction, that direction becomes the first principal axis of the ellipsoid; a sparser direction becomes the second; and the normal direction, where the points form only a single thin layer, is of course the sparsest and becomes the third.
If a point happens to sit at a corner, the first, second, and third eigenvalues will not differ much from one another.
If the neighborhood is dense along one direction but thin in the perpendicular direction, the point may lie on a boundary.
In a word, building a local coordinate frame and analyzing its eigenvalues is feature point extraction based on eigenvalue analysis.
Finally, "intrinsic" refers to the interior of this ellipsoid.
2. PCL implementation
pcl::PointCloud<pcl::PointXYZRGBA>::Ptr model (new pcl::PointCloud<pcl::PointXYZRGBA> ());
pcl::PointCloud<pcl::PointXYZRGBA>::Ptr model_keypoints (new pcl::PointCloud<pcl::PointXYZRGBA> ());
pcl::search::KdTree<pcl::PointXYZRGBA>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZRGBA> ());

// Fill in the model cloud

double model_resolution;

// Compute model_resolution

pcl::ISSKeypoint3D<pcl::PointXYZRGBA, pcl::PointXYZRGBA> iss_detector;
iss_detector.setSearchMethod (tree);
iss_detector.setSalientRadius (6 * model_resolution);
iss_detector.setNonMaxRadius (4 * model_resolution);
iss_detector.setThreshold21 (0.975);
iss_detector.setThreshold32 (0.975);
iss_detector.setMinNeighbors (5);
iss_detector.setNumberOfThreads (4);
iss_detector.setInputCloud (model);
iss_detector.compute (*model_keypoints);