I don't have time to write the text right now, so let me paste a few pictures first: (Google Spreadsheets link; readers in China may need a proxy or VPN to access it)
========================================================
I did not expect that this post, containing no text at all, would end up first in the read count. How should I make sense of that? I had better explain it properly: this post discusses the point cloud alignment problem under different norms.
ICP stands for Iterative Closest Point; it is also sometimes expanded as Iterative Corresponding Point. It is a method for the point cloud alignment problem (also called point cloud matching or point cloud registration). For more on point cloud alignment and ICP, see the following references:
[1] Paul J. Besl and Neil D. McKay. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2): 239-256, 1992.
[2] Yang Chen and Gérard Medioni. Object modelling by registration of multiple range images. Image and Vision Computing, 10(3): 145-155, 1992.
[3] Szymon Rusinkiewicz and Marc Levoy. Efficient variants of the ICP algorithm. In Third International Conference on 3-D Digital Imaging and Modeling (3DIM), 2001.
[4] Timothée Jost. Fast geometric matching for shape registration. PhD thesis, Université de Neuchâtel, 2002.
[5] Helmut Pottmann, Qi-Xing Huang, Yong-Liang Yang, and Shi-Min Hu. Geometry and convergence analysis of algorithms for registration of 3D shapes. International Journal of Computer Vision, 67(3): 277-296, 2006.
The following article discusses the application of the 1-norm to surface fitting and point cloud alignment; reading it is what prompted me to write this post.
[6] S. Flöry and M. Hofer. Surface fitting and registration of point clouds using approximations of the unsigned distance function. Computer Aided Geometric Design (CAGD), 27(1): 60-77, 2010.
All of these articles can be found and downloaded through a Google search, without access to any commercial databases.
In point cloud alignment, each point has a deviation from the alignment target. All these deviations together form a vector, which we may call the deviation vector. The most common idea is to minimize the sum of squared deviations, just as in least-squares regression; this has traditionally been the standard approach for point cloud alignment as well. In the language of vector norms, this means minimizing the 2-norm of the deviation vector. So what is the significance of the other norms? Optimization based on the 1-norm is more robust to outliers; for details, see the Wikipedia entry on "least absolute deviations" and reference [6]. The infinity norm is the maximum absolute deviation, which matters in its own right, for example when evaluating form and position tolerances.
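To state this concretely (the notation here is mine, not taken from the references): let $(R, t)$ be the rigid transform, $p_i$ the source points, $q_i$ their alignment targets, and collect the per-point deviations $e_i$ into a vector $e$. The three objectives are then

    e_i = \lVert R p_i + t - q_i \rVert
    \min_{R,t} \lVert e \rVert_2^2    = \sum_i e_i^2     (classical least squares)
    \min_{R,t} \lVert e \rVert_1      = \sum_i |e_i|     (least absolute deviations)
    \min_{R,t} \lVert e \rVert_\infty = \max_i |e_i|     (worst-case deviation)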
There are two common ways to evaluate the deviation between a point and the alignment target: the distance from the point to its corresponding point, and the distance from the point to the tangent plane at the corresponding point. The latter usually converges faster, but requires tangent plane information. When the deviation is the point-to-point distance, 1-norm and infinity-norm optimization can be converted to a second-order cone programming (SOCP) problem. When the deviation is the point-to-plane distance, 1-norm and infinity-norm optimization can be transformed into a linear programming (LP) problem. [6] gives the conversion for the 1-norm, and it carries over easily to the infinity norm. The point of these conversions is to turn an unconstrained optimization problem into a constrained one, and as I understand it the reason is smoothness: the original 1-norm and infinity-norm objectives are non-smooth, while after conversion we get smooth constrained problems. The number of variables grows and constraints are added, but it is worth it; this shows how important smoothness is in optimization. In addition, I implemented two other methods: an approximate solution for 1-norm optimization that replaces the 1-norm with the Huber function and solves the result with L-BFGS, and, for more general p-norm optimization, the approximate IRLS (iteratively reweighted least squares) method.
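To sketch the standard constructions (the slack-variable names are mine): with signed point-to-plane deviations $r_i(x) = n_i^T (R p_i + t - q_i)$, which become linear in the transform parameters $x$ once the rotation is linearized, the 1-norm and infinity-norm problems turn into LPs via slack variables:

    \min_{x, s} \sum_i s_i      s.t.  -s_i \le r_i(x) \le s_i                  (1-norm)
    \min_{x, \delta} \delta     s.t.  -\delta \le r_i(x) \le \delta, \forall i  (infinity norm)

Both are smooth (indeed linear) constrained problems. The Huber approximation replaces $|r|$ with a smooth function; in one common scaling,

    \rho_\delta(r) = r^2 / (2\delta)   for |r| \le \delta,
    \rho_\delta(r) = |r| - \delta/2    for |r| > \delta,

which agrees with $|r|$ up to a constant for large residuals while staying differentiable at zero.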
My development environment is gvim + TDM-MinGW. The toolkits are as follows: Eigen for matrix computation, ANN for nearest-neighbor search, and GLPK for Windows for LP solving. For SOCP there seem to be only commercial tools. I tried MOSEK, but the SOCP solutions it returned looked really strange to me; maybe I was just not using it properly. After several attempts I fell back on a clumsy method: convert the SOCP into an SDP (semidefinite programming) problem, write the data into a sparse SDPA file, and call CSDP to solve it. The L-BFGS solver used for the Huber-function optimization is liblbfgs. In MATLAB the whole thing would be simpler (my inference; I have not actually verified it): matrix handling goes without saying; for nearest-neighbor search ANN is still available (via an ANN MATLAB wrapper, which I have also used); and for optimization, MATLAB can solve LPs itself, while for SOCP one could use SeDuMi.
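As an illustration of the GLPK route (a minimal sketch under my own linearization and naming, not the actual code behind the experiments below): after linearizing the rotation, the infinity-norm point-to-plane step is exactly the second LP above, with residuals r_i(x) = a_i . x + b_i and x = (omega, t) in R^6, and can be set up roughly like this:

    #include <array>
    #include <vector>
    #include <glpk.h>

    // Sketch: solve  min delta  s.t.  |a_i . x + b_i| <= delta  as an LP.
    // Returns GLPK's solver status; writes the transform parameters and delta.
    int solveLinfStep(int n, const std::vector<std::array<double, 6>>& a,
                      const std::vector<double>& b,
                      double x_out[6], double& delta_out)
    {
        glp_prob* lp = glp_create_prob();
        glp_set_obj_dir(lp, GLP_MIN);

        // Columns: x1..x6 free, column 7 = delta >= 0; objective = delta.
        glp_add_cols(lp, 7);
        for (int j = 1; j <= 6; ++j)
            glp_set_col_bnds(lp, j, GLP_FR, 0.0, 0.0);
        glp_set_col_bnds(lp, 7, GLP_LO, 0.0, 0.0);
        glp_set_obj_coef(lp, 7, 1.0);

        // Rows:  a_i.x - delta <= -b_i   and   -a_i.x - delta <= b_i.
        glp_add_rows(lp, 2 * n);
        std::vector<int> ia(1), ja(1);   // GLPK arrays are 1-based
        std::vector<double> ar(1);
        for (int i = 0; i < n; ++i) {
            glp_set_row_bnds(lp, 2 * i + 1, GLP_UP, 0.0, -b[i]);
            glp_set_row_bnds(lp, 2 * i + 2, GLP_UP, 0.0,  b[i]);
            for (int j = 0; j < 6; ++j) {
                ia.push_back(2 * i + 1); ja.push_back(j + 1); ar.push_back( a[i][j]);
                ia.push_back(2 * i + 2); ja.push_back(j + 1); ar.push_back(-a[i][j]);
            }
            ia.push_back(2 * i + 1); ja.push_back(7); ar.push_back(-1.0);
            ia.push_back(2 * i + 2); ja.push_back(7); ar.push_back(-1.0);
        }
        glp_load_matrix(lp, (int)ia.size() - 1, ia.data(), ja.data(), ar.data());

        int ret = glp_simplex(lp, nullptr);
        for (int j = 0; j < 6; ++j)
            x_out[j] = glp_get_col_prim(lp, j + 1);
        delta_out = glp_get_col_prim(lp, 7);
        glp_delete_prob(lp);
        return ret;
    }

The 1-norm case is the same construction with one slack column per point instead of the shared delta.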
I ran a test with a small point cloud (about 1400 points), so this can hardly be called a thorough validation. Three norms are considered, so there are three convergence plots. For the vertical axis, the 1-norm plot uses the mean absolute error, the 2-norm plot uses the root mean square error, and the infinity-norm plot uses the maximum absolute error (that is, the infinity norm itself). The method labels in the figures mean the following:
The methods fall into two categories: labels starting with p denote deviation evaluation based on the point-to-point distance, while labels starting with pl denote evaluation based on the point-to-plane distance. In the suffixes, 1 denotes the 1-norm, 2 the 2-norm, and i the infinity norm; 1h denotes the Huber approximation of the 1-norm, and rw (re-weighted) denotes IRLS-based p-norm optimization, with the number following rw giving the value of p. In addition, the point-to-point methods have two variants, 2t and rwt: instead of the closed-form solution for the optimal transformation, they use the infinitesimal-transformation approach that is standard when the point-to-plane distance is the objective (that is, the tangent space; hence the t), as sketched below. In fact, it would be more accurate to call p2 and prw the variants, since only they have a closed-form solution.
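For reference, the tangent-space linearization behind the t variants is the standard small-angle approximation:

    R \approx I + [\omega]_\times, \qquad [\omega]_\times =
    \begin{pmatrix} 0 & -\omega_3 & \omega_2 \\ \omega_3 & 0 & -\omega_1 \\ -\omega_2 & \omega_1 & 0 \end{pmatrix}

so that $R p + t \approx p + \omega \times p + t$ is linear in the six unknowns $(\omega, t)$; after each step the result has to be projected back onto the rotation group.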
Now let the pictures do the talking. Optimization based on the 1-norm is motivated by robustness, but its computational cost is a big problem; for robustness alone, weighted least squares may be the better choice, being both faster and more flexible. Using the Huber function to approximate the 1-norm is precisely a way to cut the computational cost. With point-to-point deviations, the experiment shows this approximation succeeds: the two convergence curves coincide (the p1 curve is invisible in the figure because it is covered by p1h). With point-to-plane deviations, however, the two curves do not agree: pl1h converges poorly, and the L-BFGS line searches almost always fail. I do not know the exact reason; my guess is that the problem is still not smooth enough. Optimization based on the infinity norm converges far too slowly, and presumably it is quite demanding about the initial alignment. When the points are still far from the target, one can first iterate with the 2-norm and only then switch to the infinity norm. Finally, an interesting observation: in IRLS-based p-norm optimization, a p larger than 2 often converges faster than 2-norm optimization. prw3 is much faster than p2, and plrw5 leaves the other methods far behind. However, plrw5 is not stable enough: in the end it fails to converge to zero. The reason it uses 5 rather than 3 or 4 is that those two are even less stable. Weighted ICP appears in the earlier literature, e.g., [3, 4], but there the weighting is generally chosen to give better-matched point pairs higher weights. In IRLS-based p-norm optimization with p > 2, the opposite holds: points with larger deviations receive higher weights. The experiments show that this accelerates convergence but reduces stability. I would explain the principle like this: the badly matched points are exactly the ones that need to improve, and giving them higher weights pulls them toward the target surface faster, improving the match more quickly; on the other hand, the optimal transformation computed from badly matched pairs may simply be wrong, so the whole algorithm fails to converge correctly. This suggests a trade-off between speed and stability in the choice of weights, which does not seem to have been pointed out before.
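A minimal sketch of one IRLS step for the point-to-point case (my own function and variable names; correspondences are assumed already established by the closest-point search): the p-norm objective \sum_i e_i^p is approximated by weighted least squares with weights w_i = e_i^{p-2}, which has the closed-form weighted SVD (Kabsch-type) solution.

    #include <Eigen/Dense>
    #include <algorithm>
    #include <cmath>
    #include <vector>

    // One IRLS step for  min sum_i ||R p_i + t - q_i||^p  (a sketch, not the
    // exact code behind the plots). Weights w_i = e_i^(p-2) reduce the p-norm
    // objective to weighted least squares, solved in closed form via the SVD
    // of the weighted cross-covariance matrix.
    void irlsStep(const std::vector<Eigen::Vector3d>& P,
                  const std::vector<Eigen::Vector3d>& Q,
                  double p, Eigen::Matrix3d& R, Eigen::Vector3d& t)
    {
        const size_t n = P.size();
        std::vector<double> w(n);
        for (size_t i = 0; i < n; ++i) {
            double e = (R * P[i] + t - Q[i]).norm();
            w[i] = std::pow(std::max(e, 1e-9), p - 2.0);  // clamp: avoid 0^(p-2)
        }
        // Weighted centroids.
        double wsum = 0.0;
        Eigen::Vector3d cp = Eigen::Vector3d::Zero(), cq = Eigen::Vector3d::Zero();
        for (size_t i = 0; i < n; ++i) { wsum += w[i]; cp += w[i] * P[i]; cq += w[i] * Q[i]; }
        cp /= wsum; cq /= wsum;
        // Weighted cross-covariance; its SVD gives the optimal rotation.
        Eigen::Matrix3d H = Eigen::Matrix3d::Zero();
        for (size_t i = 0; i < n; ++i)
            H += w[i] * (P[i] - cp) * (Q[i] - cq).transpose();
        Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
        Eigen::Matrix3d D = Eigen::Matrix3d::Identity();
        D(2, 2) = (svd.matrixV() * svd.matrixU().transpose()).determinant();  // no reflections
        R = svd.matrixV() * D * svd.matrixU().transpose();
        t = cq - R * cp;
    }

For p > 2 these weights grow with the residual, which is exactly the higher-weight-for-worse-matches behavior discussed above; for p < 2 they shrink with the residual, giving the robust behavior of the 1-norm family.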
Source code