First, background and significance
I am writing this post because many of the classic papers and algorithms I have read involve moving least squares (MLS). Since this algorithm is so important, let us first look at some of its classic applications:
1. Image deformation. In image processing, the paper "Image Deformation Using Moving Least Squares" uses the moving least squares principle to realize image warping. It is very highly cited and can fairly be called a classic image-deformation algorithm; it was published at SIGGRAPH.
(Figure: image warping realized with moving least squares)
2. Point cloud filtering. Using MLS for point cloud filtering is a major application in 3D point cloud processing. The classic point-cloud filtering algorithms I know of include bilateral filtering, MLS, and WLOP.
3. Mesh deformation. The algorithm is also well suited to deforming triangular mesh models; see the related paper "3D Deformation Using Moving Least Squares".
OK, below I take image deformation with moving least squares as an example to explain how to implement an MLS-based algorithm.
Second, algorithm implementation
I do not intend to derive the algorithm here; I go straight to the formulas for each implementation step.
Depending on the transformation matrix used, the paper offers three deformation methods: the affine, similarity, and rigid transformations. The rigid transformation gives the best visual results, but to keep things simple I only implement the affine transformation here:
Problem: we are given the control vertices p in the original image and a pixel v of the original image. The control vertices move to positions q in the deformed image; we want the corresponding position f(v) of the pixel in the deformed image.
The overall formula is:

    f_a(v) = (v - p*) M + q*

where p* and q* are weighted centroids of the control vertices and M is a 2x2 transformation matrix, both defined below.
The l_v(x) in the paper and the f(v) here denote the same function: x is simply evaluated at the pixel coordinate v of the input image.
So our goal is to compute p*, q*, and the transformation matrix M. Then, for any input x, we can compute its position in the transformed image.
OK, once we know how to compute each parameter in the formula above, we can obtain the corresponding point f(v) in the deformed image.
1. The weight w is computed as:

    w_i = 1 / |p_i - v|^(2*alpha)
That is, the inverse of the (squared) distance from v to control vertex p_i serves as the weight; the parameter alpha is usually taken as 1, giving w_i = 1 / |p_i - v|^2.
This step is implemented as follows:

```cpp
// Compute the weight of each control vertex: the inverse squared
// distance 1/d^2 from the query point t to the vertex.
while (iter != P.end()) {
    double temp;
    if (iter->x != t.x || iter->y != t.y)
        temp = 1.0 / ((iter->x - t.x) * (iter->x - t.x) +
                      (iter->y - t.y) * (iter->y - t.y));
    else
        temp = MAXNUM;  // t coincides with a control vertex: weight -> "infinity"
    w.push_back(temp);
    iter++;
}
```

2. The formulas for p* and q* are:

    p* = (Σ_i w_i p_i) / (Σ_i w_i)
    q* = (Σ_i w_i q_i) / (Σ_i w_i)
That is, compute the weighted centroids of the control vertices p_i and q_i.
```cpp
double px = 0, py = 0, qx = 0, qy = 0, tw = 0;
while (iterw != w.end()) {
    px += (*iterw) * (iter->x);   // weighted sum of the control vertices p
    py += (*iterw) * (iter->y);
    qx += (*iterw) * (iterq->x);  // weighted sum of the control vertices q
    qy += (*iterw) * (iterq->y);
    tw += *iterw;                 // total weight
    iter++; iterw++; iterq++;
}
pc.x = px / tw; pc.y = py / tw;   // p*
qc.x = qx / tw; qc.y = qy / tw;   // q*
```

3. The formula for the affine transformation matrix M is:

    M = ( Σ_i w_i p̂_i^T p̂_i )^(-1) ( Σ_j w_j p̂_j^T q̂_j )

where p̂_i = p_i - p*, q̂_i = q_i - q*, and the points are treated as row vectors.
Plug the relevant parameters in and M can be computed directly.
Finally, here is the complete MLS source code:
```cpp
// Input: a point t in the original image. Output: its mapped point f(v).
MyPoint CMLSDlg::MLS(const MyPoint& t)
{
    if (P.empty())  // P: control vertices of the original image,
        return t;   // in the same coordinate system as the input point t
    MyPoint fv;
    double A[2][2], B[2][2], M[2][2];
    iter = P.begin();
    w.erase(w.begin(), w.end());
    // Compute the weight of each control vertex:
    // the inverse squared distance 1/d^2 from t to the vertex.
    while (iter != P.end()) {
        double temp;
        if (iter->x != t.x || iter->y != t.y)
            temp = 1.0 / ((iter->x - t.x) * (iter->x - t.x) +
                          (iter->y - t.y) * (iter->y - t.y));
        else
            temp = MAXNUM;  // t is a control vertex: weight -> "infinity"
        w.push_back(temp);
        iter++;
    }
    vector<double>::iterator iterw = w.begin();
    // Q: control vertices of the target image; our goal is to find
    // the position in the target image that corresponds to t.
    vector<MyPoint>::iterator iterq = Q.begin();
    iter = P.begin();
    MyPoint pc, qc;
    double px = 0, py = 0, qx = 0, qy = 0, tw = 0;
    while (iterw != w.end()) {
        px += (*iterw) * (iter->x);   // weighted sum of the control vertices p
        py += (*iterw) * (iter->y);
        qx += (*iterw) * (iterq->x);  // weighted sum of the control vertices q
        qy += (*iterw) * (iterq->y);
        tw += *iterw;                 // total weight
        iter++; iterw++; iterq++;
    }
    pc.x = px / tw; pc.y = py / tw;   // p*
    qc.x = qx / tw; qc.y = qy / tw;   // q*
    iter = P.begin();
    iterw = w.begin();
    iterq = Q.begin();
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++) {
            A[i][j] = 0; B[i][j] = 0; M[i][j] = 0;
        }
    while (iter != P.end()) {
        double p[2] = { iter->x - pc.x, iter->y - pc.y };   // p_i - p*
        double pt[2][1];
        pt[0][0] = iter->x - pc.x;
        pt[1][0] = iter->y - pc.y;
        double q[2] = { iterq->x - qc.x, iterq->y - qc.y }; // q_i - q*
        double T[2][2];
        // A += w_i * (p_i - p*)^T (p_i - p*)
        T[0][0] = pt[0][0] * p[0]; T[0][1] = pt[0][0] * p[1];
        T[1][0] = pt[1][0] * p[0]; T[1][1] = pt[1][0] * p[1];
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                A[i][j] += (*iterw) * T[i][j];
        // B += w_i * (p_i - p*)^T (q_i - q*)
        T[0][0] = pt[0][0] * q[0]; T[0][1] = pt[0][0] * q[1];
        T[1][0] = pt[1][0] * q[0]; T[1][1] = pt[1][0] * q[1];
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                B[i][j] += (*iterw) * T[i][j];
        iter++; iterw++; iterq++;
    }
    // Invert A (2x2). If A is near-singular, fall back to a pure translation.
    double det = A[0][0] * A[1][1] - A[0][1] * A[1][0];
    if (det < 0.0000001) {
        fv.x = t.x + qc.x - pc.x;
        fv.y = t.y + qc.y - pc.y;
        return fv;
    }
    double temp1 = A[1][1] / det, temp2 = -A[0][1] / det;
    double temp3 = -A[1][0] / det, temp4 = A[0][0] / det;
    A[0][0] = temp1; A[0][1] = temp2;
    A[1][0] = temp3; A[1][1] = temp4;
    // M = A^{-1} B
    M[0][0] = A[0][0] * B[0][0] + A[0][1] * B[1][0];
    M[0][1] = A[0][0] * B[0][1] + A[0][1] * B[1][1];
    M[1][0] = A[1][0] * B[0][0] + A[1][1] * B[1][0];
    M[1][1] = A[1][0] * B[0][1] + A[1][1] * B[1][1];
    double v[2] = { t.x - pc.x, t.y - pc.y };  // v - p*
    double r[2][1];
    r[0][0] = v[0] * M[0][0] + v[1] * M[1][0]; // l_v(x): (v - p*) * M
    r[1][0] = v[0] * M[0][1] + v[1] * M[1][1];
    fv.x = r[0][0] + qc.x;  // f(v) = (v - p*) M + q*
    fv.y = r[1][0] + qc.y;
    return fv;
}
```
Example of invoking the method:
```cpp
int i = 0, j = 0;
// Create the deformed output image.
dImage = cvCreateImage(cvSize(2 * pImage->width, 2 * pImage->height),
                       pImage->depth, pImage->nChannels);
cvSet(dImage, cvScalar(0));
MyPoint orig = MLS(MyPoint(ir_x, ir_y));
int orig_x = (int)(orig.x) - (int)(pImage->width / 2);
int orig_y = (int)(orig.y) - (int)(pImage->height / 2);
for (i = 0; i < pImage->height; i++)   // traverse every pixel of the original image
{
    for (j = 0; j < pImage->width; j++)
    {
        CvScalar color;
        double x = j + ir_x;
        double y = i + ir_y;
        // MLS maps the original pixel (x, y) to its target position f(v).
        MyPoint t = MLS(MyPoint(x, y));
        int m = (int)(t.x);
        int n = (int)(t.y);
        m -= orig_x;
        n -= orig_y;
        color = cvGet2D(pImage, i, j);  // fetch the source pixel
        if (0 <= m && dImage->width > m && 0 <= n && dImage->height > n)
        {
            cvSet2D(dImage, n, m, color);
        }
    }
}
```

Image morphing algorithms use either forward mapping or reverse mapping. If we compute the transformed position of every pixel as above, the cost is very high: an image simply has too many pixels to run this function for each one.
Therefore, deformation algorithms generally triangulate the image first:
Then only the vertices of the triangular mesh are warped with the deformation algorithm; the pixels inside each triangle are filled by an affine triangle-to-triangle mapping to produce the warped image. This is not only much faster, it also avoids the respective drawbacks of forward and reverse mapping; for the specific defects of forward and reverse mapping in image warping, consult the related literature.
For the other two cases, the similarity and rigid transformations, you can look up the corresponding formula for the matrix M in the paper and write the implementation yourself.
This article: http://blog.csdn.net/hjimce/article/details/46550001  Author: hjimce  Contact QQ: 1393852684. For more resources, see my blog: http://blog.csdn.net/hjimce. Original article, all rights reserved; please keep this notice when reposting.
References:
1. "Image Deformation Using Moving Least Squares"
2. "3D Deformation Using Moving Least Squares"