Get rotation and translation from the homography matrix


Recently I have been working on a robot navigation project, so I went through the OpenCV material on recovering rotation and translation from a homography; I had read about it before but never fully understood it.

The code is as follows:

 

#include <opencv/cv.h>

int calcRTfromHomo(CvMat *H, double t[3], double rodrot[3])
{
    double r[9];
    CvMat _R = cvMat(3, 3, CV_64F, r);          // rotation matrix
    double intrinsic[9] = {1, 0, 0, 0, 1, 0, 0, 0, 1};
    CvMat _M = cvMat(3, 3, CV_64F, intrinsic);  // intrinsic matrix; identity here, reserved for future use
    double ones[3] = {1, 1, 1};                 // must hold 3 elements for the 3x1 header below
    CvMat _ones = cvMat(3, 1, CV_64F, ones);
    CvMat _rodrot = cvMat(3, 1, CV_64F, rodrot);
    CvMat _T = cvMat(3, 1, CV_64F, t);          // translation vector
    // workspace for the SVD
    CvMat *U = cvCreateMat(3, 3, CV_64F);
    CvMat *W = cvCreateMat(3, 3, CV_64F);
    CvMat *V = cvCreateMat(3, 3, CV_64F);
    CvMat *invM = cvCreateMat(3, 3, CV_64F);
    // column headers into the homography and rotation matrices
    CvMat h1, h2, h3, r1, r2, r3;
    cvGetCol(H, &h1, 0);
    cvGetCol(H, &h2, 1);
    cvGetCol(H, &h3, 2);
    cvGetCol(&_R, &r1, 0);
    cvGetCol(&_R, &r2, 1);
    cvGetCol(&_R, &r3, 2);

    // strip the intrinsics: [r1 r2 t] ~ M^-1 * [h1 h2 h3]
    cvInvert(&_M, invM);
    cvMatMul(invM, &h1, &r1);
    cvMatMul(invM, &h2, &r2);
    cvMatMul(invM, &h3, &_T);

    // scale all three columns by the norm of the FIRST column;
    // take the norm before normalizing r1 (afterwards cvNorm(r1) == 1)
    double lambda = cvNorm(&r1);
    cvNormalize(&r1, &r1);
    cvMul(&r2, &_ones, &r2, 1.0 / lambda);
    cvMul(&_T, &_ones, &_T, 1.0 / lambda);
    cvCrossProduct(&r1, &r2, &r3);              // r3 = r1 x r2

    // snap R to the nearest true rotation: R = U * V^T from the SVD of R
    cvSVD(&_R, W, U, V, CV_SVD_V_T);
    cvMatMul(U, V, &_R);
    cvRodrigues2(&_R, &_rodrot, NULL);          // rotation matrix -> axis-angle vector

    cvReleaseMat(&U); cvReleaseMat(&W); cvReleaseMat(&V); cvReleaseMat(&invM);
    return 1;
}
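A minimal usage sketch (my own illustration, not from the original post; the identity values in hData are only placeholders for a real homography):

double hData[9] = {1, 0, 0, 0, 1, 0, 0, 0, 1};  // placeholder; fill in a real homography
CvMat H = cvMat(3, 3, CV_64F, hData);
double t[3], rodrot[3];
calcRTfromHomo(&H, t, rodrot);
printf("t   = (%.3f, %.3f, %.3f)\n", t[0], t[1], t[2]);               // needs <stdio.h>
printf("rot = (%.3f, %.3f, %.3f)\n", rodrot[0], rodrot[1], rodrot[2]);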

 

The principle behind the code is from [1]. I hit a small problem during implementation: when normalizing the three columns of the rotation matrix, they must all be divided by the norm of the first column. At first I mistakenly divided each column by its own norm and could not find the error; I eventually fixed it by referring to the code in [2].
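Written out (a standard derivation along the lines of [1], noted here for reference): for points on the world plane Z = 0, the homography factors through the intrinsic matrix M, so

$$H = s\,M\,[\,r_1\ \ r_2\ \ t\,] \;\Longrightarrow\; [\,r_1\ \ r_2\ \ t\,] = \tfrac{1}{s}\,M^{-1}\,[\,h_1\ \ h_2\ \ h_3\,],\qquad s = \lVert M^{-1}h_1\rVert.$$

Since r1 must be a unit vector, the common scale s is the norm of the first column, which is why r1, r2, and t are all divided by that one value; r3 = r1 x r2 then completes the rotation matrix.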

 

The function has two outputs: the translation vector, and the rotation vector produced by the Rodrigues transformation. The direction of that vector gives the rotation axis and its magnitude gives the rotation angle; for the exact mathematical form, see [3].
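In symbols (the standard Rodrigues formula documented in [3]): writing the rotation vector as $\rho = \theta k$, with unit axis $k$ and angle $\theta = \lVert\rho\rVert$, the corresponding rotation matrix is

$$R = I + \sin\theta\,[k]_\times + (1-\cos\theta)\,[k]_\times^{2},\qquad [k]_\times = \begin{pmatrix} 0 & -k_3 & k_2 \\ k_3 & 0 & -k_1 \\ -k_2 & k_1 & 0 \end{pmatrix}.$$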

 

The homography matrix that the function takes as input can be obtained with cvFindHomography. For how to compute a homography from matched feature points, see the following code from the find_obj sample in OpenCV 2.0:

 

if (!cvFindHomography(&_pt1, &_pt2, &_h, CV_RANSAC, 5))
    return 0;
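For context, the sample wraps the matched point arrays in 1 x n two-channel CvMat headers before this call; roughly (following the OpenCV 2.0 find_obj sample, variable names approximate):

// pt1, pt2: arrays of n matched CvPoint2D32f points from the SURF matching step
CvMat _pt1 = cvMat(1, n, CV_32FC2, pt1);
CvMat _pt2 = cvMat(1, n, CV_32FC2, pt2);
double h[9];
CvMat _h = cvMat(3, 3, CV_64F, h);
if (!cvFindHomography(&_pt1, &_pt2, &_h, CV_RANSAC, 5))
    return 0;
// _h can then be passed straight to calcRTfromHomo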

 

The following is an application of the above function, still a work in progress. (Sigh, the CSDN blog does not support embedded video... too bad. I really want to move my blog elsewhere...)

You can watch it here: http://www.youtube.com/watch?v=yjrtfo0ftdq

The Youku version (for viewers behind the wall) is still uploading... uploading is slow...

 

 

The video shows the processing results from a mobile robot, driven by remote control, using a Bumblebee2 stereo camera.

The iRobot platform is shown below (low-angle shooting is awesome). It carries many sensors, but I only use the Bumblebee2:

 

 

 

Currently, there are two problems:

 

1. Because the SURF points have to be recomputed for every frame, the computed homography matrix is not always accurate. When it is overlaid on the video, the mapping jitters (see the mapping frame in the video), and the jitter is especially obvious when the number of feature points is small (you can see that when the robot moves from the room into the corridor, the mapped frame jitters badly because the number of SURF points in the corridor drops sharply). Optical flow could reduce this jitter; a sketch of the idea follows this paragraph. That said, it should not be a serious problem for the current application, and I personally think this kind of continuity should be handled at the higher level of scene understanding rather than by polishing details here.
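A minimal sketch of that optical-flow idea (my own sketch, not something implemented in this project; array sizes and names are illustrative): track the previous frame's points with pyramidal Lucas-Kanade instead of re-detecting SURF every frame, so the point set, and hence the homography, evolves smoothly.

// prevGray, currGray: consecutive grayscale frames (IplImage*)
// prevPts: nPts points carried over from the previous frame
CvPoint2D32f currPts[500];
char status[500];
float err[500];
cvCalcOpticalFlowPyrLK(prevGray, currGray, NULL, NULL,
                       prevPts, currPts, nPts,
                       cvSize(21, 21), 3, status, err,
                       cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 30, 0.01), 0);
// keep only the pairs with status[i] != 0 and feed them to cvFindHomography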

 

 

2. The other problem concerns the rotation vector. Fifty consecutive scenes can be extracted from this video (currently the program starts a new scene whenever the number of SURF matches against the current scene drops below 1/8 of that scene's SURF points; the threshold is adjustable). The rotation and translation between consecutive scenes are shown below:

 

It can be seen that the transitions between scenes are basically pure translation (the Z component of the t vector is 1, so I omitted it), and the amount of rotation is very small (at most only about 0.2 radians). This is consistent with how the robot actually moved through the scenes.

However, looking at the rotation vector, the computed values suggest that all of the rotation is about the third component, i.e. the Z axis, whereas in reality the rotation was mainly about the Y axis.
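For reading these numbers, here is how I split the rotation vector into an angle and a unit axis (a trivial sketch; needs <math.h> and <stdio.h>):

double theta = sqrt(rodrot[0]*rodrot[0] + rodrot[1]*rodrot[1] + rodrot[2]*rodrot[2]);
if (theta > 1e-9)   // guard against a near-zero rotation
    printf("%.3f rad about axis (%.2f, %.2f, %.2f)\n",
           theta, rodrot[0]/theta, rodrot[1]/theta, rodrot[2]/theta);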

Of course, based on this result we can still roughly draw a map of the robot's path. The principle behind this puzzles me a little; I suspect it comes from my incomplete understanding of affine versus orthogonal (Euclidean) space. I will have to ask someone about it.

 

*******************************************************************

 

I just asked my teacher... It is completely wrong... A three-dimensional scene is not a plane, so its point correspondences cannot be modeled by a homography; the fundamental matrix is what is needed here. This is also the cause of the homography jitter and of the errors in the recovered rotation and translation.
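For reference, the fundamental-matrix route would start roughly like this (a sketch only; I have not reworked the pipeline around it):

// same 1 x n point matrices _pt1, _pt2 as above
double f[9];
CvMat _F = cvMat(3, 3, CV_64F, f);
if (!cvFindFundamentalMat(&_pt1, &_pt2, &_F, CV_FM_RANSAC, 3.0, 0.99, NULL))
    return 0;
// with known intrinsics M, the essential matrix E = M^T F M can then be
// decomposed by SVD into the rotation and translation between the two views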

 

When the foundation is not solid, the ground shakes. Please treat this post as a cautionary example.

 

*******************************************************************

Supplement 20110525:

It has since been verified that computing the homography with a loose RANSAC threshold (for example, allowing a reprojection error of up to 10 pixels) can roughly filter out the outlier matches, and it does so better than RANSAC inside cvFindFundamentalMat. The reason is that a homography maps a point in one view to a point in the other, whereas the fundamental matrix only constrains a point in one view to lie on a line (the epipolar line) in the other.
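Concretely, the filtering looks roughly like this (a sketch; the optional mask argument of cvFindHomography marks the RANSAC inliers):

CvMat *mask = cvCreateMat(1, n, CV_8UC1);
double h[9];
CvMat _h = cvMat(3, 3, CV_64F, h);
// loose threshold: allow up to 10 pixels of reprojection error
if (cvFindHomography(&_pt1, &_pt2, &_h, CV_RANSAC, 10, mask)) {
    int inliers = 0;
    for (int i = 0; i < n; i++)
        if (mask->data.ptr[i])
            inliers++;   // match i survived the loose homography test; keep it
    printf("kept %d of %d matches\n", inliers, n);
}
cvReleaseMat(&mask);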

 

_________________________________________________________________

 

References:

[1] Gary Bradski and Adrian Kaehler, Learning OpenCV, Chapter 11, pp. 384-391.
[2] https://gist.github.com/740979
[3] http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html#rodrigues2
