Binocular Ranging (Part Two)

Source: Internet
Author: User

Three: Binocular calibration and binocular rectification

Binocular camera calibration must not only obtain the intrinsic parameters of each camera, but also measure, through calibration, the relative pose between the two cameras (that is, the rotation matrix R and the three-dimensional translation vector T of the right camera with respect to the left camera).

Figure 6

To compute the disparity of a target point between the left and right views, we must first match the point to its two corresponding image points in the left and right views. However, searching for corresponding points over the full two-dimensional image is very time-consuming. To reduce the matching search range, we can use the epipolar constraint, which reduces the matching of corresponding points from a two-dimensional search to a one-dimensional search.

Figure 7

The purpose of binocular rectification is to undistort the two images and align them strictly row by row, so that the epipolar lines of the two images lie exactly on the same horizontal lines. Then any point in one image and its corresponding point in the other image necessarily have the same row number, and a one-dimensional search along that row is enough to find the match.

Figure 8

1. On the use of cvStereoCalibrate

If, following the Learning OpenCV routines, you perform binocular calibration directly with cvStereoCalibrate, it is easy to get large image distortion, with severe deformation at the corners. It is better to first calibrate each camera separately with cvCalibrateCamera2(), and then run cvStereoCalibrate for the binocular calibration. The calibrated parameters are then accurate, and the subsequent rectification shows no obvious distortion. The program I am using is based mainly on the Learning OpenCV routine ch12_ex12_3.cpp; its main part is as follows:

    // Monocular calibration is performed first
    cvCalibrateCamera2(&_objectPoints, &_imagePoints1, &_npoints, imageSize,
                       &t_M1, &t_D1, NULL, NULL, CV_CALIB_FIX_K3);
    cvCalibrateCamera2(&_objectPoints, &_imagePoints2, &_npoints, imageSize,
                       &t_M2, &t_D2, NULL, NULL, CV_CALIB_FIX_K3);
    // Binocular calibration; flags left at the default CV_CALIB_FIX_INTRINSIC
    cvStereoCalibrate(&_objectPoints, &_imagePoints1, &_imagePoints2, &_npoints,
                      &t_M1, &t_D1, &t_M2, &t_D2, imageSize,
                      &t_R, &t_T, &t_E, &t_F,
                      cvTermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 100, 1e-5));

Here t_M1 (t_M2) and t_D1 (t_D2) are the intrinsic matrix (3×3) and distortion coefficient vector (1×5) of the left (right) camera obtained by the monocular calibration; t_R and t_T are the rotation matrix (3×3) and translation vector (3×1) of the right camera relative to the left camera; t_E is the essential matrix (3×3), which encodes the relative pose of the two cameras; and t_F is the fundamental matrix (3×3), which encodes both the relative pose of the two cameras and their respective intrinsic parameters.

Figure 9

2. How cvStereoCalibrate computes the essential matrix and the fundamental matrix

First we walk through the construction of the essential matrix and the fundamental matrix, following page 422 of Learning OpenCV, and then look at how OpenCV computes them.

Figure 10

Note: in the original post, the physical meanings of Pl, Pr and ql, qr and the corresponding formulas were wrong; they have been corrected. (2011-04-12)

(1) Essential Matrix

As shown above, given a target point P, take the optical center Ol of the left camera as the origin. The position of point P relative to the optical center Ol is Pl, and its position relative to the optical center Or is Pr. The projection of P on the left camera's imaging plane is pl, and on the right camera's imaging plane is pr. Note that Pl, Pr, pl, pr are all expressed in camera coordinates, with the same dimension as the translation vector T (the corresponding pixel coordinates of pl, pr in the image coordinate system are ql, qr).

Assuming the pose of the right camera relative to the left camera is given by the rotation matrix R and the translation vector T, we have: Pr = R(Pl - T).

Now we look for an expression for the epipolar plane determined by the points P, Ol and Or. Note that a point X lies on the plane through point A with normal vector n exactly when the vector (X - A) is perpendicular to n, that is, (X - A) · n = 0. In the Ol coordinate system, the optical center Or is at position T, so the epipolar plane determined by P, Ol and Or can be written as: (Pl - T)^T · (T × Pl) = 0.

From Pr = R(Pl - T) and R^T = R^(-1) we get: (R^T Pr)^T · (T × Pl) = 0.

On the other hand, a cross product can be expressed as the product of a matrix and a vector. Writing the skew-symmetric matrix of the vector T as S, we have: T × Pl = S Pl.

Figure 11

We then get: Pr^T R S Pl = 0. In this way we obtain the essential matrix: E = RS.

Through the matrix E we know that Pl and Pr satisfy: Pr^T E Pl = 0. Furthermore, from pl = fl·Pl/Zl and pr = fr·Pr/Zr, the projections pl and pr of point P in the two camera coordinate systems satisfy the epipolar constraint: pr^T E pl = 0.

Note that E is rank-deficient (its rank is 2), so pr^T E pl = 0 is in fact the equation of a straight line, namely the epipolar line.

(2) Fundamental Matrix

The matrix E contains no camera intrinsic information and operates in camera coordinates. In practice we are more interested in working in image pixel coordinates, i.e. studying the epipolar line that a pixel in one view induces in the other view, which requires the camera intrinsics to link the camera coordinate system with the image pixel coordinate system. In (1), pl and pr are physical coordinates; the corresponding pixel coordinates are ql and qr. With the camera intrinsic matrix M we have p = M^(-1)q, and therefore pr^T E pl = 0 becomes qr^T (Mr^(-1))^T E Ml^(-1) ql = 0. Here we obtain the fundamental matrix: F = (Mr^(-1))^T E Ml^(-1), with qr^T F ql = 0.

(3) The corresponding computation in OpenCV

From the above analysis, the key to finding the matrices E and F is computing the rotation matrix R and the translation vector T, and most of the cvStereoCalibrate code (lines 1886-2180 of cvcalibration.cpp) is devoted to computing and optimizing R and T. The basic method for computing the initial estimates of R and T is given in lines 1913-1925 of cvcalibration.cpp:

    
       /* Compute initial estimate of pose.
          For each image, compute:
             R(om) is the rotation matrix of om
             om(R) is the rotation vector of R
             R_ref = R(om_right) * R(om_left)'
             T_ref_list = [T_ref_list; T_right - R_ref * T_left]
             om_ref_list = [om_ref_list; om(R_ref)]

          om = median(om_ref_list)
          T = median(T_ref_list)
       */

The detailed computation is complicated and hard to follow, so it is not discussed here. The following is the code that computes the matrices E and F:

    if( matE || matF )
    {
        double* t = T_LR.data.db;
        double tx[] =
        {
            0, -t[2], t[1],
            t[2], 0, -t[0],
            -t[1], t[0], 0
        };
        CvMat Tx = cvMat(3, 3, CV_64F, tx);
        double e[9], f[9];
        CvMat E = cvMat(3, 3, CV_64F, e);
        CvMat F = cvMat(3, 3, CV_64F, f);
        cvMatMul(&Tx, &R_LR, &E);
        if( matE )
            cvConvert(&E, matE);
        if( matF )
        {
            double ik[9];
            CvMat iK = cvMat(3, 3, CV_64F, ik);
            cvInvert(&K[1], &iK);
            cvGEMM(&iK, &E, 1, 0, 0, &E, CV_GEMM_A_T);
            cvInvert(&K[0], &iK);
            cvMatMul(&E, &iK, &F);
            cvConvertScale(&F, matF, fabs(f[8]) > 0 ? 1./f[8] : 1);
        }
    }

3. Why the Tx component of the translation vector T obtained by binocular calibration is negative

"@scyscyao: I am not entirely sure about this either. My personal explanation is that the T vector produced by the binocular calibration points from the right camera to the left camera (hence Tx is negative), whereas in the OpenCV coordinate system the origin is the left camera. Therefore, when the vector is used in rectification, the signs of its three components must be flipped, and the final distance then comes out positive.

Figure 12

But there is another problem here: in Learning OpenCV, the element in the fourth row, third column of the Q matrix is -1/Tx, whereas in practice the actual value is 1/Tx. The conclusion I reached after discussing this with Maxwellsdemon is that the minus sign in the book's expression for Q is meant to cancel the reversed direction of the T vector, but whoever wrote the actual OpenCV code did not include the minus sign."

Figure 13

Scyscyao's analysis makes sense, but I think there is another explanation. As shown above, let the extrinsic parameters of camera C1 (C2) relative to the world coordinate system be the rotation matrix R1 (R2) and translation vector T1 (T2), with subscript 1 denoting the left camera and 2 the right camera. On the horizontal component of the translation vectors we clearly have t1x > t2x. If the left camera C1 is taken as the coordinate origin, then for the rotation matrix R and translation vector T shown above we get Tx < 0, precisely because t1x > t2x.

To cancel the negative Tx, the element Q(4,3) of the matrix Q should carry a minus sign; but it is not added in the cvStereoRectify code, which makes the three-dimensional coordinates computed by cvReprojectImageTo3D come out as the negatives of their actual values.

    if( matQ )
    {
        double q[] =
        {
            1, 0, 0, -cc_new[0].x,
            0, 1, 0, -cc_new[0].y,
            0, 0, 0, fc_new,
            0, 0, 1./_t[idx],
            (idx == 0 ? cc_new[0].x - cc_new[1].x : cc_new[0].y - cc_new[1].y)/_t[idx]
        };
        CvMat Q = cvMat(4, 4, CV_64F, q);
        cvConvert(&Q, matQ);
    }

To avoid the sign inversion above, you can add the following code to flip the value of Q[3][2] after the Q matrix has been computed:

    // When Q is a Mat matrix (OpenCV 2.1 C++ API):
    Q.at<double>(3, 2) = -Q.at<double>(3, 2);

    // When Q is defined as a double array:
    double Q[4][4];
    CvMat t_Q = cvMat(4, 4, CV_64F, Q);
    cvStereoRectify(...);
    Q[3][2] = -Q[3][2];

4. The principle of binocular rectification and the use of cvStereoRectify

Figure 14

As shown in Figure 14, binocular rectification uses the monocular calibration data (focal length, principal point, distortion coefficients) obtained from camera calibration, together with the relative pose of the two cameras (rotation matrix and translation vector), to undistort and row-align the left and right views, so that the principal points of the left and right views coincide (when the CV_CALIB_ZERO_DISPARITY flag is set), the optical axes of the two cameras are parallel, the imaging planes are coplanar, and the epipolar lines are row-aligned. Before OpenCV 2.1, the main job of cvStereoRectify was to complete the above operations; the rectified result is shown in panel (c). You can see that the corner regions of the rectified left and right views are irregular, and this affects the subsequent stereo matching used to obtain the disparity: these corner regions also take part in the matching, but their disparity values are useless and generally rather large, which is harmful in applications such as 3D reconstruction and robot obstacle-avoidance navigation.

Therefore, OpenCV 2.1 added four new parameters to cvStereoRectify to adjust how the rectified images are displayed: double alpha, CvSize newImgSize, CvRect* roi1, CvRect* roi2. Figures 15-17 briefly illustrate the effects of these four parameters:

(1) newImgSize: the resolution of the remapped image after rectification. If (0, 0) is passed, it is the same size as the original image. For images with relatively large distortion coefficients, you can set newImgSize larger to preserve image detail.

(2) alpha: the image cropping factor; its range is -1, or 0 to 1. When the value is 0, OpenCV scales and translates the rectified image so that the remapped image shows only valid pixels (i.e. the irregular corner regions are removed), as shown in Figure 17; this suits applications such as robot obstacle-avoidance navigation. When alpha is 1, the remapped image shows all the pixels of the original image, which suits high-end cameras with very small distortion coefficients. When alpha is between 0 and 1, OpenCV keeps the corner-region pixels of the original image in the corresponding proportion. When alpha is -1, OpenCV performs its default scaling and translation; the effect is shown in Figure 16.

(3) roi1, roi2: mark the rectangular regions of the remapped images that contain valid pixels. The corresponding code is as follows:

    if( roi1 )
    {
        *roi1 = cv::Rect(cvCeil((inner1.x - cx1_0)*s + cx1),
                     cvCeil((inner1.y - cy1_0)*s + cy1),
                     cvFloor(inner1.width*s), cvFloor(inner1.height*s))
            & cv::Rect(0, 0, newImgSize.width, newImgSize.height);
    }

    if( roi2 )
    {
        *roi2 = cv::Rect(cvCeil((inner2.x - cx2_0)*s + cx2),
                     cvCeil((inner2.y - cy2_0)*s + cy2),
                     cvFloor(inner2.width*s), cvFloor(inner2.height*s))
            & cv::Rect(0, 0, newImgSize.width, newImgSize.height);
    }
