OpenCV for iOS Study Notes (6): Marker Detection 3


Refining the marker corner positions

// Adjust the marker's corner order based on the detected camera rotation.
// marker: the captured marker; nRotations: number of 90-degree rotations detected
std::rotate(marker.points.begin(), marker.points.begin() + 4 - nRotations, marker.points.end());


After the markers have been captured and filtered based on their encoding, we refine their corner locations. This step helps us estimate the marker's 3D pose below.

std::vector<cv::Point2f> preciseCorners(4 * goodMarkers.size());

// Gather all marker corner points
for (size_t i = 0; i < goodMarkers.size(); i++)
{
    const Marker& marker = goodMarkers[i];
    for (int c = 0; c < 4; c++)
    {
        preciseCorners[i * 4 + c] = marker.points[c];
    }
}

// Termination criteria:
// MAX_ITER        - stop after the maximum number of iterations
// EPS             - stop when the required accuracy is reached
// MAX_ITER | EPS  - stop when either condition is met first
cv::TermCriteria termCriteria = cv::TermCriteria(cv::TermCriteria::MAX_ITER | cv::TermCriteria::EPS, 30, 0.01);

// grayscale        - input image
// preciseCorners   - input corners; the refined corners are written back in place
// cv::Size(5, 5)   - half of the side length of the search window
// cv::Size(-1, -1) - no zero zone inside the search window
cv::cornerSubPix(grayscale, preciseCorners, cv::Size(5, 5), cv::Size(-1, -1), termCriteria);

// Copy the refined corners back to the markers
for (size_t i = 0; i < goodMarkers.size(); i++)
{
    Marker& marker = goodMarkers[i];
    for (int c = 0; c < 4; c++)
    {
        marker.points[c] = preciseCorners[i * 4 + c];
    }
}

The image we get should look like this:

Note that cornerSubPix is not used in the early stages of marker detection because of its cost: calling it on a large number of vertices would take considerable processing time, so we apply it only to the markers that passed validation.

Estimating the marker's 3D pose

In general, augmented reality technology integrates virtual objects seamlessly into the real world. To render an object in 3D space, we must know its pose relative to the camera that is capturing the frames. This pose is represented by a Euclidean transformation in the Cartesian coordinate system.

A point on the marker in 3D space is related to its screen-space projection as follows:

P = A * [R|T] * M

Where:

M is the 3D point being projected.

[R|T] is the 3x4 Euclidean transformation matrix (rotation and translation).

A is the camera matrix, i.e. the intrinsic parameters.

P is the projection of M in screen space.

After marker detection we have the four corner points of each marker, i.e. its projection into 2D screen space. In the next step we need the camera matrix A and the 3D points M, and from them we compute the Euclidean transformation [R|T].
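As a concrete illustration of the projection equation, a minimal sketch that evaluates P = A * [R|T] * M for a single point with plain float arrays (the numeric values in the usage note below are made-up examples, not values from the article):

```cpp
#include <cassert>
#include <cmath>

// Evaluate P = A * [R|T] * M for one point.
// A  : 3x3 intrinsic camera matrix
// RT : 3x4 Euclidean transformation (rotation and translation)
// M  : 3D point; uv receives the resulting 2D pixel position
void projectPoint(const float A[3][3], const float RT[3][4],
                  const float M[3], float uv[2])
{
    const float Mh[4] = { M[0], M[1], M[2], 1.0f };  // homogeneous 3D point

    // Camera-space point: [R|T] * M
    float cam[3] = { 0.0f, 0.0f, 0.0f };
    for (int r = 0; r < 3; r++)
        for (int c = 0; c < 4; c++)
            cam[r] += RT[r][c] * Mh[c];

    // Image-space point: A * cam
    float img[3] = { 0.0f, 0.0f, 0.0f };
    for (int r = 0; r < 3; r++)
        for (int c = 0; c < 3; c++)
            img[r] += A[r][c] * cam[c];

    // Perspective divide by the depth component
    uv[0] = img[0] / img[2];
    uv[1] = img[1] / img[2];
}
```

For example, with fx = fy = 500, principal point (320, 240), identity rotation and translation T = (0, 0, 2), the marker corner (0.5, -0.5, 0) projects to pixel (445, 115).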

Camera Calibration and 3D reconstruction

Camera Calibration

Each camera has unique parameters, such as its focal length, principal point, and lens distortion model.

The process of finding a camera's internal parameters is called camera calibration. Calibration describes the perspective transformation and lens distortion of the output image, so it is critical for augmented reality applications: to achieve a convincing user experience, virtual objects should be rendered with the same perspective projection.

To calibrate the camera, we need a special pattern, such as a chessboard or black circles on a white background. A well-known approach is:

Camera calibration using a chessboard
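The chessboard approach can be sketched with the standard OpenCV calib3d API as follows. The image file names, the number of views, and the board size are illustrative assumptions, not values from the article:

```cpp
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main()
{
    const cv::Size boardSize(9, 6);  // inner corners per row and column
    std::vector<std::vector<cv::Point2f>> imagePoints;
    std::vector<std::vector<cv::Point3f>> objectPoints;

    // Known 3D corner positions on the board (Z = 0 plane), unit squares
    std::vector<cv::Point3f> corners3d;
    for (int y = 0; y < boardSize.height; y++)
        for (int x = 0; x < boardSize.width; x++)
            corners3d.push_back(cv::Point3f((float)x, (float)y, 0.0f));

    cv::Size imageSize;
    for (int i = 0; i < 10; i++)  // ten calibration views (hypothetical files)
    {
        char name[32];
        std::snprintf(name, sizeof(name), "chess%02d.jpg", i);
        cv::Mat gray = cv::imread(name, cv::IMREAD_GRAYSCALE);
        if (gray.empty()) continue;
        imageSize = gray.size();

        std::vector<cv::Point2f> corners;
        if (cv::findChessboardCorners(gray, boardSize, corners))
        {
            // Refine to sub-pixel accuracy, as in the marker pipeline above
            cv::cornerSubPix(gray, corners, cv::Size(5, 5), cv::Size(-1, -1),
                cv::TermCriteria(cv::TermCriteria::MAX_ITER | cv::TermCriteria::EPS, 30, 0.01));
            imagePoints.push_back(corners);
            objectPoints.push_back(corners3d);
        }
    }

    // Recover the intrinsic matrix A and the distortion coefficients
    cv::Mat cameraMatrix, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;
    cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                        cameraMatrix, distCoeffs, rvecs, tvecs);
    return 0;
}
```

The resulting cameraMatrix and distCoeffs are exactly the values a CameraCalibration instance would store.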

To demonstrate camera calibration, we create a CameraCalibration class:

/**
 * A camera calibration class that stores the intrinsic
 * camera parameters and the distortion vector.
 */
class CameraCalibration
{
public:
    CameraCalibration();
    CameraCalibration(float fx, float fy, float cx, float cy);
    CameraCalibration(float fx, float fy, float cx, float cy, float distorsionCoeff[4]);

    void getMatrix34(float cparam[3][4]) const;

    const Matrix33& getIntrinsic() const;
    const Vector4&  getDistorsion() const;

private:
    Matrix33 m_intrinsic;
    Vector4  m_distorsion;
};

Implementation:

(Omitted in the original.)
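Since the original leaves the implementation out, here is a plausible sketch. The Matrix33 and Vector4 definitions and the getMatrix34 semantics are assumptions made for self-containment; the article's own types may differ:

```cpp
#include <cstring>

// Assumed minimal definitions; the article's Matrix33/Vector4 may differ.
struct Matrix33 { float data[3][3]; };
struct Vector4  { float data[4]; };

class CameraCalibration
{
public:
    CameraCalibration() : CameraCalibration(0.0f, 0.0f, 0.0f, 0.0f) {}

    CameraCalibration(float fx, float fy, float cx, float cy)
    {
        std::memset(&m_intrinsic, 0, sizeof(m_intrinsic));
        std::memset(&m_distorsion, 0, sizeof(m_distorsion));
        m_intrinsic.data[0][0] = fx;    // focal lengths on the diagonal
        m_intrinsic.data[1][1] = fy;
        m_intrinsic.data[0][2] = cx;    // principal point in the last column
        m_intrinsic.data[1][2] = cy;
        m_intrinsic.data[2][2] = 1.0f;
    }

    CameraCalibration(float fx, float fy, float cx, float cy, float distorsionCoeff[4])
        : CameraCalibration(fx, fy, cx, cy)
    {
        for (int i = 0; i < 4; i++)
            m_distorsion.data[i] = distorsionCoeff[i];
    }

    // Write the intrinsics into the left 3x3 block of a 3x4 matrix,
    // with a zero last column (assumed semantics).
    void getMatrix34(float cparam[3][4]) const
    {
        for (int r = 0; r < 3; r++)
        {
            for (int c = 0; c < 3; c++)
                cparam[r][c] = m_intrinsic.data[r][c];
            cparam[r][3] = 0.0f;
        }
    }

    const Matrix33& getIntrinsic()  const { return m_intrinsic; }
    const Vector4&  getDistorsion() const { return m_distorsion; }

private:
    Matrix33 m_intrinsic;
    Vector4  m_distorsion;
};
```

A caller would construct it from the calibration results, e.g. CameraCalibration calib(fx, fy, cx, cy), and pass getIntrinsic() into the pose-estimation step.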

Link:

http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
