Reposted from http://www.ahcit.com/lanmuyd.asp?Id=2677. The epipolar constraints used in camera calibration and stereo matching are described clearly.
A 3D Reconstruction Solution Based on OpenCV
(School of Electrical and Information Engineering, Shaanxi University of Science and Technology, Xi'an 710001, China)
Abstract: This paper takes computer vision 3D reconstruction technology as its research object and analyzes the 3D reconstruction model in the open computer vision function library OpenCV. Through six steps, and in particular through the use of epipolar constraints in camera calibration and stereo matching, a 3D reconstruction algorithm based on OpenCV is presented. The algorithm makes full use of the functions in the OpenCV library and improves computing accuracy and efficiency. It has good cross-platform portability and can meet the needs of various computer vision systems.
Keywords: computer vision; 3D reconstruction; OpenCV; camera calibration; epipolar constraint
1 Introduction
3D reconstruction is a hot and difficult topic in cutting-edge fields such as computer vision, artificial intelligence, and virtual reality, and it is one of the major challenges in both basic and applied research. Image-based 3D reconstruction is an important branch of image processing. As the basis of virtual reality and scientific visualization, it is widely used in detection and observation. A complete 3D reconstruction system can be divided into six parts: image acquisition, camera calibration, feature extraction, stereo matching, depth determination, and post-processing [1][3]. Precise calibration of the camera's internal and external parameters and stereo matching are the most important and difficult problems in 3D reconstruction.
The Open Source Computer Vision Library (OpenCV) was developed by Intel's Russian research laboratory. It is a free library composed of C functions and C++ classes that implements many common image processing and computer vision algorithms [2]. OpenCV is compatible with Intel's Image Processing Library (IPL): IPL handles low-level processing of digital images, while OpenCV is mainly used for higher-level processing such as feature detection and tracking, motion analysis, object segmentation and recognition, and 3D reconstruction. OpenCV's source code is completely open, concise, and efficient, and most of its functions have been optimized at the assembly level to take full advantage of the design of Intel processor chips.
On MMX, Pentium, Pentium III, and Pentium IV processors in particular, OpenCV code executes very efficiently. It has therefore been widely used in image processing in recent years and has become a popular image processing package. The camera calibration module in OpenCV provides users with a good interface and supports both Windows and Linux, which improves development efficiency, offers fast execution, and provides good cross-platform portability; it can therefore be applied well in engineering practice.
2 Basic Principles of 3D Reconstruction
2.1 Image Acquisition
The acquisition of stereoscopic images is the foundation of stereoscopic vision. There are many ways to obtain images, depending on the application scenario and purpose; the effects of viewpoint differences, lighting conditions, camera performance, and scene characteristics must also be considered to facilitate three-dimensional computation.
2.2 Camera Calibration [4]
Camera calibration establishes an imaging model, determines the camera's position and attribute parameters, and determines the correspondence between object points in the spatial coordinate system and their image points. Both cameras must be calibrated. If the cameras are fixed, only one calibration is required when three-dimensional information is derived from 2-D image coordinates. Camera calibration methods fall into two categories: the first directly estimates parameters such as the camera position, optical-axis direction, and focal length; the second determines, by least-squares fitting, the transformation matrix that maps 3D spatial points to 2D image points. An effective camera model not only accurately restores the 3D information of a spatial scene but also helps to solve the stereo matching problem.
2.3 Feature Extraction
The key to determining 3-D information is to establish, from the parallax between viewpoints, the correspondence between the same scene point in different images. One way to solve this problem is to select appropriate image features for matching. Features are collections of pixels or their abstract expressions. Common matching features are point features, line features, and region features. Large-scale features are rich in information and few in number, which makes fast matching easy, but their extraction and description are relatively complex and their positioning accuracy is poor; small-scale features have high positioning accuracy and simple descriptions, but they are numerous and each carries little information, so strong constraints and matching strategies are required. Reasonable selection of matching features is of great significance for stereo matching; the various factors should be weighed according to the characteristics and requirements of the application. Generally, for scenes containing many irregular shapes and large height variations, point features are more suitable, because line segments and regions are difficult to extract there and introduce errors; for scenes with regular structure, where line-segment and region features are easy to extract and describe with small error, line-segment features should be extracted to achieve fast matching.
2.4 Stereo Matching [5]
Based on the selected features, stereo matching establishes the correspondence between features, maps the same spatial point to its image points in different images, and obtains the corresponding parallax (disparity) image. Stereo matching is the most important and difficult problem in binocular vision.
When a three-dimensional scene is projected into two-dimensional images, the images of the same scene from different viewpoints differ greatly, and many factors in the scene, such as lighting conditions, the geometric shapes and physical characteristics of objects, noise and distortion, and camera characteristics, are all folded into single gray values. It is therefore very difficult to match images containing so many unfavorable factors accurately and without ambiguity.
There are two types of stereo matching methods: gray-scale correlation and feature matching. Gray-scale correlation matches pixel gray levels directly. Its advantage is that the matching results are not limited by feature detection accuracy and density, so high positioning accuracy and a dense parallax surface can be obtained; its disadvantage is that it depends on the gray-scale statistics of the image and is sensitive to the surface structure and illumination of the scene, so it has difficulty where the scene surface lacks sufficient texture detail or the imaging distortion is large (for example, when the baseline is too long). The advantage of feature-based matching is that features derived from the intensity image are used as the matching elements, so it is relatively stable under changing lighting; its disadvantages are that feature extraction requires extra computation and that, because features are discrete, a dense parallax field cannot be obtained directly after matching.
A matching method must solve the following problems: select the correct matching features, find the essential attributes of those features, and establish a stable algorithm that correctly matches them.
2.5 Determining Depth Information
After the parallax image is obtained through stereo matching, the depth image can be determined and the scene's 3-D information restored.
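The paper does not give the triangulation formula explicitly. For the standard rectified (parallel-axis) binocular geometry, depth follows Z = f·B/d, with focal length f, baseline B, and disparity d. A minimal Python sketch, with hypothetical numbers rather than values from the paper's experiment:

```python
# Depth from disparity for a rectified (parallel-axis) binocular rig.
# Z = f * B / d: depth is proportional to focal length f and baseline B,
# and inversely proportional to disparity d. The values below are
# illustrative only, not taken from the paper's experiment.

def depth_from_disparity(f_pixels: float, baseline_m: float, disparity_pixels: float) -> float:
    """Return depth in meters for one matched point pair."""
    if disparity_pixels <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return f_pixels * baseline_m / disparity_pixels

# A point with 40 px disparity, f = 800 px, baseline = 0.12 m:
z = depth_from_disparity(800.0, 0.12, 40.0)   # -> 2.4 m
```

This relation also explains the accuracy trade-off discussed next: for a fixed disparity error, a longer baseline B gives a smaller relative depth error, but larger image differences.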
The main factors affecting distance measurement accuracy are camera calibration error, digital quantization effects, and the positioning accuracy of feature detection and matching. In general, distance measurement accuracy is proportional to the matching positioning accuracy and inversely proportional to the camera baseline length. Increasing the baseline length improves depth measurement accuracy, but it also increases the differences between the images and thus the difficulty of matching. To design a precise stereoscopic vision system, all aspects must therefore be considered together so that every link achieves high accuracy.
2.6 Post-Processing [6]
Post-processing includes depth interpolation, error correction, and precision improvement. The ultimate goal of stereoscopic vision is to restore the complete information of the visible surface of a scene, but no matter which matching method is used, the parallax of all image points cannot be recovered. For a complete stereoscopic vision system, final surface interpolation and reconstruction are therefore required.
3 OpenCV-Based 3D Reconstruction
The calibration method used in OpenCV [2] sits between the traditional calibration methods and self-calibration; it was proposed by Zhang Zhengyou [3]. It does not require knowledge of the camera's specific motion, which makes it more flexible than traditional calibration; at the same time it still requires a specific calibration object and the coordinates of a group of known feature points, which makes it less flexible than self-calibration. It takes images of the calibration object at no fewer than three different positions and computes all internal and external parameters of the camera.
Because it is more flexible than traditional calibration and achieves good calibration accuracy, it was adopted by OpenCV. Three coordinate systems are used in the calibration process of this model: the image coordinate system, the camera coordinate system, and the world coordinate system. Through the transformations between these coordinate systems, a point in the image coordinate system can be related to a point in the world coordinate system [7][8]. The printed formula was lost in this copy; in Zhang's model it has the standard form

    s·m̃ = A·[R | t]·M̃

where m̃ = [u, v, 1]^T is an image point in homogeneous coordinates, M̃ = [X, Y, Z, 1]^T is the corresponding world point, and s is a scale factor. Matrix A contains all of the camera's internal parameters, so A is called the internal parameter matrix of the camera. [R | t] is the external parameter matrix of the model, where R is the rotation matrix and t is the translation vector.
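As a concrete illustration of the pinhole projection s·m̃ = A[R | t]M̃ used by this calibration model, the following numpy sketch projects one world point; all parameter values here are hypothetical, not the paper's calibration results:

```python
import numpy as np

# Pinhole projection s*m~ = A [R|t] M~ with hypothetical parameters.
# A: internal parameter matrix (focal lengths fx, fy and principal point cx, cy).
A = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# External parameters: identity rotation, world origin 2 m in front of the camera.
R = np.eye(3)
t = np.array([[0.0], [0.0], [2.0]])

M = np.array([[0.1], [0.05], [0.0], [1.0]])   # world point, homogeneous coordinates

P = A @ np.hstack([R, t])      # 3x4 projection matrix
m = P @ M                      # homogeneous image point, s*[u, v, 1]^T
u, v = (m[:2] / m[2]).ravel()  # divide out the scale factor s
print(u, v)                    # 800*0.1/2 + 320 = 360,  800*0.05/2 + 240 = 260
```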
Camera calibration based on OpenCV uses a common checkerboard calibration template. First, the cvFindChessboardCorners() function roughly extracts the corner points of the checkerboard; then the cvFindCornerSubPix() function refines the corner coordinates to sub-pixel accuracy; finally, the coordinate values are passed to the cvCalibrateCamera2() function to obtain the internal and external parameter values of the camera (the effect is shown in Figure 1). Figure 1: extracted and displayed corner points (the chessboard image is taken from OpenCV).
OpenCV provides several operators for edge detection, such as Sobel, Laplace, and Canny. Here, edge detection and feature extraction are performed with the Canny operator, i.e., the cvCanny() function (Figure 2). Figure 2: comparison after Canny processing.
The most difficult and important part of 3D reconstruction is the stereo matching step. In OpenCV, the epipolar constraint method based on feature matching is chosen [9].
Assume a point P in space whose projections on the imaging planes of the two cameras are P1 and P2, as shown in Figure 3, where C1 and C2 are the optical centers of the two cameras, i.e., the origins of the two camera coordinate systems. In epipolar geometry, the line C1C2 is called the baseline. The intersections E1 and E2 of the baseline with the two imaging planes are the epipoles of the two cameras; they are the projections of the optical centers C1 and C2 onto the opposite camera's imaging plane. The plane defined by P, C1, and C2 is called the epipolar plane π. The intersections L1 and L2 of π with the two imaging planes are called the epipolar lines: L1 is the epipolar line corresponding to P2, L2 is the epipolar line corresponding to P1, and L1 and L2 correspond to each other (Figure 3).
If we take another point P' on the epipolar plane π, its projections on the imaging planes of the two cameras are P1 and P2', where P2 and P2' both lie on the epipolar line L2. This is the epipolar constraint: for a point P1, its matching point must lie on the corresponding epipolar line. The search space can therefore be compressed to a one-dimensional line, the epipolar line. In OpenCV, the cvFindFundamentalMat() function is first used to obtain the fundamental matrix of the image pair, and the resulting matrix is then passed to the cvComputeCorrespondEpilines() function to obtain, for a point in one image, the corresponding epipolar line in the other image.
After the epipolar line is obtained, gray-level similarity matching is performed on the pixels along the epipolar line, so that the matching point of a given point can easily be found in the corresponding image.
4 Experiment Results
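The gray-level similarity search along the epipolar line can be sketched as follows. For simplicity the sketch assumes rectified images, so the epipolar line is a scanline, and uses a sum-of-squared-differences (SSD) window; the data, window size, and search range are synthetic, not from the paper's experiment:

```python
import numpy as np

# One scanline of a synthetic left image, and a right scanline that is the
# same signal shifted by a known disparity of 7 pixels.
rng = np.random.default_rng(1)
left = rng.uniform(0, 255, (1, 64))
true_shift = 7
right = np.roll(left, -true_shift, axis=1)

def best_disparity(left, right, x, win=5, max_d=15):
    """Return the disparity minimizing windowed SSD along the scanline."""
    patch = left[0, x:x + win]
    ssd = [np.sum((patch - right[0, x - d:x - d + win]) ** 2)
           for d in range(max_d + 1)]
    return int(np.argmin(ssd))

print(best_disparity(left, right, 30))   # recovers the known shift, 7
```

Restricting the SSD search to the epipolar line (here, the scanline) is exactly the one-dimensional search-space reduction that the epipolar constraint provides.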
Based on the above principles and OpenCV functions, a complete 3D reconstruction system was developed with VC 6.0. Through the six steps above, the image of the object is finally restored. The program has been tested thoroughly and runs stably. When calibrating a camera, note that the more images are used (at least three), the more accurate the internal and external parameters are; in addition, the optical axes of any two of the images must not be parallel. Figure 4: the matching process (a pair of matched points is marked in the figure); Figure 5: the left and right images used in the experiment; Figure 6: contour map extracted from the images; Figure 7: point reconstruction (rendered with OpenGL).
5 Conclusion and Outlook
Stereo vision, as an important branch of computer vision, has always been one of the key points and hot spots of computer vision research. It directly simulates the way human vision processes a scene and can flexibly measure the three-dimensional information of scenery under various conditions. It is of great significance both in visual physiology and in engineering applications. 3D reconstruction technology has great advantages in obtaining the depth information of objects from their two-dimensional images.
The 3D reconstruction system developed with OpenCV has the advantages of simple computation, accurate results, high operating efficiency, and cross-platform portability. The system can be applied effectively in various computer vision applications.
This test system is suitable for three-dimensional measurement of spatial objects with a relatively large measurement range and little occlusion. For scenes with severe occlusion, the number of cameras must be increased to capture the object from more directions, and 3D reconstruction must be performed with binocular vision in multiple directions.
References
[1] Park J S. Interactive 3D reconstruction from multiple images: a primitive-based approach [J]. Pattern Recognition Letters, 2005, 26(16): 2558-2571
[2] Intel Corporation. Open Source Computer Vision Library Reference Manual [S]. 2001-12
[3] Ma Songde, Zhang Zhengyou. Computer Vision: Computing Theory and Algorithm Basics [M]. Beijing: Science Press, 2003
[4] Mao Jianfei, Yan Xiyong, Zhu Jing. Improved two-step calibration of the camera using a plane template [J]. Journal of Image and Graphics of China, 2004, 9(7): 846-852
[5] Xu Yi, Zhou Jun, Zhou Yuanhua. Stereo visual matching technology. Computer Engineering and Applications, (15): 1-5
[6] Pollefeys M, Koch R, Van Gool. Self-calibration and metric reconstruction in spite of varying and unknown internal camera parameters [C]. Proc. of International Conference on Computer Vision, Bombay, India, 1998: 90
[7] Hartley R I, Zisserman. Multiple View Geometry in Computer Vision [M]. Cambridge University Press, 2000
[8] Wu Fuchao, Li Hua, Hu Zhanyi. A new camera self-calibration method based on active vision system [J]. Chinese Journal of Computers, 1130(11): 1139-
[9] Wen Gongjian, Wang Runsheng. A robust straight line extraction algorithm [J]. Journal of Software, 2001, 12(11): 1660-1666
Fund Project: Shaanxi Provincial Department of Education Special Scientific Research Project (05jk145)
Received: March 19; revised: March 28.
Author profiles: Li Jian (1975-), male, from Pucheng, Shaanxi Province, professor and doctor; his research focuses on computer vision. Shi Jin (1983-), male, from Yiyang, Hunan Province, master's candidate; his research interests include computer vision and robot vision.