Camera calibration under C#

One of the basic tasks of computer vision is to recover geometric information about objects in three-dimensional space from the images acquired by a camera, and to reconstruct and recognize those objects. The relationship between the three-dimensional position of a point on the surface of a spatial object and its corresponding point in the image is determined by the geometric model of camera imaging, and the parameters of this model are the camera parameters. In most cases these parameters must be obtained by experiment and computation, a process called camera calibration. Calibration determines the camera's geometric and optical parameters, as well as the camera's position and orientation relative to the world coordinate system.

Content:

1. Suppose there is a simple linear relationship between the image taken by the camera and the object in three-dimensional space: s [u, v, 1]^T = M [X_w, Y_w, Z_w, 1]^T, where the matrix M can be seen as the geometric model of camera imaging. The entries of M are the camera parameters, which are usually divided into the camera's internal (intrinsic) and external (extrinsic) parameters.
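As a concrete illustration of this linear model, the following C# sketch applies a 3×4 matrix M to a world point in homogeneous coordinates and divides by the third component to recover pixel coordinates (the matrix values here are made up purely for demonstration):

    using System;

    class LinearModelDemo
    {
        // s * [u, v, 1]^T = M * [Xw, Yw, Zw, 1]^T for a 3x4 matrix M.
        static (double u, double v) Project(double[,] M, double xw, double yw, double zw)
        {
            double[] p = { xw, yw, zw, 1.0 };    // homogeneous world point
            double[] s = new double[3];          // s * (u, v, 1)
            for (int r = 0; r < 3; r++)
                for (int c = 0; c < 4; c++)
                    s[r] += M[r, c] * p[c];
            return (s[0] / s[2], s[1] / s[2]);   // perspective division
        }

        static void Main()
        {
            // Made-up projection matrix, for demonstration only.
            double[,] M =
            {
                { 800, 0,   320, 0 },
                { 0,   800, 240, 0 },
                { 0,   0,   1,   0 }
            };
            var (u, v) = Project(M, 0.1, 0.2, 2.0);
            Console.WriteLine($"u = {u}, v = {v}");   // u = 360, v = 320
        }
    }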

2. Camera calibration methods fall into three categories: the traditional camera calibration method, the active-vision camera calibration method, and the camera self-calibration method.

Traditional camera calibration method. Features: uses known scene structure information, commonly a calibration block (target). Advantages: applicable to any camera model; high calibration accuracy. Shortcomings: the calibration process is complex and requires structure information known to high precision; in many practical applications a calibration block cannot be used.

Active-vision camera calibration method. Features: requires certain known motions of the camera. Advantages: usually yields a linear solution with high robustness. Shortcomings: not applicable when the camera motion is unknown or uncontrollable.

Camera self-calibration method. Features: relies only on the correspondences between multiple images to calibrate. Advantages: needs only image correspondences, so it is flexible and has a wide range of potential applications. Shortcomings: the calibration is nonlinear and its robustness is not high.

3. Camera imaging model: An image is the reflection of a spatial object through the imaging system, that is, the projection of the object onto the image plane. The gray level of each pixel reflects the intensity of the light reflected from the object's surface, and the position of a point in the image is related to the geometric position of the corresponding point on the object's surface. These positional relationships are determined by the geometric projection model of the camera imaging system. The ideal imaging model is the central projection of optics, also called the pinhole model: reflected light from the object's surface is projected through a pinhole onto the image plane, satisfying the straight-line propagation of light. A real pinhole passes too little light and requires a long exposure, so practical camera systems are usually built from a lens or lens group. Because of the complexity of lens design and limits of the manufacturing process, a real lens imaging system cannot strictly satisfy the pinhole model, which produces so-called lens distortion: radial distortion, tangential distortion, thin-prism distortion, and so on. Distortion grows larger far from the image center, so in precision vision measurement and similar applications, a nonlinear model should be used to describe the imaging relationship whenever possible.

4. Common coordinate systems and their relationships: the coordinate systems used in computer vision are defined by the right-hand rule. There are usually three levels of coordinates: the world coordinate system, the camera coordinate system, and the image coordinate system (image pixel coordinates and image physical coordinates), as described below:

(1) World coordinate system (O_w-X_w Y_w Z_w): the absolute coordinate system of the objective world, a three-dimensional coordinate system that the user may define arbitrarily; 3D scenes are generally represented in this system.

(2) Camera coordinate system (O_c-X_c Y_c Z_c): a three-dimensional Cartesian coordinate system whose origin is the optical center of the pinhole camera model and whose Z_c axis coincides with the camera's optical axis. The X_c and Y_c axes are generally parallel to the x and y axes of the image physical coordinate system, and the front-projection model (image plane in front of the optical center) is adopted.

(3) Ideal image coordinate system (O_i-x_u y_u): the image physical coordinates under the ideal, distortion-free pinhole model.

(4) Actual image coordinate system (O_d-x_d y_d): the image physical coordinates actually observed after lens distortion.

The world coordinate system is transformed into the camera coordinate system (a rigid-body transformation of position in three-dimensional space):

    \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + T

In the formula, R is the 3×3 rotation matrix and T is the translation vector.
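A minimal C# sketch of this rigid-body step (the rotation matrix and translation vector below are illustrative values only, not calibration results):

    using System;

    class RigidTransformDemo
    {
        // Camera point = R * world point + T (R is 3x3, T is 3x1).
        static double[] WorldToCamera(double[,] R, double[] T, double[] pw)
        {
            var pc = new double[3];
            for (int i = 0; i < 3; i++)
                pc[i] = R[i, 0] * pw[0] + R[i, 1] * pw[1] + R[i, 2] * pw[2] + T[i];
            return pc;
        }

        static void Main()
        {
            double[,] R = { { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } };   // identity rotation
            double[] T = { 0, 0, 5 };                                  // camera 5 units back
            double[] pc = WorldToCamera(R, T, new[] { 0.5, -0.2, 0.0 });
            Console.WriteLine($"Xc={pc[0]}, Yc={pc[1]}, Zc={pc[2]}");  // 0.5, -0.2, 5
        }
    }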

The camera coordinate system is transformed into the ideal image coordinate system (projection transform):

    x_u = f \frac{X_c}{Z_c}, \qquad y_u = f \frac{Y_c}{Z_c}

where f is the focal length of the camera.
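A sketch of this projection step in C# (the focal length value is illustrative):

    using System;

    class ProjectionDemo
    {
        // Pinhole projection: x_u = f * Xc / Zc, y_u = f * Yc / Zc.
        static (double xu, double yu) Project(double f, double Xc, double Yc, double Zc)
        {
            if (Zc <= 0) throw new ArgumentException("Point must lie in front of the camera.");
            return (f * Xc / Zc, f * Yc / Zc);
        }

        static void Main()
        {
            var (xu, yu) = Project(f: 0.035, Xc: 0.5, Yc: -0.2, Zc: 5.0);
            Console.WriteLine($"xu={xu}, yu={yu}");   // 0.0035, -0.0014
        }
    }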

The ideal image coordinate system is transformed into the actual image coordinate system (taking distortion into account). Lens distortion is dominated by the radial component; the two-term radial distortion model of the lens is:

    x_d = x_u (1 + k_1 r^2 + k_2 r^4), \qquad y_d = y_u (1 + k_1 r^2 + k_2 r^4), \qquad r^2 = x_u^2 + y_u^2

Expressed in homogeneous coordinates:

    \begin{bmatrix} x_d \\ y_d \\ 1 \end{bmatrix} = A' \begin{bmatrix} x_u \\ y_u \\ 1 \end{bmatrix}, \qquad A' = \mathrm{diag}\left(1 + k_1 r^2 + k_2 r^4,\; 1 + k_1 r^2 + k_2 r^4,\; 1\right)

In the formula, k_1 and k_2 are the radial distortion coefficients, and A' is the equivalent transformation matrix between the two coordinate systems.
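A sketch of applying the two-term radial distortion model in C# (the coefficient values are illustrative):

    using System;

    class DistortionDemo
    {
        // x_d = x_u * (1 + k1*r^2 + k2*r^4), likewise for y, with r^2 = x_u^2 + y_u^2.
        static (double xd, double yd) Distort(double k1, double k2, double xu, double yu)
        {
            double r2 = xu * xu + yu * yu;
            double factor = 1 + k1 * r2 + k2 * r2 * r2;
            return (xu * factor, yu * factor);
        }

        static void Main()
        {
            var (xd, yd) = Distort(k1: -0.25, k2: 0.07, xu: 0.2, yu: 0.1);
            Console.WriteLine($"xd={xd}, yd={yd}");
        }
    }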

The actual image coordinates are transformed into pixel image coordinates:

    u = S_x x_d + \gamma y_d + u_0, \qquad v = S_y y_d + v_0

Expressed in homogeneous coordinates:

    \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} S_x & \gamma & u_0 \\ 0 & S_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_d \\ y_d \\ 1 \end{bmatrix}

Here (u_0, v_0) are the pixel coordinates of the image center O_1 in the o-uv system, S_x and S_y are the numbers of pixels per unit length along the x and y axes of the image plane respectively, and γ is the tilt (skew) factor between the two axes.
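A sketch of this physical-to-pixel conversion in C# (all intrinsic values below are illustrative):

    using System;

    class PixelDemo
    {
        // u = Sx*xd + gamma*yd + u0,  v = Sy*yd + v0.
        static (double u, double v) ToPixels(double Sx, double Sy, double gamma,
                                             double u0, double v0, double xd, double yd)
        {
            return (Sx * xd + gamma * yd + u0, Sy * yd + v0);
        }

        static void Main()
        {
            var (u, v) = ToPixels(Sx: 1000, Sy: 1000, gamma: 0,
                                  u0: 320, v0: 240, xd: 0.05, yd: -0.02);
            Console.WriteLine($"u={u}, v={v}");   // u=370, v=220
        }
    }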

Combining the formulas above gives:

    s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M_1 M_2 \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = M \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}

In the formula, M_1 is determined by S_x, S_y, u_0, v_0, f, k_1, and k_2; it depends only on the properties of the camera itself, and these parameters are the camera's internal (intrinsic) parameters. M_2 is determined by the rotation matrix R and the translation vector T; these are the camera's external (extrinsic) parameters.
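Putting the four steps together, the following C# sketch runs the full forward model from world coordinates to pixel coordinates (all parameter values are illustrative, not real calibration output):

    using System;

    class ForwardModelDemo
    {
        static void Main()
        {
            // Illustrative extrinsic parameters (M2): identity rotation, translation along Z.
            double[,] R = { { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } };
            double[] T = { 0, 0, 5 };
            // Illustrative intrinsic parameters (M1).
            double f = 0.035, k1 = -0.25, k2 = 0.07;
            double Sx = 20000, Sy = 20000, gamma = 0, u0 = 320, v0 = 240;

            double[] pw = { 0.5, -0.2, 0.0 };   // world point

            // 1) World -> camera (rigid-body transform).
            double Xc = R[0, 0] * pw[0] + R[0, 1] * pw[1] + R[0, 2] * pw[2] + T[0];
            double Yc = R[1, 0] * pw[0] + R[1, 1] * pw[1] + R[1, 2] * pw[2] + T[1];
            double Zc = R[2, 0] * pw[0] + R[2, 1] * pw[1] + R[2, 2] * pw[2] + T[2];

            // 2) Camera -> ideal image plane (projection).
            double xu = f * Xc / Zc, yu = f * Yc / Zc;

            // 3) Ideal -> actual image (radial distortion).
            double r2 = xu * xu + yu * yu;
            double d = 1 + k1 * r2 + k2 * r2 * r2;
            double xd = xu * d, yd = yu * d;

            // 4) Actual image -> pixel coordinates.
            double u = Sx * xd + gamma * yd + u0;
            double v = Sy * yd + v0;

            Console.WriteLine($"pixel = ({u:F2}, {v:F2})");   // approximately (390.00, 212.00)
        }
    }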

These are the core parts of the calibration algorithm. In Emgu CV, the CameraCalibration class is used to calibrate the camera and obtain the internal and external parameters.

The code is attached below:
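What follows is a minimal sketch of such a calibration routine against a chessboard target, assuming the Emgu CV 3.x CvInvoke API rather than the older CameraCalibration helper class the article names; the chessboard dimensions, square size, and file name are placeholders, and the exact overloads should be checked against the installed Emgu CV version:

    using System;
    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Structure;
    using Emgu.CV.Util;

    class CalibrationDemo
    {
        static void Main()
        {
            Size patternSize = new Size(9, 6);   // inner corners of the chessboard (placeholder)
            float squareSize = 25f;              // square edge length in mm (placeholder)

            // 3D model points of the chessboard corners, on the Z = 0 plane.
            var board = new MCvPoint3D32f[patternSize.Width * patternSize.Height];
            for (int i = 0; i < patternSize.Height; i++)
                for (int j = 0; j < patternSize.Width; j++)
                    board[i * patternSize.Width + j] =
                        new MCvPoint3D32f(j * squareSize, i * squareSize, 0);

            using (var gray = new Image<Gray, byte>("board.png"))   // placeholder file name
            using (var corners = new VectorOfPointF())
            {
                if (!CvInvoke.FindChessboardCorners(gray, patternSize, corners))
                {
                    Console.WriteLine("Chessboard not found.");
                    return;
                }

                // Refine corner locations to sub-pixel accuracy.
                CvInvoke.CornerSubPix(gray, corners, new Size(11, 11), new Size(-1, -1),
                    new MCvTermCriteria(30, 0.01));

                var objectPoints = new[] { board };
                var imagePoints = new[] { corners.ToArray() };

                var cameraMatrix = new Mat(3, 3, DepthType.Cv64F, 1);   // receives M1 (intrinsics)
                var distCoeffs = new Mat(5, 1, DepthType.Cv64F, 1);     // receives k1, k2, ...
                Mat[] rvecs, tvecs;                                     // receive R, T per view

                double error = CvInvoke.CalibrateCamera(objectPoints, imagePoints, gray.Size,
                    cameraMatrix, distCoeffs, CalibType.Default,
                    new MCvTermCriteria(30, 1e-6), out rvecs, out tvecs);

                Console.WriteLine($"Reprojection error: {error}");
            }
        }
    }

In practice, several views of the board from different orientations should be accumulated in objectPoints and imagePoints before calling CalibrateCamera; a single view, as in this sketch, under-constrains the intrinsic parameters.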
