Camera Calibration 01


The pinhole camera model. A pinhole camera is an imaginary wall with a tiny hole at its center: light can pass only through this opening, and everything else is blocked by the wall. Below we start with the pinhole model and work through the basic geometry of the projection rays. Unfortunately, a real pinhole is a poor way to form an image, because it does not gather enough light for a fast exposure. This is why both the eye and the camera use a lens, rather than a single point, to collect more light. The price of using a lens, however, is not only a deviation from the simple pinhole geometry but also the introduction of lens distortions.

This article shows how camera calibration corrects the main deviations from the pinhole model that a lens introduces. Camera calibration also matters because it is the bridge between camera measurements and measurements in the real three-dimensional world: the scene is not only three-dimensional, it is a space measured in physical units. The relationship between the camera's natural units (pixels) and the units of the physical world (e.g., meters) is therefore critical to reconstructing a three-dimensional scene.
The calibration process produces a geometric model of the camera and a distortion model of the lens. Together, these two models define the camera's internal (intrinsic) parameters. Below, these models are applied to correct lens distortion; the next article will use them to recover the physical scene.
1.1 Camera model. The simplest camera model is the pinhole. Let f be the focal length of the camera, Z the distance from the camera to the object, X the size of the object, and x the size of the object's image on the image plane. By similar triangles, -x/f = X/Z, i.e., -x = f(X/Z).
The pinhole camera model can be rearranged into an equivalent form whose mathematics is simpler: swap the pinhole and the image plane. The point at the pinhole is then understood as the center of projection. In this arrangement, every ray starts at a point on a distant object and travels toward the center of projection, intersecting the image plane along the way. The intersection of the optical axis with the image plane is called the principal point.
On this new image plane, the image of a distant object is exactly the same size as in the original arrangement in Figure 11-1. A ray intersects the image plane to form the image, and the distance from that plane to the center of projection is f; this gives x/f = X/Z, where the minus sign is gone because the image is no longer inverted. The principal point is generally not the center of the imager: in practice, the center of the chip is usually not exactly on the optical axis. We therefore introduce two new parameters, cx and cy, to model this possible offset from the axis. A point Q in the physical world with coordinates (X, Y, Z) is then projected to the screen position x_screen = fx(X/Z) + cx, y_screen = fy(Y/Z) + cy, where fx and fy are the focal length expressed in horizontal and vertical pixel units.
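As a minimal sketch of this projection, the following function maps a 3-D point in camera coordinates to pixel coordinates. The numeric values for fx, fy, cx, cy and the sample point are made up for illustration, not taken from any real camera:

```python
def project_point(X, Y, Z, fx, fy, cx, cy):
    """Ideal pinhole projection with principal-point offset:
    x_screen = fx*(X/Z) + cx,  y_screen = fy*(Y/Z) + cy."""
    x = fx * (X / Z) + cx
    y = fy * (Y / Z) + cy
    return x, y

# Example: a point 2 m in front of the camera and 0.5 m to the right,
# with illustrative intrinsics fx = fy = 800, cx = 320, cy = 240.
print(project_point(0.5, 0.0, 2.0, fx=800.0, fy=800.0, cx=320.0, cy=240.0))
# -> (520.0, 240.0)
```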

Basic projection geometry. The process of mapping a physical point Q_i with coordinates (X_i, Y_i, Z_i) to a point (x_i, y_i) on the projection plane is called a projective transformation, and such transformations are conveniently expressed in the homogeneous coordinates we know well. In homogeneous coordinates, a point in an n-dimensional projective space is represented by an (n+1)-dimensional vector (e.g., (x, y, z) becomes (x, y, z, w)), with the additional convention that vectors differing only by a common scale factor represent the same point. Here the image plane is a two-dimensional projective space, so a point on it can be represented by a three-dimensional vector q = (q1, q2, q3). Because proportional vectors are equivalent in projective space, the actual pixel coordinates are recovered by dividing by q3. This lets us arrange the parameters that define the camera (fx, fy, cx, and cy) into a 3x3 matrix, called the camera's intrinsic matrix. A point in the physical world is then projected onto the camera image with the formula q = M·Q, where M = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]] and Q = (X, Y, Z).
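The matrix form above can be sketched with NumPy; the intrinsic values (fx = fy = 800, cx = 320, cy = 240) are the same illustrative numbers as before, not from a real camera:

```python
import numpy as np

# Intrinsic matrix M collecting fx, fy, cx, cy.
M = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

Q = np.array([0.5, 0.0, 2.0])   # physical point (X, Y, Z)
q = M @ Q                        # homogeneous image point (q1, q2, q3)
pixel = q[:2] / q[2]             # divide by q3 to recover pixel coordinates
print(pixel)                     # [520. 240.]
```

Note that here q3 equals Z, so the division by q3 is exactly the perspective division X/Z, Y/Z from the scalar form of the model.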
The origin of lens distortion: with an ideal pinhole, we have a model that is useful for three-dimensional geometry in vision. In practice, however, so little light passes through a pinhole that the image forms very slowly because of insufficient exposure. For a camera to form images quickly, it must use a large curved lens that gathers enough light and focuses it onto the projection point. The lens lets enough light converge at a point so the image forms much faster; the price is the distortions the lens introduces.
2. Lens distortion. The two main lens distortions are described and modeled below. Radial distortion comes from the shape of the lens, while tangential distortion comes from the assembly of the camera as a whole.

2.1 Radial distortion. Real camera lenses always distort noticeably near the edge of the imager; the troublesome symptoms are the "barrel" and "fisheye" effects.
For radial distortion, the distortion at the center of the imager (the optical center) is zero, and it grows increasingly severe toward the edges. In practice the distortion is small and can be described quantitatively by the first few terms of a Taylor series expansion around r = 0. For inexpensive web cameras we usually use only the first two terms, conventionally named k1 and k2; for strongly distorting lenses such as fisheyes we can add a third radial term, k3. The radial position of a point on the imager is then adjusted according to: x_corrected = x(1 + k1 r^2 + k2 r^4 + k3 r^6), y_corrected = y(1 + k1 r^2 + k2 r^4 + k3 r^6).
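A minimal sketch of this radial model (the coefficient values in the example are arbitrary, and (x, y) are normalized coordinates relative to the optical center):

```python
def radial_distort(x, y, k1, k2, k3=0.0):
    """Scale a point radially by (1 + k1*r^2 + k2*r^4 + k3*r^6)."""
    r2 = x * x + y * y                       # r^2
    scale = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    return x * scale, y * scale

# At the optical center (r = 0) the distortion vanishes:
print(radial_distort(0.0, 0.0, k1=0.1, k2=0.01))   # -> (0.0, 0.0)
```

This is the forward model (ideal point to distorted point); correcting a distorted image inverts it numerically.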
2.2 Tangential distortion. The second most common distortion is tangential distortion. It is caused by manufacturing defects that leave the lens not exactly parallel to the image plane. Tangential distortion is described by two additional parameters, p1 and p2, as follows: x_corrected = x + [2 p1 x y + p2 (r^2 + 2 x^2)], y_corrected = y + [p1 (r^2 + 2 y^2) + 2 p2 x y].
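A sketch of the tangential model in the same style (coefficient values are illustrative):

```python
def tangential_distort(x, y, p1, p2):
    """Apply the two-parameter tangential distortion model:
    x' = x + [2*p1*x*y + p2*(r^2 + 2*x^2)]
    y' = y + [p1*(r^2 + 2*y^2) + 2*p2*x*y]"""
    r2 = x * x + y * y
    xd = x + (2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x))
    yd = y + (p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y)
    return xd, yd

# Like radial distortion, it vanishes at the optical center:
print(tangential_distort(0.0, 0.0, p1=0.01, p2=0.02))   # -> (0.0, 0.0)
```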


Calibration. The calibration method is to point the camera at an object with many independently identifiable points. By viewing this object from different angles, we can compute the camera's relative position and orientation for each view, as well as the camera's intrinsic parameters. To provide the different views, we rotate and translate the object. Here are some facts about rotation and translation: rotation matrices and translation vectors. For each image of the object a given camera obtains, we can describe the object's pose in the camera coordinate system by a rotation and a translation.
In general, a rotation in any number of dimensions can be expressed as the product of a coordinate vector with a square matrix of the appropriate size. A rotation is ultimately equivalent to re-describing a point's position in a different coordinate system: rotating the coordinate system by an angle α is the same as rotating the target point around the coordinate origin by the angle α in the opposite sense. In two dimensions, this rotation is the matrix multiplication [x', y'] = [[cos α, -sin α], [sin α, cos α]] · [x, y].
In three-dimensional space, a rotation can be decomposed into two-dimensional rotations about each of the coordinate axes, where the coordinate along the axis of rotation remains unchanged.
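As a sketch, the three elementary rotations and their composition can be written with NumPy (the angle values are arbitrary examples):

```python
import numpy as np

# Elementary rotations about the x, y, and z axes; the coordinate along the
# rotation axis is left unchanged.
def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[  c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [ -s, 0.0,   c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[  c,  -s, 0.0],
                     [  s,   c, 0.0],
                     [0.0, 0.0, 1.0]])

# A general rotation composed from the three elementary ones.
R = rot_x(0.1) @ rot_y(0.2) @ rot_z(0.3)
print(np.allclose(R.T @ R, np.eye(3)))   # True: inverse equals transpose
```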

so the overall rotation is R = Rx·Ry·Rz. A defining property of the rotation matrix R is that its inverse is its transpose. The translation vector indicates how to move the origin of one coordinate system to the origin of another; equivalently, it is the offset of the first coordinate system's origin from the second's. Therefore, to move from a coordinate system with the target's center as origin to one with the camera's center as origin, the corresponding translation vector is T = origin_target − origin_camera.
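A minimal numeric sketch of this change of coordinate systems: translate the point by −T and then rotate into the camera axes, P_camera = R·(P_object − T). The values of R and T below are illustrative placeholders, not the output of a real calibration:

```python
import numpy as np

theta = np.pi / 2                        # illustrative 90-degree rotation about z
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
T = np.array([0.0, 0.0, -5.0])           # illustrative offset between the origins

P_object = np.array([1.0, 0.0, 0.0])     # a point expressed in the object frame
P_camera = R @ (P_object - T)            # the same point in the camera frame
print(P_camera)
```

Because R is a rotation, going back the other way needs only the transpose: P_object = R.T @ P_camera + T.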
