Perspective Projection and Z-Buffer Evaluation
Why is there perspective at all? Because the eyeball is a lens. If biological evolution on Earth had relied on ultrasound to probe space, eyes might never have become balls but some other shape entirely...
Why do people feel dizzy playing 3D games? One important reason is that the eye is not a perfect lens. Once the field of view exceeds about 60 degrees, the distortion of the projection near the edges of the screen becomes much worse than what the eyeball projects onto the retina. Moreover, the human brain is accustomed to correcting the image the eye projects; suddenly confronted with a crude simulation of eye imaging on a flat screen, it cannot adapt for a while.
Next, the numerics of the Z buffer and the setup of the perspective projection matrix. With D3D or OpenGL you can simply let the video card do all of this, but knowing how the Z buffer is computed and how the projection matrix works is very useful for advanced rendering. Shadow mapping is a good example: a great deal of effort has gone into improving shadow-map precision by changing the perspective projection matrix used when generating the shadow map, so that precision is distributed more sensibly. Examples include perspective shadow maps (in post-projective space), light-space perspective shadow maps, trapezoidal shadow maps (a perspective-shadow-map variant), and logarithmic shadow maps, which reparameterize the projection logarithmically. Similarly, the shadow volumes in Doom 3 are drawn as stencil shadow volumes with a projection matrix whose far plane is pushed to infinity. All of this demands a thorough understanding of perspective projection.
The following explains the principles behind the Z-buffer calculation and the perspective projection matrix.
Assume a point has the world-space coordinates
Pw = (xw, yw, zw)
which the camera-space transform turns into
Pe = (xe, ye, ze)
It is then converted to device space by the projection transform. Here we assume zp ends up in the range [-1, 1] (the Z-buffer convention of OpenGL).
The projection of Pe onto the near plane is:
xep = n * xe / (-ze)
where n is the distance from the eye to the near plane.
Note that OpenGL uses a right-handed camera space in which the eye looks down the negative z axis, so ze is negative for visible points; projecting must divide x and y by a positive value, hence the -ze.
The purpose of this step is to project every vertex inside the view frustum onto the near plane, where xep ranges from left to right.
Now consider the x, y, and z values in device space. Device space is a cube spanning -1 to 1 on each axis. Computing x/2 + 0.5 and y/2 + 0.5 and multiplying by the window width and height respectively gives the pixel's position on the screen.
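As a small sketch of that screen mapping (the function and parameter names are mine, not from any graphics API):

/* Map normalized device coordinates (x, y in [-1, 1]) to a pixel
   position on screen, exactly as described above. */
void ndc_to_window(float x, float y, int width, int height,
                   float *px, float *py)
{
    *px = (x / 2.0f + 0.5f) * (float)width;   /* x/2 + 0.5, times width  */
    *py = (y / 2.0f + 0.5f) * (float)height;  /* y/2 + 0.5, times height */
}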
The z value of the pixel is obtained as follows. We need to map the frustum (left, right, top, bottom, near, far) in view space onto that cube spanning [-1, 1]; in other words, xe, ye, and ze must each be mapped into the range [-1, 1]. We already have xep, the projection of xe onto the camera-space near plane; it is a position on the near face of the frustum.
Mapping the near face of the frustum onto the [-1, 1] square is easy. In the x direction the mapping is:
xp = (xep - left) * 2 / (right - left) - 1
As xep runs from left to right, xp runs from -1 to 1.
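A minimal sketch of these two steps together, projection onto the near plane followed by the mapping into [-1, 1] (the function name is mine):

/* n, left, right describe the frustum; xe, ze are camera-space coords. */
float project_x(float xe, float ze, float n, float left, float right)
{
    float xep = n * xe / (-ze);                          /* onto the near plane */
    return (xep - left) * 2.0f / (right - left) - 1.0f;  /* into [-1, 1]        */
}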
Because the GPU interpolates along scan lines with perspective correction, a quantity interpolated linearly in screen space must, after projection, be linear in 1/(-ze); note that xp and yp above already have this form. In the device unit cube, zp ranges from -1 to 1, and zp is exactly the value stored in the Z buffer. By the same argument, for zp to be perspective-correct it must also be linear in 1/(-ze). That is, we can always find a formula of the form
zp = A * 1/(-ze) + B
In other words, whatever A and B are, the value in the Z buffer is linearly related to -1/ze of the vertex in camera space. All that remains is to solve for A and B.
To find A and B we use two boundary conditions (remember that ze is negative in front of the eye):
when ze = -far, zp = 1
when ze = -near, zp = -1 (in OpenGL; in D3D the near plane maps to zp = 0)
This is just a pair of linear equations in two unknowns: A/near + B = -1 and A/far + B = 1. Solving them gives, for OpenGL,
A = -2 * far * near / (far - near), B = (far + near) / (far - near)
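A quick numeric check of this solution, as a self-contained sketch (the helper name is mine):

#include <stdio.h>

/* zp = A * 1/(-ze) + B with the OpenGL solution for A and B. */
static float depth_value(float ze, float n, float f)
{
    float A = -2.0f * f * n / (f - n);
    float B = (f + n) / (f - n);
    return A / (-ze) + B;
}

int main(void)
{
    float n = 1.0f, f = 101.0f;
    printf("%f\n", depth_value(-n, n, f));  /* prints -1: near plane */
    printf("%f\n", depth_value(-f, n, f));  /* prints  1: far plane  */
    return 0;
}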
So now we know how a Z-buffer value is produced. First the vertex's world coordinates Pw are transformed into camera space, giving Pe; then the perspective projection maps ze to zp. That zp is written into the Z buffer, and it is the basis on which the video card decides which pixel is in front and which is behind.
This is also why Z-fighting appears so easily when near and far are set inappropriately. Because zp varies as 1/(-ze), roughly 90% of the Z buffer's floating-point precision is spent on the region nearest the camera; when rendering a scene that stretches far into the distance, depth discrimination for the distant geometry has to make do with the remaining 10% or so.
For a step-by-step derivation, see http://www.cs.kuleuven.ac.be/cwis/research/graphics/INFOTEC/viewing-in-3d/node8.html
Contrast this with the W buffer, which D3D has gradually abandoned: there the stored value is linear in scene distance. Over a 100-meter scene with near = 1 and far = 101, each meter occupies about 1/100 of the D3D W-buffer range, whereas in the Z buffer the allocation is far from even.
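A few sample distances make the skew obvious. A small self-contained sketch using the same near = 1, far = 101 figures:

#include <stdio.h>

int main(void)
{
    float n = 1.0f, f = 101.0f;
    float A = -2.0f * f * n / (f - n);           /* = -2.02 */
    float B = (f + n) / (f - n);                 /* =  1.02 */
    for (float d = 1.0f; d <= f; d *= 10.0f) {   /* eye distance in meters */
        float zp = A / d + B;                    /* Z-buffer value, [-1, 1] */
        float wp = (d - n) / (f - n);            /* W-buffer value, [0, 1]  */
        printf("d = %6.1f  z = %+.4f  w = %.2f\n", d, zp, wp);
    }
    return 0;
}

At 10 meters, less than a tenth of the way into the scene, zp has already climbed from -1 to about 0.82, while the W-buffer value is still 0.09.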
Now for the perspective projection matrix itself.
From linear algebra we know that a 3x3 matrix cannot perform a perspective mapping on a point (x, y, z): no 3x3 matrix can produce a term of the form x/z. So we introduce homogeneous coordinates and a 4x4 matrix, with vertex coordinates (x, y, z, w).
In homogeneous coordinates, the point (x, y, z, w) is equivalent to (x/w, y/w, z/w, 1). With the zp value derived above and the xp, yp coordinates of unit device space in mind, we look for a matrix Mp whose multiplication transforms the vertex (xe, ye, ze, 1) into (xp, yp, zp, 1). That is:
vp = Mp * ve
where vp = (xp, yp, zp, 1) is the vertex in the unit device coordinate system and ve = (xe, ye, ze, 1) is the vertex in camera space.
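In code, applying Mp amounts to a matrix-vector multiply followed by the homogeneous divide. A sketch (the function name is mine; the row-major element names m0..m15 match the matrix equation written out below):

/* Apply a 4x4 projection matrix to a camera-space vertex ve = (xe, ye,
   ze, 1) and divide by w to get vp = (xp, yp, zp). */
void project_point(const float m[16], const float ve[4], float vp[3])
{
    float x = m[0]*ve[0]  + m[1]*ve[1]  + m[2]*ve[2]  + m[3]*ve[3];
    float y = m[4]*ve[0]  + m[5]*ve[1]  + m[6]*ve[2]  + m[7]*ve[3];
    float z = m[8]*ve[0]  + m[9]*ve[1]  + m[10]*ve[2] + m[11]*ve[3];
    float w = m[12]*ve[0] + m[13]*ve[1] + m[14]*ve[2] + m[15]*ve[3];
    vp[0] = x / w;   /* xp */
    vp[1] = y / w;   /* yp */
    vp[2] = z / w;   /* zp, the Z-buffer value */
}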
Collecting what we know:
xp = (xep - left) * 2 / (right - left) - 1, where xep = -n * xe / ze
yp = (yep - bottom) * 2 / (top - bottom) - 1, where yep = -n * ye / ze
zp = A * 1/(-ze) + B
To obtain a 4x4 matrix we multiply (xp, yp, zp, 1) through by -ze, giving the equivalent homogeneous coordinates (-xp * ze, -yp * ze, -zp * ze, -ze). The projection matrix then follows from the matrix multiplication formula and the known coordinates above.
| xp * (-ze) |   | m0  m1  m2  m3  |   | xe |
| yp * (-ze) | = | m4  m5  m6  m7  | * | ye |
| zp * (-ze) |   | m8  m9  m10 m11 |   | ze |
|    -ze     |   | m12 m13 m14 m15 |   | 1  |
As an example, let us solve for the first row, m0 through m3:
m0 * xe + m1 * ye + m2 * ze + m3 = xp * (-ze) = (-ze) * (-n * xe / ze - left) * 2 / (right - left) + ze
Expanding the right-hand side and matching the coefficients of xe, ye, ze, and the constant term gives:
m0 = 2n / (right - left)
m1 = 0
m2 = (right + left) / (right - left)
m3 = 0
Solving the remaining rows in the same way yields the OpenGL perspective projection matrix:
[ 2*near/(right-left)   0                     (right+left)/(right-left)   0                      ]
[ 0                     2*near/(top-bottom)   (top+bottom)/(top-bottom)   0                      ]
[ 0                     0                     -(far+near)/(far-near)      -2*far*near/(far-near) ]
[ 0                     0                     -1                          0                      ]
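As a sketch, here is that matrix assembled in C, stored column-major the way glLoadMatrixf expects (glFrustum builds this same matrix; the helper name is mine):

#include <string.h>

void frustum_matrix(float l, float r, float b, float t,
                    float n, float f, float m[16])
{
    memset(m, 0, 16 * sizeof(float));
    m[0]  = 2.0f * n / (r - l);        /* row 0, column 0 */
    m[5]  = 2.0f * n / (t - b);        /* row 1, column 1 */
    m[8]  = (r + l) / (r - l);         /* row 0, column 2 */
    m[9]  = (t + b) / (t - b);         /* row 1, column 2 */
    m[10] = -(f + n) / (f - n);        /* row 2, column 2 */
    m[11] = -1.0f;                     /* row 3, column 2 */
    m[14] = -2.0f * f * n / (f - n);   /* row 2, column 3 */
}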
The left-handed perspective projection matrix of D3D differs from OpenGL's in the following ways.
1. D3D device space is not a cube but a flattened box: the z range is only [0, 1], while x and y still span [-1, 1].
2. The z axis of D3D camera space points in the positive direction away from the eye, so the projection onto the near plane is not xep = n * xe / (-ze) but xep = n * xe / ze.
3. The mapping from the camera-space view frustum to the device-space unit (flattened) box adopts a strange convention: the upper-right corner of the frustum is mapped to the location (, 0) of the unit device box.
Try deriving the D3D perspective projection matrix yourself ~
If time permits, a follow-up will continue with perspective correction: texture mapping, Phong shading, bump mapping, and so on.