The 3D Rendering Pipeline

Source: Internet
Author: User

1. From the model coordinate system to the world coordinate system: each vertex of the model is rotated and scaled, then translated to the model's world position.
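
Step 1 can be sketched as follows. This is a minimal illustration; the function name, parameters, and the choice of a single y-axis rotation are illustrative, not from the text (the full version applies three rotations, one per axis):

```python
import math

def model_to_world(vertex, angle_y, scale, world_pos):
    """Scale and rotate a model-space vertex, then translate it
    to the model's world position. Only a y-axis rotation is shown."""
    x, y, z = (c * scale for c in vertex)
    c, s = math.cos(angle_y), math.sin(angle_y)
    # y-axis rotation: x' = x*cos + z*sin, z' = -x*sin + z*cos
    xr, yr, zr = x * c + z * s, y, -x * s + z * c
    tx, ty, tz = world_pos
    return (xr + tx, yr + ty, zr + tz)
```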

2. Hidden-surface removal (optional): a back-face test and a bounding-sphere test are usually performed in world space to discard geometry that cannot be visible.

2.1 How the bounding-sphere test works: for every object in world space, build a bounding sphere and transform only the sphere's center from world coordinates to camera coordinates, then test whether the whole sphere lies inside the view frustum. If it does not, discard the object the sphere encloses. If only part of the sphere is inside the frustum, transform the entire model to camera coordinates and perform the other tests.
2.1.1 Bounding sphere: defined by six points whose lines to the sphere's center are parallel to the three axes; each lies at a distance equal to the distance of the vertex in the vertex list that is farthest from the center. For a tighter fit, other bounding geometry such as a box can be used instead of the sphere.
2.1.2 Testing whether a point (x, y, z) is outside a view frustum with a 90-degree field of view: (z > far_z) || (z < near_z) || (fabs(x) > z) || (fabs(y) > z)
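
The test in 2.1.2 can be written directly as code (a minimal sketch; the function name and parameters are illustrative):

```python
from math import fabs

def outside_frustum_90(x, y, z, near_z, far_z):
    """Point-outside test for a square 90-degree view frustum, whose
    side planes are |x| = z and |y| = z."""
    return (z > far_z) or (z < near_z) or (fabs(x) > z) or (fabs(y) > z)
```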

2.2 How the back-face test works: wind all the polygons that make up each object consistently (all clockwise or all counterclockwise), compute each polygon's surface normal, and test the normal against the view vector. If the angle between the face normal and the view vector is less than 90 degrees, i.e. the dot product is greater than 0, the polygon faces the observer and is visible. At exactly 90 degrees the polygon is edge-on, typically only one pixel wide, which can cause problems during rendering, so it is excluded as well.
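
The back-face test above can be sketched as follows. The vector helpers and the counterclockwise winding convention are assumptions for illustration, not from the text:

```python
def cross(a, b):
    """Cross product of two 3D vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    """Dot product of two 3D vectors."""
    return sum(pa * pb for pa, pb in zip(a, b))

def is_back_face(v0, v1, v2, eye):
    """True when a counterclockwise-wound triangle should be culled:
    the dot product of its normal with the vector toward the eye is <= 0,
    i.e. the angle between them is 90 degrees or more."""
    normal = cross(tuple(b - a for a, b in zip(v0, v1)),
                   tuple(c - a for a, c in zip(v0, v2)))
    view = tuple(e - p for p, e in zip(v0, eye))
    return dot(normal, view) <= 0
```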

3. From the world coordinate system to the camera coordinate system: translate all world coordinates by the negative of the camera's world position, then apply three rotations, one per axis, each by the negative of the camera's corresponding angle. To recover world coordinates from camera coordinates, rotate first and then translate: the translation matrix is the inverse of the matrix that translates by the camera's world position, and each rotation matrix is the transpose of the camera's rotation matrix about the corresponding axis. The combined transformation matrix only needs to be computed once per frame.
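
A minimal sketch of step 3, showing only the yaw (y-axis) rotation for brevity; the function name and parameters are illustrative, and the full version applies three such rotations:

```python
import math

def world_to_camera(point, cam_pos, cam_yaw):
    """Translate by the negative of the camera's world position, then
    rotate by the negative of the camera's angle (yaw only shown)."""
    x, y, z = (p - c for p, c in zip(point, cam_pos))
    c, s = math.cos(-cam_yaw), math.sin(-cam_yaw)
    return (x * c + z * s, y, -x * s + z * c)
```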

4. 3D clipping (optional): geometry (lines or polygons) that is not completely inside the view frustum can be clipped here. Applying the projection transformation to the geometry first turns the frustum into a cube, which simplifies the clipping planes. If this step is skipped, perform step 6 instead. If the projection transformation is applied to all geometry anyway, 3D clipping can be done after it.

5. From the camera coordinate system to the projection coordinate system: place the view plane at z = d (the viewing distance, i.e. the distance from the viewpoint to the view plane), with the viewpoint at the origin (0, 0, 0). By similar triangles, the point (x, y, z) projects to the point (xp, yp) with xp = d*x/z, yp = d*y/z. When z = 0 the projected coordinates are infinite, and when z is negative the object is inverted yet still projects, which is one reason a near clipping plane is specified. Homogeneous coordinates must be used to encode the projection transformation as a matrix.

5.1. The projection transformation matrix in 4D homogeneous coordinates (row-vector convention):

| 1 0 0 0 |
| 0 1 0 0 |
| 0 0 1 1/d |
| 0 0 0 0 |

Verify with the vertex (x, y, z, 1): the result is (x, y, z, z/d). Dividing x, y, and z by the homogeneous coordinate w = z/d gives the 3D point (x*d/z, y*d/z, d). Ignoring the z coordinate, x and y form the projected point.
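
The multiply-then-divide of 5.1 can be verified numerically (a minimal sketch; the function name is illustrative):

```python
def project_homogeneous(vertex, d):
    """Multiply (x, y, z, 1) by the 5.1 projection matrix (row-vector
    convention), then divide by the homogeneous coordinate w = z/d."""
    x, y, z = vertex
    # (x, y, z, 1) * M = (x, y, z, z/d)
    xh, yh, zh, w = x, y, z, z / d
    return (xh / w, yh / w, zh / w)  # = (x*d/z, y*d/z, d)
```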

5.2. Determining the d value for a non-square viewport:

d = 0.5 * width / tan(FOV/2), since tan(FOV/2) = (width/2) / d

xp = d * x / z, yp = d * y * AR / z, where AR = width/height is the aspect ratio

The 4D projection transformation matrix:

| d 0 0 0 |
| 0 d*AR 0 0 |
| 0 0 1 1 |
| 0 0 0 0 |
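
Section 5.2 can be sketched as follows; the function name and the degree-valued FOV parameter are assumptions for illustration:

```python
import math

def project(vertex, fov_deg, width, height):
    """Perspective projection for a non-square viewport: d is derived
    from the field of view and viewport width, and the y result is
    scaled by the aspect ratio AR = width / height."""
    d = 0.5 * width / math.tan(math.radians(fov_deg) / 2)
    ar = width / height
    x, y, z = vertex
    return (d * x / z, d * y * ar / z)
```

With a 90-degree field of view and a square 2x2 viewport, d comes out to 1 and the formula reduces to the similar-triangles projection of step 5.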

6. Image-space clipping (perform this step if step 4 was not executed): after all the geometry to be rendered has been converted to screen coordinates, clip it against the screen (or viewport) boundaries.

7. Projection coordinate system to screen coordinate system:

7.1 When the field of view is 90 degrees and the viewing distance is 1: in projection space the coordinates are normalized to [-1, +1] on each axis (square projection), so the projection transformation projects all geometry onto a 2x2 virtual view plane (xp and yp both range over [-1, +1]). When the projection plane is not square, the virtual view plane is 2 x (2/AR): xp ranges over [-1, +1] and yp over [-1/AR, +1/AR].

7.1.1. Map the ranges of xp and yp to the screen:

xp -> xs in [0, screen_width - 1]

yp -> ys in [0, screen_height - 1]

Note that the y axis is inverted: screen coordinates have their origin in the upper-left corner, with y increasing downward.

7.1.2 Let a = 0.5 * screen_width - 0.5 and b = 0.5 * screen_height - 0.5. Then:

xs = (xp + 1) * a = xp * a + a

ys = 2b - (yp + 1) * b = b - yp * b

The corresponding 4D transformation matrix:

| a 0 0 0 |
| 0 -b 0 0 |
| 0 0 1 0 |
| a b 0 1 |

The corresponding 3D transformation matrix:

| a 0 0 |
| 0 -b 0 |
| a b 1 |
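
The mapping of 7.1.2 in code (a minimal sketch; the function name is illustrative):

```python
def to_screen(xp, yp, screen_width, screen_height):
    """Map normalized projection coordinates xp, yp in [-1, +1]
    to pixel coordinates, inverting the y axis."""
    a = 0.5 * screen_width - 0.5
    b = 0.5 * screen_height - 0.5
    return (xp * a + a, b - yp * b)
```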

7.2. When the field of view and viewing distance are arbitrary:

7.2.1. Calculating d: d = 0.5 * viewplane_width (or height) / tan(FOV/2)

Setting the view plane to the same size as the screen: d = 0.5 * (screen_width (or height) - 1) / tan(FOV/2)

7.2.2 Converting camera coordinates to screen coordinates in one step:

xs = xp + a = d * x / z + a

ys = b - yp = -d * y * AR / z + b

The transformation matrix:

| d 0 0 0 |
| 0 -d 0 0 |
| a b 1 1 |
| 0 0 0 0 |

7.2.2.1 When performing the projection and screen transformations with matrices, note that the projection transformation yields homogeneous coordinates, which must be divided by w to convert back to 3D coordinates.

7.2.2.2. When the projection transformation and the screen transformation are combined, the aspect ratio cancels out.
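
The one-step transform of 7.2.2 can be sketched as follows. The function name is illustrative, and square pixels are assumed so that, per 7.2.2.2, the aspect ratio term drops out of the y formula:

```python
import math

def camera_to_screen(vertex, fov_deg, screen_width, screen_height):
    """One-step camera-to-screen transform: compute d from the field of
    view, then apply xs = d*x/z + a, ys = -d*y/z + b (AR = 1 assumed)."""
    d = 0.5 * (screen_width - 1) / math.tan(math.radians(fov_deg) / 2)
    a = 0.5 * screen_width - 0.5
    b = 0.5 * screen_height - 0.5
    x, y, z = vertex
    return (d * x / z + a, -d * y / z + b)
```

A point on the optical axis lands at the screen center, as expected.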

8. Others:

8.1 During coordinate transformation, do not overwrite the original coordinates; store the results in a separate array.

8.2. For a 90-degree field of view, the left/right plane equations of the view frustum are |x| = z, and the top/bottom planes are |y| = z/AR.

8.3 After obtaining world coordinates and before obtaining screen coordinates, lighting calculation and texture mapping must be performed on the polygons. This article only discusses coordinate transformation, so they are ignored here.
