Visibility determination refers to quickly computing, given the camera's position and direction, which content of a scene is visible, and rendering only that visible part. It mainly involves the following kinds of judgment:
1. Determine the front and back of a triangle. Which side is the front and which is the back depends on the handedness of the coordinate system and the winding order of the vertices:
Left-handed coordinate system: when the triangle's three vertices are ordered clockwise, the side the normal vector points toward is the front, and the side facing away from the normal is the back.
Right-handed coordinate system: when the triangle's three vertices are ordered counterclockwise, the side the normal vector points toward is the front, and the side facing away from the normal is the back.
The algorithm used to determine the front and back of a triangle is 2D back-face culling; a sketch of this test is given after this list.
2. Determine the invisible surfaces of a geometry. Based on the camera's position, determine which faces of a geometry are occluded by its other faces.
The algorithm for determining the invisible surfaces of a geometry is 3D back-face culling.
3. Determine occlusion between geometries. In 3D space a geometry can be completely or partially occluded by other geometries.
4. Determine whether a geometry lies within the view frustum. Geometries outside the view frustum are removed from the rendering pipeline and are not rendered.
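To make the 2D test mentioned in item 1 concrete, here is a minimal C++ sketch that operates on projected (screen-space) vertices. The struct name, and the convention that counterclockwise winding is front-facing (which also assumes a y-up screen coordinate system), are assumptions for illustration only.

```cpp
#include <cstdio>

// 2D (screen-space) back-face test: the sign of the triangle's signed
// area tells us the winding order of its projected vertices.
// Assumption: counterclockwise order means front-facing; engines and
// APIs let you choose either winding.
struct Vec2 { float x, y; };

// Twice the signed area of triangle (a, b, c): positive when the
// vertices are ordered counterclockwise, negative when clockwise.
float signedArea2(const Vec2& a, const Vec2& b, const Vec2& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

bool isFrontFacing2D(const Vec2& a, const Vec2& b, const Vec2& c) {
    return signedArea2(a, b, c) > 0.0f;   // CCW in screen space => front
}

int main() {
    Vec2 a{0.0f, 0.0f}, b{1.0f, 0.0f}, c{0.0f, 1.0f};   // counterclockwise triangle
    std::printf("front-facing: %d\n", isFrontFacing2D(a, b, c));
    return 0;
}
```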
As the list above shows, visibility determination is performed in layers, which can be divided as follows:
1. Pixel-level visibility determination. This judgment happens in the GPU stage: the GPU uses the values in the depth buffer (also called the Z-buffer) to determine occlusion between pixels.
2. Triangle-level visibility determination. This judgment can happen in two different stages: in the GPU stage, the GPU performs 2D back-face culling; in the CPU stage, the CPU performs 3D back-face culling.
3. Object-level visibility determination, which determines the visibility of whole geometries. This check happens in the CPU stage: the CPU determines occlusion between geometries and whether each geometry lies inside the view frustum.
4. Region-level visibility determination. This judgment happens in the CPU stage. By organizing the regions of the scene into suitable data structures, whole sets of geometries can be separated out of the visible set at once. The main approach is spatial partitioning, including the quadtree, the octree, and the BSP tree.
5. Scene-level visibility determination. This judgment happens in the CPU stage. The objects of the judgment are multiple scenes; invisible scenes are not rendered.
The following describes the main algorithms used at each level:
1. Pixel-level visibility determination
The main algorithm here is the Z-buffer. The GPU contains a storage area called the depth buffer, which stores per-pixel depth information corresponding to each pixel's color information. When a pixel is to be rendered, the depth of the existing pixel is compared with the depth of the new pixel; if the test passes, the new pixel's color information replaces the original color information, and the new pixel's depth information replaces the original depth information.
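A minimal CPU-side sketch of the Z-buffer logic just described. The Framebuffer type and the "smaller depth wins" comparison are assumptions for illustration; real GPUs implement this in hardware with a configurable depth-compare function.

```cpp
#include <cstdint>
#include <limits>
#include <vector>

// Software sketch of the per-pixel depth test described above.
struct Framebuffer {
    int width, height;
    std::vector<std::uint32_t> color;   // color buffer
    std::vector<float>         depth;   // depth buffer (Z-buffer)

    Framebuffer(int w, int h)
        : width(w), height(h),
          color(w * h, 0u),
          depth(w * h, std::numeric_limits<float>::infinity()) {}

    // Write the new pixel only if it is closer than what is already stored.
    void plot(int x, int y, float z, std::uint32_t rgba) {
        int i = y * width + x;
        if (z < depth[i]) {        // depth test passes
            depth[i] = z;          // new depth replaces the stored depth
            color[i] = rgba;       // new color replaces the stored color
        }
    }
};
```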
2. Triangle-level visibility determination
The main algorithm here computes the dot product of the triangle's plane normal and a camera vector. Let N be the plane normal and C the direction from the triangle toward the camera; then:
dot(N, C) >= 0: visible;
dot(N, C) < 0: invisible.
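The same test in code, done on the CPU. This is a minimal sketch under the assumption that C is the vector from the triangle toward the camera position and that the triangle is wound counterclockwise; if the camera's view direction were used instead, the sign of the comparison would flip.

```cpp
// Minimal sketch of the dot-product back-face test described above.
struct Vec3 { float x, y, z; };

Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Triangle (p0, p1, p2) with counterclockwise winding; camPos is the camera position.
bool isFrontFacing(const Vec3& p0, const Vec3& p1, const Vec3& p2, const Vec3& camPos) {
    Vec3 N = cross(sub(p1, p0), sub(p2, p0));  // plane normal
    Vec3 C = sub(camPos, p0);                  // triangle -> camera
    return dot(N, C) >= 0.0f;                  // dot(N, C) >= 0: visible
}
```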
3. Object-level visibility determination
At this level the work can be divided into the following tasks:
1. Frustum culling: determines whether a geometry is inside the view frustum; geometries outside the frustum are removed (see the sketch after this list).
2. Occlusion culling (object culling): determines occlusion between geometries; geometries completely hidden behind other geometries are removed.
3. Detail culling: renders a geometry at different levels of detail depending on its distance from the camera.
4. Depth culling: geometries beyond a certain distance from the camera are removed.
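A minimal sketch of frustum culling (task 1 above), testing a bounding sphere against the six frustum planes. The Plane and Sphere types, and the convention that plane normals point toward the inside of the frustum, are assumptions for illustration.

```cpp
#include <array>

struct Vec3   { float x, y, z; };
struct Plane  { Vec3 n; float d; };      // points p with dot(n, p) + d >= 0 lie inside
struct Sphere { Vec3 center; float radius; };

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns false as soon as the sphere is completely outside any frustum plane.
bool sphereInFrustum(const Sphere& s, const std::array<Plane, 6>& frustum) {
    for (const Plane& pl : frustum) {
        float dist = dot(pl.n, s.center) + pl.d;   // signed distance to the plane
        if (dist < -s.radius)
            return false;                          // fully outside: cull it
    }
    return true;                                   // inside or intersecting the frustum
}
```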
4. Region-level visibility determination
The main approach here is to partition the space and describe it with a suitable data structure. Common structures include the quadtree, the octree, and the binary space partitioning (BSP) tree.
4.1 Octree
4.1.1 Octree construction
First, construct a cube that contains at least all the geometries in the scene (or in a specific region of the scene), and use this cube as the root of the tree. The cube is then evenly divided into eight smaller cubes of equal size, and each of those is further divided into eight still smaller cubes. Each cube produced by this process is called an octant, and each octant is linked to the cube that generated it. Each geometric object in the scene is attached to the octant that completely contains it. In the tree constructed this way, the intermediate nodes represent octants that contain geometries (any empty octant, i.e. one containing no geometry, is removed), and each leaf node represents a geometry. The number of times the subdivision is repeated is called the depth of the octree; the maximum depth of the octree, or the minimum number of geometries contained in a cube, can be used as the criterion for stopping the subdivision.
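A minimal sketch of this construction. The AABB and Object types, the helper functions, and the stopping criteria (maxDepth, minObjects) are all assumptions for illustration rather than any particular engine's API; in this variant, objects that straddle sub-cubes stay attached to the smallest cube that fully contains them.

```cpp
#include <cstddef>
#include <memory>
#include <vector>

struct Vec3   { float x, y, z; };
struct AABB   { Vec3 min, max; };
struct Object { AABB bounds; };

struct OctreeNode {
    AABB box;                                    // this node's cube (octant)
    std::vector<const Object*> objects;          // geometries attached to this octant
    std::unique_ptr<OctreeNode> children[8];     // the eight sub-octants
};

// True if 'inner' is completely contained in 'outer'.
bool contains(const AABB& outer, const AABB& inner) {
    return inner.min.x >= outer.min.x && inner.max.x <= outer.max.x &&
           inner.min.y >= outer.min.y && inner.max.y <= outer.max.y &&
           inner.min.z >= outer.min.z && inner.max.z <= outer.max.z;
}

// One of the eight equal sub-cubes of 'box', selected by the bits of 'octant'.
AABB childBox(const AABB& box, int octant) {
    Vec3 c{(box.min.x + box.max.x) * 0.5f,
           (box.min.y + box.max.y) * 0.5f,
           (box.min.z + box.max.z) * 0.5f};
    AABB sub = box;
    (octant & 1 ? sub.min.x : sub.max.x) = c.x;
    (octant & 2 ? sub.min.y : sub.max.y) = c.y;
    (octant & 4 ? sub.min.z : sub.max.z) = c.z;
    return sub;
}

std::unique_ptr<OctreeNode> build(const AABB& box, std::vector<const Object*> objs,
                                  int depth, int maxDepth, std::size_t minObjects) {
    if (objs.empty()) return nullptr;            // empty octants are dropped
    auto node = std::make_unique<OctreeNode>();
    node->box = box;
    if (depth >= maxDepth || objs.size() <= minObjects) {
        node->objects = std::move(objs);         // stop subdividing here
        return node;
    }
    for (int i = 0; i < 8; ++i) {                // split into eight equal cubes
        AABB sub = childBox(box, i);
        std::vector<const Object*> inside, remaining;
        for (const Object* o : objs)
            (contains(sub, o->bounds) ? inside : remaining).push_back(o);
        objs.swap(remaining);
        node->children[i] = build(sub, std::move(inside), depth + 1, maxDepth, minObjects);
    }
    node->objects = std::move(objs);             // objects straddling sub-cubes stay here
    return node;
}
```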
4.1.2 How the octree works
The tree is traversed from top to bottom (top-down): as soon as a node is known to be invisible, all of its child nodes are invisible as well and can be removed from the visible set. The visibility of a cube is determined by computing the intersection between the cube and the view frustum, i.e. by frustum culling.
First, determine the visibility of the root node. If the viewing position lies within the bounds of the octree, the root node is always visible.
Then determine the visibility of the intermediate nodes (a traversal sketch follows these rules):
1) If the cube lies entirely inside the view frustum, the geometric objects attached to the node and all of the node's children are visible.
2) If the cube partially overlaps the view frustum, the corresponding visibility test is performed on the bounding volume of each geometric object attached to the node, and the same visibility test is then applied to the children of the visible node.
3) If the cube is invisible, all geometric objects attached to the node and all of the node's children are invisible.
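A minimal sketch of this traversal, reusing the Vec3, Plane and dot definitions from the frustum-culling sketch and the AABB, Object and OctreeNode types from the construction sketch above (all of which were assumptions for illustration). classifyBox is a conservative box-versus-frustum test.

```cpp
#include <array>
#include <vector>

// Result of testing a cube against the view frustum.
enum class FrustumTest { Outside, Intersects, Inside };

// Conservative AABB-vs-frustum test (the "positive/negative vertex" trick),
// using the inward-pointing Plane convention from the sketch above.
FrustumTest classifyBox(const AABB& b, const std::array<Plane, 6>& frustum) {
    FrustumTest result = FrustumTest::Inside;
    for (const Plane& pl : frustum) {
        Vec3 p{pl.n.x >= 0 ? b.max.x : b.min.x,    // corner farthest along the normal
               pl.n.y >= 0 ? b.max.y : b.min.y,
               pl.n.z >= 0 ? b.max.z : b.min.z};
        Vec3 n{pl.n.x >= 0 ? b.min.x : b.max.x,    // corner farthest against the normal
               pl.n.y >= 0 ? b.min.y : b.max.y,
               pl.n.z >= 0 ? b.min.z : b.max.z};
        if (dot(pl.n, p) + pl.d < 0) return FrustumTest::Outside;  // fully outside this plane
        if (dot(pl.n, n) + pl.d < 0) result = FrustumTest::Intersects;
    }
    return result;
}

// Append every object stored in this subtree (used for rule 1).
void collectAll(const OctreeNode* node, std::vector<const Object*>& out) {
    if (!node) return;
    out.insert(out.end(), node->objects.begin(), node->objects.end());
    for (const auto& child : node->children) collectAll(child.get(), out);
}

// Top-down traversal implementing rules 1)-3) above.
void collectVisible(const OctreeNode* node, const std::array<Plane, 6>& frustum,
                    std::vector<const Object*>& visible) {
    if (!node) return;
    switch (classifyBox(node->box, frustum)) {
    case FrustumTest::Outside:                     // rule 3: the whole subtree is invisible
        return;
    case FrustumTest::Inside:                      // rule 1: everything below is visible
        collectAll(node, visible);
        return;
    case FrustumTest::Intersects:                  // rule 2: test objects, then recurse
        for (const Object* o : node->objects)
            if (classifyBox(o->bounds, frustum) != FrustumTest::Outside)
                visible.push_back(o);
        for (const auto& child : node->children)
            collectVisible(child.get(), frustum, visible);
        return;
    }
}
```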
4.2 Binary space partitioning tree (BSP)
My understanding of the binary space partitioning tree is still limited; I am still learning about it.
5. Scene-level visibility determination
The main technique used here is the portal system, which removes invisible scenes from rendering.
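A very rough sketch of the portal idea, under the assumption that the scene is divided into cells (rooms or whole sub-scenes) connected by portals (doorways): starting from the cell containing the camera, a cell is marked visible only if it can be reached through a chain of portals that intersect the view frustum. All names here (Cell, Portal, portalVisible) are hypothetical.

```cpp
#include <vector>

struct Portal;

// A cell is a region of the scene (a room, or an entire sub-scene).
struct Cell {
    std::vector<Portal*> portals;   // portals leading out of this cell
    bool visited = false;
};

// A portal is an opening (e.g. a doorway) leading into another cell.
struct Portal {
    Cell* target = nullptr;
};

// Assumed helper: does this portal's polygon intersect the current view frustum?
bool portalVisible(const Portal& p);

// Starting from the cell containing the camera, mark every cell reachable
// through visible portals; everything else is culled.
void markVisibleCells(Cell* cell, std::vector<Cell*>& visible) {
    if (!cell || cell->visited) return;
    cell->visited = true;
    visible.push_back(cell);                    // this cell will be rendered
    for (Portal* p : cell->portals)
        if (portalVisible(*p))                  // only continue through visible portals
            markVisibleCells(p->target, visible);
}
```

In a full implementation the view frustum is usually narrowed to each portal's silhouette as it is crossed, so distant cells are only considered visible through the chain of openings that actually leads to them.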
--------------------------- Dedicated to multimedia technology, becoming a thoughtful software engineer ------------------------
This article is my original work. If you want to reprint it, please contact me or indicate the source.
You are welcome to comment on the content of this article, and I hope you will point out any mistakes so that I can correct them promptly.
My contact information:
QQ: 7578420
Email: Jerrydong@tom.com