What exactly happens in each stage of the graphics rendering pipeline? It was never quite clear to me: every time I read about it, I would have to go back and reread it again after a while. The Cg Tutorial explains it well, so I have excerpted this material here for reference. - By Shenzi / 2010.3.25
The 3D Graphics Rendering Pipeline
What is rendering?
A simple way to understand rendering is this: it converts the description of a 3D object or 3D scene into a two-dimensional image, and the resulting image should faithfully reflect the original 3D object or scene (Figure 1). Figure 1: Rendering
What is a rendering pipeline? A rendering pipeline, also known as a rendering assembly line, is the set of parallel processing units in the display chip that process graphics data independently of one another. A pipeline is a sequence of stages operating in parallel and in a fixed order. Each stage receives its input from the previous stage and sends its output to the next stage. Like an assembly line in which many vehicles are worked on at different stages at the same time, the traditional graphics hardware pipeline processes large numbers of vertices, geometric primitives, and fragments in a pipelined fashion. Figure 2 shows the graphics hardware pipeline used by today's graphics processors. The vertices that a 3D application passes to the graphics processor form geometric primitives: typically polygons, line segments, and points. As shown in Figure 3, there are many ways to specify geometric primitives.
Figure 2: The graphics hardware rendering pipeline
Figure 3: Types of geometric primitives
1. Vertex Transformation
Vertex transformation is the first processing stage in the graphics hardware rendering pipeline. Vertex transformation performs a sequence of mathematical operations on each vertex. These operations include transforming the vertex position into a screen position for use by the rasterizer, generating texture coordinates for texturing, and lighting the vertex to determine its color.
Coordinate Systems in Vertex Transformation:
Figure 4: Coordinate systems and transformations for vertex processing
Object Space:
The application specifies vertex positions in a coordinate system called object space (also called model space). When an artist creates a 3D model of an object, he chooses a convenient orientation, scale, and position in which to place the model's constituent vertices. The object space of one model has no relationship to the object space of any other model.
World Space:
While each object lives in its own object space, the purpose of world space is to provide an absolute frame of reference for all the objects in your scene. How the world coordinate system is established can be chosen arbitrarily. For example, you may decide that the origin of world space is the center of your room. Objects in the room are then positioned at some scale and orientation relative to the center of the room.
Modeling Transformation:
Modeling transformation is what places an object, specified in its own object space, into world space. For example, you may need to rotate, translate, and scale the 3D model of a chair so that the chair is placed correctly within your room's world coordinate system. Two chairs in the same room can use the same 3D chair model but different modeling transformations, so that each chair is placed at a different position in the room.
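To make this concrete, here is a minimal sketch (not from the original text) of composing a modeling transformation in code: a translate, a rotate, and a scale multiplied into one column-major 4x4 matrix, in the style OpenGL uses. The `Mat4` type and helper names are illustrative, not any particular library's API.

```cpp
#include <cmath>
#include <cstdio>

struct Mat4 { float m[16]; };  // column-major: m[col*4 + row], as in OpenGL

Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int c = 0; c < 4; ++c)
        for (int row = 0; row < 4; ++row) {
            float s = 0.0f;
            for (int k = 0; k < 4; ++k)
                s += a.m[k * 4 + row] * b.m[c * 4 + k];
            r.m[c * 4 + row] = s;
        }
    return r;
}

Mat4 identity() {
    Mat4 r{};
    r.m[0] = r.m[5] = r.m[10] = r.m[15] = 1.0f;
    return r;
}

Mat4 translate(float x, float y, float z) {
    Mat4 r = identity();
    r.m[12] = x; r.m[13] = y; r.m[14] = z;   // translation in the 4th column
    return r;
}

Mat4 scale(float s) {
    Mat4 r = identity();
    r.m[0] = r.m[5] = r.m[10] = s;
    return r;
}

Mat4 rotateY(float radians) {                // rotation about the Y axis
    Mat4 r = identity();
    float c = std::cos(radians), s = std::sin(radians);
    r.m[0] = c;  r.m[2] = -s;
    r.m[8] = s;  r.m[10] = c;
    return r;
}

int main() {
    // Place a chair model: shrink it, turn it 90 degrees, move it into the room.
    Mat4 model = multiply(translate(2.0f, 0.0f, -3.0f),
                          multiply(rotateY(3.14159265f / 2.0f), scale(0.5f)));
    std::printf("model translation = %.2f %.2f %.2f\n",
                model.m[12], model.m[13], model.m[14]);  // (2, 0, -3)
}
```

Applying `model` to a vertex expressed in object space yields that vertex's position in world space.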
Eye Space:
Finally, you need to view your scene from a particular viewpoint (the "eye"). In the coordinate system known as eye space (or view space), the eye is located at the origin. Following standard conventions, you orient the scene so that the eye looks down the Z axis, with the "up" direction typically being the positive Y axis.
View Transformation:
The view transformation maps positions from world space into eye space. A typical view transformation combines a translation that moves the eye's position in world space to the origin of eye space with a rotation that orients the eye appropriately. By doing this, the view transformation defines the position and orientation of the viewpoint.
We usually combine the two matrices that represent the modeling and view transformations into a single matrix known as the modelview matrix. You can combine them simply by multiplying the view matrix by the modeling matrix.
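As a small illustration (assuming column vectors, so v' = M * v, which means modelview = view * model), the sketch below combines a view transform that is a pure translation with a simple modeling translation. All names are illustrative.

```cpp
#include <cstdio>

struct Mat4 { float m[16]; };  // column-major, as in OpenGL

Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int c = 0; c < 4; ++c)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                r.m[c * 4 + row] += a.m[k * 4 + row] * b.m[c * 4 + k];
    return r;
}

int main() {
    // Eye sits at (0, 0, 5) looking down the negative Z axis, so the view
    // transform is just a translation by (0, 0, -5); no rotation needed here.
    Mat4 view  = {{1,0,0,0,  0,1,0,0,  0,0,1,0,  0,0,-5,1}};
    // A modeling matrix that moves an object to x = 2 in world space.
    Mat4 model = {{1,0,0,0,  0,1,0,0,  0,0,1,0,  2,0,0,1}};
    Mat4 modelview = mul(view, model);       // view * model, column vectors
    // The object's origin ends up at (2, 0, -5) in eye space:
    std::printf("eye-space position: (%.0f, %.0f, %.0f)\n",
                modelview.m[12], modelview.m[13], modelview.m[14]);
}
```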
Clip Space:
Once positions are in eye space, the next step is to determine which positions will actually be visible in the final rendered image. The coordinate system after eye space is known as clip space, and coordinates in this space are called clip coordinates.
Projection Transformation:
The transformation from eye-space coordinates to clip-space coordinates is known as the projection transformation. The projection transformation defines a view frustum, which represents the region of eye space in which objects are visible. Only polygons, line segments, and points that lie within the view frustum are potentially visible when rasterized into an image.
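Below is a hedged sketch of building an OpenGL-style perspective projection matrix, in the spirit of gluPerspective (field of view, aspect ratio, near and far planes). The matrix layout follows the common OpenGL column-major convention; the function name is mine, not a library call.

```cpp
#include <cmath>
#include <cstdio>

void perspective(float fovyRadians, float aspect, float zNear, float zFar,
                 float out[16]) {
    float f = 1.0f / std::tan(fovyRadians / 2.0f);
    for (int i = 0; i < 16; ++i) out[i] = 0.0f;
    out[0]  = f / aspect;                          // x scale
    out[5]  = f;                                   // y scale
    out[10] = (zFar + zNear) / (zNear - zFar);     // maps z into [-1, 1]
    out[14] = (2.0f * zFar * zNear) / (zNear - zFar);
    out[11] = -1.0f;                               // clip-space w = -z_eye
}

int main() {
    float p[16];
    perspective(3.14159265f / 3.0f, 16.0f / 9.0f, 0.1f, 100.0f, p);
    std::printf("p[0]=%.3f p[5]=%.3f p[10]=%.3f p[14]=%.3f\n",
                p[0], p[5], p[10], p[14]);
}
```

Note that this matrix makes the clip-space w component equal to the negated eye-space z, which is exactly what the perspective division below relies on.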
Normalized Device Coordinates:
Clip coordinates are in the homogeneous form <x, y, z, w>, but we need to compute a two-dimensional position (an x and y pair) along with a depth value (the depth value is used for depth buffering, a hardware-accelerated method of rendering visible surfaces).
Perspective Division:
Dividing x, y, and z by w accomplishes this. The resulting coordinates are called normalized device coordinates. All the visible geometric data now lies within the range [-1, 1] in each dimension.
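A tiny sketch of the perspective divide itself, assuming OpenGL-style clip coordinates (the example values are made up):

```cpp
#include <cstdio>

struct Clip { float x, y, z, w; };

int main() {
    Clip c = {2.0f, -1.0f, 4.5f, 5.0f};     // example clip coordinates
    float ndcX = c.x / c.w;                 // -1..1 across the viewport
    float ndcY = c.y / c.w;
    float ndcZ = c.z / c.w;                 // -1..1 depth (OpenGL convention)
    std::printf("NDC = (%.2f, %.2f, %.2f)\n", ndcX, ndcY, ndcZ);
}
```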
Window Coordinates:
The final step takes the normalized device coordinates of each vertex and converts them into the final coordinate system, in which x and y are measured in pixels. This step is called the viewport transformation, and it feeds the rasterizer of the graphics processor. The rasterizer then forms points, line segments, or polygons from the vertices and generates the fragments that determine the final image. A further transformation, called the depth-range transformation, scales each vertex's z value into the range used by the depth buffer for depth testing.
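A small sketch of both transforms (all names illustrative): the viewport transform maps NDC x and y into an 800x600 window, and the depth-range transform maps NDC z into OpenGL's default [0, 1] depth range.

```cpp
#include <cstdio>

void viewportTransform(float ndcX, float ndcY, float ndcZ,
                       int vpX, int vpY, int vpW, int vpH,
                       float& winX, float& winY, float& depth) {
    winX  = vpX + (ndcX * 0.5f + 0.5f) * vpW;   // pixels across
    winY  = vpY + (ndcY * 0.5f + 0.5f) * vpH;   // pixels up
    depth = ndcZ * 0.5f + 0.5f;                 // depth range [0, 1]
}

int main() {
    float x, y, d;
    viewportTransform(0.0f, 0.0f, 0.25f, 0, 0, 800, 600, x, y, d);
    std::printf("window = (%.0f, %.0f), depth = %.3f\n", x, y, d);  // center
}
```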
2. Primitive Assembly and Rasterization
The transformed vertex stream is delivered in sequence to the next stage, called primitive assembly and rasterization. First, the primitive assembly step assembles vertices into geometric primitives according to the primitive classification information that accompanies the vertex sequence. This produces a sequence of triangles, line segments, and points. These primitives must be clipped to the view frustum (the visible region of 3D space) and to any enabled application-specified clip planes. The rasterizer may also discard polygons based on whether they face forward or backward; this process is known as culling. Polygons that survive clipping and culling must then be rasterized. Rasterization is the process of determining which pixels are covered by a geometric primitive. Polygons, line segments, and points are each rasterized according to rules specified for that kind of primitive. The results of rasterization are a set of pixel locations and a set of fragments. After rasterization, the number of vertices a primitive has bears no relation to the number of fragments generated; for example, a triangle made up of just three vertices could occupy the entire screen and therefore generate millions of fragments. This is where the distinction between fragments and pixels becomes important.
Terminology:
Pixel: A pixel (short for "picture element") represents the contents of the frame buffer at a specific location, such as the color, depth, and any other values associated with that location.
Fragment: A fragment is the state needed to potentially update a particular pixel. The term "fragment" is used because rasterization breaks up each geometric primitive, such as a triangle, into pixel-sized fragments for each pixel the primitive covers. A fragment has an associated pixel location, a depth value, and a set of interpolated parameters such as a color, a secondary (specular) color, and one or more sets of texture coordinates. These interpolated parameters are derived from the transformed vertices that make up the geometric primitive from which the fragments were generated. You can think of a fragment as a potential pixel: if a fragment passes the various rasterization tests (discussed under raster operations), the fragment updates a pixel in the frame buffer.
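As one concrete (and much simplified) picture of how rasterization generates fragments, the sketch below tests each pixel center of a small grid against a triangle's three edge functions. The edge-function approach is one common technique, not a description of any particular GPU, and real rasterizers add precise fill rules for pixels lying exactly on shared edges.

```cpp
#include <cstdio>

float edge(float ax, float ay, float bx, float by, float px, float py) {
    // Signed-area test: > 0 when p lies to the left of edge a->b (CCW winding).
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

int main() {
    // A counter-clockwise triangle in window coordinates.
    float x0 = 1, y0 = 1, x1 = 8, y1 = 2, x2 = 4, y2 = 7;
    int fragments = 0;
    for (int py = 0; py < 10; ++py)
        for (int px = 0; px < 10; ++px) {
            float cx = px + 0.5f, cy = py + 0.5f;   // sample at pixel center
            if (edge(x0, y0, x1, y1, cx, cy) >= 0 &&
                edge(x1, y1, x2, y2, cx, cy) >= 0 &&
                edge(x2, y2, x0, y0, cx, cy) >= 0)
                ++fragments;                         // pixel is covered
        }
    std::printf("the triangle generated %d fragments\n", fragments);
}
```

Three vertices go in; a whole set of fragments comes out, which is exactly the vertex-count/fragment-count decoupling described above.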
3. Interpolation, Texturing, and Coloring
After a primitive has been rasterized into zero or more fragments, the interpolation, texturing, and coloring stage interpolates the fragment parameters as needed, performs a sequence of texturing and math operations, and determines a final color for each fragment. In addition to determining a final color, this stage can also determine a new depth, or even discard the fragment so that the corresponding pixel in the frame buffer is not updated. Because the stage may discard fragments, it emits one or zero colored fragments for each input fragment it receives.
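A minimal sketch of attribute interpolation: a fragment's value for an attribute is a weighted blend of the three vertex values, here using barycentric weights. The specific weights and colors below are assumed for illustration.

```cpp
#include <cstdio>

int main() {
    // Red channel written at the triangle's three vertices.
    float r0 = 1.0f, r1 = 0.0f, r2 = 0.5f;
    // Barycentric weights of one fragment inside the triangle (they sum to 1).
    float w0 = 0.2f, w1 = 0.3f, w2 = 0.5f;
    float red = w0 * r0 + w1 * r1 + w2 * r2;    // interpolated attribute
    std::printf("fragment red = %.2f\n", red);  // 0.45
}
```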
4. Raster Operations
Raster operations are the final sequence of per-fragment operations performed just before updating the frame buffer. These operations are a standard part of OpenGL and Direct3D. During this stage, hidden surfaces are eliminated through a process known as depth testing. Other effects, such as blending and stencil-based shadowing, also occur at this stage. The raster operations stage checks each fragment against a number of tests, including the scissor, alpha, stencil, and depth tests. These tests involve the fragment's final color or depth, the pixel's location, and per-pixel values such as the pixel's depth value and stencil value. If any test fails, the fragment is discarded at this stage and the pixel's color value is not updated (although a stencil write operation may still occur). If the fragment passes the depth test, the pixel's depth value may be replaced with the fragment's depth value. After the tests, a blending operation combines the fragment's final color with the corresponding pixel's color. Finally, a frame buffer write operation replaces the pixel's color with the blended color. Figure 5 shows that the raster operations stage is itself a pipeline; in fact, all of the previously described stages can be broken down further into sub-pipelines.
Figure 5: Standard OpenGL and Direct3D raster operations
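The sketch below models just two of these operations in code: the depth test and a source-over alpha blend, assuming smaller depth values are nearer. A real pipeline runs the other tests (scissor, alpha, stencil) in a similar per-fragment fashion; struct and function names here are illustrative.

```cpp
#include <cstdio>

struct Pixel    { float r, g, b, depth; };
struct Fragment { float r, g, b, a, depth; };

void rasterOps(Pixel& px, const Fragment& f) {
    if (f.depth >= px.depth) return;          // depth test failed: discard
    px.depth = f.depth;                       // depth write
    px.r = f.a * f.r + (1 - f.a) * px.r;      // blend: src-alpha,
    px.g = f.a * f.g + (1 - f.a) * px.g;      //        one-minus-src-alpha
    px.b = f.a * f.b + (1 - f.a) * px.b;
}

int main() {
    Pixel px = {0, 0, 0, 1.0f};               // cleared frame buffer pixel
    Fragment f = {1, 0, 0, 0.5f, 0.4f};       // translucent red fragment
    rasterOps(px, f);
    std::printf("pixel = (%.2f, %.2f, %.2f), depth = %.2f\n",
                px.r, px.g, px.b, px.depth);  // (0.50, 0.00, 0.00), 0.40
}
```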
5. Visualizing the Graphics Pipeline
Figure 6 depicts the stages of the graphics pipeline. In this figure, two triangles are rasterized. The process begins with the transformation and coloring of vertices. Next, the primitive assembly step creates triangles from the vertices, as the dashed lines indicate. After this, the rasterizer fills the triangles with fragments. Finally, the values from the vertices are interpolated and then used for texturing and coloring. Notice that many fragments are generated from just a few vertices.
Figure 6: Visualizing the graphics pipeline
The Programmable Graphics Pipeline
The most significant trend in today's graphics hardware design is the increasing programmability offered within the graphics processor. Figure 7 shows the vertex processor and the fragment (pixel) processor within the pipeline of a programmable graphics processor. Figure 7 shows more detail than Figure 2, but more importantly, it shows vertex and fragment processing broken out into programmable units. The programmable vertex processor and the programmable fragment processor are the hardware units in graphics hardware that execute vertex shaders and pixel shaders, respectively.
Figure 7: The programmable graphics pipeline
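To suggest what a programmable vertex processor does, here is a CPU-side sketch of a "vertex program" that transforms a position by a modelview-projection matrix and passes a color through for later interpolation. On real hardware this would be written in a shading language such as Cg, and every name here is illustrative.

```cpp
#include <cstdio>

struct VertexIn  { float pos[4]; float color[3]; };
struct VertexOut { float clipPos[4]; float color[3]; };

// "Vertex shader": multiply the position by a column-major MVP matrix
// and pass the per-vertex color through unchanged.
VertexOut vertexProgram(const VertexIn& in, const float mvp[16]) {
    VertexOut out{};
    for (int row = 0; row < 4; ++row)
        for (int k = 0; k < 4; ++k)
            out.clipPos[row] += mvp[k * 4 + row] * in.pos[k];
    for (int i = 0; i < 3; ++i) out.color[i] = in.color[i];
    return out;
}

int main() {
    float identityMVP[16] = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
    VertexIn v = {{0.5f, 0.5f, 0.0f, 1.0f}, {1.0f, 0.0f, 0.0f}};
    VertexOut o = vertexProgram(v, identityMVP);
    std::printf("clip pos = (%.1f, %.1f, %.1f, %.1f)\n",
                o.clipPos[0], o.clipPos[1], o.clipPos[2], o.clipPos[3]);
}
```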
References:
1. The Cg Tutorial
2. OpenGL Programming Guide
3. Web resources
PS: http://blog.csdn.net/shenzi/archive/2010/03/25/5417488.aspx