Graphics Rendering Pipeline

This article records reading notes on "Real-Time Rendering".

The graphics rendering pipeline consists of three stages: the application stage, the geometry stage, and the rasterization stage.

1. Application stage

The application stage is driven by the application itself: it is implemented in software and runs on the CPU. Depending on the application, this stage may include collision detection, global acceleration algorithms, animation, physics simulation, and so on.
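As a rough illustration, the application stage can be pictured as a per-frame loop on the CPU. The sketch below is a minimal, hypothetical frame driver; updateAnimation, updatePhysics, and submitDrawCalls are placeholder names for this example, not a real engine API:

```cpp
#include <cstdio>

// Hypothetical per-frame driver for the application stage; the function
// names below are placeholders, not a real engine API.
void updateAnimation(double dt) { /* advance keyframe/skeletal animation */ }
void updatePhysics(double dt)   { /* integrate bodies, run collision detection */ }
void submitDrawCalls()          { /* hand geometry and state to the GPU pipeline */ }

int main() {
    const double dt = 1.0 / 60.0;              // fixed timestep for simplicity
    for (int frame = 0; frame < 3; ++frame) {  // a real application loops until quit
        updateAnimation(dt);                   // CPU-side work: animation
        updatePhysics(dt);                     // CPU-side work: physics/collision
        submitDrawCalls();                     // feed the geometry stage
        std::printf("frame %d submitted\n", frame);
    }
    return 0;
}
```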

2. Geometry stage

The geometry stage is responsible for per-polygon and per-vertex operations. It can usually be divided into the following functional stages: model and view transform, vertex shading, projection, clipping, and screen mapping.

Model transform: Each model has its own coordinate system (model space). To place a model in the scene, it must be moved into the world coordinate system, and this is done through the model transform. Each model has a model transform associated with it, which positions and orients the model appropriately in world space.

View transform: Only the models within the camera's view are rendered, and the camera itself is placed in the world coordinate system. To simplify the later clipping and projection steps, the scene is transformed from world space into camera space: after the view transform, the camera sits at the origin, the x axis points to the right, the y axis points upward, and the camera looks down the negative z axis (some conventions use +z instead).

Vertex shading: The operation that determines the effect light produces on a material is called shading. It involves evaluating a shading equation at the vertices, and the results are later used during rasterization.

Projection: Transforms the view volume into the unit cube with corner coordinates (-1,-1,-1) and (1,1,1), which is called the canonical view volume. There are two common kinds of projection: orthographic and perspective. The coordinate system after projection is called normalized device coordinates. Although this process transforms one space into another, the term projection is used because, after display, the z coordinate is no longer stored in the image (it is stored in the z-buffer instead); seen this way it is a transformation from three dimensions to two, even though the post-projection coordinates are in fact still three-dimensional.

Clipping: Only primitives that lie wholly or partly inside the view volume are passed on to the rasterization stage and eventually drawn on the screen. A primitive whose vertices are all inside the view volume is passed on unchanged; a primitive whose vertices are all outside is discarded; so only primitives with some of their vertices inside the view volume need to be clipped. Unlike the programmable stages, this stage is usually performed by fixed-function hardware.

Screen mapping: After this stage the coordinates change from three dimensions to two: the x and y coordinates become coordinates on the screen (the z coordinate is kept for the z-buffer). Before DX10, pixel centers sat at integer coordinates ("0.0"); from DX10 on, and in OpenGL, pixel centers sit at half-integer coordinates ("0.5"). In addition, OpenGL places the origin at the lower-left corner of the screen, while DirectX places it at the upper-left corner.

3. Rasterization stage

Given the transformed and projected vertices and their associated shading data, the goal of rasterization is to compute the color of the pixels covered by each object. This stage can also be divided into several functional stages: triangle setup, triangle traversal, pixel shading, and merging.

Triangle setup: Differentials and other data for the triangle's surface are computed here. This is done by fixed-function hardware.

Triangle traversal: This stage checks which pixels are covered by the triangle, and a fragment is generated for each covered pixel. The fragment's properties are usually generated by interpolating the data of the triangle's three vertices. (Minimal code sketches for several of the stages above follow.)
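To make the view transform concrete, here is a minimal sketch of building a look-at view matrix that places the camera at the origin with x pointing right, y pointing up, and the view direction along -z. The row-major float[16] layout and the lookAt signature are choices made for this example, not a standard API:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return {v.x/len, v.y/len, v.z/len};
}

// Build a right-handed view matrix (camera at origin, looking down -z),
// stored row-major as float[16].
void lookAt(Vec3 eye, Vec3 target, Vec3 up, float m[16]) {
    Vec3 f = normalize(sub(target, eye)); // forward
    Vec3 r = normalize(cross(f, up));     // right   (+x)
    Vec3 u = cross(r, f);                 // true up (+y)
    float rows[16] = {
        r.x,  r.y,  r.z,  -dot(r, eye),
        u.x,  u.y,  u.z,  -dot(u, eye),
       -f.x, -f.y, -f.z,   dot(f, eye),   // camera looks down -z
        0,    0,    0,     1
    };
    for (int i = 0; i < 16; ++i) m[i] = rows[i];
}

int main() {
    float m[16];
    lookAt({0, 0, 5}, {0, 0, 0}, {0, 1, 0}, m);  // camera 5 units in front of origin
    for (int i = 0; i < 4; ++i)
        std::printf("%6.2f %6.2f %6.2f %6.2f\n", m[4*i], m[4*i+1], m[4*i+2], m[4*i+3]);
}
```

With this matrix, the world origin maps to (0, 0, -5) in camera space, i.e. 5 units in front of the camera, as expected.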
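As a toy example of vertex shading, the sketch below evaluates a simple Lambert (diffuse-only) shading equation at a single vertex. The albedo and light parameters are assumed inputs for illustration; real shading equations are usually far richer:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return {v.x/len, v.y/len, v.z/len};
}

// Evaluate a Lambert (diffuse-only) shading equation at one vertex:
// color = albedo * lightColor * max(0, n . l)
Vec3 shadeVertex(Vec3 normal, Vec3 lightDir, Vec3 albedo, Vec3 lightColor) {
    float ndotl = std::max(0.0f, dot(normalize(normal), normalize(lightDir)));
    return {albedo.x * lightColor.x * ndotl,
            albedo.y * lightColor.y * ndotl,
            albedo.z * lightColor.z * ndotl};
}

int main() {
    // A vertex whose normal points straight at the light receives full diffuse.
    Vec3 c = shadeVertex({0,1,0}, {0,1,0}, {0.8f,0.2f,0.2f}, {1,1,1});
    std::printf("vertex color = (%.2f, %.2f, %.2f)\n", c.x, c.y, c.z);
}
```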
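For the projection stage, the following sketch builds an OpenGL-style perspective matrix that maps the view frustum into the canonical view volume [-1,1]^3, then checks that a point on the near plane lands at z = -1 after the perspective divide. The row-major layout matches the earlier sketch:

```cpp
#include <cmath>
#include <cstdio>

// OpenGL-style perspective matrix: maps the view frustum to the canonical
// view volume [-1,1]^3. Row-major float[16]; fovY is in radians.
void perspective(float fovY, float aspect, float n, float f, float m[16]) {
    float t = 1.0f / std::tan(fovY * 0.5f);
    float vals[16] = {
        t / aspect, 0, 0,                 0,
        0,          t, 0,                 0,
        0,          0, (f + n) / (n - f), 2*f*n / (n - f),
        0,          0, -1,                0
    };
    for (int i = 0; i < 16; ++i) m[i] = vals[i];
}

int main() {
    float m[16];
    perspective(3.14159265f / 3.0f, 16.0f / 9.0f, 0.1f, 100.0f, m);
    // A point on the near plane (view-space z = -0.1, camera looks down -z)
    // should map to z = -1 after the perspective divide.
    float zView = -0.1f;
    float clipZ = m[10] * zView + m[11];  // third row applied to (0, 0, zView, 1)
    float clipW = -zView;                 // fourth row: w' = -z_view
    std::printf("near-plane NDC z = %.2f\n", clipZ / clipW);  // prints -1.00
}
```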
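Screen mapping itself is a small affine transform from normalized device coordinates to window coordinates. The sketch below assumes the DX10+/OpenGL convention described above, in which pixel (x, y) covers the unit square [x, x+1) x [y, y+1) and therefore has its center at (x + 0.5, y + 0.5):

```cpp
#include <cstdio>

// Map NDC x,y in [-1,1] to window coordinates for a width x height viewport.
// Under the half-integer convention, pixel (0,0) has center (0.5, 0.5).
void screenMap(float xNdc, float yNdc, int width, int height,
               float& xScreen, float& yScreen) {
    xScreen = (xNdc + 1.0f) * 0.5f * width;
    yScreen = (yNdc + 1.0f) * 0.5f * height;
}

int main() {
    float sx, sy;
    screenMap(0.0f, 0.0f, 1280, 720, sx, sy);  // center of NDC space
    std::printf("NDC (0,0) -> screen (%.1f, %.1f)\n", sx, sy);  // (640.0, 360.0)
}
```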
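And for triangle traversal, one common approach (used here purely as an illustration, not necessarily what any particular GPU does) is to test pixel centers against the triangle's three edge functions and emit a fragment for each covered pixel:

```cpp
#include <cstdio>

// Edge function: positive when p lies to the left of edge a->b (CCW winding).
float edge(float ax, float ay, float bx, float by, float px, float py) {
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

int main() {
    // A small screen-space triangle (counterclockwise winding), traversed
    // over a 10x10 pixel region.
    float x0 = 2, y0 = 1,  x1 = 9, y1 = 2,  x2 = 4, y2 = 8;
    for (int y = 0; y < 10; ++y) {
        for (int x = 0; x < 10; ++x) {
            float px = x + 0.5f, py = y + 0.5f;  // sample at pixel centers
            bool inside = edge(x0, y0, x1, y1, px, py) >= 0 &&
                          edge(x1, y1, x2, y2, px, py) >= 0 &&
                          edge(x2, y2, x0, y0, px, py) >= 0;
            std::putchar(inside ? '#' : '.');    // '#': a fragment is generated
        }
        std::putchar('\n');
    }
}
```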
Pixel shading: Uses the interpolated shading data as input and outputs one or more color values for the next stage. This stage runs as a program on the GPU, and the most common technique employed here is texturing. Merging: The color value of each pixel is stored in the color buffer, and this stage merges the color produced by pixel shading with the color already stored there. The stage is not fully programmable, but it is highly configurable and can produce many different effects. It is also responsible for resolving visibility, via the z-buffer. Besides the color buffer and z-buffer, other buffers work at this stage as well, such as the alpha channel, which is used for the alpha test, and the stencil buffer, which records the locations of rendered primitives.
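As a sketch of how the merge stage resolves visibility, the snippet below implements a bare z-buffer test: a fragment's color is written only when its depth is closer than the value already stored. The smaller-is-closer depth convention and the tiny 4x1 framebuffer are assumptions of this example:

```cpp
#include <cstdio>

// Resolve visibility with a z-buffer: a fragment replaces the stored color
// only if it is closer than what the buffer already holds.
const int W = 4, H = 1;
float zbuf[W * H];
unsigned color[W * H];

void writeFragment(int x, int y, float z, unsigned rgba) {
    int i = y * W + x;
    if (z < zbuf[i]) {       // depth test (smaller z = closer, by assumption)
        zbuf[i] = z;
        color[i] = rgba;     // depth test passed: merge the color
    }
}

int main() {
    for (int i = 0; i < W * H; ++i) { zbuf[i] = 1.0f; color[i] = 0; }  // clear
    writeFragment(1, 0, 0.8f, 0xff0000ffu);  // far fragment: written
    writeFragment(1, 0, 0.3f, 0x00ff00ffu);  // nearer fragment: overwrites it
    writeFragment(1, 0, 0.6f, 0x0000ffffu);  // behind: fails the depth test
    std::printf("pixel (1,0): color=%08x depth=%.2f\n", color[1], zbuf[1]);
}
```

Note that the fragments arrive in arbitrary order, yet the nearest one wins; that is exactly the visibility guarantee the z-buffer provides.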
