Concept of 3D graphics and the rendering pipeline


Fundamentals of GPU and shader technology (an 8-part series)

Http://www.opengpu.org/forum.php?mod=viewthread&tid=7376&extra=page%3D1

Http://www.opengpu.org/bbs/forum.php?mod=viewthread&tid=7550&extra=page%3D1

Concept and rendering pipeline of 3D graphics (Render Pipeline)

The history of 3D graphics was covered earlier; the next step is to explain how 3D graphics are actually processed.

Flowchart of the 3D graphics pipeline

Figure 1 is a process model for 3D graphics. It corresponds to a DirectX 10 / SM4.0 class GPU; depending on the GPU, some stages are broken down more finely and some are omitted.

First, why does 3D graphics processing take this particular form? Partly because, over the not-so-long history of 3D graphics, this flow has proven the smoothest way to handle the work, and more importantly because it is the easiest for GPU designers to implement. The flow differs little between Direct3D and OpenGL.


Figure 1 Rendering process inside the GPU

The 3D graphics processing handled by the CPU = the game engine?

Stages [1] and [2] in Figure 1 are mainly processed on the CPU.

Placing 3D objects in the scene, and moving and re-posing them: because these two tasks are so closely related, the part of the system that handles them is generally called the [game engine].
    
In the game engine, keyboard, mouse, and game-controller input drives the 3D characters; collision detection decides whether a shot hits an enemy; and physics simulation is applied to the 3D characters according to the collision results. All of this is game logic and, in that sense, belongs to the same [game engine] part.

In addition, for stage [2], if a DirectX 10 / SM4.0 GPU with a Geometry Shader is available, some of this work can be done on the GPU: point sprites such as particles and billboards can be created and destroyed with the Geometry Shader, letting the GPU take part in this processing. Even so, in typical 3D game processing this stage is still handled by the CPU.

The vertex pipeline and the Vertex Shader: what is a coordinate system?

The part outlined in red in the figure, stages [3][4][5][6], is the vertex pipeline, which handles vertex-related processing.

Processing from here onward is usually done inside the GPU. However, in chipsets with integrated graphics, the so-called [unified chipsets], the internal logic is simplified to reduce cost, and systems exist in which this vertex pipeline is handed back to the CPU and emulated in software.

Until fairly recently, the vertex pipeline was often called [geometry processing]. Geometry here simply means [geometric figures]. In high-school mathematics, or in [algebra/geometry] courses, you learn about [vectors] and linear maps (linear transformations); this is exactly that world. As a bit of trivia, the name of NVIDIA's GPU family, GeForce, is a coined word abbreviating "Geometric Force" (the power of geometry), and it also puns on [G-force, gravity].

Returning to the topic: a [3-dimensional vector] in 3D graphics can simply be thought of as a "direction" in [three-dimensional space]. These "directions" are expressed as coordinate values along the three axes X, Y, Z, and the frame of reference built from them is called a [coordinate system].

"Local coordinate system", if the specific description is for a 3D character, set as the datum coordinate system, the 3D role is the direction of the 3D role is the datum coordinate system, by processing [direction is where], the control of the person will be very easy, so take advantage of the concept of local coordinate system.

Incidentally, since most 3D characters have arms and legs, it is easier to control the joints if each joint has its own local coordinate system. In that case, however, the local coordinate systems form a multi-layered (hierarchical) structure, and the final processing becomes harder to follow.

Next, a coordinate system that governs the whole 3D space is needed: the [world coordinate system]. When the vertex pipeline of a 3D graphics system processes vertices, transformations from a local coordinate system into the world coordinate system occur again and again.
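As a minimal illustration of this idea (my own sketch, not code from the article; the rotation angle and position are arbitrary example values), the following Python/NumPy snippet transforms a vertex given in a character's local coordinate system into the world coordinate system with a 4x4 matrix:

    import numpy as np

    def local_to_world(yaw_radians, world_position):
        """Build a 4x4 matrix that rotates a character's local axes
        around the Y (up) axis and then places it in the world."""
        c, s = np.cos(yaw_radians), np.sin(yaw_radians)
        return np.array([
            [  c, 0.0,   s, world_position[0]],
            [0.0, 1.0, 0.0, world_position[1]],
            [ -s, 0.0,   c, world_position[2]],
            [0.0, 0.0, 0.0, 1.0],
        ])

    # A vertex on the character, expressed in its local coordinate system.
    vertex_local = np.array([0.0, 1.8, 0.5, 1.0])   # x, y, z, w (homogeneous)

    # Example values: character turned 90 degrees, standing at (10, 0, -3).
    world = local_to_world(np.pi / 2, (10.0, 0.0, -3.0))
    vertex_world = world @ vertex_local
    print(vertex_world[:3])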

The unit that transforms each vertex's coordinates according to a shader program is [3], the [Vertex Shader]. With shader programming, unique and special coordinate-system transformations can also be performed.

 
Figure 2 Conceptual diagram of the coordinate system

Another job of the Vertex Shader: shading in vertex space



The work of the Vertex Shader at [3] in Figure 1 is not just coordinate transformation; another important job is shading, that is, illumination processing (lighting), in vertex space.

" coordinate transformation" is the "mathematical" feeling, the so-called "computational" impression is easy to imagine and understand. However, in the computer to do light, that is, "light exposure", such an impression is difficult to imagine what is going on. Of course, the GPU is not a camera but a computer, it is not possible to directly shoot the effect of light after the photo, so it needs to be calculated to obtain.

When light hits an object, it is reflected, diffused, and absorbed there. If the object has a color or pattern, you may see that color; if the light itself is colored, you see the combination of the light color and the object's color or pattern. The basic idea of computer graphics is to obtain such results by computation.

How can this be turned into the kind of calculation a computer is good at? In practice, vector arithmetic is used.

Three vectors are used: the [light vector], giving the direction of the light; the [view vector], giving the direction of the line of sight; and the [normal vector], giving the orientation of the vertex of the polygon being lit. These vectors are then plugged into a reflection equation that expresses how light is reflected depending on the light and view directions.

By choosing the reflection equation, all sorts of materials can be expressed; writing the reflection equation as a program is what a vertex shader program does. The unit that actually executes this per-vertex reflection equation is the Vertex Shader.
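As a rough sketch of what such a reflection equation looks like (my own illustration, not code from the article; a simple Lambert diffuse term plus a Blinn-Phong style specular term stands in for whatever model the shader author actually chooses), the three vectors feed a small function like this:

    import numpy as np

    def normalize(v):
        return v / np.linalg.norm(v)

    def shade(normal, to_light, to_eye, base_color,
              light_color=np.array([1.0, 1.0, 1.0]), shininess=32.0):
        """Evaluate a simple reflection equation for one vertex (or pixel).
        normal, to_light, to_eye: the normal, light, and view vectors."""
        n, l, e = normalize(normal), normalize(to_light), normalize(to_eye)
        # Diffuse term: how directly the light hits the surface.
        diffuse = max(np.dot(n, l), 0.0)
        # Specular term (Blinn-Phong): highlight around the half vector.
        h = normalize(l + e)
        specular = max(np.dot(n, h), 0.0) ** shininess
        return base_color * light_color * diffuse + light_color * specular

    color = shade(normal=np.array([0.0, 1.0, 0.0]),
                  to_light=np.array([0.3, 1.0, 0.2]),
                  to_eye=np.array([0.0, 1.0, 1.0]),
                  base_color=np.array([0.8, 0.2, 0.2]))
    print(color)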

In the vertex shader, besides shading in vertex space, the texture (Texture) coordinates of the polygon can also be computed. Computing texture coordinates means working out how the material's texture will be pasted onto the polygon. The actual texture mapping (Texture Mapping) is done in [8][9], the Pixel Shader; here only the preparation for texture mapping takes place.



Example of the Vertex Shader at work: using the Vertex Shader to express refraction.

The Geometry Shader: a powerful unit that can add and remove vertices

In GPUs of DirectX 9 / SM3.0 and earlier there is no Geometry Shader: the vertex information of a 3D model is prepared ahead of time on the CPU side and then fed into the GPU, and the GPU cannot freely add or remove vertices.

The shader introduced to break this "fundamental limit", one that can freely add and remove vertices, is the [Geometry Shader].

With a shader program, the Geometry Shader can be directed to add or remove vertex information. Because what is actually added or removed is a set of vertices, elements such as line segments, polygons, and particles can also be added or removed.

Various uses of the Geometry Shader have been devised. Since polygons can be generated freely, the most basic uses are things like growing grass polygons on the ground or growing hair on a 3D character. In games it can also be used to generate effects that need no interactive logic, such as sparks.
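Purely as a conceptual illustration of this kind of geometry amplification (my own CPU-side Python sketch, not shader code from the article), the function below takes one ground triangle and emits extra "grass blade" primitives that did not exist in the input vertex data:

    import numpy as np

    def grow_grass(triangle, blade_height=0.4):
        """Conceptual stand-in for a Geometry Shader that amplifies geometry:
        for one ground triangle, emit extra line-segment 'grass blades',
        one at each corner and one at the centroid."""
        triangle = np.asarray(triangle, dtype=float)
        roots = list(triangle) + [triangle.mean(axis=0)]
        up = np.array([0.0, blade_height, 0.0])
        # Each blade is a new primitive that was not in the input vertex data.
        return [(root, root + up) for root in roots]

    ground_triangle = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)]
    for base, tip in grow_grass(ground_triangle):
        print(base, "->", tip)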

"maxima liver Note: Geometry shader is not as good as thought, or propaganda." Probably because of the cost or other reasons, Geometry shader is usually implemented in display driver, that is, the CPU is responsible for computing, when re-return to the GPU VS, the impact on the pipeline is very large, so Geometry The actual performance of shader is not high, even very low. "

In general, things like the following become possible.

Vertices generated by the Geometry Shader can be sent back to the Vertex Shader, so the returned vertices can be processed again. For example, something that normally cannot be done, taking a low-polygon 3D model and producing a smoother, higher-polygon model by interpolation in the Geometry Shader, is theoretically feasible.


The final stages of the vertex pipeline

[5][6] are the final stages of the vertex pipeline, responsible for the last preparations before drawing.

At [5], coordinates in the world coordinate system are further transformed into the camera's viewpoint coordinate system. This corresponds to deciding the camera's composition and lens when taking a photo, and this series of steps is generally called the [perspective transformation].

Since 3D graphics only needs to draw what is captured within the field of view, once [5] is finished, processing proceeds mainly in terms of this view space.
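For illustration (function names and values are mine, not the article's; the matrix below assumes an OpenGL-style right-handed view space with clip-space Z in [-1, 1]), this is roughly what the perspective transformation does to a point already expressed in camera space:

    import numpy as np

    def perspective(fov_y_radians, aspect, near, far):
        """One common form of perspective projection matrix."""
        f = 1.0 / np.tan(fov_y_radians / 2.0)
        return np.array([
            [f / aspect, 0.0, 0.0, 0.0],
            [0.0, f, 0.0, 0.0],
            [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
            [0.0, 0.0, -1.0, 0.0],
        ])

    point_view = np.array([1.0, 0.5, -5.0, 1.0])    # a point in camera space
    clip = perspective(np.radians(60.0), 16 / 9, 0.1, 100.0) @ point_view
    ndc = clip[:3] / clip[3]                         # perspective divide
    print(ndc)   # x, y fall in [-1, 1] if visible; z becomes the depth value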

At [6], polygons that do not need to be drawn are identified and removed before entering the pixel pipeline, which does the actual drawing; this is the clipping and culling stage.

[Clipping] removes the polygons of a 3D model that lie completely outside the field of view; if only part of a polygon is inside the field of view, the polygon is split so that only the in-view part remains.

"Maxima liver Note: In order to avoid expensive view Frustum Clipping, once this happens, the cost is:
Extra vertices produced,costing more bandwidth
CPU cost for interpolation of X, Y, Z, u,v,color,specular,alpha and Fog
breaking up of strips and fans
Poor Vertex locality of new Vertices,which hurts CPU and vertex cache coherency
please refer to: Guard Band. "
  

"Back culling", that is, there is no vertex toward the direction of the point of view, theoretically from the viewpoint should be invisible to the multilateral type to be removed. However, in cases where transparent objects are involved, such treatment can sometimes lead to uncoordinated problems.

"Maxima liver: that is, independent Transparency, from the point of view when the model, may occur due to the vertex in the order of reasons, the back of the vertex in front of the vertex is drawn before the problem, so alpha blend is not correct. "

The rasterizer (Rasterizer), responsible for breaking polygons into pixel units and sending them on

Once the transform into view space is done and the useless polygons have been removed, [7] takes the polygons, which as yet have no concrete on-screen shape, and works out which pixels of the image they correspond to. In addition, in recent 3D graphics rendering is not always to the display frame; a scene may also be rendered into a texture, and in that case [7] maps the polygons to texture pixels instead.

In essence, [7] takes the per-vertex (per-polygon) output of the vertex pipeline, decomposes it into pixel units, and keeps feeding those tasks to the pixel pipeline; it plays the role of an intermediary.

This processing at [7] is called [triangle setup] or [rasterization]. Because it is a standardized process, it has been implemented as a fixed function in GPUs since the 1990s and has not evolved much since.

Usually one polygon covers more than one pixel, so rasterization decomposes each polygon into a large number of pixel tasks. The reason a GPU has so many pixel shader processing units is that there are so many pixels to process.



Figure 5: Rasterization can be thought of as writing the task orders for the Pixel Shader; a polygon model generates a large number of pixel tasks.
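A minimal sketch of what rasterization does (my own illustration; it assumes a half-space, or edge-function, test, which is one common way to implement it): walk the pixels of the triangle's bounding box and emit one "pixel task" for every pixel whose center lies inside the triangle.

    def edge(a, b, p):
        """Signed area of (a, b, p); positive when p is to the left of a->b."""
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

    def rasterize(v0, v1, v2):
        """Yield (x, y) pixel tasks covered by a screen-space triangle
        with counter-clockwise winding."""
        xs = [v0[0], v1[0], v2[0]]
        ys = [v0[1], v1[1], v2[1]]
        for y in range(int(min(ys)), int(max(ys)) + 1):
            for x in range(int(min(xs)), int(max(xs)) + 1):
                p = (x + 0.5, y + 0.5)                     # test the pixel center
                w0, w1, w2 = edge(v1, v2, p), edge(v2, v0, p), edge(v0, v1, p)
                if w0 >= 0 and w1 >= 0 and w2 >= 0:        # inside all three edges
                    yield x, y

    pixels = list(rasterize((2, 1), (12, 3), (5, 9)))
    print(len(pixels), "pixel tasks generated for one small triangle")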

The Pixel Shader: shading per pixel, where a texture (Texture) is no longer just an image



Shading the pixel units produced by [7]'s rasterization is the job of the Pixel Shader, stages [8][9]; the whole block including the render back end is collectively called the [pixel pipeline (Pixel Pipeline)].

Because GPU implementations vary widely, the term Pixel Shader is sometimes used not only for the functional block that does per-pixel shading at [8], but also collectively for it together with the following texture unit [Texture Unit] at [9].

So, although the calculations at [8] are performed per pixel rather than per vertex, the content of the processing itself is very similar to the Vertex Shader.

For each pixel, the reflection equation is again evaluated using the light-source vector, the view vector, and the pixel's normal vector. Computing the color of the pixel in this way is [per-pixel lighting in the pixel shader].

Compared with interpolating the lighting results computed per vertex, lighting computed per pixel gives smoother shading and cleaner highlights; this is specifically called [Per-Pixel Lighting].
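To make the difference concrete, here is a toy comparison of my own (not from the article): a point halfway between two vertices is shaded once by interpolating colors that were lit per vertex, and once by interpolating the normal and lighting at the pixel. With a sharp specular highlight the two answers differ noticeably.

    import numpy as np

    def normalize(v):
        return v / np.linalg.norm(v)

    def specular(normal, shininess=64.0):
        """A specular-only toy light: light and eye both straight above."""
        up = np.array([0.0, 1.0, 0.0])
        return max(np.dot(normalize(normal), up), 0.0) ** shininess

    # Two vertex normals tilted away from the light in opposite directions.
    n0 = np.array([0.5, 1.0, 0.0])
    n1 = np.array([-0.5, 1.0, 0.0])

    # Per-vertex lighting: light at the vertices, then interpolate the colors.
    gouraud_mid = 0.5 * specular(n0) + 0.5 * specular(n1)

    # Per-pixel lighting: interpolate the normal, then light at the pixel.
    per_pixel_mid = specular(0.5 * n0 + 0.5 * n1)

    print(gouraud_mid, per_pixel_mid)   # only the per-pixel value keeps the highlight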

Reading texels (Texel) out of the texture (Texture), at the texture coordinates obtained in the vertex shader, is the job of [9], the Texture Unit.

The texel color fetched by the texture unit is combined with the pixel color computed in the preceding shading step, producing the final pixel color.

How this per-pixel lighting is computed is determined by the shader program run by the Pixel Shader.

A [texture] has traditionally been an image associated with a polygon, but in the programmable-shader era its uses have expanded: besides ordinary images, textures can store all kinds of data with a mathematical (or physical) meaning. When the pixel shader shades a pixel, it commonly reads such numerical data out of a texture and uses it in the calculation.

Like the 8-bit-per-channel ARGB pixels of a PC screen, a texture is made up of the four color components A, R, G, B. A 32-bit color texture, for example, is laid out as A (transparency) 8 bits, R (red) 8 bits, G (green) 8 bits, B (blue) 8 bits. When numeric values are stored in a texture, the ARGB layout means each texel can hold a vector (or matrix element) of up to 4 components; for 3-dimensional vector data, the three XYZ components are simply placed into the RGB of ARGB.

In actual pixel shader processing, vector data stored in a texture is combined with the view vector, the light-source vector, and the normal vector to evaluate special reflection equations; this is how special materials are expressed.

Figure 6 shows an example of a [normal map], a texture holding normal vectors, used to implement bump mapping. This will be explained in detail later in the series; for now it is enough to get a feel for "this is the kind of work the Pixel Shader does".


Figure 6: An example of the Pixel Shader at work: a conceptual diagram of bump mapping. The conversion from height map to normal map is performed in the pixel shader. The normal map stores normal vectors; each texel corresponds to one three-dimensional normal vector expressed as XYZ (there is also a method that stores only XY and computes Z).
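As a small sketch of how a normal vector gets into and out of a texel (my own illustration; the exact encoding varies by engine), each component in [-1, 1] is remapped to an 8-bit value in [0, 255], and the pixel shader later remaps it back. The "store XY, compute Z" variant mentioned in the caption is also shown.

    import numpy as np

    def encode_normal(n):
        """Pack a unit normal (components in [-1, 1]) into 8-bit RGB."""
        n = n / np.linalg.norm(n)
        return np.round((n * 0.5 + 0.5) * 255).astype(np.uint8)

    def decode_normal(rgb):
        """What the pixel shader does after fetching the texel."""
        n = rgb.astype(np.float64) / 255.0 * 2.0 - 1.0
        return n / np.linalg.norm(n)

    def decode_normal_xy(rg):
        """Two-channel variant: store X and Y, reconstruct Z (assumed positive)."""
        xy = rg.astype(np.float64) / 255.0 * 2.0 - 1.0
        z = np.sqrt(max(1.0 - np.dot(xy, xy), 0.0))
        return np.array([xy[0], xy[1], z])

    texel = encode_normal(np.array([0.2, 0.3, 0.9]))
    print(texel, decode_normal(texel), decode_normal_xy(texel[:2]))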

The render back end (Render Backend)


The output of the Pixel Shader is simply "the final color decided for a pixel that belongs to a polygon in the scene". One might think that writing it to memory as-is would finish the pixel's drawing, but there is still some work left to do.

That work is [10], the render back end, which NVIDIA calls the ROP unit. Whether ROP stands for Rendering Output Pipeline or Raster Operation is not entirely clear; in this series the former is taken as the correct reading.

Concretely, this is the part that controls writes to video memory: it verifies [may this pixel shader output actually be written?] and, when writing, decides [how should it be written?]. The pixel shader itself can read from textures but cannot write directly to video memory, which makes this stage all the more important.

In GPUs up to DirectX 9 / SM2.0, the number of pixel shaders usually matched the number of ROP units, giving the impression that pixel shaders and ROP units correspond one to one. From DirectX 9 / SM3.0 GPUs onward, however, as pixel shader programs grew more complex, priority was given to increasing the number of pixel shaders, with the result that the number of ROP units is now generally smaller than the number of pixel shaders.


Recent GPUs generally have more Pixel Shaders than ROP units. Shown is the structure of the GeForce 7800 GTX: in the middle are 4 x 6 = 24 pixel shaders, and below them 16 ROP units.

The [may this be written?] verification consists of the [alpha test], the [stencil test], and the [depth test].

The alpha test checks whether the output pixel color is completely transparent. If the alpha component is 0, the pixel is transparent and need not be drawn, so the drawing of that pixel is discarded.


Alpha test: only the pixels that are not transparent are drawn.

The stencil test compares, in various configurable ways, the contents of the stencil buffer against the frame buffer being drawn; if the configured condition is not met, drawing of the pixel is abandoned. One well-known application is the Stencil Shadow Volume shadow-generation technique.


The stencil test consults the contents of the stencil buffer and draws only where the predefined test condition is met; shown here is a case where drawing happens only in region A of the stencil buffer.

The depth test checks, for every pixel about to be drawn, whether it is the one closest to the viewpoint. The depth values corresponding one-to-one to the drawn pixels are held in the Z buffer; the depth value read from there is compared with the depth of the pixel about to be drawn. That is the substance of the [depth test]. The depth value itself is obtained from the pixel shader's calculation.

In addition, when drawing the pixels of translucent 3D objects, there are also cases where this depth test is not performed.


Depth testing matters most when the drawing order of objects differs from their arrangement in space. If the final image depended only on the order in which objects were drawn, their front-to-back relationships would come out wrong.


Where no depth value has yet been written in the Z buffer, the pixel is written unconditionally; where a depth value already exists, the depth test checks whether the incoming pixel is in front, and then decides whether it should be drawn.
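A minimal sketch of this depth-test logic (my own illustration; real hardware also handles comparison modes, buffer formats, and so on): an "never written" position always accepts the pixel, and afterwards only closer pixels win.

    import numpy as np

    W, H = 8, 8
    depth_buffer = np.full((H, W), np.inf)    # np.inf marks "never written"
    color_buffer = np.zeros((H, W, 3))

    def write_pixel(x, y, depth, color):
        """Depth-test a pixel and, if it passes, write color and update depth."""
        if depth < depth_buffer[y, x]:        # closer to the viewpoint wins
            depth_buffer[y, x] = depth
            color_buffer[y, x] = color
            return True
        return False                           # occluded: drawing is discarded

    print(write_pixel(2, 3, depth=5.0, color=(1, 0, 0)))   # True: empty position
    print(write_pixel(2, 3, depth=9.0, color=(0, 1, 0)))   # False: behind the red pixel
    print(write_pixel(2, 3, depth=1.0, color=(0, 0, 1)))   # True: in front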

Besides the [may it be written?] checks, there is also processing that changes [how it is written]: [alpha blending] and [fog].

With alpha blending, the pixel cannot simply be written out directly: the pixel already written at that position has to be read, combined with the new pixel in a translucency calculation, and written back. Because the pixel color must be read back from the render target's frame buffer, that is, video memory has to be read, alpha blending puts a higher load on the GPU than ordinary pixel processing. This is why 3D benchmark programs often use continuous drawing of overlapping translucent layers as a performance test.


Alpha blending: because already-rendered results are read back and then drawn over, the load is high.
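The read-modify-write nature of alpha blending can be sketched as follows (my own formula sketch, using the common "source alpha / one minus source alpha" blend): the color already in the frame buffer is read back, mixed with the new pixel, and written again, which is why it costs more than an ordinary write.

    import numpy as np

    def alpha_blend(src_rgb, src_alpha, dst_rgb):
        """Standard 'over' blending: new pixel over what is already in memory."""
        src_rgb, dst_rgb = np.asarray(src_rgb, float), np.asarray(dst_rgb, float)
        return src_rgb * src_alpha + dst_rgb * (1.0 - src_alpha)

    frame_buffer_pixel = (0.0, 0.0, 1.0)                 # read back: a blue background
    result = alpha_blend((1.0, 0.0, 0.0), 0.5, frame_buffer_pixel)
    print(result)                                         # half red, half blue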

Fog (Fog) is processed by blending in a preset fog color according to the depth value of the pixel being drawn. The farther away a pixel is, the closer its color is pulled toward, say, white, expressing the haze of air seen in the depths of a scene.


Fog: the larger the depth value (the farther away), the more the pixel is blurred toward the color of the intervening air.
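Fog can be sketched as blending toward a fixed fog color by a factor that grows with depth (my own sketch; the falloff here is a simple linear one, and real implementations also use exponential variants).

    import numpy as np

    def apply_fog(pixel_rgb, depth, fog_rgb=(1.0, 1.0, 1.0),
                  fog_start=10.0, fog_end=100.0):
        """Blend the shaded pixel toward the fog color as depth increases."""
        t = np.clip((depth - fog_start) / (fog_end - fog_start), 0.0, 1.0)
        return (1.0 - t) * np.asarray(pixel_rgb, float) + t * np.asarray(fog_rgb, float)

    print(apply_fog((0.2, 0.6, 0.2), depth=5.0))     # near: unchanged
    print(apply_fog((0.2, 0.6, 0.2), depth=95.0))    # far: nearly white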

Of course, when neither alpha blending nor fog is applied, the pixel color can be written straight to video memory. When it is written, the depth value is also updated, in preparation for the depth test of subsequent pixels.

In addition, the pixels on the screen are arranged in a grid, so to reduce the jagged, stair-stepped look of the image, anti-aliasing is also handled at the render back end.
