The first few chapters will mostly be theory, and code will only be a small part of what I want to cover. Let's sort out the theory first; it makes shaders much easier to understand. All right, time to start recording my study notes.
1. What is the rendering pipeline, and what does it have to do with shaders?
"The final purpose of the rendering pipeline is to generate or render a two-dimensional texture (2D picture) and display it on our screen"
The line above is the description from the book; what follows is my personal understanding.
Rendering pipeline: the series of operations that start from a three-dimensional scene and generate (render) a two-dimensional image. This series of operations works like an assembly line, hence the name "pipeline".
A shader is one of the steps in this series of operations (the rendering pipeline).
2. Rendering Pipeline Diagram
Conceptually, the rendering pipeline is divided into three stages: the application stage, the geometry stage, and the rasterization stage.
The screenshot here (Figure 1.1) is borrowed from the Unity Shader Essentials Guide.
As we can see from the figure, the starting point of the rendering pipeline is the application stage.
So what are the main tasks in the application stage?
1. Load the (graphics) data into video memory (see Figure 1.2 for the detailed steps)
2. Set the render state
3. Issue the draw call
Figure 1.2:
What is the render state?
In simple terms, it defines how the meshes in the scene are to be rendered: which shader to use, the light source properties, the material, and so on.
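In Unity's case, much of this render state is declared in ShaderLab and applied by the engine before the draw call is issued. Below is a minimal sketch of what such state commands look like; the shader name is made up for illustration, and the commands shown are only a small subset of what ShaderLab supports.

Shader "Sketch/RenderState"   // hypothetical name, for illustration only
{
    SubShader
    {
        Pass
        {
            // Render state: these commands tell the GPU HOW to draw,
            // before any geometry is actually drawn.
            Cull Back                        // discard back-facing triangles
            ZWrite On                        // write into the depth buffer
            ZTest LEqual                     // standard depth test
            Blend SrcAlpha OneMinusSrcAlpha  // alpha-blend with the colour buffer
            Color (1, 1, 1, 1)               // legacy fixed-function colour, just so the pass outputs plain white
        }
    }
}

When a material using this shader is rendered, the engine sets this state on the GPU and only then issues the draw call.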
Once the above work is done, the CPU tells the GPU, "you can start rendering according to the settings I gave you." That notification is the draw call.
Draw call
After the GPU receives the draw call command, it performs its calculations based on the render state and all the input vertex data, and finally outputs the pixels that are displayed on the screen.
Having introduced the CPU's work, the following describes the GPU's work: the GPU rendering pipeline.
The latter two stages of the conceptual pipeline (Figure 1.1), the geometry stage and the rasterization stage, are executed on the GPU. See Figure 1.3:
From the figure we can see that the GPU's work is also organized as a pipeline. Because the geometry and rasterization stages are carried out inside the GPU, not every sub-stage is open to developers; only some of them are programmable or configurable. The sub-stages on this pipeline are:
Vertex shader: fully programmable; typically used to implement per-vertex work such as space transformations and per-vertex coloring (see the shader sketch after this list).
Tessellation shader: an optional shader used to subdivide primitives.
Geometry shader: an optional shader used to perform per-primitive shading operations, or to generate additional primitives.
Clipping: configurable. This stage clips away the vertices that are outside the camera's view and culls the faces of certain triangle primitives. For example, we can use a command to control whether the front or the back faces of triangles are culled.
Screen mapping: neither configurable nor programmable. It converts the coordinates of each primitive into the screen coordinate system for the rasterization stage.
Triangle setup and triangle traversal: both are done by fixed functions (fixed-function).
Fragment shader: fully programmable; used to perform per-fragment shading operations (see the shader sketch after this list).
Per-fragment operations: not programmable, but highly configurable; operations such as the depth test, stencil test, and blending can be switched on and off and adjusted here.
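To make the programmable stages more concrete, here is a minimal Unity shader sketch that contains only a vertex shader and a fragment shader; the shader name and the constant white colour are made up for illustration. The vertex shader transforms each vertex from object space into clip space, and the fragment shader decides the colour of each fragment.

Shader "Sketch/MinimalPipeline"   // hypothetical name, for illustration only
{
    SubShader
    {
        Pass
        {
            Cull Back   // the clipping/culling stage is configurable: here, cull back faces

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;   // vertex position in object space
            };

            struct v2f
            {
                float4 pos : SV_POSITION;   // vertex position in clip space
            };

            // Vertex shader: runs once per vertex and performs the space transformation
            v2f vert (appdata v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                return o;
            }

            // Fragment shader: runs once per fragment and outputs its colour
            fixed4 frag (v2f i) : SV_Target
            {
                return fixed4(1.0, 1.0, 1.0, 1.0);   // plain white
            }
            ENDCG
        }
    }
}

Everything else in Figure 1.3 (triangle setup, triangle traversal, screen mapping, the per-fragment operations) is done for us by the GPU; we only fill in the programmable stages.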