After the three examples in the previous articles of this series, the code itself is simple, but some points may still be unclear. Now we need to step back and cover some necessary mathematical and graphics background.
1. The graphics pipeline. In the first example I mentioned that vertex shading and fragment shading are only part of the whole graphics rendering process. The entire process is called a pipeline, and its stages are, in order: vertex data → vertex shading → primitive assembly → rasterization → fragment shading → raster operations → frame buffer.
In the entire pipeline, only vertex shading and fragment shading are programmable. Vertex data and the frame buffer are simply data, and the remaining stages are fixed-function links — that is, they cannot be programmed with CG.
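As a concrete illustration, the skeleton below (a minimal sketch — the shader name and the bodies are hypothetical) marks which parts of a Unity shader file correspond to the two programmable links:

```cg
// Hypothetical minimal shader: only vert and frag are programmable;
// everything between them (primitive assembly, rasterization) is fixed-function.
Shader "Custom/PipelineSketch" {
    SubShader {
        Pass {
            CGPROGRAM
            #pragma vertex vert      // programmable link: vertex shading
            #pragma fragment frag    // programmable link: fragment shading

            float4 vert(float4 pos : POSITION) : SV_POSITION {
                // runs once per vertex; the result is handed to the
                // fixed-function stages that follow
                return mul(UNITY_MATRIX_MVP, pos);
            }

            float4 frag() : COLOR {
                // runs once per fragment produced by the rasterizer
                return float4(1, 0, 0, 1); // solid red
            }
            ENDCG
        }
    }
}
```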
2. Data flow
3. Semantics and uniform parameters (Uniform). Semantics already appeared in the previous article. A semantic can be understood as giving physical meaning to an otherwise generic vector or scalar value. Take a two-component vector: without a semantic, I could treat it as the velocity of a small ball moving at 1 meter/second, or just as well as a normal vector of the line y = -x. If we attach a semantic to this vector — marking it as a speed, say, or as a normal — then the program knows the physical role of the vector, and at the very least the two will not be confused.
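In CG this is exactly what the identifier after the colon does. A sketch (the struct and field names are illustrative, but the semantics themselves are standard):

```cg
// The semantic after each colon tells the pipeline what the field means.
struct appdata {
    float4 vertex : POSITION;   // model-space position
    float3 normal : NORMAL;     // vertex normal
    float2 uv     : TEXCOORD0;  // first texture coordinate
};

struct v2f {
    float4 pos : SV_POSITION;   // clip-space position (required output)
    float2 uv  : TEXCOORD0;     // interpolated and passed to the fragment shader
};
```

Without the `: NORMAL` or `: TEXCOORD0` markers, both fields would just be anonymous float vectors; the semantics are what bind them to their physical roles in the pipeline.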
Uniforms are special parameters provided by Unity. They can likewise be vectors, floats, or matrices, and they exist independently of any fragment, vertex, or primitive. If the mesh they act on is pictured as a huge universe, these uniforms are like the physical laws of that universe: they apply to every vertex, fragment, and primitive, and their values are the same everywhere.
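A small sketch of what such declarations look like in a CG program (the custom names `_Color` and `_MyMatrix` are hypothetical; `_Time` is a Unity built-in):

```cg
// Set once per material or per frame; identical for every vertex,
// fragment, and primitive in the draw call.
uniform float4   _Color;     // hypothetical material color property
uniform float4x4 _MyMatrix;  // hypothetical matrix set from a C# script

// Unity's built-in uniforms behave the same way, e.g.:
// float4 _Time;             // time values updated by Unity every frame
```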
4. Vertex transformation
Before learning about vertex transformation, we need to understand the vertex shader and the steps that follow it. The ultimate goal is to transform the vertices of geometric primitives (such as triangles) from the model coordinate system to the window (screen) coordinate system.
Anyone new to Unity should recognize this picture: you create a 3D object in the scene, and there is a main camera. So what computations turn that 3D object and the camera parameters into the final image the player sees?
The entire vertex transformation process is divided into five steps: the model transform (model space → world space), the view transform (world space → eye space), the projection transform (eye space → clip space), perspective division (clip space → normalized device coordinates), and the window transform (normalized device coordinates → window coordinates).
Note that the first three transforms are performed in the vertex shader, while the perspective division and the window transform are performed in later, fixed-function steps. That is to say, only the first three transforms are programmable.
The three matrices used by the first three transforms can be obtained through uniform parameters, and Unity also provides a combined MVP matrix — the product of the three — which completes the transformation from the model coordinate system to the clip coordinate system in one step.
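The two equivalent forms can be sketched as follows (using Unity's built-in matrix uniforms of this era — `_Object2World`, `UNITY_MATRIX_V`, `UNITY_MATRIX_P`, and the combined `UNITY_MATRIX_MVP`):

```cg
// A hypothetical vertex function showing the transform chain.
float4 vert(float4 v : POSITION) : SV_POSITION {
    // Step by step: model -> world -> eye -> clip
    // float4 clipPos = mul(UNITY_MATRIX_P,
    //                  mul(UNITY_MATRIX_V,
    //                  mul(_Object2World, v)));

    // Or in one step, with the three matrices pre-multiplied by Unity:
    return mul(UNITY_MATRIX_MVP, v);
}
```

The perspective division and the window transform then happen automatically in the fixed-function stages after the vertex shader returns.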
Series: Understanding Shaders Written in CG in Unity — Theoretical Knowledge