• Vertex Shader and Fragment Shader are the programmable stages of the pipeline.
• Vertex arrays / buffer objects: the source of vertex data. In practice, supplying vertex input to the rendering pipeline through buffer objects is usually more efficient; for simplicity, today's demo sample uses client-side vertex arrays (see the sketch after this list).
• Vertex Shader: the vertex shader performs per-vertex operations in a programmable way, for example coordinate-space transformation and the calculation of per-vertex color and texture coordinates.
• Primitive Assembly: in the primitive assembly stage, the vertices processed by the shader are assembled into basic primitives. OpenGL ES supports three basic primitives that it can render: points, lines, and triangles. The assembled primitives are then clipped: primitives entirely inside the view frustum are kept, primitives entirely outside it are discarded, and primitives partially inside it are clipped. The primitives remaining in the frustum then go through culling, a stage that can be configured to cull front faces, back faces, or both.
• Rasterization: in the rasterization stage, each primitive is converted into a set of two-dimensional fragments; a fragment represents a pixel that can be drawn to the screen. A fragment carries information such as position, color, and texture coordinates, computed by interpolating the vertex attributes of the primitive. These fragments are then sent to the fragment shader for processing. Rasterization is the step that turns vertex data into pixels that can be rendered on a display device.
• Fragment Shader: the fragment shader performs per-fragment operations in a programmable way. At this stage it receives the fragments produced by rasterization, with color, depth, and stencil values as input.
• Per-Fragment Operations: at this stage a series of tests and operations (for example the scissor, stencil, and depth tests, and blending) is performed on each fragment output by the fragment shader; this determines the pixels finally used for rendering.
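To make these inputs concrete, here is a minimal sketch in C (OpenGL ES 2.0) that draws one triangle from a client-side vertex array, as the demo sample above does; the attribute name a_position and the already-linked program handle are assumptions for illustration:

```c
#include <GLES2/gl2.h>

static const GLfloat triangle_vertices[] = {
     0.0f,  0.5f, 0.0f,
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
};

void draw_triangle(GLuint program)   /* program: an already-linked shader program */
{
    /* "a_position" is a hypothetical attribute declared in the vertex shader */
    GLint pos = glGetAttribLocation(program, "a_position");

    glUseProgram(program);

    /* Client-side vertex array as the vertex data source: the last argument is
       a pointer into client memory rather than an offset into a buffer object. */
    glVertexAttribPointer((GLuint)pos, 3, GL_FLOAT, GL_FALSE, 0, triangle_vertices);
    glEnableVertexAttribArray((GLuint)pos);

    /* Points, lines, and triangles are the primitive families OpenGL ES renders. */
    glDrawArrays(GL_TRIANGLES, 0, 3);
}
```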
Vertex Transformations and Lighting (T&L)
Before an object is drawn to the screen, its illumination must be calculated and its vertices must be converted from the 3D world to the screen's two-dimensional coordinate system. These two processes are called lighting and vertex transformation, i.e. T&L (Transformation & Lighting).
• World Transformation
The world transformation maps object vertex coordinates from model space to world space.
Translation transformation
Rotation transformations:
Rotate by angle θ about the x-axis
Rotate by angle θ about the y-axis
Rotate by angle θ about the z-axis
Scaling transformation
(The standard matrices for these transformations are sketched below.)
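The figures that accompanied these transformations are not reproduced here; as a reference sketch, the standard 4×4 homogeneous matrices (column-vector convention, as used by OpenGL) are:

```latex
T(t_x,t_y,t_z)=\begin{pmatrix}1&0&0&t_x\\0&1&0&t_y\\0&0&1&t_z\\0&0&0&1\end{pmatrix},\qquad
S(s_x,s_y,s_z)=\begin{pmatrix}s_x&0&0&0\\0&s_y&0&0\\0&0&s_z&0\\0&0&0&1\end{pmatrix}

R_x(\theta)=\begin{pmatrix}1&0&0&0\\0&\cos\theta&-\sin\theta&0\\0&\sin\theta&\cos\theta&0\\0&0&0&1\end{pmatrix},\quad
R_y(\theta)=\begin{pmatrix}\cos\theta&0&\sin\theta&0\\0&1&0&0\\-\sin\theta&0&\cos\theta&0\\0&0&0&1\end{pmatrix},\quad
R_z(\theta)=\begin{pmatrix}\cos\theta&-\sin\theta&0&0\\\sin\theta&\cos\theta&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}
```

A world transformation is then a composition of these, e.g. W = T · R · S, applied to homogeneous vertex coordinates as v' = W v.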
• Viewing Transformation
The viewing transformation takes the camera position as the reference point and looks along the camera's viewing direction; the coordinate system established this way is called the viewing (camera) coordinate system.
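As a sketch, one common construction of this viewing (look-at) matrix, the same one the classic gluLookAt helper builds: given the camera position e, a normalized viewing direction f, the right vector r = normalize(f × up), and the up vector u = r × f,

```latex
V=\begin{pmatrix} r_x & r_y & r_z & 0\\ u_x & u_y & u_z & 0\\ -f_x & -f_y & -f_z & 0\\ 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 & 0 & -e_x\\ 0 & 1 & 0 & -e_y\\ 0 & 0 & 1 & -e_z\\ 0 & 0 & 0 & 1 \end{pmatrix}
```

i.e. a translation that moves the camera to the origin followed by a rotation into the camera's axes.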
• Projection Transformation
Projecting a three-dimensional object onto a two-dimensional surface, as if onto the film of a virtual camera, is a projection transformation. The space coordinate system that takes the film center as its origin is called the projection coordinate system, and the coordinates of an object in this system are called projection coordinates.
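As a reference sketch, one common form of the perspective projection matrix, for a view frustum with near and far distances n and f and near-plane extents l, r, b, t (this is the matrix the classic glFrustum call constructs):

```latex
P=\begin{pmatrix}
\frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0\\
0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0\\
0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n}\\
0 & 0 & -1 & 0
\end{pmatrix}
```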
• Viewport Transformation
Objects are represented in the projection coordinate system as floating-point coordinates. By defining the screen display area (typically the size of the window), the process of converting these floating-point coordinates to pixel coordinates is called the viewport transformation, and the resulting pixel coordinates are called screen coordinates.
For example, if the defined viewport is 640 pixels wide and 480 pixels high, the projection coordinates (1.0f, 0.5f) map through the viewport transform to the screen coordinates (640, 240); if the viewport is instead defined as 1024 pixels wide and 800 pixels high, the viewport transform maps them to (1024, 400).
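A minimal sketch of this mapping in C, assuming (as in the example above) projection coordinates normalized to the range [0, 1]; the function name is illustrative:

```c
#include <stdio.h>

/* Map floating-point projection coordinates in [0, 1] to pixel coordinates
   for a viewport of the given size. */
static void viewport_transform(float x, float y, int width, int height,
                               int *px, int *py)
{
    *px = (int)(x * (float)width);
    *py = (int)(y * (float)height);
}

int main(void)
{
    int px, py;
    viewport_transform(1.0f, 0.5f, 640, 480, &px, &py);
    printf("(%d, %d)\n", px, py);   /* prints (640, 240) */
    viewport_transform(1.0f, 0.5f, 1024, 800, &px, &py);
    printf("(%d, %d)\n", px, py);   /* prints (1024, 400) */
    return 0;
}
```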
• Vertex Shader
• Attributes: per-vertex data supplied through vertex arrays. Generally used for variables that differ from vertex to vertex, such as vertex position and color.
• Uniforms: constant data used by the vertex shader. Uniforms cannot be modified by shaders; they are typically used for values shared by all vertices of a single 3D object, such as the position of the current light source.
• Samplers: optional; a special kind of uniform that represents the textures used by the vertex shader.
• Shader Program: the source code or executable of the vertex shader, describing the operations that will be run on the vertices.
• Varying: varying variables store the output data of the vertex shader, which is also the input data of the fragment shader; varying variables are linearly interpolated during rasterization.
If the vertex shader declares a varying variable, it must be passed to the fragment shader to be propagated to the next stage, so a varying declared in the vertex shader must be declared again with the same type in the fragment shader (see the sketch below). OpenGL ES 2.0 also stipulates that all implementations must support at least 8 varying variables.
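A minimal sketch of such a shader pair (GLSL ES 1.00 embedded as C strings; all names are illustrative), showing an attribute, a uniform, and a varying declared with the same type on both sides:

```c
/* Vertex shader: per-vertex attributes in, varying out. */
static const char *vertex_src =
    "attribute vec4 a_position;                               \n"
    "attribute vec4 a_color;                                  \n"
    "uniform mat4 u_mvpMatrix; /* same for every vertex */    \n"
    "varying vec4 v_color;     /* interpolated per fragment */\n"
    "void main() {                                            \n"
    "    v_color = a_color;                                   \n"
    "    gl_Position = u_mvpMatrix * a_position;              \n"
    "}                                                        \n";

/* Fragment shader: re-declares v_color with the same type to receive it. */
static const char *fragment_src =
    "precision mediump float;                                 \n"
    "varying vec4 v_color;                                    \n"
    "void main() {                                            \n"
    "    gl_FragColor = v_color;                              \n"
    "}                                                        \n";
```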
Primitive Assembly
After the vertex shader, the next stage of the rendering pipeline is primitive assembly. A primitive is a geometric object that can be drawn with the OpenGL ES drawing commands; a drawing command specifies a set of vertex attributes describing the primitive's geometry, together with the primitive type. The vertex shader uses these vertex attributes to calculate the position, color, and texture coordinates of the vertices, which are then propagated to the fragment shader.
In the primitive assembly stage, the shaded vertices are assembled into individual geometric primitives, such as triangles, lines, and point sprites. For each primitive, it must be determined whether it lies in the view frustum (the region of 3D space visible on the screen). If the primitive is partially inside the frustum, it must be clipped; if it is entirely outside, it is simply discarded. After clipping, the vertex positions are converted to screen coordinates.
A back-face culling operation may also run, which discards a primitive depending on whether it faces toward or away from the viewer: if it is back-facing, it is discarded.
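Configuring this culling stage is a one-time state setup; a minimal sketch in C (OpenGL ES 2.0):

```c
#include <GLES2/gl2.h>

void setup_culling(void)
{
    glEnable(GL_CULL_FACE);  /* culling is disabled by default */
    glFrontFace(GL_CCW);     /* counter-clockwise winding counts as front-facing (the default) */
    glCullFace(GL_BACK);     /* discard back faces; GL_FRONT and GL_FRONT_AND_BACK are also allowed */
}
```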
After clipping and back-face culling, processing moves to the next stage of the rendering pipeline: rasterization.
Rasterization and Pixel Processing
The rasterization stage converts primitives into a set of fragments, which are then submitted to the fragment shader for processing; these fragments represent the pixels that can be drawn to the screen.
• Fragment Shader
The fragment shader implements a general programmable method for operating on fragments; it runs on each fragment generated during the rasterization stage. Its inputs are:
• Varying Variables: the varying variables output by the vertex shader, interpolated during rasterization to produce a value for each fragment.
• Uniforms: constant data used by the fragment shader.
• Samplers: a special kind of uniform that represents the textures used by the fragment shader.
• Shader Program: the source code or executable of the fragment shader, describing the operations that will be run on the fragments.
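For completeness, a minimal sketch in C of turning such shader source into a compiled shader object with the OpenGL ES 2.0 API (error handling abbreviated):

```c
#include <GLES2/gl2.h>

/* type is GL_VERTEX_SHADER or GL_FRAGMENT_SHADER. Returns 0 on failure. */
GLuint compile_shader(GLenum type, const char *src)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &src, NULL);
    glCompileShader(shader);

    GLint ok = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        glDeleteShader(shader);
        return 0;
    }
    return shader;   /* attach to a program with glAttachShader, then glLinkProgram */
}
```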