Source: blog channel, csdn.net




• The Vertex Shader and Fragment Shader are the programmable stages of the pipeline.

• Vertex arrays/buffer objects: the source of vertex data, i.e. the vertex input to the rendering pipeline. Using buffer objects is usually more efficient; in today's example, vertex arrays are used for simplicity.

• Vertex Shader: the vertex shader operates on vertices in a programmable way, for example performing coordinate-space transformations and computing per-vertex color and texture coordinates.

• Primitive Assembly: primitive assembly supports the three basic primitive types that OpenGL ES can render: points, lines, and triangles. The assembled primitives are then clipped: primitives entirely inside the viewport are kept, primitives entirely outside the view frustum are discarded, and primitives partially inside are clipped at the boundary. Culling is then performed; code can configure whether front faces, back faces, or all faces are culled.

• Rasterization: in the rasterization stage, primitives are converted into two-dimensional fragments. A fragment represents a pixel that can be rendered to the screen and carries information such as position, color, and texture coordinates, interpolated from the primitive's vertex data. These fragments are then sent to the fragment shader for processing. This is the step that turns vertex data into pixels that can be rendered on a display device.

• Fragment Shader: the fragment shader operates on fragments in a programmable way. It takes as input the rasterized fragment together with its color, depth value, and stencil value.

• Per-Fragment Operations: in this stage, a series of tests and operations are performed on each fragment output by the fragment shader to determine the final pixels to render.
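The stage ordering above can be sketched as a toy Python pipeline (illustrative stand-ins only; none of these function names are real OpenGL ES API calls):

```python
# Toy sketch of the OpenGL ES 2.0 pipeline stage order.
# Each stage is a trivial stand-in, not a real implementation.

def vertex_shader(v):
    # would do coordinate-space transforms, lighting; here a pass-through
    return v

def primitive_assembly(verts):
    # group every three vertices into one triangle primitive
    return [tuple(verts[i:i + 3]) for i in range(0, len(verts), 3)]

def rasterize(primitives):
    # a real rasterizer emits interpolated fragments covering the primitive;
    # here we simply emit one fragment per vertex
    return [v for tri in primitives for v in tri]

def fragment_shader(frag):
    # would do texturing/shading; here a constant red RGBA color
    return (frag, (1.0, 0.0, 0.0, 1.0))

def per_fragment_operations(frags):
    # depth/stencil tests and blending would happen here
    return frags

def render(vertices):
    transformed = [vertex_shader(v) for v in vertices]
    primitives = primitive_assembly(transformed)
    fragments = rasterize(primitives)
    return per_fragment_operations([fragment_shader(f) for f in fragments])
```

Running `render` on three vertices yields three shaded fragments, one per vertex of the single assembled triangle.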


Vertex Transformations and Lighting (T&L)

Before an object is drawn to the screen, its illumination must first be calculated, and its coordinates must be transformed from the 3D world into the screen's two-dimensional coordinate system. These two processes are called lighting and vertex transformation, i.e. T&L (Transformation & Lighting).

• World Transformation

The world transformation transforms an object's vertex coordinates from model space into world space.

Translation Transformation


Rotation Transformation

Rotation by the angle θ about the x-axis


Rotation by the angle θ about the y-axis


Rotation by the angle θ about the z-axis


Scaling Transformation
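The article's transformation matrices appear to have been lost in extraction; for reference, the standard forms (column-vector convention; translation and scaling shown in homogeneous coordinates) are:

```latex
T(t_x, t_y, t_z) =
\begin{pmatrix}
1 & 0 & 0 & t_x \\
0 & 1 & 0 & t_y \\
0 & 0 & 1 & t_z \\
0 & 0 & 0 & 1
\end{pmatrix},
\qquad
S(s_x, s_y, s_z) =
\begin{pmatrix}
s_x & 0 & 0 & 0 \\
0 & s_y & 0 & 0 \\
0 & 0 & s_z & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}

R_x(\theta) =
\begin{pmatrix}
1 & 0 & 0 \\
0 & \cos\theta & -\sin\theta \\
0 & \sin\theta & \cos\theta
\end{pmatrix},
\quad
R_y(\theta) =
\begin{pmatrix}
\cos\theta & 0 & \sin\theta \\
0 & 1 & 0 \\
-\sin\theta & 0 & \cos\theta
\end{pmatrix},
\quad
R_z(\theta) =
\begin{pmatrix}
\cos\theta & -\sin\theta & 0 \\
\sin\theta & \cos\theta & 0 \\
0 & 0 & 1
\end{pmatrix}
```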


• Viewing Transformation

With the camera position as the reference origin and the camera's viewing direction as an axis, the coordinate system thus established is called the viewing coordinate system.

• Projection Transformation

Projecting a three-dimensional object onto a two-dimensional surface, as if onto the film of a virtual camera, is a projection transformation. The spatial coordinate system that takes the center of the film as its reference origin is called the projection coordinate system, and an object's coordinates in it are called projection coordinates.
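As a sketch of the underlying math (assuming a simple pinhole model with the projection plane at distance d in front of the origin, a detail the article does not specify), a point (x, y, z) projects to:

```latex
x' = \frac{d \, x}{z}, \qquad y' = \frac{d \, y}{z}
```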

• Viewport Transformation

Objects in the projection coordinate system are represented with floating-point coordinates. The process of converting these floating-point coordinates to pixel coordinates, by defining the screen display area (typically the size of the display window), is called the viewport transformation, and the resulting coordinates are called screen coordinates. For example, if you define a viewport 640 pixels wide and 480 pixels high, then the projection coordinates (1.0, 0.5) are mapped by the viewport transformation to the screen coordinates (640, 240); if the viewport is defined to be 1024 pixels wide and 800 pixels high, the viewport transformation maps them to (1024, 400).
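The example can be sketched as a small Python function. Note one assumption taken from the article's numbers: projection coordinates are treated as lying in [0, 1], whereas OpenGL's actual normalized device coordinates span [-1, 1] on each axis.

```python
def viewport_transform(x, y, width, height):
    """Map a projection coordinate in [0, 1] to a pixel coordinate.

    Assumes the article's [0, 1] convention; OpenGL's normalized
    device coordinates actually span [-1, 1] on each axis.
    """
    return (x * width, y * height)

# The article's examples:
# a 640x480 viewport maps (1.0, 0.5) to (640.0, 240.0)
# a 1024x800 viewport maps (1.0, 0.5) to (1024.0, 400.0)
```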


Vertex Shader


• Attributes: per-vertex data supplied through vertex arrays, typically used for values that differ between vertices, such as vertex position and color.

• Uniforms: constant data used by the vertex shader, which cannot be modified by the shader; typically used for values shared by all vertices of a single 3D object, such as the position of the current light source.

• Samplers: optional; a special type of uniform that represents the texture used by the vertex shader.

• Shader Program: the source code or executable of the vertex shader, describing the operations to be performed on the vertices.

• Varyings: varying variables store the output data of the vertex shader, which in turn is the input data of the fragment shader; varying values are linearly interpolated during rasterization. If a vertex shader declares a varying variable that must be passed on to the next stage, the fragment shader must re-declare a varying variable of the same type and name. OpenGL ES 2.0 also specifies that all implementations must support at least 8 varying variables.
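How a varying is interpolated during rasterization can be illustrated with a minimal Python sketch (linear interpolation between two vertices only; real GPUs interpolate across a whole triangle using barycentric coordinates, with perspective correction):

```python
def interpolate_varying(v0, v1, t):
    """Linearly interpolate a varying value between two vertices.

    Each fragment between the endpoints receives a value blended by its
    position t in [0, 1] along the primitive. Simplified sketch: real
    rasterizers use barycentric, perspective-correct interpolation.
    """
    return tuple(a + (b - a) * t for a, b in zip(v0, v1))

# A fragment halfway between a red vertex and a blue vertex
# receives the color (0.5, 0.0, 0.5).
```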


Primitive Assembly


After the vertex shader, the next stage of the rendering pipeline is primitive assembly. A primitive is a geometric object that can be drawn with OpenGL ES drawing commands; a drawing command specifies a set of vertex attributes describing the primitive's geometry, together with the primitive type. The vertex shader uses these vertex attributes to compute each vertex's position, color, and texture coordinates so that they can be passed on to the fragment shader. In the primitive assembly stage, the shader-processed vertices are assembled into individual geometric primitives such as triangles, lines, and point sprites. For each primitive, it must be determined whether it lies within the view frustum (the region of 3D space visible on the screen): if a primitive is partially inside, it is clipped; if it is entirely outside, it is discarded. After clipping, vertex positions are converted to screen coordinates. A back-face culling operation is also performed: depending on whether the primitive is front-facing or back-facing, back-facing primitives can be discarded. After clipping and culling, the primitive enters the next stage of the rendering pipeline: rasterization.
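Back-face culling can be illustrated with a winding-order test in Python (a sketch assuming OpenGL's default convention that counter-clockwise screen-space winding is front-facing, i.e. glFrontFace(GL_CCW)):

```python
def is_back_facing(p0, p1, p2):
    """Decide facing from a triangle's signed area in screen space.

    Assumes counter-clockwise winding is front-facing (OpenGL's
    default); a negative signed area then means the triangle is
    back-facing and can be culled.
    """
    signed_area = ((p1[0] - p0[0]) * (p2[1] - p0[1])
                   - (p2[0] - p0[0]) * (p1[1] - p0[1]))
    return signed_area < 0
```

With this convention, reversing the order of any two vertices flips the facing of the triangle.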



Rasterization and Pixel Processing


The rasterization stage converts primitives into a set of fragments, which are then submitted to the fragment shader for processing; these fragments represent the pixels that can be drawn to the screen.



Fragment Shader


The fragment shader implements a general programmable method for operating on fragments; it runs on each fragment generated during the rasterization stage. Its inputs are:

• Varying Variables: the varying variables output by the vertex shader, interpolated during rasterization to produce a value for each fragment.

• Uniforms: constant data used by the fragment shader.

• Samplers: a special type of uniform that represents the texture used by the fragment shader.

• Shader Program: the source code or executable of the fragment shader, describing the operations to be performed on each fragment.




