The golang.org/x/mobile/gl package is Go's OpenGL ES 2 implementation.
Reference: https://godoc.org/golang.org/x/mobile/gl
OpenGL ES (OpenGL for Embedded Systems) is a subset of the OpenGL three-dimensional graphics API, designed for embedded devices such as mobile phones, PDAs, and game consoles. OpenGL ES 1.0 targets fixed-pipeline hardware, while OpenGL ES 2.0 targets programmable-pipeline hardware; the two can be considered entirely separate APIs. The latest version, 3.0, has been supported since Android 4.3 and, judging from the source, is a full extension of 2.0.
OpenGL is responsible for converting objects in three-dimensional space into two-dimensional images through projection and rasterization, and then rendering them to the screen.
The term pipeline describes the entire OpenGL rendering process. OpenGL uses a client/server (CS) model: the client is the CPU and the server is the GPU. The client sends vertex and texture information to the server, and the server outputs the image displayed on the monitor.
Reference: http://blog.csdn.net/myarrow/article/details/7692044
Graphics Rendering Pipeline (Pipeline)
The rendering pipeline, also known as the rendering assembly line, is a set of parallel, mutually independent processing units inside the display chip that handle graphics signals. The main idea is that a series of processing steps on the GPU produces the final screen display. The biggest difference between OpenGL ES 2.0 and 1.0 is that 2.0 introduces a programmable rendering pipeline, in which the vertex shader and the fragment shader replace the earlier fixed-function transformations, lighting, texturing, color summation, and so on. These now need to be implemented by the programmer, which greatly improves flexibility.
As the names imply:
- The vertex shader handles the vertices of the polygon,
- The fragment shader handles each fragment within the polygon, roughly corresponding to a pixel.
Shaders are written in the shading language GLSL, which is cross-platform; although the shading languages of OpenGL and OpenGL ES differ slightly, shader code can largely be shared across Android, iOS, and the web.
Reference:
http://www.zhouchengjian517.com/opengl-es2.0-for-android/
Everything in OpenGL is in 3D space, but the screen and window are a 2D pixel array, so most of OpenGL's work is about turning 3D coordinates into 2D pixels that fit your screen. The process of converting 3D coordinates into 2D pixels is handled by OpenGL's graphics rendering pipeline (Pipeline; it actually refers to a batch of raw graphics data passing through a pipe, undergoing various transformations along the way, and finally appearing on the screen). The graphics rendering pipeline can be divided into two main parts:
- The first section converts your 3D coordinates to 2D coordinates,
- The second part is to turn 2D coordinates into actual colored pixels.
The graphics rendering pipeline receives a set of 3D coordinates and turns them into colored 2D pixels on your screen. It can be divided into several stages, where each stage takes the output of the previous stage as its input. All of these stages are highly specialized (each has one specific function) and can easily execute in parallel. Because of this parallel nature, most graphics cards today have thousands of small processing cores; for each pipeline stage, the GPU runs its own small programs on these cores to process your data quickly. These small programs are called shaders (Shader).
Some of these shaders are configurable by the developer, and we can replace the defaults with shaders we write ourselves. This gives us much finer control over specific parts of the graphics rendering pipeline, and because they run on the GPU, they also save valuable CPU time. OpenGL shaders are written in the OpenGL Shading Language (GLSL).
Below, you will see an abstract representation of each stage of the graphics rendering pipeline. Note that the blue sections represent shaders that we can customize.
As you can see, the graphics rendering pipeline contains many parts, each handling one specific stage of the overall process of turning your vertex data into a final rendered pixel. Below we give a general explanation of each part of the pipeline, so that you get a good overview of how the graphics rendering pipeline works.
Vertex shader (Vertex Shader)
We pass three 3D coordinates, as an array representing a triangle, into the graphics rendering pipeline as input; this array is called the vertex data (Vertex Data), and it is a collection of vertices. A vertex is a collection of data for a 3D coordinate (that is, x, y, z). The vertex data is represented using vertex attributes (Vertex Attributes), which can contain any data we want; for simplicity, let's assume that each vertex consists of just a 3D position and some color values.
The first part of a graphics rendering pipeline is the vertex shader (Vertex Shader), which takes a single vertex as input. The main purpose of the vertex shader is to convert the 3D coordinates to another 3D coordinate, while the vertex shader allows us to do some basic processing of the vertex properties.
In order for OpenGL to know what to make of our coordinates and color values, OpenGL needs you to hint at what you want it to render with the data. Do we want the data rendered as a series of points? A series of triangles? Or just one long line? These hints are called primitives, and any drawing command must pass a primitive type to OpenGL. A few of them: GL_POINTS, GL_TRIANGLES, GL_LINE_STRIP.
Primitive assembly (Primitive Assembly) stage
The primitive assembly (Primitive Assembly) stage takes as input all the vertices output by the vertex shader that form a primitive (a single vertex if GL_POINTS was chosen) and assembles them into the specified primitive shape; in this example, a triangle.
Geometry shader (Geometry Shader)
The output of the primitive assembly stage is passed to the geometry shader (Geometry Shader). The geometry shader takes as input a collection of vertices that form a primitive and can construct new (or other) primitives by emitting new vertices. In this example, it generates a second triangle.
Tessellation shaders (Tessellation Shaders)
Tessellation shaders (Tessellation Shaders) can subdivide a given primitive into many smaller primitives. This lets us create smoother visuals, for example by generating more triangles as an object gets closer to the player.
Rasterization (Rasterization, also translated as pixelation) stage
The output of the tessellation shaders enters the rasterization (Rasterization) stage, which maps the primitives to the corresponding pixels on the screen and generates fragments (Fragment) for the fragment shader (Fragment Shader) to use. Clipping (Clipping) is performed before the fragment shader runs; clipping discards all fragments outside your view, improving efficiency.
Fragment shader (FRAGMENT SHADER)
The main purpose of the fragment shader is to calculate the final color of a pixel, and this is where all of OpenGL's advanced effects happen. Usually, the fragment shader contains data about the 3D scene (such as lights, shadows, the color of the light, and so on) that it can use to calculate the final pixel color.
Alpha test and blending (Blending) phase
After all the corresponding color values have been determined, the final object is passed to one more stage, which we call the alpha test and blending (Blending) phase. This stage checks the fragment's corresponding depth (and stencil) value and uses it to determine whether the pixel is in front of or behind other objects. This stage also checks the alpha value (the alpha value defines an object's transparency) and blends the objects accordingly. So even if a pixel's output color was calculated in the fragment shader, the final pixel color can still be quite different when rendering multiple triangles.
Summary
As you can see, the graphics rendering pipeline is quite complex and contains many configurable parts. However, for most purposes we only need to work with the vertex and fragment shaders. The geometry shader and the tessellation shaders are optional and usually left at their defaults.
In modern OpenGL, we must define at least one vertex shader and one fragment shader (because there is no default vertex/fragment shader in the GPU). For this reason, it is very difficult to start learning modern OpenGL because you need a lot of knowledge before you can render your first triangle.
The above contents refer to:
http://learnopengl-cn.readthedocs.org/zh/latest/01%20Getting%20started/04%20Hello%20Triangle/#pipeline
The pipeline flow can also be seen in the following illustrations.
Figure from: https://segmentfault.com/a/1190000004404224
Figure from: http://blog.csdn.net/iispring/article/details/7649628
Resources:
OpenGL ES2.0 for Android
http://www.zhouchengjian517.com/opengl-es2.0-for-android/#tocAnchor-1-1