Objective
In the first two articles we talked about shaders, and in the second we stated that shaders can only be used in a programmable pipeline such as OpenGL ES 2.x, not in OpenGL ES 1.x. But we never said why, or what the difference between the two actually is. So let's work that out together and learn the rendering pipeline in OpenGL ES.
Body
Pipeline: anyone who has used Facebook's image loading library should find this term familiar, because the SimpleDraweeView inside it also uses a pipeline to process images. Since the underlying layer there is also C, I'd boldly guess that the design of Facebook's image loading library may have drawn on OpenGL (purely a guess, of course ^_^).
Described in proper computer terms, a pipeline is:
the sequence of data transfer and processing computations performed by the graphics card, from geometry data to the final rendered image.
That is a pipeline: it runs in a fixed order, and the overall flow is from upstream to downstream.
In OpenGL ES 1.x the pipeline is fixed: the whole thing is closed, and each intermediate stage runs in a fixed sequence. See the fixed-function pipeline diagram below:
As the diagram shows, the sequence of stages is fixed. The whole process can be divided into three parts: processing vertices, processing fragments, and testing fragment information before writing it to memory.
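To make "fixed" concrete, here is a minimal ES 1.x sketch (a hypothetical triangle, assuming a current GL context from a GLSurfaceView; it uses android.opengl.GLES10 and java.nio buffers). You only feed in vertex data and flip built-in switches; there is no shader to write:
private void drawFixedPipelineTriangle() {
    // One triangle, x/y/z per vertex.
    float[] triangle = { 0f, 1f, 0f, -1f, -1f, 0f, 1f, -1f, 0f };
    FloatBuffer vb = ByteBuffer.allocateDirect(triangle.length * 4)
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();
    vb.put(triangle).position(0);

    GLES10.glEnableClientState(GLES10.GL_VERTEX_ARRAY);
    GLES10.glVertexPointer(3, GLES10.GL_FLOAT, 0, vb); // positions go into a fixed slot
    GLES10.glColor4f(1f, 0f, 0f, 1f);                  // fixed-function color, no fragment shader
    GLES10.glDrawArrays(GLES10.GL_TRIANGLES, 0, 3);    // the fixed stages do the rest
}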
Rasterizer (rasterization): when vertex processing is finished, the results are handed to the rasterizer, which converts the vertex coordinate information into pixel information (fragments) that can be displayed on the screen. After the fragments are generated, the next step is to run the various tests on them, that is, to filter out useless fragments and clip away fragments outside the field of view; finally, the valid fragments are stored in memory.
Let's take a closer look at rasterization:
Rasterizer / rasterization: rasterization processing
Adobe's official translation of the word is "rasterize" or "pixelate". Yes, it is the process of converting vector graphics into pixels. The images we display on screen are made up of pixels, while three-dimensional objects are described by points, lines, and faces. Rasterization is the process that turns those points, lines, and faces into pixels that can be displayed on the screen: a vector description of points, lines, and faces becomes a description of pixels. (Or put another way: fragments are where vertices are converted from the world coordinate system to the screen coordinate system.)
For example, in a screenshot magnified to 1200%: first we tell the computer "I have a circle", and then the computer converts the circle into the pixel points that display it. That conversion is rasterization.
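A toy illustration of the same idea, outside of any OpenGL API (the grid size, center, and radius are made up for the example): a circle described as "center + radius" (the vector form) is turned into lit pixels on a grid (the raster form):
public class CircleRaster {
    public static void main(String[] args) {
        int size = 16;          // a 16x16 "screen"
        double cx = 8, cy = 8;  // circle center
        double r = 6;           // circle radius
        for (int y = 0; y < size; y++) {
            StringBuilder row = new StringBuilder();
            for (int x = 0; x < size; x++) {
                // a pixel is lit if its center falls inside the circle
                double dx = x + 0.5 - cx, dy = y + 0.5 - cy;
                row.append(dx * dx + dy * dy <= r * r ? '#' : '.');
            }
            System.out.println(row);
        }
    }
}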
Today's society is pluralistic and personalized; everything is about customization, and OpenGL ES is no exception: it provides interfaces for custom needs. The blue blocks in the diagram are the highly customizable stages, and that is what makes OpenGL ES 2.x a programmable pipeline. It has two dedicated terms, VertexShader (vertex shader) and FragmentShader (fragment shader), corresponding to the blue blocks in figure one (the coordinate block and the texture block, respectively).
Here is the OpenGL ES 2.0 rendering pipeline diagram:
VertexShader: Vertex shader
Remember that in the first two articles we posted the source of the two shaders; here it is again:
/**
 * Vertex shader source
 */
private final String mVertexShader =
        "uniform mat4 uMVPMatrix;\n" +
        "attribute vec4 aPosition;\n" +
        "attribute vec2 aTextureCoord;\n" +
        "varying vec2 vTextureCoord;\n" +
        "void main() {\n" +
        "  gl_Position = uMVPMatrix * aPosition;\n" +
        "  vTextureCoord = aTextureCoord;\n" +
        "}\n";

/**
 * Fragment shader source
 */
private final String mFragmentShader =
        "precision mediump float;\n" +
        "varying vec2 vTextureCoord;\n" +
        "uniform sampler2D sTexture;\n" +
        "void main() {\n" +
        "  gl_FragColor = texture2D(sTexture, vTextureCoord);\n" +
        "}\n";
Now let's look at the keywords in the vertex shader source:
attribute: per-vertex data supplied through vertex arrays; typically used for values that differ from vertex to vertex, such as vertex position and color.
uniform: constant data used by the vertex shader, which the shader cannot modify; typically used for values shared by all vertices of a single 3D object, such as the position of the current light source.
sampler: optional; a special kind of uniform that represents a texture used by the shader.
mat4: a 4x4 floating-point matrix; here it stores the combined model-view and projection matrix.
vec4: a vector of 4 floating-point numbers.
varying: an output variable used to pass values from the vertex shader on to the fragment shader.
uMVPMatrix * aPosition: transforms the position by the 4x4 transformation matrix and outputs the result to gl_Position, the built-in output variable of the vertex shader. gl_FragColor is the built-in output variable of the fragment shader.
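To see how these qualifiers differ on the Java side, here is a minimal sketch of feeding the attribute and the uniform above (assuming hypothetical fields: mProgram linked by the buildProgram sketch earlier, a float[16] mMVPMatrix, and vertex positions in a FloatBuffer mVertexBuffer):
GLES20.glUseProgram(mProgram);

// attribute: one value per vertex, read from a vertex array
int aPosition = GLES20.glGetAttribLocation(mProgram, "aPosition");
GLES20.glEnableVertexAttribArray(aPosition);
GLES20.glVertexAttribPointer(aPosition, 4, GLES20.GL_FLOAT, false, 0, mVertexBuffer);

// uniform: one value shared by every vertex of this draw call
int uMVPMatrix = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");
GLES20.glUniformMatrix4fv(uMVPMatrix, 1, false, mMVPMatrix, 0);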
PrimitiveAssembly: primitive assembly
Primitives are basic shapes. OpenGL has a small set of basic primitives: points, lines, and triangles; all more complex shapes are built up from these.
In the primitive assembly stage, the vertex data processed by the vertex shader (from vertex arrays or buffer objects) is assembled into individual geometric primitives (e.g., points, lines, triangles).
For each assembled primitive, it must be ensured that it lies inside the view volume (the region that will be visible on screen); primitives outside it are clipped so that only what lies inside flows on to the next stage (rasterization).
Note that there is also a culling operation (cull) here: if the feature is switched on with GLES20.glEnable(GLES20.GL_CULL_FACE), primitives facing away from the viewer (back faces and the like) are discarded, as in the sketch below.
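A minimal sketch of switching culling on (the cull face and winding shown are the GL defaults, spelled out explicitly):
GLES20.glEnable(GLES20.GL_CULL_FACE);
GLES20.glCullFace(GLES20.GL_BACK);   // discard triangles facing away from the viewer
GLES20.glFrontFace(GLES20.GL_CCW);   // counter-clockwise winding counts as the front face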
FragmentShader: fragment shader
The fragment shader mainly processes the fragments generated by rasterization. What it receives (the values output by the vertex shader plus any other data passed in) and where its output is stored can be seen at a glance:
gl_FragColor: the built-in output variable of the fragment shader.
After rasterization, the primitive exists on screen only as pixels; no color has been applied yet, so there is still nothing to see.
So the fragment shader can be understood as telling the computer how to paint: how to handle light, shadow, occlusion, environment, and so on.
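As a hypothetical example of "telling the computer how to paint", here is a variant of mFragmentShader that scales the sampled texture color by a brightness factor (the uBrightness uniform is made up for this illustration, not from the article):
private final String mTintFragmentShader =
        "precision mediump float;\n" +
        "varying vec2 vTextureCoord;\n" +
        "uniform sampler2D sTexture;\n" +
        "uniform float uBrightness;\n" +   // 0.0 = black, 1.0 = unchanged
        "void main() {\n" +
        "  vec4 color = texture2D(sTexture, vTextureCoord);\n" +
        "  gl_FragColor = vec4(color.rgb * uBrightness, color.a);\n" +
        "}\n";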
Per-Fragment Operations: the per-fragment operations stage
After the fragment shader has finished its work and each fragment has finally been given a color value stored in the gl_FragColor variable, the next step is to run a series of tests on each fragment individually. As we said above, rasterization transforms vertices from the world coordinate system to the screen coordinate system, so after rasterization each fragment has a screen coordinate (xw, yw), which is also its location in the framebuffer (FrameBuffer); the fragment shader likewise processed the fragment at that coordinate (xw, yw).
The diagram below shows the flow of the per-fragment operations:
Pixel ownership test: determines whether the pixel at position (xw, yw) in the framebuffer belongs to the current OpenGL ES context. For example, if an OpenGL ES framebuffer window is obscured by another window, the window system decides that the obscured pixels do not belong to the current OpenGL ES context, and they are therefore not displayed.
Scissor test: determines whether a fragment at position (xw, yw) lies inside the scissor rectangle; if not, the fragment is discarded.
Stencil test / depth test: uses the fragment's incoming stencil and depth values to decide whether the fragment should be discarded.
Blending: combines the fragment color newly produced by the fragment shader with the color already stored at position (xw, yw) in the framebuffer.
Dithering: on systems with few available colors, dithering the color values can increase the number of apparent colors at the cost of resolution. Dithering is hardware-dependent; OpenGL only allows the programmer to turn it on or off. In fact, if the machine's color resolution is already quite high, there is no point in enabling dithering at all. To enable or disable it, use glEnable(GL_DITHER) and glDisable(GL_DITHER). Dithering is enabled by default.
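A minimal sketch of toggling the stages above from Java (the scissor rectangle and blend factors are illustrative values, not from the article):
GLES20.glEnable(GLES20.GL_SCISSOR_TEST);
GLES20.glScissor(0, 0, 256, 256);        // keep only fragments inside this rectangle

GLES20.glEnable(GLES20.GL_DEPTH_TEST);   // discard fragments hidden behind closer ones

GLES20.glEnable(GLES20.GL_BLEND);        // mix new fragment colors with the framebuffer
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);

GLES20.glDisable(GLES20.GL_DITHER);      // dithering is on by default; this turns it off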
Resources
Android OpenGL ES Development Tutorial (3): OpenGL ES Pipeline (Pipeline)
How do I understand the concepts of shaders, rendering pipelines, rasterization, etc. in OpenGL?
OpenGL ES 2.0 Rendering pipeline
Android OpenGL ES Zero-Basics Series (3): The OpenGL ES Rendering Pipeline, VertexShader and FragmentShader