OpenGL (7th Edition) ~ Chapter 8
Clipping, Rasterization, and Hidden-Surface Removal
Rasterization is the process of converting geometric objects into fragments.
Two things must be done. First, each geometric object must be passed through the graphics system.
Second, a color value must be assigned to each pixel to be displayed in the color buffer.
The four main tasks of the graphics rendering system
Modeling -> geometry processing -> rasterization -> fragment processing
Modeling: produces a set of vertex data that represents the geometric objects.
Geometry processing: operates on the vertex data after a series of matrix transformations, determines which geometric objects will appear on the screen, and assigns a shade or color value to each vertex of those objects.
Rasterization: converts the surviving objects into fragments, as defined above.
Fragment processing: updates the fragments; texture values, for example, are applied only in this phase, after rasterization.
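To make the four tasks concrete, here is a purely illustrative sketch of the pipeline as ordinary JavaScript functions. The function names and the toy projection are invented for illustration; this is not how a real GPU pipeline is written.

// Illustration only: each pipeline task as a toy JavaScript function.
function modeling() {
    // Produce vertex data describing a geometric object (one triangle).
    return [{x: 0.0, y: 0.5, z: -2}, {x: -0.5, y: -0.5, z: -2}, {x: 0.5, y: -0.5, z: -2}];
}
function geometryProcessing(vertices) {
    // Transform each vertex (a toy perspective divide) and assign a color.
    return vertices.map(function (v) {
        return {x: v.x / -v.z, y: v.y / -v.z, z: v.z, color: [1, 0, 0]};
    });
}
function rasterize(primitives) {
    // Convert the object into fragments; a real rasterizer would interpolate
    // across the triangle, here we simply emit one fragment per vertex.
    return primitives;
}
function fragmentProcessing(fragments) {
    // Update each fragment; texturing would be applied here.
    return fragments;
}
var fragments = fragmentProcessing(rasterize(geometryProcessing(modeling())));
console.log(fragments);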
Lesson 8: Blending
In this lesson, we will look at blending, and along the way take a rough look at how the depth buffer works.
The depth buffer
From lesson 2, we know that when you tell WebGL to draw something, it roughly goes through this process:
1. Run the vertex shader to calculate the position of each vertex.
2. Perform linear interpolation between the vertices, which tells the system which fragments need to be drawn.
3. Run the fragment shader for each fragment to calculate its color.
4. Write the fragments to the frame buffer.
So the frame buffer is written at the very end. But what happens if we want to draw two objects? For example, if you draw a square centered on (0, 0, -5) and then a square centered on (0, 0, -10), you don't want the second square to overwrite the first, because it is farther away and should be hidden.
WebGL uses the depth buffer to handle this situation. When a fragment is processed by the fragment shader and written into the frame buffer, then just as its RGBA color values are stored, a depth value related to the fragment's Z value is stored too.
Why only "related to" the Z value? WebGL wants all Z values to lie in the range 0 to 1, where 0 is the closest and 1 is the farthest. This is hidden from us by the projection matrix built by the perspective call in the drawScene function. For now, all you need to know is that the larger the depth-buffer value, the farther away the fragment. Note that this is the opposite of our usual coordinates, where the farther away something is, the smaller (i.e. the more negative) its z coordinate.
That is the depth buffer. Now, you may remember this code from lesson 1, where we initialized the WebGL context:
gl.enable(gl.DEPTH_TEST);
This tells the system what to do when writing a fragment into the frame buffer, namely: "pay attention to the depth buffer". It works together with another piece of WebGL state, the depth function. The depth function has a default value, but if we wanted to set it explicitly, we could do it like this:
gl.depthFunc(gl.LESS);
This means: "if the new fragment has a Z value less than the current one, use the new fragment and discard the existing one". This setting, together with the code that enables the test, gives us the behavior we need.
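To see concretely what the gl.LESS rule does, here is a small illustrative sketch, in plain JavaScript, of depth testing for a single pixel. This is only a model of the behavior, not how WebGL implements it:

// Illustration only: a one-pixel frame buffer with a depth buffer.
// Depth values run from 0 (nearest) to 1 (farthest), as described above.
var colorBuffer = [0, 0, 0, 1];  // RGBA currently in the frame buffer
var depthBuffer = 1.0;           // start at the farthest possible value

function writeFragment(rgba, z) {
    if (z < depthBuffer) {       // the gl.LESS test
        colorBuffer = rgba;
        depthBuffer = z;
    }
}

writeFragment([1, 0, 0, 1], 0.8);  // far red fragment: written
writeFragment([0, 1, 0, 1], 0.4);  // nearer green fragment: replaces it
writeFragment([0, 0, 1, 1], 0.9);  // farther blue fragment: discarded
console.log(colorBuffer);          // [0, 1, 0, 1]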
Blending
Blending can be thought of as an alternative to that process. With depth testing, we use the depth function to decide whether the new fragment replaces the current one. With blending, we instead use a blending function to combine the colors of the existing and the new fragment, producing a new color that is then written to the frame buffer.
Now let's look at the code. It is almost exactly the same as in the previous lesson; all the important parts are in one short snippet of the drawScene function. First, we check whether the "blending" checkbox is checked:
var blending = document.getElementById("blending").checked;
If it is, we set the function used to combine the two fragments:
if (blending) {
    gl.blendFunc(gl.SRC_ALPHA, gl.ONE);
The parameters of this function determine how the fragments are combined. This is fiddly, but not difficult. First, let's define two terms: the source fragment is the one we are drawing right now, and the destination fragment is the one already in the frame buffer. The first parameter of gl.blendFunc determines the source factor and the second determines the destination factor; the factors are numbers used in the blending function. Here we are saying that the source factor is the source fragment's alpha value, and the destination factor is the constant one. There are, of course, other possibilities; for example, if you use SRC_COLOR, the factor's RGBA components are the same as the source fragment's RGBA components.
Now, assume WebGL is calculating the color of a fragment, where the destination fragment already in the frame buffer has RGBA values (Rd, Gd, Bd, Ad), the new source fragment has RGBA values (Rs, Gs, Bs, As), the source factor has RGBA values (Sr, Sg, Sb, Sa), and the destination factor has RGBA values (Dr, Dg, Db, Da).
For each component, the system computes:
Rresult = Rs * Sr + Rd * Dr
Gresult = Gs * Sg + Gd * Dg
Bresult = Bs * Sb + Bd * Db
Aresult = As * Sa + Ad * Da
So in our case, we get (showing only the red component):
Rresult = Rs * As + Rd
This is not a perfect way to render transparent objects, but it happens to work well under this lesson's lighting. The point is worth emphasizing: blending and transparency are not the same thing; blending is just one of several techniques that can be used to achieve transparency.
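To make the arithmetic concrete, here is a tiny illustrative JavaScript version of the blending computation for the SRC_ALPHA and ONE factors chosen above (a sketch, not part of the lesson's code):

// Illustration only: blend with source factor SRC_ALPHA and
// destination factor ONE, per the equations above.
function blendSrcAlphaOne(src, dst) {  // src, dst are [r, g, b, a]
    var sa = src[3];  // source factor: the source fragment's alpha
    return [
        src[0] * sa + dst[0],
        src[1] * sa + dst[1],
        src[2] * sa + dst[2],
        src[3] * sa + dst[3]
    ];
}

// A half-transparent red fragment drawn over a dark gray one:
console.log(blendSrcAlphaOne([1, 0, 0, 0.5], [0.2, 0.2, 0.2, 1]));
// -> [0.7, 0.2, 0.2, 1.25]; real hardware clamps each component to 1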
OK, moving on:
gl.enable(gl.BLEND);
Like many things in WebGL, blending is disabled by default, so we need to switch it on first.
gl.disable(gl.DEPTH_TEST);
This is more interesting. We have to switch off depth testing. What would happen if we didn't? Say we are drawing a cube: if we draw the back face first and then the front face, the front is blended onto the back, which is fine. But if we draw the front face first, the back face is discarded by the depth test before blending happens, because it is farther away from us, so it never contributes to the blend.
The reader may have noticed that blending therefore depends heavily on the drawing order, which we never had to pay attention to before. We will come back to that; first, let's finish the remaining code:
gl.uniform1f(shaderProgram.alphaUniform, parseFloat(document.getElementById("alpha").value));
Here we read an alpha value from the page and push it up to the shaders. We need it because the texture we use does not have its own alpha channel (it only has RGB, so the alpha value of every pixel is 1).
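For this call to work, shaderProgram.alphaUniform must have been looked up when the shaders were initialized. That setup code is not shown in this lesson, but presumably it looks something like this:

// Assumed initialization code: look up the location of the uAlpha
// uniform once, so drawScene can write to it each frame.
shaderProgram.alphaUniform = gl.getUniformLocation(shaderProgram, "uAlpha");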
The rest of drawScene is just the code needed when blending is disabled:
} else {
    gl.disable(gl.BLEND);
    gl.enable(gl.DEPTH_TEST);
}
The fragment shader also gets a small change so that it can use the alpha value:
#ifdef GL_ES
precision highp float;
#endif

varying vec2 vTextureCoord;
varying vec3 vLightWeighting;

uniform float uAlpha;
uniform sampler2D uSampler;

void main(void) {
    vec4 textureColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
    gl_FragColor = vec4(textureColor.rgb * vLightWeighting, textureColor.a * uAlpha);
}
Those are all the code changes.
Now let's come back to drawing order. In this example we get a very nice transparency effect, a bit like stained glass. Now try changing the lighting direction so that it points from the positive side of the Z axis (just remove the "-"). It still looks OK, but the sense of realism is completely gone.
The reason is that under the original lighting, the back faces of the cube are always dimly lit; that is, their RGB values are small, so in the equation
Rresult = Rs * As + Rd
the source term is weak. For example, with Rs = 0.1 and As = 0.5, a back face adds only 0.05 to the result. In other words, the lighting makes the back faces contribute very little to what we see. If we turn the light around to the front, it is the front faces that become faint, and the transparency effect no longer looks as good.
How can this be solved? The OpenGL FAQ says you need to use a source factor of SRC_ALPHA and a destination factor of ONE_MINUS_SRC_ALPHA. But because source and destination fragments are still treated differently, the result continues to depend on the drawing order.
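In code, the FAQ's recommendation would look like this (a sketch; this lesson itself sticks with SRC_ALPHA and ONE):

// Classic "over" compositing:
// result = source * srcAlpha + destination * (1 - srcAlpha)
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);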
So, using blending to handle transparency takes some care. But if you can control the scene well enough, just as we control the light direction in this lesson, you can make good use of it; you just have to pay attention to the drawing order.
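One common way to "control the scene" is to draw opaque objects first with depth testing on, then draw the transparent objects sorted from farthest to nearest with blending on. Here is a rough sketch of such a loop; drawObject and the two object lists are hypothetical names used for illustration, not code from this lesson:

// Hypothetical render-order sketch; drawObject and the object lists
// are invented names, not part of this lesson's code.
function drawSceneSorted(opaqueObjects, transparentObjects) {
    // 1. Opaque geometry: normal depth testing, no blending.
    gl.disable(gl.BLEND);
    gl.enable(gl.DEPTH_TEST);
    opaqueObjects.forEach(drawObject);

    // 2. Transparent geometry: blend, drawing back to front.
    gl.enable(gl.BLEND);
    gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
    transparentObjects
        .slice()
        .sort(function (a, b) { return a.z - b.z; })  // most negative z (farthest) first
        .forEach(drawObject);
}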
Fortunately, blending is useful for other things as well, as you will see in the next lesson.