[OpenGL ES 07-2] Per-vertex lighting and the depth buffer

Article directory
    • I. Preparations
    • II. Per-vertex lighting implementation
    • III. Setting up the lighting
    • IV. The depth buffer
    • V. Conclusion


Luo Chaohui (http://www.cnblogs.com/kesalin)

This article is published under the Creative Commons "Attribution - NonCommercial - ShareAlike" license.

 

This is the eighth article in the OpenGL ES tutorial series. For the first seven articles, see the following links:

[OpenGL ES 01] First experience with OpenGL ES on iOS
[OpenGL ES 02] The OpenGL ES rendering pipeline and shaders
[OpenGL ES 03] 3D transformations: model, view, projection, and viewport
[OpenGL ES 04] 3D transformations in practice: translation, rotation, scaling
[OpenGL ES 05] Relative space transformations and color
[OpenGL ES 06] Using VBOs: vertex buffers
[OpenGL ES 07-1] Lighting principles

 

Preface

In the previous article, [OpenGL ES 07-1] Lighting principles, we introduced how lighting works in OpenGL. Now we will demonstrate how to implement those principles with OpenGL ES 2.0. This article covers per-vertex lighting and the depth buffer; the next will cover per-pixel lighting and a cartoon effect. Remember the small homework left at the end of the sixth article, drawing a cube with a vertex buffer object? It will be used in this article. The per-vertex lighting sample in action is shown below.

 

Before starting, let's look at how per-vertex lighting works. Per-vertex lighting is also known as Gouraud shading: the color of each vertex is calculated in the vertex shading stage, and those vertex colors are then linearly interpolated during rasterization to produce the fragment colors.
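The other half of this scheme is deliberately trivial: the fragment shader just emits the interpolated color. A minimal sketch along the lines of the series' fragment shader (the actual FragmentShader.glsl may differ slightly):

// Per-vertex lighting fragment shader sketch: vDestinationColor
// arrives already interpolated across the triangle by the rasterizer.
varying lowp vec4 vDestinationColor;

void main(void)
{
    gl_FragColor = vDestinationColor;
}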

 

I. Preparations

1. Create the project

As in the previous article, create a single view application named Tutorial07 and link OpenGLES.framework and QuartzCore.framework. Copy the utils, shader, and surface directories and the OpenGLView.h/m files from Tutorial06 into Tutorial07 and add them to the Xcode project.

2. Add a cube VBO

The sixth article's homework was precisely this part: the cube's VBO. The method -(DrawableVBO *)createVBOsForCube is added to OpenGLView.m, and the -(void)setupVBOs implementation is modified to add the cube VBO to _vboArray. This code is not central to this article's topic, so it is not described here; see the source code for details. A flavor of what it involves is sketched below.
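For readers who skipped the homework, here is an illustrative sketch of one face of such a cube, interleaving positions with per-face normals and uploading them as a VBO. The layout and names here are assumptions for illustration, not the series' actual DrawableVBO code:

// One face of the cube, interleaved as position (x,y,z) + normal (nx,ny,nz).
// The real createVBOsForCube builds all six faces plus an index buffer.
static const GLfloat frontFace[] = {
    // x      y      z      nx    ny    nz
    -0.5f, -0.5f,  0.5f,   0.0f, 0.0f, 1.0f,
     0.5f, -0.5f,  0.5f,   0.0f, 0.0f, 1.0f,
     0.5f,  0.5f,  0.5f,   0.0f, 0.0f, 1.0f,
    -0.5f,  0.5f,  0.5f,   0.0f, 0.0f, 1.0f,
};

GLuint vertexBuffer;
glGenBuffers(1, &vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(frontFace), frontFace, GL_STATIC_DRAW);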

3. Add UI controls

Add the related controls to the storyboard, as shown:

4. Add the response code

As before, add the relevant response code to ViewController and connect it to the corresponding controls in the storyboard by dragging. Only part of the code is listed below; see the source for the complete code.

 
@property (nonatomic, strong) IBOutlet OpenGLView * openglView;
@property (nonatomic, strong) IBOutlet UISlider * lightXSlider;
// ...
- (IBAction)lightXSliderValueChanged:(id)sender;
// ...
- (IBAction)segmentSelectionChanged:(id)sender;

 

II. Per-vertex lighting implementation

1. Modify the vertex shader.

In this article, the actual lighting calculation happens in the vertex shader, so first modify the vertex shader VertexShader.glsl as follows:

uniform mat4 projection;
uniform mat4 modelView;
attribute vec4 vPosition;

uniform mat3 normalMatrix;
uniform vec3 vLightPosition;
uniform vec4 vAmbientMaterial;
uniform vec4 vSpecularMaterial;
uniform float shininess;

attribute vec3 vNormal;
attribute vec4 vDiffuseMaterial;

varying vec4 vDestinationColor;

void main(void)
{
    gl_Position = projection * modelView * vPosition;

    vec3 N = normalMatrix * vNormal;
    vec3 L = normalize(vLightPosition);
    vec3 E = vec3(0, 0, 1);
    vec3 H = normalize(L + E);

    float df = max(0.0, dot(N, L));
    float sf = max(0.0, dot(N, H));
    sf = pow(sf, shininess);

    vDestinationColor = vAmbientMaterial + df * vDiffuseMaterial + sf * vSpecularMaterial;
    // vDestinationColor = vec4(1.0, 0.0, 0.0, 1.0);
}

Three material properties appear here: the ambient material vAmbientMaterial, the diffuse material vDiffuseMaterial, and the specular material vSpecularMaterial. All of these were introduced in the previous article, Lighting principles, and are not repeated here. The final lit color is the sum of the three lighting contributions:

vDestinationColor = vAmbientMaterial + df * vDiffuseMaterial + sf * vSpecularMaterial;

df, the diffuse factor: the dot product of the light vector and the vertex normal, i.e. the cosine of the angle between the light direction L and the normal N;

sf, the specular factor: the dot product of the half vector H (formed from the eye vector E and the light vector L) and the vertex normal (geometrically, the cosine of the angle between H and the normal N), raised to the power shininess;

The half vector H: obtained by adding the eye vector E and the light vector L and normalizing the result;

shininess, the specular exponent: supplied as input by the OpenGL program, and ultimately controlled by the shininess slider in the UI. The smaller the value, the broader the specular highlight.
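Putting these together, what the shader evaluates is the Blinn-Phong style sum (writing A, D, S for the ambient, diffuse, and specular materials):

\[
\mathbf{c} \;=\; A \;+\; \max(0,\ N \cdot L)\, D \;+\; \max(0,\ N \cdot H)^{\text{shininess}}\, S,
\qquad
H = \frac{L + E}{\lVert L + E \rVert}
\]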

As mentioned before, many calculations in OpenGL require normalized vectors, which shows up here in the normal, the half vector, and the light position vector. The code above also introduces the normal transformation matrix normalMatrix, supplied by the OpenGL program; it deserves a closer look.

2. Normal Transformation Matrix

Why do we need a normal transformation matrix? First, the normal vectors, like the vertex positions, live in the object's model space, while lighting is usually calculated in view space, so the normals must be transformed from model space into view space.

You might say this transformation could simply reuse the model-view matrix modelView used to transform the vertices. Indeed, when the model-view transformation is a rigid-body transformation, the normal matrix is exactly the model-view matrix. But when it is not, the two differ. By a rigid-body transformation we mean here that the object is only rotated, translated, or scaled by the same factor in the X, Y, and Z directions. Only under such a transformation do the vertex normals keep their correct directions; under a non-rigid transformation the normal directions change. Imagine the normals on a sphere, each pointing from the center through its vertex: if the sphere is scaled in Y while X and Z stay unchanged, then after this non-rigid transformation some normals on the flattened, rugby-ball shape no longer point from the center through their vertices. The formula for the normal matrix under a non-rigid transformation is:

normal matrix (non-rigid transformation) = transpose of the inverse of the model-view matrix, i.e. (modelView⁻¹)ᵀ

This calculation takes two steps: first invert the model-view matrix, then transpose it (that is, swap its rows and columns). For a rigid-body transformation, the transpose of the inverse of the model-view matrix equals the model-view matrix itself, so in that case (as in this example) you only need to copy the model-view matrix's value into the normal matrix. A general-purpose sketch follows.
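For the general (non-rigid) case, a small C helper can compute the inverse-transpose of the upper-left 3x3 block directly via cofactors. This is an illustrative sketch using plain float arrays, not the series' ksMatrix code; the row-major layout is an assumption:

// Compute the normal matrix as the inverse-transpose of the
// model-view's upper-left 3x3 block.
// Since inverse(m) = adjugate(m)/det and adjugate = transpose of the
// cofactor matrix, transpose(inverse(m)) is just cofactor(m)/det.
static void NormalMatrixFromModelView(const float mv[4][4], float out[3][3])
{
    // Upper-left 3x3 of the model-view matrix.
    float m[3][3];
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            m[i][j] = mv[i][j];

    // First row of cofactors, and the determinant.
    float c00 = m[1][1]*m[2][2] - m[1][2]*m[2][1];
    float c01 = m[1][2]*m[2][0] - m[1][0]*m[2][2];
    float c02 = m[1][0]*m[2][1] - m[1][1]*m[2][0];
    float det = m[0][0]*c00 + m[0][1]*c01 + m[0][2]*c02;
    float inv = 1.0f / det; // assumes the matrix is invertible

    out[0][0] = c00 * inv;
    out[0][1] = c01 * inv;
    out[0][2] = c02 * inv;
    out[1][0] = (m[0][2]*m[2][1] - m[0][1]*m[2][2]) * inv;
    out[1][1] = (m[0][0]*m[2][2] - m[0][2]*m[2][0]) * inv;
    out[1][2] = (m[0][1]*m[2][0] - m[0][0]*m[2][1]) * inv;
    out[2][0] = (m[0][1]*m[1][2] - m[0][2]*m[1][1]) * inv;
    out[2][1] = (m[0][2]*m[1][0] - m[0][0]*m[1][2]) * inv;
    out[2][2] = (m[0][0]*m[1][1] - m[0][1]*m[1][0]) * inv;
}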

 

III. Setting up the lighting

1. Accessing the vertex shader variables

As in tutorial 6, we need to add to OpenGLView the slot variables used to access the vertex shader's variables, plus the variables holding the lighting parameters. In OpenGLView.h, add the following:

GLuint _positionSlot;
GLuint _modelViewSlot;
GLuint _projectionSlot;
GLuint _normalMatrixSlot;
GLint _normalSlot;
GLint _lightPositionSlot;
GLint _ambientSlot;
GLint _diffuseSlot;
GLint _specularSlot;
GLint _shininessSlot;

ksMatrix4 _modelViewMatrix;
ksMatrix4 _projectionMatrix;

ksVec3 _lightPosition;
ksColor _ambient;
ksColor _diffuse;
ksColor _specular;
GLfloat _shininess;

Because some lighting parameters are controlled from the UI, the variables above also need to be declared as properties so the UI can update them, and each setter triggers a redraw so a changed parameter takes effect immediately:

// OpenGLView.h
@property (nonatomic, assign) ksVec3 lightPosition;
@property (nonatomic, assign) ksColor ambient;
@property (nonatomic, assign) ksColor diffuse;
@property (nonatomic, assign) ksColor specular;
@property (nonatomic, assign) GLfloat shininess;

// OpenGLView.m
@synthesize lightPosition = _lightPosition;
@synthesize ambient = _ambient;
@synthesize diffuse = _diffuse;
@synthesize specular = _specular;
@synthesize shininess = _shininess;

#pragma mark Properties

- (void)setAmbient:(ksColor)ambient
{
    _ambient = ambient;
    [self render];
}

- (void)setSpecular:(ksColor)specular
{
    _specular = specular;
    [self render];
}

- (void)setLightPosition:(ksVec3)lightPosition
{
    _lightPosition = lightPosition;
    [self render];
}

- (void)setDiffuse:(ksColor)diffuse
{
    _diffuse = diffuse;
    [self render];
}

- (void)setShininess:(GLfloat)shininess
{
    _shininess = shininess;
    [self render];
}
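On the ViewController side, a slider handler then only has to write through these properties. Here is a hypothetical sketch of one handler (the value mapping is an assumption; the actual handlers are in the tutorial source):

// ViewController.m -- illustrative slider handler.
- (IBAction)lightXSliderValueChanged:(id)sender
{
    UISlider * slider = (UISlider *)sender;
    ksVec3 position = self.openglView.lightPosition;
    position.x = slider.value;
    self.openglView.lightPosition = position; // the setter calls [self render]
}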

2. Getting the slots

Add the getSlotsFromProgram method to OpenGLView.m and call it at the end of the setupProgram method:

 
- (void)getSlotsFromProgram
{
    // Get the attribute and uniform slots from the program.
    //
    _projectionSlot = glGetUniformLocation(_programHandle, "projection");
    _modelViewSlot = glGetUniformLocation(_programHandle, "modelView");
    _normalMatrixSlot = glGetUniformLocation(_programHandle, "normalMatrix");
    _lightPositionSlot = glGetUniformLocation(_programHandle, "vLightPosition");
    _ambientSlot = glGetUniformLocation(_programHandle, "vAmbientMaterial");
    _specularSlot = glGetUniformLocation(_programHandle, "vSpecularMaterial");
    _shininessSlot = glGetUniformLocation(_programHandle, "shininess");

    _positionSlot = glGetAttribLocation(_programHandle, "vPosition");
    _normalSlot = glGetAttribLocation(_programHandle, "vNormal");
    _diffuseSlot = glGetAttribLocation(_programHandle, "vDiffuseMaterial");
}

3. Initialize and update the lighting parameters

Add the setupLights and updateLights methods to OpenGLView.m, to initialize the lighting parameters and to upload them to the vertex shader respectively:

 
- (void)setupLights
{
    // Initialize various states.
    //
    glEnableVertexAttribArray(_positionSlot);
    glEnableVertexAttribArray(_normalSlot);

    // Set up some default material parameters.
    //
    _lightPosition.x = _lightPosition.y = _lightPosition.z = 1.0;

    _ambient.r = _ambient.g = _ambient.b = 0.04;
    _specular.r = _specular.g = _specular.b = 0.5;

    _diffuse.r = 0.0;
    _diffuse.g = 0.5;
    _diffuse.b = 1.0;

    _shininess = 10;
}

- (void)updateLights
{
    glUniform3f(_lightPositionSlot, _lightPosition.x, _lightPosition.y, _lightPosition.z);
    glUniform4f(_ambientSlot, _ambient.r, _ambient.g, _ambient.b, _ambient.a);
    glUniform4f(_specularSlot, _specular.r, _specular.g, _specular.b, _specular.a);
    glVertexAttrib4f(_diffuseSlot, _diffuse.r, _diffuse.g, _diffuse.b, _diffuse.a);
    glUniform1f(_shininessSlot, _shininess);
}

setupLights is called after setupProjection in -(id)initWithCoder:(NSCoder *)aDecoder, while updateLights is called at the end of updateSurface, which runs on every render. (Strictly speaking it only needs to run when a lighting parameter has changed; that optimization is not done here.)
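For orientation, the call order looks roughly like this (an abbreviated, illustrative sketch; the full initializer is in the source):

// OpenGLView.m
- (id)initWithCoder:(NSCoder *)aDecoder
{
    self = [super initWithCoder:aDecoder];
    if (self) {
        // ... existing setup: layer, context, buffers, shaders ...
        [self setupProjection];
        [self setupLights];   // initialize the lighting defaults once
    }
    return self;
}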

As mentioned above, the vertex shader uses the normal matrix to transform the normals into view space, so the program must also set the normal matrix. Here the transformation is rigid-body, so we only need to assign the model-view matrix's value to the normal matrix. This assignment happens in the updateSurface method, whose complete code is:

- (void)updateSurface
{
    ksMatrixLoadIdentity(&_modelViewMatrix);
    ksTranslate(&_modelViewMatrix, 0.0, 0.0, -8);
    ksMatrixMultiply(&_modelViewMatrix, &_rotationMatrix, &_modelViewMatrix);

    // Load the model-view matrix.
    glUniformMatrix4fv(_modelViewSlot, 1, GL_FALSE, (GLfloat *)&_modelViewMatrix.m[0][0]);

    // Load the normal matrix.
    // It's orthogonal, so its inverse-transpose is itself!
    //
    ksMatrix3 normalMatrix3;
    ksMatrix4ToMatrix3(&normalMatrix3, &_modelViewMatrix);
    glUniformMatrix3fv(_normalMatrixSlot, 1, GL_FALSE, (GLfloat *)&normalMatrix3.m[0][0]);

    [self updateLights];
}

4. The lighting setup is now complete. If nothing went wrong, it should compile and run, with the following effect:

In the figure we can see the lighting at work. But the effect is a little too generous! Why are there shadows that shouldn't be there? What are those shadows? On to the next section!

 

IV. The depth buffer

1. Analyzing the shadow problem above

From the figure above, part of the sphere's surface appears to be covered by shadows. That's right, it is. Where do these shadows come from? They too are part of the sphere, but they belong to the back hemisphere. In real life, when we look at a ball we only see its front hemisphere; the back hemisphere is occluded. When OpenGL renders the ball, however, both hemispheres are rendered, so a vertex on the front hemisphere and one on the back with the same X and Y but different Z values end up drawn at the same screen pixel, and whichever is rendered first is overwritten by whatever is rendered later. So if the front-hemisphere vertex (the one with the smaller Z) is rendered first and the back-hemisphere vertex (the larger Z) afterwards, the latter overwrites the former, producing the artifact above.

The fix is to compare Z values before writing a pixel and simply not render the point with the larger Z. In OpenGL this is done with the depth test, so called because it compares the Z value, the depth from the screen inward. Remember the rendering pipeline diagram from tutorial 02? The depth test runs after the stencil test and before blending.
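The comparison the depth test performs is configurable through glDepthFunc; GL_LESS is the default, which is exactly the keep-the-smaller-Z behavior just described:

// Sketch: the default depth comparison. A fragment passes only if its
// depth is less than the value already stored in the depth buffer.
glEnable(GL_DEPTH_TEST);   // enabling is covered in subsection 3 below
glDepthFunc(GL_LESS);      // the default comparison function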

2. Introducing the depth buffer

To compare Z values (the depth test), OpenGL needs somewhere to store them, and that is the depth buffer. The depth buffer, the stencil buffer, and the color buffer are OpenGL's three major buffers. The depth buffer and the stencil buffer are renderbuffers too, but unlike the color buffer (RGBA four-tuples) they each hold only a single component: the depth buffer stores only the Z value, while the stencil buffer stores only the value used for the stencil test (to be introduced in a later article).

3. Using the depth buffer

Since the depth buffer is also a renderbuffer, creating and deleting it works much like the color buffer introduced in tutorial 01:

// OpenGLView.h
GLuint _depthRenderBuffer;

// OpenGLView.m
// Create a depth buffer that has the same size as the color buffer.
int width, height;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);

glGenRenderbuffers(1, &_depthRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, _depthRenderBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, _depthRenderBuffer);

Deletion, likewise, uses glDeleteRenderbuffers; a sketch follows.
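A teardown sketch, mirroring how the color renderbuffer is destroyed (the guard and ordering here are illustrative):

// Delete the depth renderbuffer alongside the other buffers.
if (_depthRenderBuffer) {
    glDeleteRenderbuffers(1, &_depthRenderBuffer);
    _depthRenderBuffer = 0;
}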

By default, OpenGL does not enable the depth test, so it must be enabled explicitly with glEnable(GL_DEPTH_TEST). Add this call at the end of the setupProjection method.
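One related detail worth noting: once a depth buffer is attached, the render method must clear it every frame along with the color buffer, or stale depth values from the previous frame will wrongly reject new fragments. A sketch (the clear color is illustrative):

// In render: clear depth together with color every frame.
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);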

4. Compile and run. Aha, everything looks right!

This example has quite a few sliders for controlling the various lighting parameters (the ambient, diffuse, and specular materials, and the shininess), plus two models you can switch between. Play with the sliders and observe the effect of different parameters; it will deepen your understanding of OpenGL lighting.

 

V. Conclusion

With the theoretical groundwork laid in the previous article, Lighting principles, today's per-vertex lighting practice was easy. Next up: per-pixel lighting and a cartoon effect. Impatient readers can browse the source code first and see how much of it they can follow. By the way, the tutorial code is already far ahead of the articles: the code has reached tutorial 13, while the writing lags far behind. Writing articles takes a great deal of time and effort; I now deeply appreciate how hard it is to write a book in China, and how the reward is often far below the effort. So, do support genuine books.

 
