[OpenGL ES 05] Relative Space Transformation and Color

Article Directory
    • 1. Relative Transformation
    • 2. Using Color
    • 3. Smooth Shading
    • 4. Back-Face Culling
    • 5. Summary
    • 6. References


Luo Chaohui (http://www.cnblogs.com/kesalin)

This article is published under the Creative Commons Attribution-NonCommercial-ShareAlike license.

 

This is the fifth article in the OpenGL ES tutorial series. For the first four articles, see the links below:

[OpenGL ES 01] early experience of OpenGL ES on iOS
[OpenGL ES 02] OpenGL ES rendering pipeline and shader
[OpenGL ES 03] 3D Transformation: model, view, projection, and viewport
[OpenGL ES 04] 3D Transformation Practices: translation, rotation, scaling

 

Preface

I have already spent two articles on 3D transformation, but two articles are far from enough to cover the topic completely. To round things out, this article on color also covers something not yet mentioned: relative space transformation. A relative space transformation is much like the transformation from local coordinates to world coordinates, that is, a transformation from one coordinate space to another. Picture this motion: raise your upper arm and bend your elbow at the same time. The forearm rotates within the coordinate space of the elbow, while that space itself rotates along with the upper arm inside the coordinate space of the body. The spaces are nested, layer inside layer; the outer layer affects the inner layer, and the inner layer's transformation is always relative to the outer one. This article also describes how to use color in OpenGL ES and how to cull back faces.

Today's example demonstrates relative motion and color. The running effect of the example is shown below. The source code is available here: click here to view.

 

1. Relative Transformation

1. Create a project. This article builds on [OpenGL ES 04] 3D Transformation Practices: translation, rotation, scaling. If you have not read that article, please see it first for how to create a project in which UIView and the OpenGL view coexist, and how to manipulate objects in the 3D world through UI controls. With reference to the previous article, create the views, two UISliders, and a button, and associate them with the following properties and methods in the view controller's .h file:

@property (nonatomic, strong) IBOutlet OpenGLView *openGLView;

- (IBAction)onShoulderSliderValueChanged:(id)sender;
- (IBAction)onElbowSliderValueChanged:(id)sender;
- (IBAction)onRotateButtonClick:(id)sender;

2. Remember to release the OpenGLView resources in the view controller's .m file:

 
- (void)viewDidUnload
{
    [super viewDidUnload];

    [self.openGLView cleanup];
    self.openGLView = nil;
}

3. Implement the corresponding event handlers in the view controller's .m file. Let's work "test driven" today: first write the code that uses the interface, then implement the interface itself:

- (void)onShoulderSliderValueChanged:(id)sender
{
    UISlider *slider = (UISlider *)sender;
    float currentValue = [slider value];
    NSLog(@"> current shoulder is %f", currentValue);
    self.openGLView.rotateShoulder = currentValue;
}

- (void)onElbowSliderValueChanged:(id)sender
{
    UISlider *slider = (UISlider *)sender;
    float currentValue = [slider value];
    NSLog(@"> current elbow is %f", currentValue);
    self.openGLView.rotateElbow = currentValue;
}

- (IBAction)onRotateButtonClick:(id)sender
{
    [self.openGLView toggleDisplayLink];

    UIButton *button = (UIButton *)sender;
    NSString *text = button.titleLabel.text;
    if ([text isEqualToString:@"Rotate"]) {
        [button setTitle:@"Stop" forState:UIControlStateNormal];
    }
    else {
        [button setTitle:@"Rotate" forState:UIControlStateNormal];
    }
}

4. From the code above we can see that we need two public properties, rotateShoulder and rotateElbow, which hold the rotation angles of the upper arm and the forearm around the Z axis, plus one public method, toggleDisplayLink, which toggles the rotation of the colored cube. So we need three rotation values in total: two public and one private. Declare the two public properties in OpenGLView.h:

 
@property (nonatomic, assign) float rotateShoulder;
@property (nonatomic, assign) float rotateElbow;

- (void)render;
- (void)cleanup;
- (void)toggleDisplayLink;

Then declare the following member variables in OpenGLView's anonymous category (class extension):

 
@interface OpenGLView () {
    ksMatrix4       _shoulderModelViewMatrix;
    ksMatrix4       _elbowModelViewMatrix;

    float           _rotateColorCube;
    CADisplayLink * _displayLink;
}

And the following private methods:

 
- (void)updateShoulderTransform;
- (void)updateElbowTransform;
- (void)resetTransform;
- (void)transform;

- (void)drawColorCube;
- (void)drawCube:(ksVec4)color;

Let me explain the code. To achieve relative motion we keep a model-view matrix for each model: _shoulderModelViewMatrix is the model-view matrix of the upper arm, and _elbowModelViewMatrix is the model-view matrix of the forearm. The latter is built on top of the former. What does that mean? Remember the order of model, view, and projection transformations described earlier? The same idea applies here: objects in the forearm's space are first transformed into the upper arm's space, and then the upper arm's space (including everything brought in from the forearm's space) is transformed by the upper arm's model-view matrix into view space. updateShoulderTransform and updateElbowTransform update the model-view matrices of the upper arm and the forearm respectively. Here is their implementation:

- (void)updateShoulderTransform
{
    ksMatrixLoadIdentity(&_shoulderModelViewMatrix);
    ksTranslate(&_shoulderModelViewMatrix, -0.0, 0.0, -5.5);

    // Rotate the shoulder
    //
    ksRotate(&_shoulderModelViewMatrix, self.rotateShoulder, 0.0, 0.0, 1.0);

    // Scale the cube to be a shoulder
    //
    ksCopyMatrix4(&_modelViewMatrix, &_shoulderModelViewMatrix);
    ksScale(&_modelViewMatrix, 1.5, 0.6, 0.6);

    // Load the model-view matrix
    glUniformMatrix4fv(_modelViewSlot, 1, GL_FALSE, (GLfloat *)&_modelViewMatrix.m[0][0]);
}

- (void)updateElbowTransform
{
    // Relative to shoulder
    //
    ksCopyMatrix4(&_elbowModelViewMatrix, &_shoulderModelViewMatrix);

    // Translate away from shoulder
    //
    ksTranslate(&_elbowModelViewMatrix, 1.5, 0.0, 0.0);

    // Rotate the elbow
    //
    ksRotate(&_elbowModelViewMatrix, self.rotateElbow, 0.0, 0.0, 1.0);

    // Scale the cube to be an elbow
    //
    ksCopyMatrix4(&_modelViewMatrix, &_elbowModelViewMatrix);
    ksScale(&_modelViewMatrix, 1.0, 0.4, 0.4);

    // Load the model-view matrix
    glUniformMatrix4fv(_modelViewSlot, 1, GL_FALSE, (GLfloat *)&_modelViewMatrix.m[0][0]);
}

Remember from the third article that understanding 3D transformations reads in one order, while the code is written in reverse order? The same applies here: in code we first update the upper arm's model-view matrix and then the forearm's, which conceptually means the forearm is rotated first and then the upper arm. You may be puzzled by the two scale calls. That is because I was lazy and use the same drawCube method to draw both the upper arm and the forearm; the cube is 1 unit in length, width, and height, so to give the two parts different sizes they must be scaled. That scaling happens purely in each part's local space for drawing and must not affect any "child" space; in particular, the scaling used to draw the upper arm must not leak into the forearm's space, which is why it is not saved into _shoulderModelViewMatrix but applied to a copy, _modelViewMatrix. You could instead draw differently sized cubes for the upper arm and the forearm, so no scaling would be needed; perhaps that would be easier to understand.

When we update the forearm's model-view matrix, we start from the upper arm's model-view matrix, so the upper arm's transformation (its rotation, in this example) also affects the forearm; this is achieved by the call ksCopyMatrix4(&_elbowModelViewMatrix, &_shoulderModelViewMatrix). Next we translate the forearm to the end of the upper arm: the upper arm cube is scaled by 1.5 in the X direction and the cube itself is 1 unit long, so the upper arm is effectively 1.5 units long, and we shift the forearm 1.5 units to the right. Then the forearm is rotated by its own angle; this rotation happens in the forearm's "child" space and does not affect the upper arm's "parent" space. Finally a scale is applied, again only to a copy, to give the forearm an appropriate size in its local space.
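To make the nesting rule concrete, here is a small hypothetical sketch of how a third, even deeper level, say a "wrist", could be chained onto the elbow with the same GLESMath calls used above; it is not part of this tutorial's project, just an illustration of the pattern:

// Hypothetical "wrist" nested inside the elbow space. It inherits every
// transform of the shoulder and the elbow because it starts from a copy
// of _elbowModelViewMatrix, exactly as the elbow starts from the shoulder.
ksMatrix4 wristModelViewMatrix;
ksCopyMatrix4(&wristModelViewMatrix, &_elbowModelViewMatrix);

// Move to the end of the forearm (1.0 unit long before its drawing scale).
ksTranslate(&wristModelViewMatrix, 1.0, 0.0, 0.0);

// rotateWrist is a made-up angle; the rotation happens only in the wrist's
// own space and leaves the parent shoulder and elbow spaces untouched.
ksRotate(&wristModelViewMatrix, rotateWrist, 0.0, 0.0, 1.0);

// Any scaling used just for drawing would again go into a copy
// (_modelViewMatrix), so it never leaks into deeper child spaces.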

Here we also use the ksCopyMatrix4 math utility function, which is declared and defined in GLESMath:

 
void ksCopyMatrix4(ksMatrix4 *result, const ksMatrix4 *target)
{
    memcpy(result, target, sizeof(ksMatrix4));
}

5. The implementation of drawCube is as follows:

- (void)drawCube:(ksVec4)color
{
    GLfloat vertices[] = {
        0.0f, -0.5f,  0.5f,
        0.0f,  0.5f,  0.5f,
        1.0f,  0.5f,  0.5f,
        1.0f, -0.5f,  0.5f,
        1.0f, -0.5f, -0.5f,
        1.0f,  0.5f, -0.5f,
        0.0f,  0.5f, -0.5f,
        0.0f, -0.5f, -0.5f,
    };

    GLubyte indices[] = {
        0, 1, 1, 2, 2, 3, 3, 0,
        4, 5, 5, 6, 6, 7, 7, 4,
        0, 7, 1, 6, 2, 5, 3, 4
    };

    glVertexAttrib4f(_colorSlot, color[0], color[1], color[2], color[3]);

    glVertexAttribPointer(_positionSlot, 3, GL_FLOAT, GL_FALSE, 0, vertices);
    glEnableVertexAttribArray(_positionSlot);

    glDrawElements(GL_LINES, sizeof(indices)/sizeof(GLubyte), GL_UNSIGNED_BYTE, indices);
}

Note that the cube's rotation pivot is not at its center but at X = 0, so the upper arm and the forearm rotate around their left edge. Can you see how that is achieved? The code above also contains the color-related call glVertexAttrib4f(_colorSlot, color[0], color[1], color[2], color[3]); which is explained in detail below.
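If the pivot trick is not obvious: rotation always happens around the origin of the model's own space, and drawCube puts all of its X coordinates between 0.0 and 1.0, so that origin lies on the cube's left face. For comparison, a cube centered on the origin (this is in fact the layout the color cube uses later) would spin around its own middle; the sketch below is only an illustration:

// Centered layout for comparison: X, Y, and Z all run from -0.5 to 0.5,
// so a rotation turns this cube around its center rather than its left edge.
GLfloat centeredVertices[] = {
    -0.5f, -0.5f,  0.5f,
    -0.5f,  0.5f,  0.5f,
     0.5f,  0.5f,  0.5f,
     0.5f, -0.5f,  0.5f,
     0.5f, -0.5f, -0.5f,
     0.5f,  0.5f, -0.5f,
    -0.5f,  0.5f, -0.5f,
    -0.5f, -0.5f, -0.5f,
};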

2. Using Color

1. Color Description

Colors in OpenGL normally use the familiar RGBA model. The four letters stand for the three primaries red, green, and blue, plus alpha, which can roughly be understood as opacity. In OpenGL their values range from 0 to 1.0. Mixing red, green, and blue in different proportions produces a rich range of colors. The reason we see color at all is that, under illumination, light reflected from an object's surface reaches our eyes with different frequencies (energies), and those photons stimulate the photoreceptor cells on the retina to form vision. Because the photoreceptors in the human eye are most sensitive to red, green, and blue, graphics uses these three as the primaries from which other colors are produced. Of the three, the eye is most sensitive to green, so in some color formats the green component gets more weight than the others; for example, in the 16-bit RGB565 format green occupies 6 bits. While an OpenGL program runs, we can set the colors of a primitive's vertices to determine the primitive's color. The color may be a value specified explicitly per vertex (as in the example below), or it may come from the interaction of the transformations, surface normals, and other material properties once lighting is enabled (to be covered later).
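As an aside, since RGB565 was mentioned: below is a minimal sketch (my own helper, not part of the tutorial project) of unpacking such a 16-bit color into the normalized 0.0 to 1.0 floats that OpenGL works with. The divisors 31 and 63 come from the 5-bit and 6-bit channel widths:

#include <stdint.h>

// Expand a 16-bit RGB565 value into float r/g/b components in [0, 1].
// Green has 6 bits (64 levels) while red and blue have only 5 (32 levels).
static void rgb565ToFloats(uint16_t rgb565, float *r, float *g, float *b)
{
    *r = ((rgb565 >> 11) & 0x1F) / 31.0f;   // top 5 bits
    *g = ((rgb565 >> 5)  & 0x3F) / 63.0f;   // middle 6 bits
    *b = ( rgb565        & 0x1F) / 31.0f;   // bottom 5 bits
}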

2. Modify the shader.

To use color in OpenGL ES 2.0 we need to modify the shaders and pass in a custom color. Modify FragmentShader.glsl as follows:

 
precision mediump float;

varying vec4 vDestinationColor;

void main()
{
    gl_FragColor = vDestinationColor;
}

Then modify VertexShader.glsl as follows:

uniform mat4 projection;
uniform mat4 modelView;

attribute vec4 vPosition;
attribute vec4 vSourceColor;

varying vec4 vDestinationColor;

void main(void)
{
    gl_Position = projection * modelView * vPosition;
    vDestinationColor = vSourceColor;
}

The article [OpenGL ES 02] OpenGL ES rendering pipeline and shader covers shaders in detail, so I will not repeat it here. From the program we pass vSourceColor into the vertex shader; the vertex shader assigns it to its output varying vDestinationColor, which in turn becomes the input of the fragment shader; finally the fragment shader assigns it to the built-in variable gl_FragColor, and the fragment gets its color.

3. Assign values in the program

First, just as we did with vPosition, we look up the slot of vSourceColor in the setupProgram method:

 
// Get the attribute position slot from program
//
_positionSlot = glGetAttribLocation(_programHandle, "vPosition");

// Get the attribute color slot from program
//
_colorSlot = glGetAttribLocation(_programHandle, "vSourceColor");

Then, when drawing a model, we can use this slot to set the model's color:

glVertexAttrib4f(_colorSlot, color[0], color[1], color[2], color[3]);
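Note that glVertexAttrib4f supplies a single constant value for the attribute, and that constant is used only while the matching vertex attribute array is disabled. The sketch below contrasts the two cases; the per-vertex variant is exactly what the smooth shading section uses later:

// Case 1: one constant color for the whole model (no array enabled for _colorSlot).
glVertexAttrib4f(_colorSlot, 1.0f, 0.0f, 0.0f, 1.0f);    // every vertex is red

// Case 2: per-vertex colors; once the array is enabled it overrides the constant.
// glVertexAttribPointer(_colorSlot, 4, GL_FLOAT, GL_FALSE, stride, colorPointer);
// glEnableVertexAttribArray(_colorSlot);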

4. Draw the upper arm and the forearm

Now let's put the pieces above together and see what we get so far (remember to leave the methods we have not implemented yet as empty stubs):

 
- (void)render
{
    ksVec4 colorRed = {1, 0, 0, 1};
    ksVec4 colorWhite = {1, 1, 1, 1};

    glClearColor(0.0, 1.0, 0.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);

    // Setup viewport
    //
    glViewport(0, 0, self.frame.size.width, self.frame.size.height);

    // Draw shoulder
    //
    [self updateShoulderTransform];
    [self drawCube:colorRed];

    // Draw elbow
    //
    [self updateElbowTransform];
    [self drawCube:colorWhite];

    [_context presentRenderbuffer:GL_RENDERBUFFER];
}

As mentioned above, we draw the upper arm first and then the forearm. Because the cube is scaled differently under each part's own model-view matrix, the same drawCube call produces different shapes; here the upper arm is drawn in red and the forearm in white. The running effect is as follows:

Slide the shoulder slider and the whole arm (upper arm and forearm together) rotates; slide the elbow slider and only the forearm rotates. I believe this gives you a deeper feel for 3D transformations.

3. Smooth Shading

1. Shading modes

OpenGL supports two shading modes: flat shading and smooth shading (also called Gouraud shading). With flat shading, an entire primitive takes the color of one of its vertices, like the red triangle in tutorial 02. With smooth shading, each vertex is colored independently, and the points between vertices get colors interpolated evenly from the vertex colors. The following code draws a smoothly shaded cube:

- (void)updateColorCubeTransform
{
    ksMatrixLoadIdentity(&_modelViewMatrix);
    ksTranslate(&_modelViewMatrix, 0.0, -2, -5.5);
    ksRotate(&_modelViewMatrix, _rotateColorCube, 0.0, 1.0, 0.0);

    // Load the model-view matrix
    glUniformMatrix4fv(_modelViewSlot, 1, GL_FALSE, (GLfloat *)&_modelViewMatrix.m[0][0]);
}

- (void)drawColorCube
{
    GLfloat vertices[] = {
        -0.5f, -0.5f,  0.5f, 1.0, 0.0, 0.0, 1.0,   // red
        -0.5f,  0.5f,  0.5f, 1.0, 1.0, 0.0, 1.0,   // yellow
         0.5f,  0.5f,  0.5f, 0.0, 0.0, 1.0, 1.0,   // blue
         0.5f, -0.5f,  0.5f, 1.0, 1.0, 1.0, 1.0,   // white
         0.5f, -0.5f, -0.5f, 1.0, 1.0, 0.0, 1.0,   // yellow
         0.5f,  0.5f, -0.5f, 1.0, 0.0, 0.0, 1.0,   // red
        -0.5f,  0.5f, -0.5f, 1.0, 1.0, 1.0, 1.0,   // white
        -0.5f, -0.5f, -0.5f, 0.0, 0.0, 1.0, 1.0,   // blue
    };

    GLubyte indices[] = {
        // Front face
        0, 3, 2, 0, 2, 1,
        // Back face
        7, 5, 4, 7, 6, 5,
        // Left face
        0, 1, 6, 0, 6, 7,
        // Right face
        3, 4, 5, 3, 5, 2,
        // Up face
        1, 2, 5, 1, 5, 6,
        // Down face
        0, 7, 4, 0, 4, 3
    };

    glVertexAttribPointer(_positionSlot, 3, GL_FLOAT, GL_FALSE, 7 * sizeof(float), vertices);
    glVertexAttribPointer(_colorSlot, 4, GL_FLOAT, GL_FALSE, 7 * sizeof(float), vertices + 3);

    glEnableVertexAttribArray(_positionSlot);
    glEnableVertexAttribArray(_colorSlot);

    glDrawElements(GL_TRIANGLES, sizeof(indices)/sizeof(GLubyte), GL_UNSIGNED_BYTE, indices);

    glDisableVertexAttribArray(_colorSlot);
}

The updateColorCubeTransform method that updates the color cube's model-view matrix should be easy to follow by now, so I will not go over it. Focus on drawColorCube and compare it with drawCube. First, the vertex data: each vertex here carries not only position but also color. (Vertex data can carry even more, such as normals and texture coordinates.) Then the second glVertexAttribPointer call above, the one for _colorSlot, specifies the color data source: the data starts at vertices + 3, skipping the three GL_FLOAT position values; the stride to the next vertex's color is 7 * sizeof(float), since consecutive colors are separated by 3 position floats plus 4 color floats; each color component is a GL_FLOAT; and 4 components make up one color. Finally we enable the color attribute array so that the per-vertex colors take effect.
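If the stride and offset arithmetic feels opaque, the same interleaved layout can be described with a struct; this is only my own illustration of the memory layout, assuming the same _positionSlot, _colorSlot, and vertices as above, not code from the project:

#include <OpenGLES/ES2/gl.h>
#include <stddef.h>

typedef struct {
    GLfloat position[3];    // x, y, z
    GLfloat color[4];       // r, g, b, a
} ColorVertex;              // sizeof(ColorVertex) == 7 * sizeof(GLfloat), the stride used above

// Equivalent attribute setup if `vertices` were laid out as ColorVertex[8]:
// glVertexAttribPointer(_positionSlot, 3, GL_FLOAT, GL_FALSE, sizeof(ColorVertex),
//                       (const GLvoid *)vertices);
// glVertexAttribPointer(_colorSlot, 4, GL_FLOAT, GL_FALSE, sizeof(ColorVertex),
//                       (const GLvoid *)((const GLubyte *)vertices + offsetof(ColorVertex, color)));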

2. Render the color cube

Add the code that draws the color cube to render, immediately after the viewport is set:

 
// Draw color cube
//
[self updateColorCubeTransform];
[self drawColorCube];

Compile and run the program, and you will see a colored cube; the color across each face is interpolated evenly from its four vertex colors.
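To make the interpolation concrete, here is a tiny sketch (my own illustration, not project code) of what the rasterizer effectively does for each color channel of a fragment lying between two vertices:

// Linear interpolation of one color channel between two vertex values,
// where t is the fragment's relative position between the vertices (0.0 to 1.0).
static float lerpChannel(float a, float b, float t)
{
    return a + (b - a) * t;
}

// Example: a fragment halfway (t = 0.5) along an edge from a red vertex (1, 0, 0)
// to a yellow vertex (1, 1, 0) gets (1.0, 0.5, 0.0), an orange tone.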

 

4. Back-Face Culling

You may have noticed that the cube does not quite look like a cube. That is because back faces are not culled. By default OpenGL ES does not cull back faces; it draws the faces turned away from us as well as the faces turned toward us, which looks odd. OpenGL ES provides glFrontFace to choose which winding counts as the front side; by default, faces with counter-clockwise winding are front faces (GL_CCW). glCullFace specifies which faces to cull (GL_FRONT, GL_BACK, or GL_FRONT_AND_BACK); by default GL_BACK is culled. For culling to take effect at all, it must be enabled with glEnable(GL_CULL_FACE). Since the defaults already match what we need, all we have to do is call glEnable(GL_CULL_FACE) in a suitable place. Add it at the end of setupProjection:

 
glEnable(GL_CULL_FACE);
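For reference, a fully spelled-out version of the culling setup would look like this; the last two calls only restate the defaults described above, so the single glEnable line is all this project actually needs:

glEnable(GL_CULL_FACE);    // culling is disabled by default, so turn it on
glFrontFace(GL_CCW);       // counter-clockwise winding is front-facing (the default)
glCullFace(GL_BACK);       // discard back faces (the default)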

To view the whole cube better, let's add a rotation animation as in tutorial 04:

- (void)toggleDisplayLink
{
    if (_displayLink == nil) {
        _displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(displayLinkCallback:)];
        [_displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    }
    else {
        [_displayLink invalidate];
        [_displayLink removeFromRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
        _displayLink = nil;
    }
}

- (void)displayLinkCallback:(CADisplayLink *)displayLink
{
    // displayLink.duration is the frame interval in seconds,
    // so the cube rotates at roughly 90 degrees per second.
    _rotateColorCube += displayLink.duration * 90;

    [self render];
}

Compile and run the program. The effect is as follows:

 

5. Summary

This article introduced the concept of relative space transformation. If you grasp the hierarchical relationship between 3D spaces, I believe you will understand 3D transformations much more deeply. It also covered how to use color and how to cull back faces. There is not a huge amount of new material today, but every small step adds up, and in time the finer points of 3D graphics will become clear.

 

6. References

OpenGL Programming Guide

OpenGL ES 2.0 Programming Guide
