OpenGL ES Study Notes (2): Smooth Shading, Adapting to Screen Width and Height, and 3D Image Generation

This article is part of the author's notes on "OpenGL ES Application Development Practice Guide (Android Volume)". The code involved comes from the original book; if needed, please download it from the source specified in the book.

"Android Study Notes: Basic Usage of OpenGL ES, the Drawing Process, and Shader Compilation" implemented the Android version of an OpenGL ES HelloWorld, and clarified the OpenGL ES drawing process as well as the steps and caveats of compiling shaders. This article explains, starting from how graphics appear in the real world, how OpenGL ES can make images more realistic on mobile devices. First, real objects show continuous color variation, so smooth shading is a common operation in OpenGL ES for generating more realistic images. Second, mobile devices switch between landscape and portrait; when displaying images, the screen's aspect ratio must be taken into account so that the image is not distorted by the switch. Finally, real objects are three-dimensional and we observe them from a particular viewpoint, so OpenGL ES needs to display three-dimensional images. This article covers the following topics:

1. Smooth Shading

Smooth shading is achieved by assigning a different color to each vertex of a triangle and blending those colors across the triangle's surface. How, then, are triangles used to build the surface of an actual object? And how are the colors defined at the vertices blended?

First, the concept of a triangle fan. Starting from a center vertex, the first triangle is built from the two adjacent vertices, and each subsequent vertex adds another triangle, fanning out around the center point. To close the fan, simply repeat the second vertex at the end. In OpenGL, data drawn with GL_TRIANGLE_FAN is interpreted as a triangle fan.

glDrawArrays(GL_TRIANGLE_FAN, 0, 6);

In the above code, the parameter list of glDrawArrays is:

// C function void glDrawArrays(GLenum mode, GLint first, GLsizei count)
public static native void glDrawArrays(
    int mode,
    int first,
    int count
);

Here 0 is the index of the first vertex to read, and 6 is the number of vertices used to draw the triangle fan.

Next, the color of each vertex is defined as a vertex attribute, which requires work in two places: (1) the vertex data; (2) the shader. The vertex data in the previous article contained only X/Y coordinates; adding the color attribute appends R/G/B values after each vertex's coordinates. The format is as follows:

float[] tableVerticesWithTriangles = {
    // Order of coordinates: X, Y, R, G, B

    // Triangle Fan
     0f,    0f,   1f,   1f,   1f,
    -0.5f, -0.5f, 0.7f, 0.7f, 0.7f,
     0.5f, -0.5f, 0.7f, 0.7f, 0.7f,
     0.5f,  0.5f, 0.7f, 0.7f, 0.7f,
    -0.5f,  0.5f, 0.7f, 0.7f, 0.7f,
    -0.5f, -0.5f, 0.7f, 0.7f, 0.7f,
};
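Before OpenGL can read this array, it has to be copied into native memory. A minimal sketch of the usual Android pattern for this (the class and method names here are illustrative, not the book's exact code):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class VertexBufferDemo {
    public static final int BYTES_PER_FLOAT = 4;

    // Copies vertex data into a direct, native-order FloatBuffer,
    // which is the form glVertexAttribPointer expects on Android.
    public static FloatBuffer toFloatBuffer(float[] vertexData) {
        FloatBuffer buffer = ByteBuffer
                .allocateDirect(vertexData.length * BYTES_PER_FLOAT)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        buffer.put(vertexData);
        buffer.position(0); // rewind so OpenGL reads from the start
        return buffer;
    }
}
```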

Correspondingly, a color attribute is added to the vertex shader from the previous article.

attribute vec4 a_Position;
attribute vec4 a_Color;

varying vec4 v_Color;

void main()
{
    v_Color = a_Color;
    gl_Position = a_Position;
    gl_PointSize = 10.0;
}

What needs explaining here is the varying variable, which is the key to smooth shading. Take line AB as an example: if a_Color at vertex A is red and a_Color at vertex B is green, then along the segment from A to B the color is a blend of red and green. The closer to vertex A, the redder the blend appears; the closer to vertex B, the greener it appears. The blending algorithm is basic linear interpolation.

Blending across a triangle's surface works like linear interpolation along a line: each color is strongest near its own vertex and fades toward the other vertices. The relative weight of each color is determined by a ratio, but it is an area ratio rather than the length ratio used for linear interpolation along a line.
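The area-ratio (barycentric) weighting described above can be sketched in plain Java. This is an illustrative model of what the GPU does for a varying on a triangle, not code from the book:

```java
public class BarycentricDemo {
    // Signed-area magnitude of triangle (p, q, r), via the 2D cross product.
    static double area(double[] p, double[] q, double[] r) {
        return Math.abs((q[0] - p[0]) * (r[1] - p[1])
                      - (r[0] - p[0]) * (q[1] - p[1])) / 2.0;
    }

    // Interpolates per-vertex colors at point p inside triangle (a, b, c).
    // Each vertex's weight is the area of the sub-triangle opposite it,
    // divided by the whole triangle's area.
    public static double[] interpolate(double[] p,
            double[] a, double[] b, double[] c,
            double[] colorA, double[] colorB, double[] colorC) {
        double total = area(a, b, c);
        double wa = area(p, b, c) / total;
        double wb = area(p, a, c) / total;
        double wc = area(p, a, b) / total;
        double[] out = new double[3];
        for (int i = 0; i < 3; i++) {
            out[i] = wa * colorA[i] + wb * colorB[i] + wc * colorC[i];
        }
        return out;
    }
}
```

At the centroid all three weights are 1/3, and at a vertex the result is exactly that vertex's color, which matches the "strongest near its own vertex" behavior.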

Back in AirHockeyRenderer, first account for the color attribute in onSurfaceCreated.

aPositionLocation = glGetAttribLocation(program, A_POSITION);
aColorLocation = glGetAttribLocation(program, A_COLOR);

// Bind our data, specified by the variable vertexData, to the vertex
// attribute at location A_POSITION_LOCATION.
vertexData.position(0);
glVertexAttribPointer(aPositionLocation, POSITION_COMPONENT_COUNT, GL_FLOAT,
    false, STRIDE, vertexData);
glEnableVertexAttribArray(aPositionLocation);

// Bind our data, specified by the variable vertexData, to the vertex
// attribute at location A_COLOR_LOCATION.
vertexData.position(POSITION_COMPONENT_COUNT);
glVertexAttribPointer(aColorLocation, COLOR_COMPONENT_COUNT, GL_FLOAT,
    false, STRIDE, vertexData);
glEnableVertexAttribArray(aColorLocation);

The process is basically the same as in the previous article, with the color attribute added. aColorLocation is the location of the color attribute, and STRIDE is the stride: because the tableVerticesWithTriangles array contains not only vertex coordinates but also color attributes, reading the next vertex's coordinates requires stepping over the color components in between.

vertexData.position(POSITION_COMPONENT_COUNT) tells OpenGL to start reading color attributes from the first color component, not from the first position component.
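For concreteness, with 2 position components and 3 color components per vertex, the stride works out as follows (a sketch; the constant names mirror those used in the book's code):

```java
public class StrideDemo {
    public static final int POSITION_COMPONENT_COUNT = 2; // X, Y
    public static final int COLOR_COMPONENT_COUNT = 3;    // R, G, B
    public static final int BYTES_PER_FLOAT = 4;

    // Stride: how many bytes OpenGL must skip to get from one vertex's
    // attribute to the same attribute of the next vertex.
    public static final int STRIDE =
            (POSITION_COMPONENT_COUNT + COLOR_COMPONENT_COUNT) * BYTES_PER_FLOAT;
}
```

So each vertex occupies 20 bytes, and both attribute pointers use the same 20-byte stride while starting at different offsets.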

glVertexAttribPointer() associates the color data with attribute vec4 a_Color in the shader. Its parameter list is shown below; on Android it is implemented by calling the native method glVertexAttribPointerBounds.

// C function void glVertexAttribPointer(GLuint indx, GLint size, GLenum type, GLboolean normalized, GLsizei stride, const GLvoid *ptr)
private static native void glVertexAttribPointerBounds(
    int indx,
    int size,
    int type,
    boolean normalized,
    int stride,
    java.nio.Buffer ptr,
    int remaining
);

public static void glVertexAttribPointer(
    int indx,
    int size,
    int type,
    boolean normalized,
    int stride,
    java.nio.Buffer ptr
) {
    glVertexAttribPointerBounds(
        indx,
        size,
        type,
        normalized,
        stride,
        ptr,
        ptr.remaining()
    );
}

After the color attributes are associated, simply draw the vertex array in AirHockeyRenderer's onDrawFrame; OpenGL will automatically read the color attributes from the vertex data.

// Draw the table.
glDrawArrays(GL_TRIANGLE_FAN, 0, 6);

// Draw the center dividing line.
glDrawArrays(GL_LINES, 6, 2);

// Draw the first mallet.
glDrawArrays(GL_POINTS, 8, 1);

// Draw the second mallet.
glDrawArrays(GL_POINTS, 9, 1);

After completing the above steps, the table is rendered with a smooth gradient from the bright center vertex to the gray edge vertices.

In this section, we added color attributes to the vertex data and the vertex shader, read the data using a stride, and finally interpolated across the triangle's surface via the varying variable so that the color transitions smoothly between vertices.

2. Adapting to Screen Width and Height

In Android development, different layouts often need to be loaded when switching between landscape and portrait. With OpenGL, screen size and orientation still need to be adapted to. OpenGL uses projection to map the real world onto the screen, in a way that looks correct across different screen sizes and orientations. The mapping is achieved by matrix transformations, so this section involves some basic linear algebra.

First, we need to understand the normalized coordinate space and the virtual coordinate space. The normalized coordinate space used so far maps all objects into [-1, 1] on both the x and y axes, regardless of the screen's actual size and shape. On a real Android device, taking a 1280 × 720 resolution as an example, a square defined in normalized coordinates will therefore appear squashed. The virtual coordinate space instead keeps the smaller dimension fixed at [-1, 1] and extends the larger dimension according to the screen's aspect ratio.

Orthographic projection is the core of transforming the virtual coordinate space into the normalized coordinate space. The orthographic projection matrix, similar in spirit to a translation matrix, maps everything between the left, right, bottom, top, near, and far planes into normalized device coordinates [-1, 1]. The orthoM() method in the android.opengl.Matrix class generates an orthographic projection matrix; its parameter list is:

/**
 * Computes an orthographic projection matrix.
 *
 * @param m returns the result
 * @param mOffset
 * @param left
 * @param right
 * @param bottom
 * @param top
 * @param near
 * @param far
 */
public static void orthoM(float[] m, int mOffset,
        float left, float right, float bottom, float top,
        float near, float far) {
    ...
}

The generated orthographic projection matrix has the following form:
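The matrix figure from the original book is missing in this copy; written out from the standard definition that orthoM implements (in mathematical row notation, while OpenGL stores the array column-major), it is:

```latex
\begin{pmatrix}
\frac{2}{\text{right}-\text{left}} & 0 & 0 & -\frac{\text{right}+\text{left}}{\text{right}-\text{left}} \\[4pt]
0 & \frac{2}{\text{top}-\text{bottom}} & 0 & -\frac{\text{top}+\text{bottom}}{\text{top}-\text{bottom}} \\[4pt]
0 & 0 & \frac{-2}{\text{far}-\text{near}} & -\frac{\text{far}+\text{near}}{\text{far}-\text{near}} \\[4pt]
0 & 0 & 0 & 1
\end{pmatrix}
```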

The best way to understand how the orthographic projection matrix maps between the virtual and normalized coordinate spaces is an example.

Take landscape mode at 1280 × 720 resolution as an example. The x-axis range of the virtual coordinate space is [-1280/720, 1280/720], i.e. [-1.78, 1.78], while the screen itself is the normalized coordinate space [-1, 1]. The point at the upper-right corner of the screen has coordinates (1, 1) in the normalized space and (1.78, 1) in the virtual space; multiplying by the orthographic projection matrix maps (1.78, 1) back to (1, 1).
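That mapping can be checked numerically. With left = -aspectRatio, right = aspectRatio, bottom = -1, top = 1, the orthographic matrix reduces to a scale of 1/aspectRatio on x and leaves y unchanged (an illustrative sketch, not book code):

```java
public class OrthoMappingDemo {
    // Applies the orthographic mapping for a landscape screen where
    // left = -aspectRatio, right = aspectRatio, bottom = -1, top = 1:
    // x' = 2 / (right - left) * x = x / aspectRatio, and y is unchanged.
    public static double[] toNormalized(double x, double y, double aspectRatio) {
        return new double[] { x * (2.0 / (2.0 * aspectRatio)), y };
    }
}
```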

Translating the above into code touches three places: (1) the shader; (2) creating the orthographic matrix; (3) passing the matrix to the shader.

uniform mat4 u_Matrix;

attribute vec4 a_Position;
attribute vec4 a_Color;

varying vec4 v_Color;

void main()
{
    v_Color = a_Color;
    gl_Position = u_Matrix * a_Position;
    gl_PointSize = 10.0;
}

Compared with the previous shader, gl_Position is now set by multiplying u_Matrix by a_Position, where u_Matrix is the orthographic projection matrix on the left and a_Position holds the virtual coordinate space coordinates on the right; the product is in normalized coordinates.

final float aspectRatio = width > height ?
    (float) width / (float) height :
    (float) height / (float) width;

if (width > height) {
    // Landscape
    orthoM(projectionMatrix, 0, -aspectRatio, aspectRatio, -1f, 1f, -1f, 1f);
} else {
    // Portrait or square
    orthoM(projectionMatrix, 0, -1f, 1f, -aspectRatio, aspectRatio, -1f, 1f);
}

By passing different left/right and bottom/top parameters, orthoM generates the appropriate orthographic projection matrix for landscape or portrait mode.

// Assign the matrix
glUniformMatrix4fv(uMatrixLocation, 1, false, projectionMatrix, 0);

// C function void glUniformMatrix4fv(GLint location, GLsizei count, GLboolean transpose, const GLfloat *value)
public static native void glUniformMatrix4fv(
    int location,
    int count,
    boolean transpose,
    float[] value,
    int offset
);

Finally, the orthographic projection matrix generated above is passed to the shader via the glUniformMatrix4fv method. The result is that the object keeps the same shape in both landscape and portrait mode.

3. Three-Dimensional Image Generation

The previous section used orthographic projection to keep objects undistorted across screen aspect ratios; to display a three-dimensional effect, this section uses perspective projection. If you are interested in how the projection matrices are derived, see "Deriving Projection Matrices", which covers the derivation and use of both orthographic and perspective projection in detail.

OpenGL performs a perspective divide, dividing the clip-space x, y, and z components by the w component, which produces the three-dimensional effect. In theory, then, updating the w component of each vertex in the tableVerticesWithTriangles array (along with setting the z component) to an appropriate value would let OpenGL produce a three-dimensional display automatically. In practice, however, the w component is generally not hard-coded but generated by a perspective projection matrix. A general perspective projection matrix looks like this:
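The matrix figure is missing from this copy; reconstructed to match the perspectiveM code below (mathematical row notation, with α the vertical field of view and a the focal length):

```latex
\begin{pmatrix}
a/\mathit{aspect} & 0 & 0 & 0 \\[2pt]
0 & a & 0 & 0 \\[2pt]
0 & 0 & -\dfrac{f+n}{f-n} & -\dfrac{2fn}{f-n} \\[6pt]
0 & 0 & -1 & 0
\end{pmatrix},
\qquad a = \frac{1}{\tan(\alpha/2)}
```

The -1 in the last row is what copies -z into the w component, setting up the perspective divide.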

The perspective projection matrix is created in code as follows:

public static void perspectiveM(float[] m, float yFovInDegrees, float aspect,
        float n, float f) {
    // Convert the field of view to radians (the angle α in the formula)
    final float angleInRadians = (float) (yFovInDegrees * Math.PI / 180.0);

    // Calculate the focal length (a in the formula)
    final float a = (float) (1.0 / Math.tan(angleInRadians / 2.0));

    // Generate the matrix (column-major order)
    m[0] = a / aspect;
    m[1] = 0f;
    m[2] = 0f;
    m[3] = 0f;

    m[4] = 0f;
    m[5] = a;
    m[6] = 0f;
    m[7] = 0f;

    m[8] = 0f;
    m[9] = 0f;
    m[10] = -((f + n) / (f - n));
    m[11] = -1f;

    m[12] = 0f;
    m[13] = 0f;
    m[14] = -((2f * f * n) / (f - n));
    m[15] = 0f;
}

Call this method in onSurfaceChanged to create the perspective matrix. A 45-degree field of view is used, with the near plane at a distance of 1 and the far plane at a distance of 10. Because a right-handed coordinate system is used, the view frustum starts at z = -1 and ends at z = -10.

MatrixHelper.perspectiveM(projectionMatrix, 45, (float) width / (float) height, 1f, 10f);
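The effect of this matrix can be verified numerically: after the perspective divide, a point on the near plane (z = -1) maps to NDC z = -1, and a point on the far plane (z = -10) maps to NDC z = 1. The following sketch reproduces the relevant rows of the perspectiveM matrix above (illustrative, not book code):

```java
public class PerspectiveCheck {
    // Builds the same matrix entries as perspectiveM (45-degree field of
    // view, near = 1, far = 10) and applies them to an eye-space point.
    public static double[] project(double x, double y, double z) {
        double n = 1.0, f = 10.0, fov = 45.0, aspect = 1280.0 / 720.0;
        double a = 1.0 / Math.tan(Math.toRadians(fov) / 2.0); // focal length

        double clipX = (a / aspect) * x;
        double clipY = a * y;
        double clipZ = -((f + n) / (f - n)) * z - (2.0 * f * n) / (f - n);
        double clipW = -z; // the -1 in m[11] copies -z into w

        // Perspective division by w yields normalized device coordinates.
        return new double[] { clipX / clipW, clipY / clipW, clipZ / clipW };
    }
}
```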

Because the vertices do not specify a z value, they sit at z = 0 by default, outside the frustum, so the object needs to be translated. The translation uses a model matrix, which can be generated with OpenGL's built-in matrix functions.

setIdentityM(modelMatrix, 0);
translateM(modelMatrix, 0, 0f, 0f, -2.5f);
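What translateM does to a vertex can be seen by applying a 4x4 translation to a homogeneous point (an illustrative sketch of the math, not Android code):

```java
public class TranslateDemo {
    // A 4x4 translation by (tx, ty, tz) applied to a homogeneous point:
    // this is what translateM(modelMatrix, 0, 0f, 0f, -2.5f) does
    // to a vertex at the origin, (0, 0, 0, 1).
    public static double[] translate(double[] p, double tx, double ty, double tz) {
        return new double[] {
            p[0] + tx * p[3],
            p[1] + ty * p[3],
            p[2] + tz * p[3],
            p[3]
        };
    }
}
```

The origin lands at (0, 0, -2.5, 1), which is inside the frustum spanning z = -1 to z = -10.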

When using the model matrix and the perspective matrix together, mind the order of matrix multiplication. Intuitively, translating an object along any axis does not change its shape as seen from a fixed viewpoint, while perspective does; therefore the object should be translated first and the perspective projection applied afterwards.

In the formula, this means the projection matrix goes on the left and the model matrix on the right.

final float[] temp = new float[16];
multiplyMM(temp, 0, projectionMatrix, 0, modelMatrix, 0);
System.arraycopy(temp, 0, projectionMatrix, 0, temp.length);

Finally, one more change is needed to see a convincing 3D effect: rotation. A rotation matrix uses sine and cosine to convert the rotation angle into scaling factors for the coordinates. OpenGL likewise provides a method that implements the rotation matrix:

/**
 * Rotates matrix m in place by angle a (in degrees)
 * around the axis (x, y, z).
 *
 * @param m source matrix
 * @param mOffset index into m where the matrix starts
 * @param a angle to rotate in degrees
 * @param x X axis component
 * @param y Y axis component
 * @param z Z axis component
 */
public static void rotateM(float[] m, int mOffset,
        float a, float x, float y, float z) {
    synchronized (sTemp) {
        setRotateM(sTemp, 0, a, x, y, z);
        multiplyMM(sTemp, 16, m, mOffset, sTemp, 0);
        System.arraycopy(sTemp, 16, m, mOffset, 16);
    }
}

This rotates the matrix m by a degrees around the axis defined by the (x, y, z) components. The following call rotates the object -60 degrees around the x axis:

rotateM(modelMatrix, 0, -60f, 1f, 0f, 0f);
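The rotation can be checked against the standard right-handed rotation about the x axis, which is what setRotateM builds for axis (1, 0, 0) (an illustrative sketch, not Android code):

```java
public class RotationDemo {
    // Rotation by the given angle (degrees) around the x axis:
    // y' = y*cos(a) - z*sin(a),  z' = y*sin(a) + z*cos(a).
    public static double[] rotateX(double[] p, double degrees) {
        double r = Math.toRadians(degrees);
        double c = Math.cos(r), s = Math.sin(r);
        return new double[] {
            p[0],
            p[1] * c - p[2] * s,
            p[1] * s + p[2] * c
        };
    }
}
```

Rotating the point (0, 1, 0) by -60 degrees tilts the table's far edge away from the viewer, which is what produces the 3D look.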

After completing all the above steps, the final three-dimensional rendering is obtained.

Summary:

(1) Smooth color transitions between vertices are achieved through interpolation;

(2) Objects keep their shape when switching between landscape and portrait through orthographic projection;

(3) Objects are displayed three-dimensionally through perspective projection, translation, and rotation.