OpenGL ES Learning Notes (3): Textures
First, let me note that this article consists of my notes on studying the OpenGL ES Application Development Practice Guide (Android volume). The code involved comes from the original book; if needed, download it from the source-code address given there.
OpenGL ES Study Notes (2) -- smooth shading, adapting to screen aspect ratio, and building a 3D view: smooth shading and aspect-ratio correction are techniques for simulating real scenes on mobile devices; on top of that, a 3D perspective was added via the w component, implemented using orthographic and perspective projection. Building on that foundation, this article constructs a more refined 3D scene. 3D effects are essentially combinations of points, lines, and triangles; textures cover the surfaces of objects with images or photos to add fine detail. The implementation has two steps: 1) loading a texture image into OpenGL; 2) displaying it on the object's surface. (A bit like putting an elephant into a refrigerator in a few steps~) Along the way, it involves managing shader programs, the different texture filtering modes, and a new class structure for vertex data. Each issue is explained below.
I. Texture Loading
To cover the surface of an object with a texture, the texture coordinates must be aligned with the object's coordinates. Two-dimensional texture coordinates in OpenGL differ from computer image coordinates.
The difference between the two is that the vertical axis is inverted: computer images put the origin at the top-left with Y increasing downward, while OpenGL textures put the origin at the bottom-left with T increasing upward. In addition, textures do not have to be square, but in OpenGL ES 2.0 each dimension should be a power of two (otherwise mipmapping and some wrap modes are unavailable).
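As a quick sanity check on those two constraints, here is a small plain-Java sketch (the helper names are my own, not from the book) that tests whether a dimension is a power of two and converts a top-left-origin image Y coordinate into an OpenGL T coordinate:

```java
public class TextureCoords {
    // A positive integer is a power of two if exactly one bit is set.
    public static boolean isPowerOfTwo(int dimension) {
        return dimension > 0 && (dimension & (dimension - 1)) == 0;
    }

    // Computer images: origin at top-left, Y grows downward (0..1).
    // OpenGL textures: origin at bottom-left, T grows upward (0..1).
    public static float imageYToTextureT(float imageY) {
        return 1f - imageY;
    }
}
```

So an image pixel at the very top (imageY = 0) corresponds to T = 1 in texture space, which is why textures loaded naively often appear upside down.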
The method parameter list for loading texture images should include the Android Context and resource ID, and the returned value should be the OpenGL texture ID. Therefore, the method declaration is as follows:
public static int loadTexture(Context context, int resourceId) {}
First, generate a texture object, following the usual OpenGL object-generation pattern. After generation succeeds, bind the texture so that subsequent texture calls apply to this object. Then load the bitmap data; OpenGL reads it and copies it into the bound texture object.
final int[] textureObjectIds = new int[1];
glGenTextures(1, textureObjectIds, 0);

if (textureObjectIds[0] == 0) {
    if (LoggerConfig.ON) {
        Log.w(TAG, "Could not generate a new OpenGL texture object.");
    }
    return 0;
}
final BitmapFactory.Options options = new BitmapFactory.Options();
options.inScaled = false;

// Read in the resource.
final Bitmap bitmap = BitmapFactory.decodeResource(
    context.getResources(), resourceId, options);

if (bitmap == null) {
    if (LoggerConfig.ON) {
        Log.w(TAG, "Resource ID " + resourceId + " could not be decoded.");
    }
    glDeleteTextures(1, textureObjectIds, 0);
    return 0;
}

// Bind to the texture in OpenGL.
glBindTexture(GL_TEXTURE_2D, textureObjectIds[0]);
These two blocks need little explanation. options.inScaled = false tells Android not to pre-scale the bitmap for the screen density, so OpenGL receives the original image data. One thing to pay attention to when OpenGL reads bitmap data is texture filtering. The OpenGL texture filtering modes are shown in the following table (content from the original book):
| GL_NEAREST | Nearest-neighbor filtering |
| GL_NEAREST_MIPMAP_NEAREST | Nearest-neighbor filtering on the closest mipmap level |
| GL_NEAREST_MIPMAP_LINEAR | Nearest-neighbor filtering, interpolating between mipmap levels |
| GL_LINEAR | Bilinear filtering |
| GL_LINEAR_MIPMAP_NEAREST | Bilinear filtering on the closest mipmap level |
| GL_LINEAR_MIPMAP_LINEAR | Trilinear filtering (bilinear filtering with interpolation between mipmap levels) |
For the detailed interpretation and implementation of each filter, please look them up on your own. Here, GL_LINEAR_MIPMAP_LINEAR is used for minification and GL_LINEAR for magnification.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
The last step of texture loading is to copy the bitmap into the currently bound texture object (texImage2D here is a static import of GLUtils.texImage2D):
texImage2D(GL_TEXTURE_2D, 0, bitmap, 0);
After copying, a few follow-up operations remain: generating mipmaps, recycling the bitmap object (bitmaps consume a lot of memory), unbinding the texture, and returning the texture object ID.
glGenerateMipmap(GL_TEXTURE_2D);

// Recycle the bitmap, since its data has been loaded into OpenGL.
bitmap.recycle();

// Unbind from the texture.
glBindTexture(GL_TEXTURE_2D, 0);

return textureObjectIds[0];
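glGenerateMipmap builds the complete mipmap chain by repeatedly halving each dimension until a 1x1 level is reached. As a side note (my own illustration, not book code), the number of levels in that chain for a w x h texture can be computed like this:

```java
public class MipmapMath {
    // Number of mipmap levels for a w x h texture:
    // floor(log2(max(w, h))) + 1, i.e. halve the larger
    // dimension until it reaches 1, counting each level.
    public static int mipLevelCount(int width, int height) {
        int levels = 1;
        int size = Math.max(width, height);
        while (size > 1) {
            size >>= 1;
            levels++;
        }
        return levels;
    }
}
```

A 512x512 texture therefore carries 10 levels (512, 256, ..., 2, 1), which is also why power-of-two dimensions make the chain come out evenly.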
II. Texture Shaders
Before continuing to write shaders in GLSL, let's cover something we skipped earlier:
OpenGL Shading Language programs are short custom programs written by developers that execute on the graphics card's GPU (Graphics Processing Unit) rather than as a fixed part of the rendering pipeline, making different stages of the pipeline programmable, for example the view transformation and the projection transformation.
GLSL (GL Shading Language) shader code is divided into two parts: the Vertex Shader and the Fragment Shader (sometimes there is also a Geometry Shader). The vertex shader runs once per vertex; it can access the current OpenGL state, which is passed in through GLSL built-in variables. GLSL is a high-level shading language based on C, avoiding the complexity of writing in assembly or hardware-specific languages.
The preceding explanation is adapted from Baidu Baike; the key point is that programs written in GLSL execute on the GPU, so shaders do not consume CPU time. This suggests using GLSL for time-consuming rendering work (such as real-time camera filters), where it may be more efficient than processing the data via the NDK. Back to the texture shaders: to support textures, both the vertex and fragment shaders must be changed.
uniform mat4 u_Matrix;

attribute vec4 a_Position;
attribute vec2 a_TextureCoordinates;

varying vec2 v_TextureCoordinates;

void main() {
    v_TextureCoordinates = a_TextureCoordinates;
    gl_Position = u_Matrix * a_Position;
}
precision mediump float;

uniform sampler2D u_TextureUnit;
varying vec2 v_TextureCoordinates;

void main() {
    gl_FragColor = texture2D(u_TextureUnit, v_TextureCoordinates);
}
In the vertex shader above, a_TextureCoordinates is of type vec2 because texture coordinates have two components, the S coordinate and the T coordinate. In the fragment shader, u_TextureUnit of type sampler2D refers to the texture unit from which the 2D texture data is sampled.
III. Updating the Vertex Data Class Structure
First, vertex data of different kinds is split into different classes, each representing one type of physical object. A VertexArray object is initialized in each class's constructor. VertexArray is implemented as described in the previous article: a FloatBuffer stores the vertex data in native memory, and a generic method associates shader attributes with the vertex data.
private final FloatBuffer floatBuffer;

public VertexArray(float[] vertexData) {
    floatBuffer = ByteBuffer
        .allocateDirect(vertexData.length * BYTES_PER_FLOAT)
        .order(ByteOrder.nativeOrder())
        .asFloatBuffer()
        .put(vertexData);
}

public void setVertexAttribPointer(int dataOffset, int attributeLocation,
        int componentCount, int stride) {
    floatBuffer.position(dataOffset);
    glVertexAttribPointer(attributeLocation, componentCount, GL_FLOAT,
        false, stride, floatBuffer);
    glEnableVertexAttribArray(attributeLocation);

    floatBuffer.position(0);
}
public Table() {
    vertexArray = new VertexArray(VERTEX_DATA);
}
The VERTEX_DATA parameter passed in the constructor is vertex data.
private static final float[] VERTEX_DATA = {
    // Order of coordinates: X, Y, S, T

    // Triangle Fan
       0f,    0f, 0.5f, 0.5f,
    -0.5f, -0.8f,   0f, 0.9f,
     0.5f, -0.8f,   1f, 0.9f,
     0.5f,  0.8f,   1f, 0.1f,
    -0.5f,  0.8f,   0f, 0.1f,
    -0.5f, -0.8f,   0f, 0.9f };
In this data set, x = 0, y = 0 maps to texture coordinates S = 0.5, T = 0.5; x = -0.5, y = -0.8 maps to S = 0, T = 0.9; and so on. This relationship follows from the comparison between OpenGL texture coordinates and computer image coordinates shown earlier. As for using 0.1 and 0.9 as T coordinates instead of 0 and 1: this avoids squashing the texture; instead, the texture is clipped and only the portion from 0.1 to 0.9 is used.
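To make that mapping concrete, here is a small sketch (my own helpers, not book code) that derives S and T from the table's clip-space X and Y for this particular data set: X spans [-0.5, 0.5] and maps to S in [0, 1], while Y spans [-0.8, 0.8] and maps to T in [0.9, 0.1] (inverted axis, clipped to the 0.1..0.9 band):

```java
public class TableTextureMapping {
    // X in [-0.5, 0.5] maps linearly to S in [0, 1].
    public static float sFromX(float x) {
        return x + 0.5f;
    }

    // Y in [-0.8, 0.8] maps linearly to T in [0.9, 0.1]:
    // the T axis is inverted relative to Y, and the slope 0.5
    // clips the texture to the 0.1..0.9 band.
    public static float tFromY(float y) {
        return 0.5f - 0.5f * y;
    }
}
```

Plugging in the vertices from VERTEX_DATA reproduces the S, T values listed there, e.g. (x, y) = (-0.5, -0.8) gives (S, T) = (0, 0.9).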
After vertexArray is initialized, its setVertexAttribPointer() method binds the vertex data to the shader program.
public void bindData(TextureShaderProgram textureProgram) {
    vertexArray.setVertexAttribPointer(
        0,
        textureProgram.getPositionAttributeLocation(),
        POSITION_COMPONENT_COUNT,
        STRIDE);

    vertexArray.setVertexAttribPointer(
        POSITION_COMPONENT_COUNT,
        textureProgram.getTextureCoordinatesAttributeLocation(),
        TEXTURE_COORDINATES_COMPONENT_COUNT,
        STRIDE);
}
This method calls setVertexAttribPointer() once per attribute, obtaining each attribute's location from the shader: getPositionAttributeLocation() binds the position data to its shader attribute, and getTextureCoordinatesAttributeLocation() binds the texture-coordinate data to its shader attribute.
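The constants used in bindData() above follow from the interleaved X, Y, S, T layout of VERTEX_DATA; they are presumably defined along these lines (a sketch with my own class name, values implied by the data layout):

```java
public class TableConstants {
    public static final int BYTES_PER_FLOAT = 4;
    public static final int POSITION_COMPONENT_COUNT = 2;            // X, Y
    public static final int TEXTURE_COORDINATES_COMPONENT_COUNT = 2; // S, T

    // Stride: byte distance from the start of one vertex record
    // to the start of the next, since position and texture
    // coordinates are interleaved in the same array.
    public static final int STRIDE = (POSITION_COMPONENT_COUNT
        + TEXTURE_COORDINATES_COMPONENT_COUNT) * BYTES_PER_FLOAT;
}
```

With 4 floats per vertex at 4 bytes each, the stride is 16 bytes, and the texture coordinates start at an offset of POSITION_COMPONENT_COUNT floats, matching the second setVertexAttribPointer() call.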
After binding, you only need to call glDrawArrays() to draw.
public void draw() {
    glDrawArrays(GL_TRIANGLE_FAN, 0, 6);
}
IV. Shader Programs
With textures introduced, more shader programs are in play, so management classes are needed. Grouping by shader type, we create a texture shader program class and a color shader program class, and factor their common functionality into a base class, ShaderProgram, which TextureShaderProgram and ColorShaderProgram each extend. ShaderProgram's main job is to read the shader source given an Android Context and the shaders' resource IDs. Its constructor signature is as follows:
protected ShaderProgram(Context context, int vertexShaderResourceId,
        int fragmentShaderResourceId) {
    ……
}
Building the shader program follows the steps described earlier for the ShaderHelper class, including compiling and linking.
public static int buildProgram(String vertexShaderSource,
        String fragmentShaderSource) {
    int program;

    // Compile the shaders.
    int vertexShader = compileVertexShader(vertexShaderSource);
    int fragmentShader = compileFragmentShader(fragmentShaderSource);

    // Link them into a shader program.
    program = linkProgram(vertexShader, fragmentShader);

    if (LoggerConfig.ON) {
        validateProgram(program);
    }

    return program;
}
The implementations of compileVertexShader() (compilation) and linkProgram() (linking) were covered in detail in earlier notes. The ShaderProgram constructor can then call buildProgram():
program = ShaderHelper.buildProgram(
    TextResourceReader.readTextFileFromResource(
        context, vertexShaderResourceId),
    TextResourceReader.readTextFileFromResource(
        context, fragmentShaderResourceId));
After the shader program is built, tell OpenGL to use it for subsequent rendering.
public void useProgram() {
    // Set the current OpenGL shader program to this program.
    glUseProgram(program);
}
The subclasses TextureShaderProgram and ColorShaderProgram call the parent constructor in their own constructors, then read the uniform and attribute locations from the shaders.
public TextureShaderProgram(Context context) {
    super(context, R.raw.texture_vertex_shader,
        R.raw.texture_fragment_shader);

    // Retrieve uniform locations for the shader program.
    uMatrixLocation = glGetUniformLocation(program, U_MATRIX);
    uTextureUnitLocation = glGetUniformLocation(program, U_TEXTURE_UNIT);

    // Retrieve attribute locations for the shader program.
    aPositionLocation = glGetAttribLocation(program, A_POSITION);
    aTextureCoordinatesLocation = glGetAttribLocation(program, A_TEXTURE_COORDINATES);
}
Next, pass the matrix to its uniform, as described in previous notes.
// Pass the matrix into the shader program.
glUniformMatrix4fv(uMatrixLocation, 1, false, matrix, 0);
Passing a texture is more involved than passing a matrix: the texture is not passed directly but is referenced through a texture unit, because a GPU can only keep a limited number of textures active at once; the texture units represent the active textures currently being drawn.
// Set the active texture unit to texture unit 0.
glActiveTexture(GL_TEXTURE0);

// Bind the texture to this unit.
glBindTexture(GL_TEXTURE_2D, textureId);

// Tell the texture uniform sampler to use this texture in the shader by
// telling it to read from texture unit 0.
glUniform1i(uTextureUnitLocation, 0);
glActiveTexture(GL_TEXTURE0) sets the active texture unit to unit 0; glBindTexture then binds the texture identified by textureId to that unit; finally, glUniform1i passes the selected texture unit to u_TextureUnit (the sampler2D) in the fragment shader.
The color shader program class is implemented much like the texture shader program class: it also obtains the uniform and attribute locations in its constructor, but its setUniforms() only needs to pass the matrix.
public void setUniforms(float[] matrix) {
    // Pass the matrix into the shader program.
    glUniformMatrix4fv(uMatrixLocation, 1, false, matrix, 0);
}
V. Texture Rendering
With this preparation, the vertex data and the shader programs live in separate classes, so texture drawing can be done in the renderer class using the pieces built above. The updated member variables and constructor of the AirHockeyRenderer class are as follows:
private final Context context;

private final float[] projectionMatrix = new float[16];
private final float[] modelMatrix = new float[16];

private Table table;
private Mallet mallet;

private TextureShaderProgram textureProgram;
private ColorShaderProgram colorProgram;

private int texture;

public AirHockeyRenderer(Context context) {
    this.context = context;
}
Initialization mainly involves setting the clear color, initializing the vertex arrays and shader programs, and loading the texture.
@Override
public void onSurfaceCreated(GL10 glUnused, EGLConfig config) {
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);

    table = new Table();
    mallet = new Mallet();

    textureProgram = new TextureShaderProgram(context);
    colorProgram = new ColorShaderProgram(context);

    texture = TextureHelper.loadTexture(context, R.drawable.air_hockey_surface);
}
Finally, onDrawFrame() draws the objects by calling into the shader program classes and the object (vertex data) classes built above.
@Override
public void onDrawFrame(GL10 glUnused) {
    // Clear the rendering surface.
    glClear(GL_COLOR_BUFFER_BIT);

    // Draw the table.
    textureProgram.useProgram();
    textureProgram.setUniforms(projectionMatrix, texture);
    table.bindData(textureProgram);
    table.draw();

    // Draw the mallets.
    colorProgram.useProgram();
    colorProgram.setUniforms(projectionMatrix);
    mallet.bindData(colorProgram);
    mallet.draw();
}
To sum up, this note involves the following content:
1) Loading a texture and displaying it on an object;
2) Reorganizing the program to manage switching between multiple shader programs and multiple sets of vertex data;
3) Adjusting textures to fit the shapes they are drawn onto, either by adjusting the texture coordinates or by stretching or squashing the texture itself;
4) Textures cannot be passed directly; they must be bound to a texture unit, and the texture unit is then passed to the shader.