OpenGL ES 05 - Texturing our rectangle
I have decided to introduce texturing early because it may be easier to map a texture onto a flat object than onto a multi-faceted (or 3D) object. Besides, this seems to be the knowledge most sought after by iPhone OpenGL ES programmers, so I will concentrate on texturing for now.
I know that I have skipped many details of what OpenGL supports so that you can experiment and get objects on the screen without wading through OpenGL theory over and over again; where this section touches on differences between OpenGL and OpenGL ES, I will still skip some technical details at times.
This time, I will cover quite a lot of detail, which means this is a fairly long tutorial.
Even so, most of the code simply loads the texture into our project and hands it to the OpenGL engine so that OpenGL can use it. This is not complicated; it only requires a few calls into the iPhone SDK.
Texture preparation
Before we can use a texture, we need to load it into our application, get it into a format OpenGL understands, and tell OpenGL where to find it. Once that groundwork is done, the rest is as easy as coloring the rectangle in our previous tutorial.
Fire up Xcode and open EAGLView.h in the editing area. First, we need to declare a variable that OpenGL requires. Add the following statement:
GLuint textures[1];
Obviously, this is an array of GLuint. You have seen me use GLfloat before; like GLfloat, GLuint is an OpenGL typedef for an unsigned integer. You should always use the GLxxxx typedefs rather than Objective-C types, because these are the types OpenGL defines: we are programming against OpenGL, not against a particular development environment.
Later we will call the OpenGL function glGenTextures() to fill in this variable. For now we just declare it; glGenTextures() will populate it when we load the texture.
In the method prototypes section of the header, add the following declaration:
- (void)loadTexture;
Here we will add code to load the texture.
Add CoreGraphics Framework to your project
To load the texture and process it, we will use the CoreGraphics framework, because it provides everything we need; you do not have to write all the low-level code you see in Windows OpenGL tutorials.
In the Xcode "Groups & Files" column, right-click the "Frameworks" group and choose Add -> Existing Frameworks...
In the search box, enter "CoreGraphics.framework" and look at the results; pick the one that matches your application's target SDK (iPhone SDK 2.2.1 in my case). Click the framework (the folder-like icon) and add it to your project.
Next, we need to add a texture image to our project so it is included in our application bundle. Download the texture checkerplate.png and save it in the project directory. Then add the image to your project by right-clicking the Resources group and selecting Add -> Existing Files...
Loading the texture into our application and OpenGL
Switch to EAGLView.m and start implementing the loadTexture method:
- (void)loadTexture {
}
The code below is added to this method in sequence, so just append each snippet after the previous one. First, we need to get the image into our application, using the following code:
CGImageRef textureImage = [UIImage imageNamed:@"checkerplate.png"].CGImage;
if (textureImage == nil) {
    NSLog(@"Failed to load texture image");
    return;
}
CGImageRef is a CoreGraphics data type that holds all the information about an image. To get it, all we do is use the UIImage class method imageNamed:, which creates an autoreleased UIImage by finding the file by name in our application's main bundle. The CGImageRef is created automatically by the UIImage and is accessed through the UIImage's CGImage property.
Now we need to obtain the size of the image for later use:
NSInteger texWidth = CGImageGetWidth(textureImage);
NSInteger texHeight = CGImageGetHeight(textureImage);
The CGImageRef holds the image's width and height, but we cannot access them directly; we use the two accessor functions above.
A CGImageRef, as its name implies, is only a reference: it describes the image data rather than handing it to us directly. Therefore, we need to allocate some memory to hold the image data ourselves:
GLubyte *textureData = (GLubyte *)malloc(texWidth * texHeight * 4);
The size of the allocation is width * height * 4. Remember from the previous tutorial that OpenGL works with RGBA values? Each pixel occupies four bytes: one byte for each of the R, G, B, and A components.
Now we need to make a few fairly hefty CoreGraphics calls:
CGContextRef textureContext = CGBitmapContextCreate(
    textureData,
    texWidth,
    texHeight,
    8, texWidth * 4,
    CGImageGetColorSpace(textureImage),
    kCGImageAlphaPremultipliedLast);
CGContextDrawImage(textureContext,
    CGRectMake(0.0, 0.0, (float)texWidth, (float)texHeight),
    textureImage);
CGContextRelease(textureContext);
First, CGBitmapContextCreate(), as its name implies, returns a Quartz 2D bitmap drawing context. Basically, we hand CoreGraphics a pointer to our texture data buffer and tell it about the size and format of that data.
Next, CGContextDrawImage() actually draws the image into the buffer we allocated (pointed to by textureData). After this call, the malloc'd buffer contains all the image data in the form OpenGL needs.
When CoreGraphics has finished, we release the textureContext we created.
I know I have rushed through the explanation of the above code, but we are more interested in the OpenGL side of things. You can use this code to load any PNG image texture you add to your project.
Now we get to the OpenGL programming.
Remember the array we defined in the header file? We need it now. Look at the next line of code:
glGenTextures(1, &textures[0]);
We need to copy our texture data into the OpenGL engine (we cannot have OpenGL use our buffer directly), so we tell OpenGL to set aside storage for it. Remember that textures[] was defined as a GLuint? When glGenTextures() is called, OpenGL creates a handle, or name, that is unique for each texture we load. The actual value OpenGL returns is unimportant to us: every time we want to use the checkerplate.png texture, we simply refer to textures[0], and OpenGL does the rest.
We can also allocate names for several textures at once. For example, if our application needed 10 textures, we could do the following:
GLuint textures[10];
glGenTextures(10, &textures[0]);
In our example we only need one texture, so we generate just one.
Next, we need to activate (bind) the texture we just generated:
glBindTexture(GL_TEXTURE_2D, textures[0]);
The second parameter is obviously the texture we just created. The first parameter is always GL_TEXTURE_2D, because that is all OpenGL ES accepts at this point; "full" OpenGL also allows 1D and 3D textures, but I believe the parameter remains for future OpenGL ES compatibility.
Don't forget: a texture must be bound like this before it can be used.
Next, we send our texture data (pointed to by textureData) to OpenGL. OpenGL keeps texture data on its own side (ultimately in graphics memory), so the data must be converted into a format the hardware supports and stored in OpenGL's own space. The call looks a bit daunting, but most of the parameters are fixed by OpenGL ES restrictions anyway:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
Walking through these parameters, they are:
• target - for OpenGL ES this is always GL_TEXTURE_2D.
• level - the level of detail. 0 is the base, full-detail image; higher values specify the n-th mipmap reduction level. We use 0.
• internal_format - the internal format of the texture. In OpenGL ES it must be the same as format; both are GL_RGBA here.
• width - the width of the image in pixels.
• height - the height of the image in pixels.
• border - must always be 0; OpenGL ES does not support texture borders.
• format - the format of the pixel data; must be the same as internal_format.
• type - the data type of the pixel data. Ours is GL_UNSIGNED_BYTE: one unsigned byte per RGBA component, so four bytes per pixel.
• pixels - the pointer to the actual image data.
So although there are a lot of parameters, most of them are common sense; you only need to supply the variables we defined earlier (textureData, texWidth, and texHeight). Remember that after this call OpenGL holds its own copy of your texture data.
Now that we have handed the data over to OpenGL, we can free the textureData buffer we allocated earlier:
free(textureData);
Three more calls finish the setup:
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glEnable(GL_TEXTURE_2D);
These three calls complete the OpenGL setup for texture mapping. The first two tell OpenGL how to filter the texture when it is magnified (viewed close up - GL_TEXTURE_MAG_FILTER) and minified (viewed from a distance - GL_TEXTURE_MIN_FILTER). You must specify these at a minimum for texture mapping to work, and GL_LINEAR is a good all-round choice.
Finally, we call glEnable() to tell OpenGL to use texturing when it executes our drawing code.
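For reference, here is a sketch of the complete loadTexture method with the snippets above assembled in order. It adds nothing new beyond what was shown; it just collects the same code (same variable names, same checkerplate.png resource) into one place:
- (void)loadTexture {
    // Get the image out of the application bundle.
    CGImageRef textureImage = [UIImage imageNamed:@"checkerplate.png"].CGImage;
    if (textureImage == nil) {
        NSLog(@"Failed to load texture image");
        return;
    }

    NSInteger texWidth  = CGImageGetWidth(textureImage);
    NSInteger texHeight = CGImageGetHeight(textureImage);

    // Allocate a buffer and let CoreGraphics draw the image into it as RGBA.
    GLubyte *textureData = (GLubyte *)malloc(texWidth * texHeight * 4);
    CGContextRef textureContext = CGBitmapContextCreate(textureData,
                                                        texWidth, texHeight,
                                                        8, texWidth * 4,
                                                        CGImageGetColorSpace(textureImage),
                                                        kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(textureContext,
                       CGRectMake(0.0, 0.0, (float)texWidth, (float)texHeight),
                       textureImage);
    CGContextRelease(textureContext);

    // Hand the pixel data over to OpenGL, then free our copy.
    glGenTextures(1, &textures[0]);
    glBindTexture(GL_TEXTURE_2D, textures[0]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, textureData);
    free(textureData);

    // Set the filters and enable texturing.
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glEnable(GL_TEXTURE_2D);
}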
Finally, we need to call this method from the initWithCoder: initializer:
[self setupView];
[self loadTexture];    // ADD THIS LINE
The second line is added right after the existing call to setupView.
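For orientation, the initializer might look roughly like this after the change (the layer and context setup from the earlier tutorials is elided here; only the two calls at the end matter for this step):
- (id)initWithCoder:(NSCoder *)coder {
    if ((self = [super initWithCoder:coder])) {
        // ... existing CAEAGLLayer / EAGLContext setup from the earlier tutorials ...
        [self setupView];
        [self loadTexture];    // ADD THIS LINE
    }
    return self;
}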
DrawView Adjustment
Now for the hard work. Actually, changing the drawView method is no more difficult than coloring the rectangle in our previous tutorial. First, delete the squareColours[] array; we no longer need it.
When we colored the rectangle, we supplied a color value for each vertex. Texture mapping works the same way: instead of telling each vertex what color it is, we tell each vertex which texture coordinate it maps to.
Before doing this, we need to know what texture coordinates are. OpenGL places the origin (0, 0) of texture coordinates at the lower left corner of the texture, and each axis runs from 0 to 1. Take a look at the diagram of our texture:
Refer back to our squareVertices[]:
const GLfloat squareVertices[] = {
    -1.0,  1.0, 0.0,   // Top left
    -1.0, -1.0, 0.0,   // Bottom left
     1.0, -1.0, 0.0,   // Bottom right
     1.0,  1.0, 0.0    // Top right
};
Looking at the first vertex, we specified the top left corner of the square, so its texture coordinate is (0, 1). Our second vertex is the bottom left corner, so its texture coordinate is (0, 0). Then we move to the bottom right corner, texture coordinate (1, 0), and finish at the top right corner with texture coordinate (1, 1). Therefore, we specify squareTextureCoords[] as follows:
const GLshort squareTextureCoords[] = {
    0, 1,   // Top left
    0, 0,   // Bottom left
    1, 0,   // Bottom right
    1, 1    // Top right
};
Note that we use GLshort instead of GLfloat here. Add the above code to your project.
See how it lines up with our vertex array?
Now we need to modify the drawing code. The triangle-drawing code does not change, so go straight to the code that draws the rectangle. The new rectangle-drawing code is as follows:
glLoadIdentity();
glColor4f(1.0, 1.0, 1.0, 1.0);                             // NEW
glTranslatef(1.5, 0.0, -6.0);
glRotatef(rota, 0.0, 0.0, 1.0);
glVertexPointer(3, GL_FLOAT, 0, squareVertices);
glEnableClientState(GL_VERTEX_ARRAY);
glTexCoordPointer(2, GL_SHORT, 0, squareTextureCoords);    // NEW
glEnableClientState(GL_TEXTURE_COORD_ARRAY);               // NEW
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);              // NEW
OK, this code contains four new lines (marked // NEW), and the rectangle-coloring code from the previous tutorial has been removed. The first new line is the call to glColor4f(), which I will explain further below.
The next two new lines should look familiar; they work just like the vertex and color pointers we used before, only now they target the texture coordinates:
glTexCoordPointer(2, GL_SHORT, 0, squareTextureCoords);    // NEW
glEnableClientState(GL_TEXTURE_COORD_ARRAY);               // NEW
The first call tells OpenGL where our texture coordinate array is and what format it is in. The differences from the vertex pointer are that each coordinate has only two values (it is a 2D texture, after all) and we specify GL_SHORT because the array is declared as GLshort; there is no stride (0), and the last argument points to our coordinate array.
The second call enables the texture coordinate array client state, so OpenGL uses the specified coordinates for texture mapping.
The call to glDrawArrays() is unchanged. The last new line is:
glDisableClientState(GL_TEXTURE_COORD_ARRAY);    // NEW
Remember how we disabled the color array when we wanted the rectangle and the triangle colored differently? In the same way, we disable the texture coordinate array here, otherwise OpenGL would map this texture onto the triangle as well.
Save the code and click "Build and Go"; you should see the following:
Our checkerplate texture is now mapped onto the rectangle, and our triangle looks the same as before.
Further experiments
First, let me explain why we added this line of code before the rectangle is drawn:
glColor4f(1.0, 1.0, 1.0, 1.0);    // NEW
This sets the drawing color to opaque white. Can you work out why this line is needed? OpenGL is a state machine: once we set a state, it stays in effect until we change it. The drawing color was last set to blue, and it remains blue until we change it to white.
During texture mapping, OpenGL multiplies the current color by the color of each texture pixel to produce the final rendered color. With the color still set to blue, that would be:
                       R     G     B     A
Colour set:            0.0   0.0   0.8   1.0
Texture pixel colour:  1.0   1.0   1.0   1.0

So when OpenGL draws, each component is the product:

Colour_Red   * Pixel_Colour_Red   = Rendered_Red:    0.0 * 1.0 = 0.0
Colour_Green * Pixel_Colour_Green = Rendered_Green:  0.0 * 1.0 = 0.0
Colour_Blue  * Pixel_Colour_Blue  = Rendered_Blue:   0.8 * 1.0 = 0.8

In other words, the whole texture would be rendered in shades of blue.
With the drawing color set to white instead, a grey texture pixel (0.8, 0.8, 0.8) comes through unchanged:

Set colour:    1.0, 1.0, 1.0, 1.0
multiplied by
Pixel colour:  0.8, 0.8, 0.8, 1.0
Result:        0.8, 0.8, 0.8, 1.0
This is why we set the color to white.
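If it helps to see that multiplication as code, here is a tiny illustrative sketch (plain C, not part of the tutorial's source); OpenGL's default texture environment, GL_MODULATE, does exactly this, component by component:
// Illustrative only: modulate the current colour with a texture pixel,
// the way OpenGL's default GL_MODULATE texture environment does.
static void modulateColour(const GLfloat current[4], const GLfloat texel[4], GLfloat result[4]) {
    for (int i = 0; i < 4; i++) {
        result[i] = current[i] * texel[i];
    }
}
With the current colour set to white (all components 1.0), the result is always just the texture pixel's own colour.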
Okay, that's it!
I have covered a lot in this tutorial, but I hope you can see that there is not much code actually devoted to texture mapping itself; most of the work goes into loading and preparing the texture.