Original post address: http://ogldev.atspace.co.uk/www/tutorial16/tutorial16.html
Texture mapping applies an image (or texture) to one or more faces of a 3D model. A texture can be any image, and texture mapping can greatly increase the realism of 3D objects. Common textures include bricks, plant leaves, and so on.
For comparison, here is the same model rendered with texture mapping and without it.
To use texture mapping, we must do the following: load a texture into OpenGL, provide texture coordinates for the vertices (to map the texture onto them), and perform a sampling operation on the texture using those texture coordinates to obtain a pixel color.
Objects in 3D space are scaled, rotated, translated, and finally projected onto the screen. Depending on the camera's position and orientation, the final on-screen appearance can vary widely, but the GPU uses the texture coordinates to ensure that the texture mapping result stays correct. In the rasterization stage, the GPU also interpolates the texture coordinates, so that each fragment gets its own texture coordinate. In the fragment shader, each fragment (or pixel) samples a texel color from the texture at its texture coordinate, and combines that color with the fragment's own color, or with a color computed from lighting, to produce the pixel's final color. In later tutorials, we will see that textures can hold different kinds of data and be used to implement many special effects.
OpenGL supports 1D, 2D, 3D, cube-map, and many other texture types, which are used in different techniques. We will start with 2D textures. A 2D texture is simply a surface with a width and a height; multiplying the width by the height gives the number of texels. How do we specify a vertex's texture coordinates? They are not given as texel positions on the texture surface. That would be far too restrictive: the surfaces of our 3D objects vary, some large, some small, so we would have to keep updating the texture coordinates, which is obviously impractical. Instead there is a normalized texture coordinate space, in which each dimension ranges over [0, 1]. Texture coordinates are therefore floating-point values, and multiplying them by the texture width or height gives the texel position for the vertex. For example, if the texture coordinate is [0.5, 0.1], the texture width is 320, and the texture height is 200, the corresponding texel position is (160, 20) (320 * 0.5 = 160 and 200 * 0.1 = 20).
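In code, going from normalized texture coordinates to a texel position is just a multiplication. A small illustrative sketch (the helper name and signature are ours, not part of the tutorial's source):

// Map normalized texture coordinates (u, v) in [0, 1] to integer texel indices.
void TexCoordToTexel(float u, float v, int texWidth, int texHeight,
                     int& texelX, int& texelY)
{
    texelX = (int)(u * texWidth);    // e.g. 0.5 * 320 = 160
    texelY = (int)(v * texHeight);   // e.g. 0.1 * 200 = 20
}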
Texture space is usually called UV space. U corresponds to the X axis of 2D Cartesian coordinates, and V to the Y axis. In OpenGL the U axis runs from left to right and the V axis from bottom to top, as shown in the figure below: the origin (0, 0) is in the lower-left corner, V increases upward, and U increases to the right:
The figure below shows a triangle with its texture coordinates specified:
After a triangle is transformed, its texture coordinates remain unchanged. Suppose that just before rasterization, the triangle ends up positioned as follows.
Texture coordinates are attributes of the triangle's vertices: no matter how the triangle is transformed, each vertex keeps its texture coordinates. Of course, texture coordinates can also be changed dynamically in the vertex shader; this is mainly used for special effects, such as water surfaces. In this tutorial, we keep the texture coordinates unchanged.
Another concept related to texture mapping is the "filter". We have discussed how a texture coordinate maps to a texel: since the texture coordinate is a floating-point value, multiplying by the texture width and height generally yields a floating-point texel coordinate. For example, suppose a texture coordinate maps to the texel position (152.34, 745.14); how do we choose the texel? The simplest method is to round it to (152, 745). This works, but in some cases the result is not very good. A better solution is to take the 2x2 texel quad around that position, (152, 745), (153, 745), (152, 744), and (153, 744), and linearly interpolate between the colors of these texels. The interpolation is weighted by each texel's distance from (152.34, 745.14): the closer a texel is, the greater its influence; the farther away, the smaller. This looks noticeably better than picking a single texel directly.
The method that determines which texel(s) to use is called "filtering". The simplest method is the rounding approach mentioned above, also called nearest filtering, which amounts to point sampling. The interpolation-based approach is called linear filtering. OpenGL provides several filtering modes to choose from. In general, a better-looking filter demands more GPU computation and may lower the frame rate; choosing between image quality and a smooth frame rate is a trade-off.
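To make the linear case concrete, here is a CPU-side sketch of bilinear filtering. It is purely illustrative: the GPU does this in hardware, and Color and GetTexel are stand-in types and functions, not OpenGL API. The sketch picks the 2x2 quad by flooring the coordinates:

struct Color { float r, g, b, a; };

// Hypothetical texel fetch; on the GPU this reads texture memory.
Color GetTexel(int x, int y);

Color Lerp(const Color& a, const Color& b, float t)
{
    return { a.r + (b.r - a.r) * t, a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t, a.a + (b.a - a.a) * t };
}

// Bilinear filtering around a fractional texel position such as (152.34, 745.14).
Color BilinearSample(float x, float y)
{
    int x0 = (int)x, y0 = (int)y;    // integer parts, e.g. (152, 745)
    float fx = x - x0, fy = y - y0;  // fractional parts, e.g. (0.34, 0.14)
    Color row0 = Lerp(GetTexel(x0, y0),     GetTexel(x0 + 1, y0),     fx);
    Color row1 = Lerp(GetTexel(x0, y0 + 1), GetTexel(x0 + 1, y0 + 1), fx);
    return Lerp(row0, row1, fy);     // closer texels get more weight
}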
Next, let's look at how texture mapping is implemented in OpenGL. To use textures in OpenGL, we first need to understand four concepts: texture objects, texture units, sampler objects, and sampler uniform variables in the shader.
A texture object contains the data of the texture itself, i.e. the image data. Textures come in several types, one-dimensional, two-dimensional, three-dimensional, and so on, and the data can be stored in various formats (RGB, RGBA, etc.). OpenGL provides convenient functions: you only need to specify the start address of the data and its format attributes to load the data into the GPU; this is usually how a texture ends up in video memory. When loading a texture, you can specify several parameters, such as the filtering method. As with vertex buffers, we associate a texture with a handle. After creating the handle and loading the texture data, we can switch textures on the fly by binding different handles, without loading the data again; the OpenGL driver then guarantees that the texture data is resident in video memory before rendering.
Texture objects do not interact with the shader directly (even though texture sampling actually happens in the shader); instead they are exposed to the shader through a texture unit. The shader accesses a texture object via the texture unit it is bound to. There are usually multiple texture units available (the exact number depends on the GPU). To bind texture object A to texture unit 0, we first activate texture unit 0 and then bind texture object A to it. To use a second texture object, we activate texture unit 1 and bind it to the other texture object.
The actual situation is a bit more complicated: a texture unit can have several texture objects bound at the same time, as long as those objects have different types. The type is called the target of the texture object, e.g. 1D, 2D, and so on. When binding a texture object to a texture unit, we must specify the target. For example, we can bind texture object A with target GL_TEXTURE_1D and texture object B with target GL_TEXTURE_2D to the same texture unit simultaneously, as the sketch below shows.
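In code (a sketch; texObj1D and texObj2D are assumed to have been created earlier with glGenTextures):

// Bind two texture objects with different targets to the same texture unit.
glActiveTexture(GL_TEXTURE0);             // select texture unit 0
glBindTexture(GL_TEXTURE_1D, texObj1D);   // bound via target GL_TEXTURE_1D
glBindTexture(GL_TEXTURE_2D, texObj2D);   // bound via target GL_TEXTURE_2D
// A sampler1D uniform pointing at unit 0 now samples texObj1D,
// while a sampler2D uniform pointing at unit 0 samples texObj2D.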
Sampling is usually performed in the shader through a sampling function. The sampling function needs to know which texture unit to sample, because a shader may access several texture units. A set of sampler uniform variables is used to distinguish them; these uniform variables correspond one-to-one with texture units. Whenever a sampler uniform is used, the texture bound to its corresponding texture unit is sampled, as in the sketch below.
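On the application side, the connection is made by writing a texture unit index into the sampler uniform, roughly like this (shaderProg is an assumed program handle; this tutorial does the same thing later with glUniform1i):

// The program must be in use before setting its uniforms.
glUseProgram(shaderProg);
GLint samplerLoc = glGetUniformLocation(shaderProg, "gSampler");
glUniform1i(samplerLoc, 0);   // 0 means texture unit GL_TEXTURE0,
                              // not a texture object handle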
Finally, let's look at the sampler object; be careful not to confuse it with the sampler uniform variable. A texture object contains both the image data and the parameters that configure the sampling operation; these parameters are part of the sampling state. However, you can also create a separate sampler object, configure its parameters, and bind it to a texture unit, in which case it overrides the sampling state defined in the texture object. In this tutorial we do not use sampler objects, but a minimal sketch follows.
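For reference only (sampler objects are core in OpenGL 3.3+; again, this tutorial's code does not use them):

// A sampler object overrides the sampling state stored in the texture object
// for whichever texture unit it is bound to.
GLuint sampler;
glGenSamplers(1, &sampler);
glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glBindSampler(0, sampler);   // texture unit 0 now samples with these parameters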
The relationships among these concepts are summarized below:
Main Code:
OpenGL can load texture data from memory, but it provides no way to load image files, such as PNG and JPG, into memory. For that we use an open-source image library, ImageMagick, which supports many image formats. The source code of the library is included directly with the program code.
Most texture operations are encapsulated in the Texture class:
Texture.h
class Texture
{
public:
    Texture(GLenum TextureTarget, const std::string& FileName);
    bool Load();
    void Bind(GLenum TextureUnit);
private:
    GLenum m_textureTarget;   // e.g. GL_TEXTURE_2D (set in the constructor)
    std::string m_fileName;
    GLuint m_textureObj;      // OpenGL texture object handle
    Magick::Image* m_pImage;  // image data loaded via ImageMagick
    Magick::Blob m_blob;      // raw RGBA bytes handed to glTexImage2D
};
When creating a Texture object, we specify a target (GL_TEXTURE_2D in this tutorial) and the image file name. We then call the Load function to load the texture data. To bind the texture object to a particular texture unit, we call the Bind function.
Texture.cpp
try {
    m_pImage = new Magick::Image(m_fileName);
    m_pImage->write(&m_blob, "RGBA");
}
catch (Magick::Error& Error) {
    std::cout << "Error loading texture '" << m_fileName << "': " << Error.what() << std::endl;
    return false;
}
With the code above, we load the image file into memory (system memory at this point), ready to hand to OpenGL. We create a Magick::Image instance from the image file name; after this call, the texture image data lives in the m_pImage object. OpenGL cannot access it there directly, so we then perform a write operation that converts the image into the memory represented by the m_blob variable, using the RGBA format. A BLOB (Binary Large Object) is simply a chunk of binary data, commonly used to hold an encoded image in memory for another program to consume.
glGenTextures(1, &m_textureObj);
The OpenGL function above is similar to glGenBuffers(): the first parameter specifies the number of texture objects to create, and the second parameter is an array in which the texture object handles are returned. In this tutorial, we use a single texture object.
glBindTexture(m_textureTarget, m_textureObj);
With the glBindTexture() function we bind a texture object, so that all subsequent texture operations apply to this object. If we want to operate on another texture object, we must call glBindTexture() again to bind that one. The second parameter of glBindTexture() is the texture object handle. The first parameter is the texture target, whose value may be GL_TEXTURE_1D, GL_TEXTURE_2D, and so on. A given texture object can be bound to only one target at a time. In this tutorial, the target is set in the Texture class constructor, and we use GL_TEXTURE_2D.
glTexImage2D(m_textureTarget, 0, GL_RGBA, m_pImage->columns(), m_pImage->rows(), 0, GL_RGBA, GL_UNSIGNED_BYTE, m_blob.data());
The glTexImage2D function loads the texture object's data, i.e. it associates the data in system memory (m_blob) with the texture object. The data may be copied into video memory when the function is called, or the copy may be deferred; that is up to the driver. The glTexImage* family has several versions, one per texture target. The first parameter of this function is the texture target, and the second is the level of detail (LOD). A texture object may contain the same image at multiple resolutions; these are called mipmap levels, and each mipmap level has a level-of-detail index, with 0 being the highest resolution. In this tutorial, there is only one mipmap level, so we pass 0.
The third parameter is the internal format of the texture object. You can store all four color channels (GL_RGBA) or, say, only the red channel (GL_RED); in this tutorial we use GL_RGBA. The next two parameters are the texture width and height, which we conveniently obtain from ImageMagick's columns() and rows() functions. The parameter after that is the border width, which we set to 0.
The last three parameters specify the format, type, and memory address of the source texture data. The format specifies the color channel layout, which must match the data in m_blob. The type describes each color channel's data type; in this program it is an unsigned 8-bit value, GL_UNSIGNED_BYTE. The last parameter is the memory address of the texture data.
glTexParameterf(m_textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(m_textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
The two calls above specify the texture sampling method, which is part of the texture's state. For both magnification and minification (see http://www.cnblogs.com/mikewolf2002/archive/2012/04/07/2436063.html for the two concepts), we specify linear filtering.
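As an aside (not part of this tutorial's code): if mipmaps were generated, minification could instead use trilinear filtering, which also blends between adjacent mipmap levels. A sketch, assuming an OpenGL 3.0+ context:

// Generate the mipmap chain for the currently bound texture, then ask for
// blending both within a level and between levels ("trilinear" filtering).
glGenerateMipmap(m_textureTarget);
glTexParameteri(m_textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);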
(Texture.cpp)
void Texture::Bind(GLenum TextureUnit)
{
    glActiveTexture(TextureUnit);
    glBindTexture(m_textureTarget, m_textureObj);
}
In a 3D application there may be many draw calls per frame, and before each one is submitted we may need to bind a different texture for the shader to use. The Bind function above lets us switch between textures easily; its parameter is the texture unit to bind to.
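A typical render loop might use it like this (a sketch; pTexture0, pTexture1, and the draw parameters are illustrative):

// Switch textures between draw calls by rebinding the active texture unit.
pTexture0->Bind(GL_TEXTURE0);
glDrawElements(GL_TRIANGLES, 12, GL_UNSIGNED_INT, 0);   // first mesh
pTexture1->Bind(GL_TEXTURE0);
glDrawElements(GL_TRIANGLES, 12, GL_UNSIGNED_INT, 0);   // second mesh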
layout (location = 0) in vec3 Position;
layout (location = 1) in vec2 TexCoord;

uniform mat4 gWVP;

out vec2 TexCoord0;

void main()
{
    gl_Position = gWVP * vec4(Position, 1.0);
    TexCoord0 = TexCoord;
}
This is the updated vertex shader. There is a new input attribute, the texture coordinate, which is a 2D vector. The vertex shader does nothing with the texture coordinate except pass it straight through as an output; the rasterizer then interpolates it across the triangle before the fragment shader runs.
in vec2 TexCoord0;

out vec4 FragColor;

uniform sampler2D gSampler;

void main()
{
    FragColor = texture2D(gSampler, TexCoord0.st);
}
This is the updated fragment shader. The input variable TexCoord0 contains the interpolated texture coordinate, and the uniform variable gSampler is of type sampler2D. The application must set the texture unit index into this uniform so that the shader can access the texture. The texture2D call returns the color of the sampled texel. In the later lighting tutorials, the final pixel color will be the sampled color multiplied by a lighting factor.
Vertex Vertices[4] = {
Vertex(Vector3f(-1.0f, -1.0f, 0.5773f), Vector2f(0.0f, 0.0f)),
Vertex(Vector3f(0.0f, -1.0f, -1.15475f), Vector2f(0.5f, 0.0f)),
Vertex(Vector3f(1.0f, -1.0f, 0.5773f), Vector2f(1.0f, 0.0f)),
Vertex(Vector3f(0.0f, 1.0f, 0.0f), Vector2f(0.5f, 1.0f)) };
The new Vertex structure contains both the vertex position and the vertex texture coordinates.
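The Vertex structure itself is presumably defined along these lines (a sketch consistent with the 12-byte offset used in the code below; the exact definition lives in the tutorial's source):

struct Vertex
{
    Vector3f m_pos;   // 3 floats = 12 bytes
    Vector2f m_tex;   // starts at byte offset 12

    Vertex(const Vector3f& pos, const Vector2f& tex) : m_pos(pos), m_tex(tex) {}
};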
Tutorial16.cpp
...
glEnableVertexAttribArray(1);
...
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*)12);
...
pTexture->Bind(GL_TEXTURE0);
...
glDisableVertexAttribArray(1);
There are also some changes in the render loop. Because a texture coordinate attribute has been added, attribute 1 is enabled, matching the layout locations in the vertex shader. We then call glVertexAttribPointer to describe where the texture coordinates sit inside the vertex buffer. A texture coordinate is two floating-point values, so the second parameter of the function is 2. Note the fifth parameter, the size of the Vertex structure, the same for both the position and the texture attribute. This parameter is called the "vertex stride": the number of bytes between two consecutive vertices. Our vertex buffer is laid out as pos0, texcoord0, pos1, texcoord1, and so on. In earlier tutorials there was only the position attribute, so this parameter could be 0. The last parameter is the byte offset from the start of the Vertex structure to the texture coordinate attribute (12 bytes, i.e. the three 4-byte floats of the position); a sketch of a more robust alternative follows.
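Rather than hard-coding the 12-byte offset, it could be derived from the Vertex structure with offsetof (a sketch; requires <cstddef> and the structure layout assumed earlier):

// offsetof computes the byte offsets directly from the structure layout.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                      (const GLvoid*)offsetof(Vertex, m_pos));
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                      (const GLvoid*)offsetof(Vertex, m_tex));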
Before the draw call, we bind the texture. Note also the call that disables the vertex attribute afterward: once the attributes are no longer needed, we disable them again.
glFrontFace(GL_CW);
glCullFace(GL_BACK);
glEnable(GL_CULL_FACE);
The three calls above configure back-face culling. When enabled, triangles facing away from the camera are culled during primitive assembly (the back of a triangle is invisible anyway), so their fragments never reach the fragment shader, which improves performance. The first call declares that a front-facing triangle has its vertices in clockwise order when viewed from the front; the second call specifies that back faces (rather than front faces) are culled; the third call enables culling.
glUniform1i(gSampler, 0);
This sets the texture unit index into the sampler uniform used by the fragment shader. In the earlier code, gSampler was obtained with the glGetUniformLocation() function.
pTexture = new Texture(GL_TEXTURE_2D, "test.png");
if (!pTexture->Load()) {
return 1;
}
The code above creates the texture object and loads it.
After the program runs, the output looks like this: