[OpenGL] Projective Texture Mapping


I. Introduction to Projective Texture Mapping

Projective texture mapping maps a texture onto an object as if projecting a slide onto a wall.

Projective texture mapping is often used in shadow algorithms and volume rendering algorithms. Strictly speaking, it applies whenever textures must be mapped onto spatial vertices in real time.

The following is an example of projective texture mapping:

Figure 1 Projective texture mapping

II. Advantages of projective texture mapping

1. Textures are mapped onto spatial vertices in real time, with no need to generate texture coordinates in modeling software.

2. Because the coordinates are interpolated in projective space, texture distortion is effectively avoided.

The following figure compares projective texture mapping with ordinary texture mapping:

Figure 2 Comparison between projective-space interpolation and real-space interpolation


III. Principles and implementation steps

To project a texture onto a surface, we need to determine the texture coordinates from the position of the surface point and of the projection source. We can treat the projection source as a camera located somewhere in the scene. Just like a camera defined in OpenGL, it defines its own coordinate system (call it the projection coordinate system): the origin lies at the projection source, and a view matrix V converts world coordinates into this projection coordinate system. A perspective projection matrix P then maps the view volume into a cube of side 2 centered at the origin of the projection coordinate system. These two matrices are concatenated, together with a scale-and-translate matrix that shrinks the cube of side 2 into a cube of side 1 centered at (0.5, 0.5, 0.5). This yields the following transformation matrix:

Figure 3 Projective texture mapping coordinate transformation matrix
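From the description above, this combined transformation can be written as follows (a sketch in LaTeX notation; B denotes the scale-and-bias matrix, P the projector's perspective matrix, and V its view matrix):

    T = B \, P \, V, \qquad
    B = \begin{pmatrix}
        1/2 & 0   & 0   & 1/2 \\
        0   & 1/2 & 0   & 1/2 \\
        0   & 0   & 1/2 & 1/2 \\
        0   & 0   & 0   & 1
    \end{pmatrix}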

Note that the resulting coordinates are homogeneous, not normalized: before accessing the texture, each component must be divided by the w component.

Both P and V can be derived from the projector's position and viewing direction.


Projective texture mapping involves two aspects: how texture coordinates are assigned to vertices, and how they are computed during rasterization. We often think of texture mapping as simply applying a texture image to a primitive. That is true, but it hides a fair amount of mathematics.

Rasterization details

During projective texture mapping we use homogeneous texture coordinates, i.e. coordinates in projective space. For non-projective texture mapping we use real texture coordinates, i.e. coordinates in real space. For projective 2D texture mapping, the three-component homogeneous coordinate (s, t, q) is interpolated across the primitive; after interpolation, each fragment's homogeneous coordinate is projected to the real 2D texture coordinate (s/q, t/q), which is then used to access the texture. For non-projective 2D texture mapping, the two-component real coordinate (s, t) is interpolated across the primitive and used directly to access the texture image.

This is the source of the difference in interpolation shown in Figure 2.
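As a minimal illustration of the per-fragment projective divide (a sketch using GLM types; projectiveTexCoord is a hypothetical helper, not part of any API):

    #include <glm/glm.hpp>

    // Given an interpolated homogeneous texture coordinate (s, t, q), the
    // real 2D texture coordinate is (s/q, t/q). This mirrors what the
    // rasterizer (or GLSL's textureProj, used later in this article) does
    // for each fragment.
    glm::vec2 projectiveTexCoord(const glm::vec3& stq)
    {
        return glm::vec2(stq.x / stq.z, stq.y / stq.z); // x = s, y = t, z = q
    }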


Assigning homogeneous texture coordinates

The discussion of rasterization above assumes that every vertex has already been assigned a homogeneous texture coordinate. As application writers, that assignment is our job. The following describes how to do it in OpenGL.

Imagine the texture being projected from a projector onto the scene. This projector has many attributes in common with a camera: a view transformation (converting world-space coordinates to projector space, the analogue of eye space) and a projection transformation (mapping the projector-space view volume to clip coordinates). Finally, a scale and offset performs range mapping. For a camera, x and y are mapped according to the current viewport settings and z according to the current depth range; for projective texture mapping, range mapping takes each coordinate into the interval [0, 1].

The two kinds of transformations are compared below: the one applied to vertex coordinates to compute window-space positions, and the one applied to vertex coordinates to compute projective texture coordinates.


Figure 4 The camera transformation that takes world-space vertex coordinates to window-space coordinates is very similar to the projector transformation that takes world-space vertex coordinates to projective texture coordinates
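In formula form, the two parallel pipelines depicted in Figure 4 can be sketched as follows (S_{vp} denotes the viewport and depth-range mapping; B is the [0, 1] scale-and-bias matrix from above):

    \mathbf{p}_{window} = S_{vp} \, P_{cam} \, V_{cam} \, \mathbf{p}_{world},
    \qquad
    \mathbf{p}_{tex} = B \, P_{proj} \, V_{proj} \, \mathbf{p}_{world}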


One important aspect of texture coordinate assignment is OpenGL's ability to generate texture coordinates automatically from other vertex attributes. For example, GL_OBJECT_LINEAR and GL_EYE_LINEAR generate texture coordinates from the vertex position in object space and eye space respectively. Other modes use other vertex attributes: GL_SPHERE_MAP and GL_REFLECTION_MAP use the eye-space vertex position and normal, while GL_NORMAL_MAP generates texture coordinates from the normal vector alone.

OpenGL can even use a different generation mode for each texture coordinate component: for example, GL_SPHERE_MAP for s, GL_REFLECTION_MAP for t, GL_OBJECT_LINEAR for r, and GL_EYE_LINEAR for q. In practice this flexibility is rarely useful, and the same mode is normally used for all components.
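For projective texture mapping, GL_EYE_LINEAR is the relevant mode. A minimal sketch in legacy (fixed-function) OpenGL, assuming a compatibility context and platform-appropriate headers, might look like this; each generated coordinate is the dot product of the eye-space vertex position with a user-specified plane:

    #include <GL/gl.h>

    void enableEyeLinearTexGen()
    {
        // One plane per generated coordinate; these identity planes simply
        // reproduce the eye-space x, y, z, w of each vertex. (OpenGL
        // transforms the planes by the inverse of the modelview matrix in
        // effect at the time of the call.) In a real application the
        // projector matrix would be folded into the planes or into the
        // texture matrix.
        const GLfloat sPlane[] = { 1.0f, 0.0f, 0.0f, 0.0f };
        const GLfloat tPlane[] = { 0.0f, 1.0f, 0.0f, 0.0f };
        const GLfloat rPlane[] = { 0.0f, 0.0f, 1.0f, 0.0f };
        const GLfloat qPlane[] = { 0.0f, 0.0f, 0.0f, 1.0f };

        glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
        glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
        glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
        glTexGeni(GL_Q, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
        glTexGenfv(GL_S, GL_EYE_PLANE, sPlane);
        glTexGenfv(GL_T, GL_EYE_PLANE, tPlane);
        glTexGenfv(GL_R, GL_EYE_PLANE, rPlane);
        glTexGenfv(GL_Q, GL_EYE_PLANE, qPlane);

        glEnable(GL_TEXTURE_GEN_S);
        glEnable(GL_TEXTURE_GEN_T);
        glEnable(GL_TEXTURE_GEN_R);
        glEnable(GL_TEXTURE_GEN_Q);
    }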


The following concrete example (OpenGL + GLSL) shows how to implement projective texture mapping; its result is the image in Figure 1 at the beginning of this article. A teapot is drawn, and a texture image is projected onto it from one direction.

Vertex shader projtex.vs:

    #version 400

    layout (location = 0) in vec3 VertexPosition;
    layout (location = 1) in vec3 VertexNormal;

    out vec3 EyeNormal;       // normal in eye coordinates
    out vec4 EyePosition;     // position in eye coordinates
    out vec4 ProjTexCoord;    // projective texture coordinate

    uniform mat4 ProjectorMatrix;     // matrix for projective texture mapping
    uniform vec3 WorldCameraPosition;
    uniform mat4 ModelViewMatrix;     // model-view matrix
    uniform mat4 ModelMatrix;         // model transformation matrix
    uniform mat3 NormalMatrix;        // normal transformation matrix
    uniform mat4 ProjectionMatrix;    // projection transformation matrix
    uniform mat4 MVP;                 // model-view-projection matrix

    void main()
    {
        vec4 pos4 = vec4(VertexPosition, 1.0);
        EyeNormal = normalize(NormalMatrix * VertexNormal);
        EyePosition = ModelViewMatrix * pos4;
        ProjTexCoord = ProjectorMatrix * (ModelMatrix * pos4);
        gl_Position = MVP * pos4;
    }

The first two statements of main() transform the normal and the vertex position into eye space.

Next, the projective texture coordinate is computed: the object-space position is first transformed into world space, and the result is then left-multiplied by the projector matrix.

Finally, the built-in gl_Position receives the clip-space coordinate.

The fragment shader projtex.fs (only part of it is listed below):

    #version 400

    in vec3 EyeNormal;     // normal in eye coordinates
    in vec4 EyePosition;   // position in eye coordinates
    in vec4 ProjTexCoord;  // projective texture coordinate

    uniform sampler2D ProjectorTex;

    layout (location = 0) out vec4 FragColor;

    void main()
    {
        vec3 color = phongModel(vec3(EyePosition), EyeNormal);
        vec4 projTexColor = vec4(0.0);
        if (ProjTexCoord.z > 0.0)
            // textureProj divides the coordinate by its last component
            projTexColor = textureProj(ProjectorTex, ProjTexCoord);
        FragColor = vec4(color, 1.0) + projTexColor * 0.5;
    }

The phongModel function (whose definition is omitted here) evaluates the Phong illumination model.

Note that textureProj performs the projective texture access: it divides the texture coordinate by its last component before sampling, so textureProj(ProjectorTex, ProjTexCoord) is equivalent to texture(ProjectorTex, ProjTexCoord.xy / ProjTexCoord.w). The test ProjTexCoord.z > 0.0 prevents a reverse projection onto surfaces behind the projector.

In this example, the projector is located at (2.0, 5.0, 5.0), looks at the point (-2.0, -4.0, 0.0), and uses (0.0, 1.0, 0.0) as its up direction. We use the GLM library to compute the projector's transformation matrices from this information; note that projScaleTrans * projProj * projView below is exactly the scale-and-bias matrix times P times V described earlier. The code is as follows:

    vec3 projPos = vec3(2.0f, 5.0f, 5.0f);
    vec3 projAt  = vec3(-2.0f, -4.0f, 0.0f);
    vec3 projUp  = vec3(0.0f, 1.0f, 0.0f);
    mat4 projView = glm::lookAt(projPos, projAt, projUp);
    mat4 projProj = glm::perspective(30.0f, 1.0f, 0.2f, 1000.0f);
    mat4 projScaleTrans = glm::translate(vec3(0.5f)) * glm::scale(vec3(0.5f));
    prog.setUniform("ProjectorMatrix", projScaleTrans * projProj * projView);
