(When reposting, please credit the source: Http://blog.csdn.net/BonChoix, thank you~)
Tangent Space
Tangent space, like local space and world space, is one of the many coordinate systems in 3D graphics. One of its most important uses is normal mapping, which the next article covers in detail. Before learning normal mapping, however, it is important to understand tangent space thoroughly, so this article is devoted to it, as preparation for studying normal mapping, parallax mapping, and displacement mapping later. Parallax mapping and displacement mapping both belong to the bump-mapping family and build on normal mapping, but compared to normal mapping, those two techniques produce an even more convincing sense of surface relief.
1. Why do we need tangent space?
There are many coordinate systems in the 3D world, and each of course has its uses. For example, local space (model space) exists to make modeling 3D models convenient. In this space we do not have to think about where the model might appear in the scene, how it will be oriented, and so on; we can focus on the model itself. In world space, the problem we care about is the position and orientation of each object in the scene, that is, how to build the scene, without worrying about the camera's position and orientation. The fundamental purpose of a coordinate system, then, is to let us deal with each problem in the most suitable frame of reference, discarding irrelevant factors and thereby reducing the problem's complexity.
Intuitively speaking, the texture coordinates at a model's vertices are defined in tangent space. An ordinary 2D texture coordinate has two components, u and v: the u coordinate grows along the direction of the tangent axis of tangent space, and the v coordinate grows along the direction of the bitangent axis. Each triangle in the model has its own tangent space: the tangent axis and bitangent axis lie in the plane of the triangle, and they pair with the triangle's face normal. The coordinate system composed of the tangent axis (T), the bitangent axis (B), and the normal axis (N) is what we call tangent space, also written TBN.
As shown in the following illustration:
On the cube, each face has its own tangent space; each face consists of two triangles, and the texture coordinates of both triangles are defined relative to that face's tangent space.
2. Relationship between texture coordinates and position coordinates
Texture coordinates and position coordinates can be linked by tangent space. As shown in the following illustration:
The figure shows a triangle and the tangent space it lies in. The position coordinates of the triangle's three vertices are known: V0, V1, V2, with corresponding texture coordinates (u0, v0), (u1, v1), (u2, v2). Define the two edges of the triangle as E0 = V1 - V0 and E1 = V2 - V0, with corresponding texture-coordinate differences (t1, b1) = (u1 - u0, v1 - v0) and (t2, b2) = (u2 - u0, v2 - v0). We then have the following relations:

E0 = t1*T + b1*B
E1 = t2*T + b2*B
3. Deriving the tangent coordinate system
With the relationship between texture coordinates and position coordinates established, we can derive the tangent coordinate system of any triangle from known information. A 3D model file generally provides the position coordinates, texture coordinates, normals, and other per-vertex information, but not the tangent coordinate system. Tangent space is essential for normal mapping, so we need to compute it ourselves. Many model-loading libraries can generate tangent space for us, but it is still worth knowing how it is generated. Let us derive tangent space step by step:
Start from the relationship between texture coordinates and position coordinates above and write it in matrix form:

    | E0 |   | t1  b1 | | T |
    | E1 | = | t2  b2 | | B |

Expanding E0, E1, T, and B into their x, y, z components:

    | E0.x  E0.y  E0.z |   | t1  b1 | | T.x  T.y  T.z |
    | E1.x  E1.y  E1.z | = | t2  b2 | | B.x  B.y  B.z |

Moving the texture-coordinate matrix to the other side gives:

    | T.x  T.y  T.z |   | t1  b1 |^-1 | E0.x  E0.y  E0.z |
    | B.x  B.y  B.z | = | t2  b2 |    | E1.x  E1.y  E1.z |

From basic matrix algebra, the inverse of a 2x2 matrix is:

    | a  b |^-1       1      |  d  -b |
    | c  d |    = --------- *| -c   a |
                  (ad - bc)

Therefore the formula above can be further written as:

    | T.x  T.y  T.z |          1          |  b2  -b1 | | E0.x  E0.y  E0.z |
    | B.x  B.y  B.z | = --------------- * | -t2   t1 | | E1.x  E1.y  E1.z |
                        (t1*b2 - t2*b1)
At this point, everything on the right-hand side of the equals sign is known, so the matrix on the left can be computed, yielding the T and B axes of tangent space. The N axis is the triangle's face normal, which is easily obtained (for example, as the cross product of the two edges).
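To make this concrete, here is a minimal C++ sketch of the per-triangle computation. The Vec3/Vec2 types and the function name ComputeTriangleTangent are hypothetical helpers, not code from the original article; the formula is exactly the inverted 2x2 system above.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// Computes the (unnormalized) T and B axes of a triangle's tangent space
// from its three positions p0..p2 and texture coordinates uv0..uv2.
// Returns false when the texture mapping is degenerate (zero determinant).
bool ComputeTriangleTangent(const Vec3& p0, const Vec3& p1, const Vec3& p2,
                            const Vec2& uv0, const Vec2& uv1, const Vec2& uv2,
                            Vec3& T, Vec3& B)
{
    // Edges E0 = V1 - V0, E1 = V2 - V0.
    Vec3 e0 = { p1.x - p0.x, p1.y - p0.y, p1.z - p0.z };
    Vec3 e1 = { p2.x - p0.x, p2.y - p0.y, p2.z - p0.z };

    // Texture-coordinate differences (t1, b1) and (t2, b2).
    float t1 = uv1.u - uv0.u, b1 = uv1.v - uv0.v;
    float t2 = uv2.u - uv0.u, b2 = uv2.v - uv0.v;

    float det = t1 * b2 - t2 * b1;
    if (std::fabs(det) < 1e-8f)
        return false;                // degenerate UVs: tangent undefined

    float inv = 1.0f / det;
    // [T; B] = 1/det * [ b2 -b1; -t2 t1 ] * [E0; E1]
    T = { inv * ( b2 * e0.x - b1 * e1.x),
          inv * ( b2 * e0.y - b1 * e1.y),
          inv * ( b2 * e0.z - b1 * e1.z) };
    B = { inv * (-t2 * e0.x + t1 * e1.x),
          inv * (-t2 * e0.y + t1 * e1.y),
          inv * (-t2 * e0.z + t1 * e1.z) };
    return true;
}
```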
4. Note
We can now obtain a triangle's tangent space from its vertex positions and texture coordinates. One thing to note, though: the T and B vectors are generally not normalized (their lengths are not 1). This differs from the other common coordinate systems. In local space, world space, and view space, the X, Y, and Z axes all have length 1, mainly because the coordinates in those spaces use the same unit of measurement. In tangent space, texture coordinates and position coordinates clearly use different units: a change in texture coordinates from 0 to 1 corresponds to an indeterminate change in position coordinates.
Thus the T and B vectors generally do not have length 1, and if the texture mapping is sheared, the T and B axes need not even be perpendicular to each other.
In most cases, however, we only need the directions of the T, B, and N vectors, so we can simply normalize them without caring about their original lengths. In normal mapping, for example, we use the TBN coordinate system only to transform normals read from the normal map from tangent space into world space, with no further involvement of texture coordinates, so all three axes of the TBN coordinate system we use there are normalized.
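The article itself just normalizes the axes, but if you also want T to be exactly perpendicular to N, a common technique (not covered in the original text) is Gram-Schmidt orthogonalization. A minimal sketch, reusing the hypothetical Vec3 type from the snippet above:

```cpp
#include <cmath>

static float Dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

static Vec3 Normalize(const Vec3& v)
{
    float len = std::sqrt(Dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Gram-Schmidt: remove T's component along the unit normal N,
// then normalize, leaving T unit-length and perpendicular to N.
Vec3 OrthonormalizeTangent(const Vec3& T, const Vec3& N)
{
    float d = Dot(N, T);
    Vec3 t = { T.x - d * N.x, T.y - d * N.y, T.z - d * N.z };
    return Normalize(t);
}
```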
5. Tangent space for vertices
The method above computes tangent space per triangle, but the 3D pipeline processes data per vertex, so we need a tangent space for each vertex. Given each triangle's tangent space, the per-vertex tangent space is easy to obtain: for any vertex, we take the average of the tangent-space vectors of all triangles that share it as that vertex's tangent space, as the sketch below shows. If you are familiar with computing vertex normals, you will notice this is exactly the same approach used to derive vertex normals from triangle face normals.
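Here is a minimal sketch of that averaging pass, reusing the hypothetical ComputeTriangleTangent, Dot, and Normalize helpers from the earlier snippets; the index-buffer layout and the names are assumptions, not the article's code:

```cpp
#include <vector>

// Accumulates each triangle's T axis into its three vertices, then
// normalizes the sums (only the averaged direction matters here).
void ComputeVertexTangents(const std::vector<Vec3>& positions,
                           const std::vector<Vec2>& texcoords,
                           const std::vector<unsigned>& indices, // 3 per triangle
                           std::vector<Vec3>& tangents)
{
    tangents.assign(positions.size(), Vec3{ 0.0f, 0.0f, 0.0f });
    for (size_t i = 0; i + 2 < indices.size(); i += 3)
    {
        unsigned tri[3] = { indices[i], indices[i + 1], indices[i + 2] };
        Vec3 T, B;
        if (!ComputeTriangleTangent(positions[tri[0]], positions[tri[1]], positions[tri[2]],
                                    texcoords[tri[0]], texcoords[tri[1]], texcoords[tri[2]],
                                    T, B))
            continue;   // skip degenerate triangles
        for (unsigned idx : tri)
        {
            tangents[idx].x += T.x;
            tangents[idx].y += T.y;
            tangents[idx].z += T.z;
        }
    }
    for (Vec3& t : tangents)
        if (Dot(t, t) > 1e-12f)
            t = Normalize(t);  // normalized sum == direction of the mean
}
```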
With tangent space added, our vertex definition changes accordingly. Tangent space consists of the three TBN vectors, and in most cases the three vectors we use are normalized and mutually perpendicular, so for each vertex we only need to provide the T and N vectors; the B vector can be computed temporarily at run time with a cross product. This also reduces the amount of data per vertex.
Our vertex definition now looks like this:

```cpp
struct Vertex
{
    XMFLOAT3 pos;
    XMFLOAT3 Normal;
    XMFLOAT3 Tangent;
    XMFLOAT2 Tex;
};
```
This is exactly the vertex format we use uniformly when generating common geometry in the GeometryGens.h file. In previous programs we never used the Tangent member; it was there all along in preparation for learning normal mapping~
Normal mapping
1. Why use normal mapping?
Before beginning the formal discussion of normal mapping, look at the following two images:
These two pictures are screenshots from Chinese Paladin, also used in a previous article; they show the same location from different viewing angles. The left image, thanks to the texture, gives the impression of a rough rock wall, yet the right image shows a strong specular highlight. This is clearly contradictory: strong specular reflection appears only on relatively smooth surfaces, while the left image suggests the wall should be uneven. The reason is simple. A texture gives us pixel-level detail for an object's surface, but the model itself is made up of a finite number of vertices, so the per-pixel normals computed by interpolation in the pixel shader transition smoothly instead of taking the values each pixel's actual surface detail would imply. After lighting is computed with such smoothly varying normals, this kind of conspicuous specular highlight appears easily.
To correct this, the fundamental fix is to adjust each pixel's normal so that it matches the real surface normal; the lighting calculation will then approximate reality. There are two ways to achieve this. One is to increase the model's detail, that is, its vertex count, so that more normals can be specified across the surface instead of relying on simple interpolation in the pixel-shading stage. This approach works, but it has a flaw: more vertices mean more computation, because every vertex undergoes its various matrix transformations in the vertex shader. The level of detail this approach can provide is therefore limited and generally insufficient for our needs. The other method is highly efficient and effective, and it is the subject of this article: normal mapping.
2. Normal map and its data format
Normal mapping requires a texture, but unlike an ordinary texture, each texel in it stores not a color value but a normal vector; such a texture is called a normal map. Just as ordinary textures bring pixel-level detail to vertex-based geometric models, normal maps give us the model surface's normals at the pixel level: the normal values are read directly from the texture rather than obtained by interpolation, so artists can author them flexibly according to actual needs to achieve the desired realism.
In storage format, a normal map is no different from an ordinary texture: it is still RGB or RGBA. The difference is that R, G, B, A are no longer color components but components of a normal vector: R, G, and B hold the X, Y, and Z components, and in an RGBA format the remaining component can generally be used to store height information. This height information is also very useful; many techniques pair it with the normal values, such as parallax mapping (described later). Here we focus on the RGB components. Typically each component occupies 8 unsigned bits, so its value lies in [0, 255]. A normalized normal vector, however, has components in [-1, 1]. To store them in 8 bits, we must map the range [-1, 1] onto [0, 255]. The method is simple: for any x in [-1, 1], y = (x + 1) / 2 * 255 gives a y in [0, 255]. Conversely, for a value y read from the map, the inverse transform x = 2 * y / 255 - 1 recovers the value in the range we want.
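As a quick sanity check of those two formulas, here is a minimal C++ sketch; the function names are hypothetical:

```cpp
#include <cstdint>

// Maps a normal component x in [-1, 1] to an 8-bit value in [0, 255].
uint8_t EncodeNormalComponent(float x)
{
    return static_cast<uint8_t>((x + 1.0f) / 2.0f * 255.0f);
}

// Inverse transform: maps an 8-bit value y in [0, 255] back to [-1, 1].
float DecodeNormalComponent(uint8_t y)
{
    return 2.0f * y / 255.0f - 1.0f;
}
```

For example, x = 0 encodes to 127 (after truncation) and decodes back to about -0.004; a small quantization error is inherent to the 8-bit format.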
In HLSL, the built-in Sample function (the same function used to read textures before) already returns data in [0, 1], so we only need the 2 * x - 1 transform:

```hlsl
// Read the tangent-space normal from the normal map; components are in [0, 1].
float3 normal = g_normalMap.Sample(samplerTex, pin.Tex).rgb;
// Remap from [0, 1] to [-1, 1].
normal = 2 * normal - 1;
```
3. Tangent space to world space
With the Sample function we can get the corresponding normal at any pixel, and the next step is to use that normal in the lighting calculation. In fact, though, the normal cannot be used directly: it first needs a space transformation. This is where the "tangent space" of the previous section comes in. The normals stored in the normal map are in tangent space, while the light sources in the scene are defined in world space. For correct lighting, the lights and the normals must be brought into the same space, either uniformly tangent space or uniformly world space. (Here we do the lighting uniformly in world space.)
Let me add a little more on why normals are defined in tangent space. If tangent space still feels unclear, in my experience it helps to think of it as the analogue of local space in the 3D world. Local space exists so that when building a model we can focus on the model itself, without considering the various positions and orientations it might take in the scene. The same vertex has different positions and other attributes at different placements; without local space we would have to create a separate model for each situation, which is obviously cumbersome, even impossible, and wasteful, since the model itself is identical across those placements. With the model defined in local space, a single model is easily reused by giving each instance its own world transform. Normal maps work the same way. Different parts of one model, or even several different models, may share the same surface characteristics, yet because those surfaces differ in position and orientation, their normals differ in world space. For example, the six faces of a cube can have exactly the same surface characteristics, but since the faces point in different directions, their normals are no longer the same. Without tangent space, you would have to create a separate normal map for each face, which is wasteful. With normals defined in tangent space, each face has its own tangent space, and the same normal map can serve all six faces.
Then, for any pixel, where does its tangent space come from? This is where the new vertex format comes in: tangent-space information is passed in with each vertex at the input stage, and at the pixel-shading stage each pixel's tangent space is interpolated from the tangent spaces of the triangle's three vertices. The new vertex format was given at the end of the previous section:

```hlsl
struct VertexIn
{
    float3 pos     : POSITION;  // local-space position
    float3 normal  : NORMAL;    // normal
    float3 tangent : TANGENT;   // tangent
    float2 tex     : TEXCOORD;  // texture coordinates (u, v)
};
```
Besides position and texture coordinates, each vertex here contains the normal and tangent vectors. A tangent space TBN requires three vectors, but to save resources only the tangent and normal are provided; the third, the bitangent, is obtained at run time as the cross product of the two. The corresponding HLSL code is shown below.
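The original listing is cut off at this point, so what follows is only a sketch of what such HLSL typically looks like, not the author's actual code; the helper name NormalSampleToWorld and the VertexOut layout are assumptions. It re-normalizes the interpolated N and T, recovers B with cross(), and uses the resulting TBN matrix to move the sampled normal into world space:

```hlsl
struct VertexOut
{
    float4 PosH     : SV_POSITION;
    float3 NormalW  : NORMAL;    // world-space normal
    float3 TangentW : TANGENT;   // world-space tangent
    float2 Tex      : TEXCOORD;
};

// Called from the pixel shader: rebuild TBN from the interpolated
// normal/tangent and transform the sampled normal to world space.
float3 NormalSampleToWorld(float3 normalMapSample, float3 normalW, float3 tangentW)
{
    // Remap the sample from [0, 1] to [-1, 1].
    float3 normalT = 2.0f * normalMapSample - 1.0f;

    // Re-orthonormalize after interpolation, then recover B = N x T.
    float3 N = normalize(normalW);
    float3 T = normalize(tangentW - dot(tangentW, N) * N);
    float3 B = cross(N, T);

    // The rows of TBN map tangent-space vectors into world space.
    float3x3 TBN = float3x3(T, B, N);
    return mul(normalT, TBN);
}
```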