Detailed analysis of the Normalmap principle

Implementing a normal map correctly demands an in-depth understanding of several stages of the rendering pipeline, as well as of the matrix transformations between coordinate systems. What follows is a record of the learning process and of the many details behind normal mapping.
When I first set out to implement a normal map program, I consulted Real-Time Rendering and the orange book (OpenGL Shading Language). Starting from texture mapping, these books explain that a normal map is a kind of bump map: values recorded in a texture are used to perturb the normal in the lighting equation, changing the lighting result and thereby simulating fine surface detail. The difference is that a normal map stores the normal vector itself, which can be used directly. Note that this technique only simulates the appearance of an uneven surface, such as wrinkles or the folds of an orange peel. If you try to simulate a rotating planet with a huge mountain range, then when the mountains rotate to the planet's silhouette you still see the smooth edge of a sphere; the protruding part never appears.
Consider the lighting equation: it involves the view direction, the light direction, and the normal, combined through operations such as dot products. These three vectors must be expressed in the same coordinate system, otherwise the operations are not well defined. So let us examine the various candidate coordinate systems and see which fits best.
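For concreteness, here is a typical lighting equation of the kind both books use (a Blinn-Phong style model, shown as an illustration rather than as either book's exact formula); every term couples the normal $\mathbf{n}$ with the light direction $\mathbf{l}$ or with the half vector built from $\mathbf{l}$ and the view direction $\mathbf{v}$, which is only meaningful if all three live in one space:

$$
I = k_d \,\max(\mathbf{n}\cdot\mathbf{l},\,0) \;+\; k_s \,\max(\mathbf{n}\cdot\mathbf{h},\,0)^{s},
\qquad
\mathbf{h} = \frac{\mathbf{l}+\mathbf{v}}{\lVert \mathbf{l}+\mathbf{v} \rVert}
$$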
Suppose the normals recorded in the normal map are relative to the world coordinate system. This makes the calculation easy (the view and light directions need no conversion), but for an object that uses the normal map, any rigid-body transformation of the object must also be applied to every vector recorded in the map. Moreover, if two different objects use the same normal map, their positions differ, so the same normal map would have to be transformed twice; this is not a good deal.
If the normal map is instead stored in object space, then rigid-body transformations of the object come for free, but non-rigid transformations of the object still break it. And when different objects, or different parts of one object, share the same normal map, extra processing is still needed.
Also note that the vector data in the normal map is read in the fragment shader, so any operation on that data happens per pixel (after rasterization), and the amount of computation is very large. It is therefore unwise to transform the normal-map data into another coordinate system there.
Therefore we introduce a new coordinate system defined relative to the object's surface, so that each patch simply uses its own coordinate system for the conversion. Because one of its basis vectors is a tangent, it is usually called tangent space. The principle is the same as the transformation from the world coordinate system to the object coordinate system, so the matrices are very similar.
Whenever a coordinate system is involved, two things must be pinned down. First, where is the origin of this coordinate system relative to the other coordinate system? Second, what are the values of its three basis vectors relative to the other coordinate system? If these two questions are unclear, please read this post first.
As mentioned above, tangent space is defined relative to the object's surface, and the vertices that define a patch are given in the object coordinate system, so the "other coordinate system" above is the object coordinate system.
For the first question, recall why tangent space is introduced at all: to interpret the vectors in the normal map correctly under any transformation of the object. Transforming the object means operating on the object's points, so if the coordinate system is bound to the object's vertices, then whatever is done to the object cannot affect the interpretation of the normal map. The origin of a tangent space is therefore a vertex of the object. The orange book likewise converts any incoming (x, y, z) to (0, 0, 0): each vertex carries its own coordinate system, with the vertex as the origin. Taking this further, if the patch and the normal-map texture form a continuum, meaning that for any point inside the patch we know both its position and its corresponding normal-map texture coordinate, then at that point we can establish a tangent-space coordinate system, and the value fetched from the normal map is a coordinate in that system. In this sense, every texel can be thought of as having its own coordinate system whose origin is the center of the texel.
That is, if you paste a 1024*1024 normal map onto a square, there are 1024*1024 tangent spaces on that square. In the actual calculation, however, we use interpolation to avoid constructing that many tangent spaces; the specifics come later.
Now for the second question, which is relatively simple. Patches consist of vertices; a triangular patch has three. Each vertex carries a normal vector n, and for each vertex we can also define a tangent vector t; the cross product of the two then yields a third vector b, usually called the binormal (or bitangent). Since the normal n is perpendicular to the polygon, the tangent generally lies in the plane of the patch. Together the three vectors form a coordinate system. Does this process look familiar? It is exactly the process of deriving the matrix that transforms from the world coordinate system to the camera coordinate system. So the matrix from the object coordinate system to the tangent-space coordinate system is as follows:
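The matrix appeared as an image in the original post; reconstructed here, it is the standard change of basis whose rows are t, b, and n expressed in object coordinates (for direction vectors, the translation to the vertex origin drops out):

$$
M_{\text{obj}\to\text{tangent}} =
\begin{pmatrix}
t_x & t_y & t_z \\
b_x & b_y & b_z \\
n_x & n_y & n_z
\end{pmatrix}
$$

Multiplying an object-space direction by this matrix gives its tangent-space coordinates, exactly analogous to the world-to-camera matrix built from the camera's basis vectors.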
So, can the tangent vector be chosen arbitrarily? Theoretically yes, but that would require giving every texel its own tangent space. For computational reasons, we instead choose the tangents so that the tangent directions at a patch's vertices are as consistent as possible: for a triangle, the three vertex tangents should point in roughly the same direction and lie in the plane of the patch. The figure (not reproduced here) showed how this is calculated; note that the patch normals in that figure are perpendicular to the screen.
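As a sketch of one common way to obtain such consistent tangents (my assumption, not necessarily what the original figure showed): align each triangle's tangent with the direction in which the u texture coordinate grows. The helper below is hypothetical GLSL-style code:

```glsl
// Sketch: derive a tangent from a triangle's positions and UVs by solving
//   e1 = dUV1.x * T + dUV1.y * B
//   e2 = dUV2.x * T + dUV2.y * B
// for T. Triangles sharing a UV layout then get nearly identical tangents.
vec3 computeTangent(vec3 p0, vec3 p1, vec3 p2,
                    vec2 uv0, vec2 uv1, vec2 uv2)
{
    vec3 e1   = p1 - p0;
    vec3 e2   = p2 - p0;
    vec2 dUV1 = uv1 - uv0;
    vec2 dUV2 = uv2 - uv0;
    float r   = 1.0 / (dUV1.x * dUV2.y - dUV1.y * dUV2.x);
    // T follows the texture's u axis and lies in the triangle's plane.
    return normalize(r * (dUV2.y * e1 - dUV1.y * e2));
}
```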
The left column showed the result of consistent tangent vectors; the right column, tangents that differ considerably. To explain: the reason we build this coordinate system is to convert the light direction and the view direction into tangent space and then evaluate the lighting equation there. If the tangent vectors are chosen consistently, then the light direction and view direction, expressed in the tangent spaces of different vertices (or of different pixels, which correspond to them one by one), differ only very slightly. Exploiting this property, we do not have to compute a coordinate system for each texel one by one (which would be infeasible anyway); we let the interpolation facility of the programmable pipeline finish the job. For a triangle mapped with a normal map, we compute the tangent-space coordinate systems of only the three vertices in the vertex shader, obtain the tangent-space light direction and view direction at each vertex, and pass them straight to the fragment shader as varying variables. The pipeline interpolates these varyings, and since the per-vertex values are nearly equal by construction, the interpolated results are what we want.
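A minimal vertex-shader sketch of this idea, written in legacy GLSL to match the orange book's era; the names aTangent and uLightPosEye are my assumptions, and carrying the basis into eye space first is one common variant of the object-space derivation above:

```glsl
attribute vec3 aTangent;       // per-vertex tangent, supplied by the app
uniform vec3 uLightPosEye;     // light position in eye space (assumed)

varying vec3 vLightDirTS;      // light direction in tangent space
varying vec3 vViewDirTS;       // view direction in tangent space
varying vec2 vTexCoord;

void main()
{
    // Basis vectors of tangent space, here expressed in eye space.
    vec3 n = normalize(gl_NormalMatrix * gl_Normal);
    vec3 t = normalize(gl_NormalMatrix * aTangent);
    vec3 b = cross(n, t);      // binormal

    vec3 posEye   = vec3(gl_ModelViewMatrix * gl_Vertex);
    vec3 lightDir = uLightPosEye - posEye;
    vec3 viewDir  = -posEye;

    // The three dot products are exactly multiplication by the
    // row matrix (t, b, n) shown earlier.
    vLightDirTS = vec3(dot(lightDir, t), dot(lightDir, b), dot(lightDir, n));
    vViewDirTS  = vec3(dot(viewDir,  t), dot(viewDir,  b), dot(viewDir,  n));

    vTexCoord   = gl_MultiTexCoord0.xy;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```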
As can be seen from the above, it is more efficient to evaluate the lighting equation in tangent space than in the camera or world coordinate system. The normal-map data can only be read in the fragment shader, so converting each normal-map vector into world or camera coordinates would require a matrix operation there; and the fragment shader runs per pixel. If a normal-mapped square covers 300*300 pixels on screen, that is 90,000 matrix operations in the fragment shader, which is obviously inefficient.
With the approach introduced in this article, the square's four vertices require only four matrix conversions; the rest is interpolation. In my opinion, therefore, converting into tangent space is very cost-effective.
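The matching fragment-shader sketch makes the saving visible: per pixel it only decodes the stored normal and evaluates the lighting equation against the interpolated varyings, with no matrix multiply. The sampler name uNormalMap and the specular exponent are assumptions:

```glsl
uniform sampler2D uNormalMap;  // tangent-space normal map (assumed name)

varying vec3 vLightDirTS;
varying vec3 vViewDirTS;
varying vec2 vTexCoord;

void main()
{
    // Normals are stored in [0,1]; remap to tangent-space vectors in [-1,1].
    vec3 n = normalize(texture2D(uNormalMap, vTexCoord).rgb * 2.0 - 1.0);
    vec3 l = normalize(vLightDirTS);
    vec3 v = normalize(vViewDirTS);
    vec3 h = normalize(l + v);

    float diffuse  = max(dot(n, l), 0.0);
    float specular = pow(max(dot(n, h), 0.0), 32.0);
    gl_FragColor   = vec4(vec3(diffuse + specular), 1.0);
}
```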
This exploration yielded a lot. From the analysis above we not only understand the pipeline's built-in coordinate systems better, but also learn how to create a coordinate system of our own and how to use it efficiently. Also notice a pipeline feature that is easily overlooked: the vertex shader's execution count is tied to the number of vertices; after it runs, a varying variable can be understood as data bound to its vertex, and the varying variables are interpolated on their way to the fragment shader. The fragment shader's execution count is positively correlated with the number of screen pixels the polygon covers.
The specific code will be uploaded later; in the meantime you can refer to the bump mapping section of the orange book.
If anything in this post is incorrect, please point it out. I am also a beginner in graphics and hope to receive everyone's guidance.