Original Address http://www.cnblogs.com/flytrace/p/3387748.html

I'm a slow learner. But an explanation given by a slow learner who has finally understood something is often the most effective. Before reading on, you should know the basics of lighting.

>> **World/object space normal map**

Let's start with normal maps based on world or object coordinates (world/object space normal maps). They are not common, but they are the foundation.

First of all, set aside the so-called normal maps generated by Photoshop that you may have seen before. Except as an art tool, that is just an approximate hack built on the normal-mapping principle; it doesn't help you understand the real process. After reading this article, though, you should be able to see where that Photoshop trick comes from.

If you don't understand how a normal map is generated, you can't correctly understand the calculation that uses it in a shader. Normal maps exist to let a low-polygon model simulate the " **lighting information** " of a high-polygon model. The most important piece of lighting information is, of course, the angle between the incident light direction and the surface normal; a normal map essentially records the information behind that angle. Lighting calculations depend directly on the normal direction at each point of a face.

We know that a model in a computer approximates an object with a collection of polygon faces; it is not smooth. The more polygons, the closer it gets to the real object. Normals are interpolated across a face from its vertices precisely to simulate the "correct" normal direction at each point; otherwise every point on a face would share one normal, and under lighting the model would look like mirrors spliced together. But interpolated normals are still distorted. The more faces the model has, the smaller the distortion; if you could subdivide until the facets were invisible, you wouldn't need interpolation at all.

A high polygon count, however, means more computation and more memory. So our predecessors invented the normal map (and before it, the bump map), letting a low-poly model approximate the lighting detail of a high-poly model. There is a price: a file that records this information. This is the common trade of storage space for computation time, and 3D programs love this technique, because the unit price of storage hardware falls much faster than the unit price of computing hardware.

Graphics cards, and the graphics APIs that accompany them, read their raw data most naturally from images, so this information is saved in an image format. That is where the word "map" in "normal map" comes from. Now you know half of the story.

Let's figure out the other half.

Because the low-poly model has few polygons, a single face on it may cover the same area as several faces on the high-poly model. Compare the two (for simplicity we abstract them as 2-D line segments):

The bumpy, concave-and-convex curve on top represents the high-poly model; the smoother line below represents the low-poly model. Because of its extra detail, the high-poly surface changes direction across a region far more naturally than the flat face does. It doesn't matter if you can't see it yet.

Looking at this picture, some of you should already sense where this is going. If not, no problem; let's try again.

High-poly or low-poly, in the end the model has to be shaded. Suppose the model has already been rendered with color. Now imagine cutting the model open with scissors and unfolding it (rather like skinning an animal), obtaining two skins of roughly the same size; after all, the surface areas don't differ by much. The high-poly skin is, of course, fine and precise; the low-poly skin is a bit rough. Now imagine this process: gradually move the high-poly skin to a certain height above the low-poly skin, until the two overlap horizontally.

Do you feel it now? If not, no problem; let's keep going.

Although their precision differs, both skins have a color at every point (interpolated values). Of the two identically-shaped skins, the high-poly one looks more real, because the final color depends on the normal, and the normals on the high-poly skin are more accurate than those on the low-poly one. How do we dress up the low-poly skin so that it approaches the effect of the high-poly one? A true qualitative change is impossible; maybe in the next life. What we want is a convincing imitation.

The approach is brute force. Imagine you have a needle. From directly above, pierce the high-poly skin, then continue straight down into the low-poly skin. Keep the needle vertical, so both punctures land on corresponding points. Now imagine the needle is magic: as it passes through the high-poly skin it steals some information and deposits it into the low-poly skin, and the low-poly skin is successfully transformed. What information? The normal, of course. Now pierce the high-poly skin densely, everywhere; in other words, the stolen information must be recorded point by point. Every point on the skin must be preserved. So a normal map has the same size as the original texture, and each point in it stores the normal information of the corresponding point on the high-poly model. The actual lighting calculation uses only the normals read from the map; the low-poly model's own normals are discarded.

Do you feel it now? If not, no problem; let's keep going.

How do we give the needle its magic? Gandalf can't help you here, and neither can Viagra. Only mathematics can save the world...

Why did I emphasize keeping the needle vertical? Not only so that it lands on corresponding points. Now replay the process in your mind. The arrows in the diagram indicate the normal directions at points on the high-poly model. How do we record such a direction? Imagine the high-poly and low-poly models overlapping; for convenience, make the low-poly model slightly smaller than the high-poly one. Or simply picture a high-poly surface above a low-poly face, or a sphere with an inscribed regular polyhedron. Now imagine a beam of parallel light (playing the role of the needle) shining straight down, projecting each normal on the high-poly model onto the low-poly model.

Now you should be getting the feeling.

The foreplay is done; now let's deal with a few details. This is a projection, but a shadow is 2-dimensional, while a vector has three components x, y, z. When the normal of a point on the high-poly model is projected onto the plane of the corresponding low-poly point, only 2 components survive. We now know only the normal's x, y components; what about z? As long as we ensure the normal is a unit vector before projection, it is very simple: z = sqrt(1 - x*x - y*y). So we could even save the storage for z. In practice, since we already know the full normal direction (the normal in the high-poly model's object space), and it is already a unit vector, we can simply store it directly. The projection is only a thought experiment; no light really shines down from above.
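The z-reconstruction mentioned above can be sketched in a couple of lines. This is a minimal illustration, assuming the stored normal was unit length and points toward +z (so the sign of z is unambiguous):

```python
import math

def reconstruct_z(x, y):
    """Recover the z component of a unit normal from its x, y projection.

    Assumes the normal is unit length and z >= 0, which is why
    the square root's sign is unambiguous.
    """
    # Clamp to guard against tiny floating-point overshoot.
    return math.sqrt(max(0.0, 1.0 - x * x - y * y))

# A unit normal tilted 30 degrees in the xz-plane: (sin 30, 0, cos 30)
x, y = math.sin(math.radians(30)), 0.0
print(round(reconstruct_z(x, y), 6))  # → 0.866025, i.e. cos 30
```

This same trick is what lets some formats store only two channels and rebuild the third in the shader.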

By now it should be clear that "orthodox" normal-map generation requires both a high-poly and a low-poly model. Without the high-poly model there are no detailed normal directions; without the low-poly model we cannot know which low-poly point each high-poly normal corresponds to.

Because the normal of a point is saved in the corresponding pixel of the normal map, the actual practice is to map the normal's x, y, z components into RGB color space, with z going into B. Since each RGB channel is only 8 bits, the stored high-poly normal loses precision. And the earlier matching of high-poly points to low-poly points can never be exact; it is a simulation. So normal maps are not invincible.
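The mapping into RGB can be sketched as follows. This is the standard (n * 0.5 + 0.5) remap used by most tools; the quantization to 0..255 is exactly where the precision loss described above happens:

```python
def encode_normal(n):
    """Map a unit normal with components in [-1, 1] to an 8-bit RGB pixel."""
    return tuple(round((c * 0.5 + 0.5) * 255) for c in n)

def decode_normal(rgb):
    """Recover an approximate normal from an 8-bit RGB pixel."""
    return tuple(c / 255 * 2 - 1 for c in rgb)

# The "straight up" normal (0, 0, 1) becomes the familiar
# light-blue pixel of tangent space maps.
print(encode_normal((0.0, 0.0, 1.0)))  # → (128, 128, 255)
```

Round-tripping through `encode_normal` and `decode_normal` shows the small error each stored normal carries.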

Now we can answer the earlier question about Photoshop generating a normal map from a diffuse map. A diffuse map contains no normal information about the model at all, so a "normal map from a diffuse map" is, strictly speaking, simply wrong. Why can it still be applied? Imagine a high-poly model so frighteningly precise that its rendered, cut-open skin is essentially a photograph. Imagine the model's surface was covered in rust before texturing, so you end up with a rust photo. Photoshop processes this photo with some algorithm (Sobel or similar) that converts color values into gradient values and approximates normals from them. Since we don't really care about the exact distribution of the rust bumps, this is acceptable in such cases, and pitted, noisy surfaces suit the approach best. Photoshop's shortcut around the high-poly/low-poly workflow is what confuses beginners into thinking normals come from the diffuse map, or simply blocks their understanding.
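The gradient trick can be sketched in a few lines. This is a minimal illustration, not Photoshop's actual algorithm (which uses Sobel-style kernels); here plain central differences stand in for the gradient operator, and the `strength` knob is a made-up stand-in for the tool's intensity slider:

```python
def height_to_normals(height, strength=1.0):
    """Treat a 2-D grayscale grid as a height field and derive normals."""
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # Unnormalized central differences, clamped at the borders.
            dx = (height[y][min(x+1, w-1)] - height[y][max(x-1, 0)]) * strength
            dy = (height[min(y+1, h-1)][x] - height[max(y-1, 0)][x]) * strength
            # The surface normal of z = h(x, y) is (-dh/dx, -dh/dy, 1), normalized.
            length = (dx*dx + dy*dy + 1) ** 0.5
            row.append((-dx/length, -dy/length, 1/length))
        normals.append(row)
    return normals

# A flat region yields normals pointing straight up.
flat = [[0.5] * 4 for _ in range(4)]
print(height_to_normals(flat)[1][1])
```

Feed the result through the RGB encoding from earlier and you get exactly the kind of map the Photoshop filter produces.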

Where do the normals stored in the map above come from? If the normal directions are expressed in world coordinates, we call it a world space normal; if in the object's local coordinates, an object space normal. It is easy to see that once a world space normal is read from the map it can be used directly and efficiently. But there is a drawback: the world space normal is fixed, and if the object does not keep its original orientation and position, the map becomes invalid. So people store object space normals instead. After a normal is read from the map, it is multiplied by the model matrix to convert it into world coordinates, or into other spaces depending on the pipeline. With an object space normal map, the object can rotate and translate freely, which is basically satisfactory. But one drawback remains: each map corresponds to one specific model, and the model cannot deform.
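The object space workflow can be illustrated with a toy example: the stored normal is fixed, and each frame it is multiplied by the model (here just a rotation) matrix to follow the object into world space. The names are illustrative, not any engine's API:

```python
import math

def rotate_z(v, degrees):
    """Rotate a vector around the z axis; stands in for the model matrix."""
    c, s = math.cos(math.radians(degrees)), math.sin(math.radians(degrees))
    x, y, z = v
    return (c * x - s * y, s * x + c * y, z)

stored = (1.0, 0.0, 0.0)        # object space normal read from the map
world = rotate_z(stored, 90)    # the object has rotated 90 degrees in world
print([round(c, 6) for c in world])  # → [0.0, 1.0, 0.0]
```

A world space normal map would have baked in `(1, 0, 0)` with no way to follow the rotation, which is exactly the drawback described above.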

>> **Tangent space normal map**

To solve deformation, we can still draw inspiration from those two methods. A world space normal map stores the high-poly normal directions directly in the global coordinate system, so the low-poly model can use them as-is; the premise is that the low-poly model stays aligned with the world coordinate system, with no rotation at all, otherwise the normal directions change. An object space normal map stores the high-poly directions in the model's local coordinate system; when the low-poly model reads out a normal, it must multiply by the model matrix to bring it into world space. That means the low-poly side does one extra operation, but even arbitrary rotation is fine, because the model matrix can transform the map's values. Between the two, efficiency goes from high to low and flexibility from low to high. The question is: can we find yet another coordinate system on the low-poly model, so that even under deformation the normals can still be correctly transformed into world space?

Let's look at object space again. When a low-poly model rotates, since a rigid body does not deform, every point is in effect multiplied by the same rotation matrix R, and the relationships between points are preserved. Equivalently, we can keep the object still and rotate the object space coordinate system (the x, y, z axes) instead; the result is the same. In other words, as long as the normals are fixed in object space and the object stays fixed in object space, merely moving and rotating along with the coordinate system, everything works. Now imagine a point on the low-poly model needs to deform. In principle we could multiply the object space coordinate system by some deformation matrix T, but different points deform differently; there is no single matrix T that suits this point and that point at once. So the object space coordinate system won't do. Is there any single coordinate system with a deformation matrix shared by all points? Obviously, none that I can imagine.

Under deformation, the vertex relationships change; that is, the shape and orientation of each face change. Suppose there is a coordinate system fixed to each face: when the object deforms, moves, or rotates, the coordinate system moves with the face, and a point or vector defined in that coordinate system (for example, a high-poly normal we converted into it) never needs to change. When the whole face changes, we only need to compute the transformation matrix from the face's coordinate system to the world coordinate system; then any fixed point or vector defined on the face, multiplied by that matrix, gives its world coordinates. How this coordinate system is constructed doesn't matter to us yet; what matters is the concept. We are simply looking for a local coordinate system such that the fixed values stored in it (the normal directions in the map), multiplied by the local-to-world transformation matrix (computed dynamically when rendering the low-poly model), yield meaningful world coordinates.

Clearly this approach requires thousands of coordinate systems defined on the polygons: as many as the low-poly model has faces. Its computation is naturally heavier than the object space normal map's. On each face of the low-poly model we construct such a coordinate system. The technical term for it is tangent space.

**With an object space normal map, the low-poly model's object space coordinate system coincides with the high-poly model's, so there is nothing to construct, and a point on the low-poly model can simply replace its normal with the high-poly normal. This concept of coinciding coordinate systems is important. In the new method, the tangent space on the low-poly model must likewise coincide with the coordinate system used on the high-poly side. But one face on the low-poly model may correspond to several faces on the high-poly model (higher precision); if, following the new method, each face had its own local coordinate system, then for a single low-poly face the high-poly side would have several local coordinate systems, which clearly won't do. So the tangent space used is the low-poly model's. To generate the normal map, we must determine which faces on the high-poly model correspond to which face on the low-poly model, then convert the high-poly face normals into coordinates of the tangent space constructed on that low-poly face. Thus when the low-poly model deforms, that is, when a triangle changes, its tangent space changes with it, and the normal stored in the map, multiplied by the matrix from that face's tangent space to the outer coordinate system, yields the outer coordinates. Incidentally, the high-poly normals being stored here are the high-poly model's object space normals. Reading it laid out this way, it feels natural. In other articles you may read something like "convert the light into tangent space to make sure everything is in the same coordinate system," which is vague on first contact. I believe this idea of making the tangent spaces coincide is the trick that makes tangent space click.**

A little more concretely. In the figure, the curve represents the high-poly model, with a TBN coordinate system at point P; the line segment represents the low-poly model, with a T'B'N' coordinate system at point M. The normal at P on the high-poly model is converted into TBN coordinates; call the angle it makes NPN'. At render time the low-poly model reads this normal out as n'' and applies it in the coordinate system of the face PM; you can see the two angles are approximately equal. So at render time the high-poly normals are interpreted in this coordinate system on the low-poly model. You might say: shouldn't my TBN come from the actual high-poly surface on the left? Don't forget that several high-poly faces may be squeezed together to correspond to one low-poly face, so the TBN must be some kind of interpolation or average, which in practice is correlated with the low-poly model so that the match is as good as possible. I haven't dug into exactly how this is done; if an expert knows, please share.

When I thought this paragraph through myself, the principle of tangent space normal mapping finally clicked. Next, let's build this tangent space coordinate system.

When a face moves, tangent space must follow it. The normal perpendicular to the face follows it, so that normal N can serve as one axis of tangent space. Note carefully: **the face-perpendicular normal is not the interpolated normal; the interpolated detail normal is the one we store in the map. N here simply means the direction perpendicular to the face.**

Consider a triangular face: we can always determine its edges v2v1, v3v1, v3v2, and the edges move along with any deformation. So we can pick one edge as the second axis T of tangent space. The third axis is then simple: take the cross product B = T × N, and all three axes are set. In fact the choice of axes is almost arbitrary, as long as they can be reconstructed consistently every time. For example, you could pick v1v3 and v1v2 as the axes, with N = v1v3 × v1v2; this N has exactly the same direction as before. But in that coordinate system v1v2 and v1v3 are not perpendicular, and a non-orthogonal basis is inconvenient in matrix operations, requiring an extra orthogonalization step. So we choose the first construction, the most intuitive and convenient one.
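The construction above can be sketched directly. This is a minimal, engine-agnostic illustration: the face normal from two edges, one edge as the tangent, and the cross product for the third axis:

```python
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def triangle_tbn(v1, v2, v3):
    """Build an orthonormal T, B, N basis for a triangle (CCW winding)."""
    n = normalize(cross(sub(v2, v1), sub(v3, v1)))  # face-perpendicular normal
    t = normalize(sub(v2, v1))                      # one edge as the tangent
    b = cross(t, n)                                 # third axis by cross product
    return t, b, n

# A unit right triangle in the xy-plane:
t, b, n = triangle_tbn((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(t, b, n)  # → (1.0, 0.0, 0.0) (0.0, -1.0, 0.0) (0.0, 0.0, 1.0)
```

Because T is an edge and N is perpendicular to the face, B = T × N is automatically unit length and perpendicular to both, so no extra orthogonalization is needed.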

With the three axes determined, constructing the object-space-to-tangent-space matrix is simple. Call it O-TBN, and take T, B, N respectively as the x, y, z axes of tangent space. From these three basis vectors we construct the matrix as follows:

O-TBN =
    | Tx  Ty  Tz |
    | Bx  By  Bz |
    | Nx  Ny  Nz |

**A point's normal in the high-poly model's object space (not world space, otherwise rotation would break it), multiplied by this matrix, gives the normal direction in tangent space; map that value into RGB space and save it in the map.** Why this matrix? That is a side topic, so briefly: with T, B, N as its rows, multiplying a vector by this matrix computes the vector's dot products with T, B, and N, which are exactly its coordinates along the tangent space axes. Since any point in object space is a linear combination of the unit axes x, y, z, the matrix is correct for every point.
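The multiply itself reduces to three dot products, which a few lines make concrete. A minimal sketch, assuming T, B, N are already orthonormal:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def to_tangent_space(v, t, b, n):
    """Express an object space vector in the T, B, N basis (rows of O-TBN)."""
    return (dot(t, v), dot(b, v), dot(n, v))

# With an axis-aligned basis the transform is the identity:
t, b, n = (1, 0, 0), (0, 1, 0), (0, 0, 1)
print(to_tangent_space((0.6, 0.0, 0.8), t, b, n))  # → (0.6, 0.0, 0.8)
```

Applied to every high-poly normal over a low-poly face, this is exactly the baking step: the resulting tangent space values are what get encoded into RGB.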

In practice, in a vertex shader we only know the current vertex's data; the other two vertices of the triangle are unavailable. But modern shaders provide a tangent attribute per vertex, representing the tangent at that vertex; picture a line tangent to a football at a given point. So we use the vertex's tangent direction as the T vector above. This is also where the name "tangent space" comes from. Many articles speak of the texture's u, v directions: u and v are interpolated along the edges, so the u, v directions agree with the edge directions. In effect, we already have a ready-made tangent to use.

Now we can explain why tangent space normal maps look blue. The high-poly surface is so precise (the faces are tiny, and neighboring faces change direction smoothly) that the "bending" across any one region is very small; the detail normals at nearby points deviate only slightly from one another, and not much from the face's perpendicular direction. So in tangent space these normals deviate little from the z axis, and the z value is stored in the B byte (the blue channel) of the map. Hence the map appears predominantly blue.

Well, now the normals of points on the high-poly surface have been converted into tangent space coordinates on the low-poly model; consider the actual low-poly rendering. Suppose we have computed the matrix for some low-poly face and fetched the corresponding normal from the map, and now need to compute lighting. We can either convert the light vector into tangent space and calculate there, or convert the fetched normal out into world space to meet the light vector; the result is the same. Practically, though, the latter is worse: it transforms a normal into world space once for every point on the face, while the former transforms the light vector into tangent space once per face. Considering the vertex shader / fragment shader split, we can do the light-to-tangent-space conversion in the vertex shader, and in the fragment shader fetch the normal and combine it with the interpolated tangent space light direction. A reminder: the light direction we get in the vertex shader is generally in world space, while the map stores normals that went from high-poly object space into tangent space. So in the vertex shader we must first convert the light into object space, and then into tangent space. This ensures that light and normal are in the same coordinate system for the final calculation. That is why, in many normal-mapping shaders, you see a function with a name like ObjSpaceLightDir(lightDir), which converts the light into object space.
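The per-vertex and per-fragment split described above can be mimicked in a toy sketch. All names are illustrative, not any engine's API; the "vertex stage" re-expresses the object space light in the face's tangent space, and the "fragment stage" just dots it with the map's normal:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def light_to_tangent_space(light_obj, t, b, n):
    """Vertex-stage work: same row-matrix multiply used for normals."""
    return (dot(t, light_obj), dot(b, light_obj), dot(n, light_obj))

def lambert(normal_ts, light_ts):
    """Fragment-stage work: clamped cosine between map normal and light."""
    return max(0.0, dot(normal_ts, light_ts))

t, b, n = (1, 0, 0), (0, 1, 0), (0, 0, 1)
light_ts = light_to_tangent_space((0.0, 0.0, 1.0), t, b, n)
map_normal = (0.0, 0.0, 1.0)  # a "flat" texel decoded from the map
print(lambert(map_normal, light_ts))  # → 1.0
```

Doing the basis change once per vertex and only the cheap dot product per fragment is precisely the cost argument made in the paragraph above.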

Real-world practice can be more complex. For example, some models are mirror-symmetric, so the map is mirrored and the calculation skips the other half, and so on. The specifics depend on the normal-map generation software and on the engine (shader) that consumes the map. The basic principle is as above.

Tangent space normal maps adapt to deformation so well that they can be applied not only to the original model, but even to entirely different, heavily deformed models: the normal map gains the ability to leave its source model behind. For example, a normal map generated from a high-precision rough granite slab can be applied to a cylinder. Much like the Photoshop trick, you can also generate a normal map directly from a photo of a granite surface, using tiny normal perturbations to fake a rugged surface. That surface won't faithfully reproduce any real slab of granite, but graphics is not physical simulation; looking real enough to fool the eye is enough. In fact, rough-surface perturbation is only one application: look at some well-made examples and you will see low-poly monsters exhibiting the smooth curvature of their high-poly originals.

That is the principle of normal mapping. Because it is so widely applied and ties together so many concepts, it is well worth figuring out thoroughly. When I was learning it, I couldn't find an article that explained the whole process, hence this one.

One of the sources referenced in this article:

http://www.gamasutra.com/view/feature/129939/messing_with_tangent_space.php?print=1

[Repost] The principle of normal mapping