Original post address: http://www.game798.com/html/2007-03/2997.htm Author: fxcarl

First, let me say that research on bump-mapping techniques in computer graphics began at the end of the 1970s, so the field is nearly 30 years old. The normal map is just one especially popular per-pixel mapping technique. Here I will introduce some of the mapping techniques currently used in games and on new-generation consoles such as the Xbox 360 and PlayStation 3.

Bump mapping: Friends who make CG textures probably knew about bump maps even before fxcarl did. A bump map is a grayscale texture that describes the bumps and dents of the target surface through changes in gray value, so the texture is black and white; to save space, you can even commandeer a texture's alpha channel for it. It is worth noting that what the texture actually stores is a height field, that is, the height difference between each point and the original surface. Remember: the value at each point is not a color but a height! Any edit to this texture therefore changes the perceived 3D relief of the object, so you cannot paint it by feel. The algorithm games actually use should be called fake bump mapping. In a game, the bump map does not change the object's surface at all; it only perturbs the lighting result and deceives the eye. The simplest approach is to overlay the bump map directly onto an already-rendered surface, producing a disturbance in brightness that the viewer reads as relief. This is easy to understand: darken part of a white wall into gray and it starts to look like an eroded dent. The computation involved is basic addition and subtraction. This so-called fake bump mapping has had hardware support since the GeForce 2, but it was never widely used. Interestingly, though, the bump map itself has never gone out of date.
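Since the fake-bump overlay is just per-pixel additions and subtractions, it is easy to sketch. Below is a minimal plain-Python illustration (the function name, the assumed upper-left light, and the `strength` parameter are all inventions for this demo; the real thing ran in hardware, not Python):

```python
# A minimal sketch of "fake bump mapping": perturb a rendered surface's
# brightness with the height field's local slope, an emboss-style effect
# that uses only adds and subtracts.

def fake_bump(shaded, height, strength=1.0):
    """shaded: 2D list of brightness values 0..1; height: same-size height field."""
    h = len(shaded)
    w = len(shaded[0])
    out = [row[:] for row in shaded]
    for y in range(h):
        for x in range(w):
            # slope toward a light assumed at the upper-left
            dx = height[y][x] - height[y][min(x + 1, w - 1)]
            dy = height[y][x] - height[min(y + 1, h - 1)][x]
            v = shaded[y][x] + strength * (dx + dy)
            out[y][x] = max(0.0, min(1.0, v))  # clamp to [0, 1]
    return out

# A bump in the middle of a flat gray surface brightens its lit side:
flat = [[0.5] * 3 for _ in range(3)]
bump = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
print(fake_bump(flat, bump, strength=0.1)[1][1])  # 0.7
```

Note that a flat height field leaves the shading untouched, which is exactly why the trick is cheap: it only pays where the height actually varies.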
In later rendering algorithms, the height field it stores still plays a huge role, as we will see below.

Normal mapping: The adoption of normal mapping marks a memorable period for the games field: the arrival of the GeForce 3, the emergence of the GPU concept, and hardware-programmable pipelines (shaders). Normal mapping is a bump-texture technique; its other name is dot3 bump mapping. The texture that drives it is called a normal map, and it is the one we are discussing here. Let's first talk about this map. What it stores is a perturbation of each point's original surface normal, which sounds complicated but is not hard to understand. On a game's 3D model, the surface normal is like a pen standing on a desktop, pointing straight up. What the normal map stores is the direction the normal is "supposed" to point instead, for example tilted 15 degrees to the left. Normal maps come in two main forms: world-space normal maps and tangent-space normal maps. The first has little practical value in games; the second is by far the most common. So why do normal maps have such a strange color? In fact, a normal map is like a bump map in that its color has no direct relationship to its role. You are surely familiar with the concept of spatial coordinates. In the definition of a normal map there is a prior convention: the direction perpendicular to the original surface is called the Z axis, and the surface's U and V coordinates correspond to the X and Y axes respectively.
(To be exact, these should be called the tangent and the binormal, but the two happen to line up with the UV coordinates everyone is familiar with, so the UV description is more approachable.) Given that, if we take a value between -1 and 1 on each of the X, Y, and Z axes, we can describe a normal pointing in any direction (no lengthy explanation needed: a normal is a vector, and a vector has both direction and length, but for a normal the length does not matter). However, when we describe a color, the values of the three RGB channels start from zero, so when we try to store an arbitrary normal in a texture we face a negative-value problem. The normals therefore need to be compressed. The method is simple: the normal's projected length on each of the X, Y, and Z axes is remapped as (N + 1) / 2. This compresses every component into the range 0 to 1, and we then store X, Y, and Z in the R, G, and B channels. It seems we still haven't explained why normal maps are blue; now it is time to announce the answer! First, we know that if the normal points straight up out of the object's surface, then
what are its XYZ coordinates? (0, 0, 1), right? Now compress those numbers by the method above: add 1 to each and divide by 2, giving 0.5, 0.5, and 1. Map that to RGB and we get 128, 128, 255. Go look that color up in the palette! P.S. Now you and fxcarl can solve a little riddle and check whether fxcarl is right: suppose we see a color on a normal map with the value 219, 128, 219. Then the normal at that point is tilted 45 degrees to the right of vertical. Try creating a normal map in Max and see whether fxcarl is right. If you have not yet grasped the meaning of the normal map, or want to dig deeper, fxcarl is happy to discuss it further. Now, how well does everyone understand tangent space? Let's do an experiment: find three pens. Lay two of them on the desktop at 90 degrees to each other, tail touching tail. Then stand the third pen on that same point, tip up, its tail meeting the tails of the two pens lying on the desk. Look at our three pens: they form the spatial coordinate frame of that point on the desktop! You may already have guessed that the perturbed normal stored in our normal map is really a tangent-space vector. That's right, it is a tangent-space vector. But tangent space still seems useless, doesn't it? Well, replace the desktop with a basketball. Keep the relationship between the three pens, and touch them as a group to the surface of the basketball. See it now? The advantage of tangent space is that tangent-space coordinates are valid on any surface! In other words, tangent-space data can be used
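The (N + 1) / 2 encoding and fxcarl's riddle are easy to verify in a few lines of plain Python (the helper names are made up for this demo; tools like Max do this internally):

```python
# A sketch of the RGB encoding described above: each unit-normal
# component in [-1, 1] is stored as (n + 1) / 2, scaled to a byte.
# Decoding inverts the mapping and renormalizes after quantization.
import math

def encode_normal(nx, ny, nz):
    return tuple(round((c + 1.0) / 2.0 * 255) for c in (nx, ny, nz))

def decode_normal(r, g, b):
    n = [c / 255.0 * 2.0 - 1.0 for c in (r, g, b)]
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)  # renormalize

# The "flat surface" normal (0, 0, 1) becomes the familiar light blue:
print(encode_normal(0.0, 0.0, 1.0))  # (128, 128, 255)

# And the riddle color (219, 128, 219) decodes to a normal tilted
# about 45 degrees toward +X:
nx, ny, nz = decode_normal(219, 128, 219)
print(round(math.degrees(math.atan2(nx, nz))))  # 45
```

So fxcarl's riddle checks out: 219 in both the red and blue channels means equal X and Z components, i.e. a 45-degree tilt to the right.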
regardless of the complexity of the 3D model! You can use it on any surface; even if the surface is animating, the normal map keeps working. Useful, isn't it? Now go back to the beginning and ask what happens if we use a world-space normal map instead. That leads to a very embarrassing result. Suppose we made a normal map for a character, but our scene contains two copies of that character at different positions and angles. Then... my God... at least one character's normal map cannot possibly be correct! A tangent-space normal map has no such problem. Rest assured: the normal maps produced by Max or Maya are tangent-space normal maps, and the way to verify it is very simple... check whether the texture is mostly blue... OK, now for the main act: how normal mapping actually works. A prerequisite for using normal maps is per-pixel shading. Traditional games used a simplified Phong lighting model, or even Gouraud shading. Both compute lighting only at the vertices of the 3D model, and the large areas of surface in between are filled in by interpolation. Per-pixel shading only became possible once shaders appeared, which is why normal mapping also requires shaders. The formula for an object's diffuse lighting is the simple N dot L: the dot product of the surface normal and the light direction. The dot product is basic linear algebra, and it is easy even for artist friends who don't write programs: diffuse
= saturate(mul(normal, light));. Put simply, it projects the light's direction vector onto the normal vector and turns the projected length into a value between black and white. Here is a simple example with two pens on the desktop. Hold one pen still, place the tail of the second against the tail of the first, then swing the second pen around their shared tail like a clock hand. If at each moment we drop a perpendicular from the tip of the moving pen onto the shaft of the fixed pen, the projected length (the distance from the foot of that perpendicular back to the shared tail) gets shorter and shorter, and when the two pens are perpendicular the projection is zero: no lighting contribution at all. That is easy to accept: when the light's direction is exactly parallel to a surface, the surface no longer receives the light. Now we bring in the normal map. The lighting computation changes only slightly: we replace the surface normal with the normal stored in the normal map. When we compute the surface lighting this way, the constantly varying normals produce a much richer variation of light and shade. So why do we perceive bumps? Because we deceive ourselves... There is in fact nothing concave or convex there, but our eyes are too eager to infer relief, just as we read the flat-drawn Windows buttons as raised. Normal maps seem able to add detail for free, but their shortcomings are also obvious. Before getting to the drawbacks, let me say up front that the advantages a normal map brings far outweigh its disadvantages, so it remains an excellent tool.
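The quoted shader line is HLSL; the same N dot L term can be sketched in plain Python for readers who want to poke at the numbers (the function names mirror the HLSL intrinsics but are otherwise just illustrations):

```python
# A plain-Python sketch of diffuse = saturate(mul(normal, light)):
# the dot product of the unit surface normal and the unit direction
# toward the light, clamped to [0, 1] so back-facing surfaces get none.
import math

def saturate(x):
    return max(0.0, min(1.0, x))

def diffuse(normal, light_dir):
    return saturate(sum(n * l for n, l in zip(normal, light_dir)))

print(diffuse((0, 0, 1), (0, 0, 1)))  # 1.0 (light head-on)
s = math.sqrt(0.5)
print(round(diffuse((0, 0, 1), (s, 0, s)), 3))  # 0.707 (light at 45 degrees)
print(diffuse((0, 0, 1), (1, 0, 0)))  # 0.0 (light parallel to the surface)
```

The last line is exactly the two-pens-perpendicular case from the example above: projection zero, no light.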
Do not be biased against it, especially in light of the more advanced techniques we introduce later. The biggest and most obvious drawback concerns the viewing angle. A normal map only changes the lighting result on the surface; it does not change the surface's shape. On the face of it, then, as long as the view does not approach grazing angles, a normal map has no perspective problem. In reality, because a normal map cannot produce self-occlusion, it cannot convincingly show real relief. For example, raise a bump on a desktop and place a toothpick behind it. Experience says the bump should block our line of sight so we cannot see the toothpick; with a normal map it does not, so we can always see what should be hidden behind the obstacle. This is the problem: a normal map works best only when viewed head-on. In practice this restricts normal maps to situations where nobody is sensitive to occlusion relationships, such as scenery; characters that rely on normal maps cannot withstand close-ups and zooming, and odd viewing angles give the trick away. Still, although this flaw cannot be fixed within normal mapping itself, the advantages far outweigh these small obstacles, so the technique is well worth promoting. The several newer algorithms below were all developed from normal mapping, which is why normal mapping is the most valuable concept to master. P.S. A secret about normal maps, for deeper understanding... A normal map does not actually raise high-poly detail out of the low-poly surface. Rather, it carves away the places where the high-poly model sits below its highest point! That is why it is so accurate for reducing polygon counts. You can imagine it as taking a low-poly plaster blank slightly larger than the high-poly model and engraving the high-poly detail into it. P.S. 2:
when the normal map was first invented, authoring one was not as convenient as it is in Max today. A normal map can be computed from a bump map, so a simple algorithm can derive the normal map from a bump map, even on the fly (that is, the game engine reads the bump map directly and converts it to a normal map itself). For detail that is very inefficient to model but that significantly enriches a surface, such as the grain of a cement wall, it is therefore often better to paint a bump map and have a technical artist tidy up the result. Of course, if you use ZBrush, nothing more needs to be said.
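One common way to do that bump-map-to-normal-map conversion (an assumption on my part; not necessarily the exact algorithm Max or any particular engine uses) is to take finite differences of the height field and rebuild a unit normal per texel:

```python
# A sketch of deriving a tangent-space normal map from a height map:
# central differences of the height field give the slope in X and Y,
# and the per-texel normal is rebuilt and renormalized from the slope.
import math

def height_to_normals(height, strength=1.0):
    h = len(height)
    w = len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # central differences, clamped at the borders
            dx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * 0.5 * strength
            dy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * 0.5 * strength
            n = (0.0 - dx, 0.0 - dy, 1.0)
            length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
            row.append(tuple(c / length for c in n))
        normals.append(row)
    return normals

# A flat height field yields straight-up normals, i.e. the pure
# (128, 128, 255) blue once encoded:
flat = [[0.0] * 4 for _ in range(4)]
print(height_to_normals(flat)[0][0])  # (0.0, 0.0, 1.0)
```

Because this is just arithmetic over the texture, it is cheap enough to run at load time, which is exactly what the on-the-fly conversion mentioned above does.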
Fxcarl guesses that the method Max uses to generate a normal map likewise compares the height offset of each point between the high-poly and low-poly models, producing a bump map of the height differences over each UV texel and then converting that bump map into a normal map.

Parallax mapping (since the algorithms that follow are all built on the normal map, their sections may not run as long as the normal-mapping one, but the content is every bit as brilliant!): Parallax mapping is an enhancement of the normal-mapping algorithm, essentially no different from normal mapping at heart. The advantage is that for only about three extra HLSL statements and one extra control-texture channel (a handful of GPU instructions, a cost small enough to ignore) it significantly deepens the apparent relief of the surface. However, the problems of normal mapping basically all recur in parallax mapping; in particular, as the viewing angle approaches grazing, the sense of depth still collapses, with no significant improvement. In fact, these viewing-angle problems cling to every normal-map-based technique the way viewing-angle problems cling to LCD screens. Even so, in fxcarl's opinion, parallax
mapping is the version of normal mapping that truly has practical value. It has proved very well suited to the new generation of consoles such as the Xbox 360 and PS3 (both on the market for a year and already called "next generation"... I can't take it anymore). For example, the Xbox 360 game Condemned: Criminal Origins, published by Sega, uses the same Monolith engine as the PC game F.E.A.R., and it uses parallax mapping. Parallax mapping uses a single control texture: a normal map. Opened in ACDSee, this normal map looks identical to the one plain normal mapping uses; but open its alpha channel and the secret is revealed: the alpha channel stores the bump map corresponding to the normal map! (That is, the height map, which records the surface height as gray values.) Now for a short theory interlude. In the text above you have seen the term "control texture", and it deserves an explanation here, because understanding control textures is very important for becoming a new-generation artist. From years of art experience, a texel is naturally understood as three RGB color channels plus an alpha channel that expresses transparency. But in the eyes of the renderer and the programmers it is not what artist friends see: it is a four-channel vector (think of it as a bundle of four numbers), each channel ranging from 0 to 255. That space can be used for much more, most commonly to record the physical details of the surface. Why do we need control textures? A friend told fxcarl a couple of days ago: "I don't think normal maps matter much; I could just paint the shading directly." In a sense that is true, but you must realize the idea is outdated, because a normal map is not used for shading; it is used so that shading can be generated more realistically. With hand-painted shading, a static frame can certainly look infinitely good. But then what?
How do we guarantee that the shading is correct under different lighting? The only way is to recompute it every frame. And how do we make that per-frame repainting effective? We must give the renderer a reference for it: tell the renderer what we want changed in real time, and let it do the simple mechanical work. That is the role of the control texture. Control textures go far beyond the normal map: for example, an NVIDIA demo used a texture to store how the object's surface color shifts in sunlight. Compressing what the artist wants changed in real time into a texture and handing it to the renderer is a genuinely challenging job, and of course it yields even more amazing images. So please embrace the control texture: it is the tool that turns an artist's one-moment-perfect shading into shading that works everywhere! Now, how does parallax mapping improve on the normal map? Start from the normal map's behavior. Suppose a bump is faked on the surface with a normal map, and we lower our viewing angle toward grazing. We find that the face of the bump keeps facing our line of sight; it never hides anything behind it as our view flattens out, which is obviously wrong, since a real bump would. Parallax mapping exists to relieve this problem. I will not give the specific code here, but the principle in plain language is this: to keep us from seeing what we should not see, the algorithm shifts the texture coordinates so that sampling skips over the texels the player should not be able to see. That is, guided by the height map, texels that lie "behind" higher features are pulled forward, which amounts to deliberately jumping over the occluded texels during sampling.
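The texture-coordinate shift just described can be sketched in a few lines. This is an assumed, simplified form of the basic parallax offset done per pixel in plain Python rather than in a shader, and the `scale` and `bias` values are typical illustrative choices, not anything quoted from the article:

```python
# A sketch of the basic parallax offset: shift the texture coordinate
# along the tangent-plane component of the view direction by an amount
# proportional to the sampled height, so that low-lying texels "behind"
# tall features get skipped when the surface is viewed at an angle.

def parallax_offset(u, v, height, view_dir, scale=0.04, bias=-0.02):
    """view_dir: unit vector from the surface point toward the eye,
    in tangent space (x, y, z); height: sampled height in [0, 1]."""
    vx, vy, vz = view_dir
    amount = height * scale + bias
    return (u + vx * amount, v + vy * amount)

# A head-on view (straight down the normal) produces no shift at all:
print(parallax_offset(0.5, 0.5, 1.0, (0.0, 0.0, 1.0)))  # (0.5, 0.5)
```

Note how cheap this is: one extra texture channel (the height) and a multiply-add on the UVs, which matches the "three HLSL statements" cost claimed above.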
In this way, the pixels the player should not see disappear along with the skipped texels. Obviously this algorithm is not very robust: although it takes the player's line of sight into account, it is still an empirical approximation. The gratifying part is that for the small-scale details normal maps are meant to handle, the improvement looks good, so a large number of games began adopting it, especially since the cost is tiny: the extra work for the artist amounts to saving the height map into the alpha channel. Very cost-effective. For technical researchers, though, this level of fidelity is obviously not satisfying. So, following the line of thought of parallax mapping, and building on Shader Model 3.0, there is an algorithm that physically changes the object's surface. This is displacement mapping, which we introduce next. Unlike the earlier methods, displacement mapping genuinely changes the object's surface: it uses micropolygon tessellation to truly alter the surface detail. The process runs roughly as follows. First, based on the screen resolution, the visible surface of the model is subdivided into micropolygons about the size of a final pixel; this step is the tessellation. Then a bump texture is read, and a height is determined from the gray value over each micropolygon. Each micropolygon produced by the tessellation is then moved along the original surface's normal direction by that height, and finally a new normal is computed for each displaced micropolygon. At that point, the surface of the object has genuinely gained detail. You can see something like this technique when using ZBrush: the surface details sharpen fully only once the view is still.
Micropolygon tessellation plays a similar role: only the on-screen surface detail of the polygons facing the viewer is refined, not the whole model, so the performance cost is nowhere near as high as simply using the high-poly model. In terms of results, displacement mapping is essentially flawless, but it is not without disadvantages. First, the hardware requirements are high: Shader Model 3.0 must be supported, because only SM3 allows texture reads at the vertex stage. At the same time, tessellation consumes a great deal of performance. In terms of how it loads the GPU, however, it is arguably more reasonable (the demands fall on vertex processing rather than on pixel processing), and it will certainly become more valuable under the unified shader architecture of DX10. Of all the bump-texturing techniques we have introduced, displacement mapping is the only one that changes the geometric shape of the polygon surface. Compared with the ray-tracing-style algorithms to be introduced later, its performance cost is not obviously better, but it is genuinely more principled, and it gives the image extra benefits. More interestingly, it does not conflict with the other, pixel-based bump techniques. This kind of displacement mapping can in fact be seen in new-generation console games, and perhaps not where you expect: it can be used to generate large outdoor terrain in real time! No other bump-texturing technique can match that! If anyone is interested in contributing illustrations, please contact me. Still to come: relief mapping, parallax occlusion mapping, and cone step mapping.
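To close, the displacement step itself (move each tessellated vertex along its normal by the sampled height) reduces to very little code. Here is a toy sketch on a 1D strip of vertices; it is an illustration of the displacement idea only, not a real micropolygon tessellator:

```python
# A toy sketch of displacement mapping: each vertex is pushed along its
# surface normal by the height sampled from a bump map, so the geometry
# itself changes rather than just the shading.

def displace(vertices, normals, heights, scale=1.0):
    out = []
    for (px, py, pz), (nx, ny, nz), h in zip(vertices, normals, heights):
        d = h * scale
        out.append((px + nx * d, py + ny * d, pz + nz * d))
    return out

# A flat strip with up-facing normals and a bump in the middle:
verts = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
norms = [(0, 0, 1)] * 3
heights = [0.0, 0.5, 0.0]
print(displace(verts, norms, heights))  # middle vertex raised to z = 0.5
```

The new normals would then be recomputed from the displaced positions, which is the step that separates real relief from the lighting-only tricks earlier in the article.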