Original Author: Jake Simpson
Translator: Xianghai
Part 1: Lighting and Texturing of 3D Environments
World Lighting
During the transformation process, typically in the coordinate space known as view space, we encounter one of the most important operations: lighting. It is one of those things that you never notice when it works, but that you notice immediately when it doesn't. There are many different approaches to lighting, from simply computing how a polygon is oriented toward a light and adding a percentage of the light's color based on direction and distance, all the way up to generating smooth-edged light maps that are overlaid on the base texture. Some APIs also provide prebuilt lighting methods; OpenGL, for example, offers per-polygon, per-vertex, and per-pixel lighting.
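To make that concrete, here is a minimal C++ sketch (not from the original article; all names are illustrative) of the simplest method described above: scale the light's contribution by how directly the surface faces the light and by how far away the light is.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static Vec3  normalize(const Vec3& v) {
    float len = std::sqrt(dot(v, v));
    return { v.x/len, v.y/len, v.z/len };
}

// Lambert diffuse: intensity falls off with the angle between the surface
// normal and the direction to the light, and with distance from the light.
float diffuseIntensity(const Vec3& surfacePoint, const Vec3& surfaceNormal,
                       const Vec3& lightPos, float lightRadius) {
    Vec3 toLight = sub(lightPos, surfacePoint);
    float dist   = std::sqrt(dot(toLight, toLight));
    Vec3 L       = normalize(toLight);

    float facing = dot(surfaceNormal, L);      // cosine of the angle to the light
    if (facing <= 0.0f) return 0.0f;           // light is behind the surface

    float atten = 1.0f - dist / lightRadius;   // simple linear distance falloff
    if (atten <= 0.0f) return 0.0f;            // outside the light's range

    return facing * atten;                     // scale the light's color by this
}
```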
With vertex lighting, you determine how many polygons share a given vertex, average the normal vectors of all the polygons that share it (a normal is a vector perpendicular to a surface), and assign the resulting normal to the vertex. Each vertex of a given polygon thus has a slightly different normal, so you gradiate or interpolate the lighting colors across the polygon's vertices to get smooth lighting. You don't see each individual polygon with this lighting method. An advantage of this approach is that hardware transform and lighting (T&L) can often be used to do it quickly. A disadvantage is that it cannot produce shadows. For instance, even with a light on the right side of a model, the left arm should be in the shadow of the body, but in fact both arms are lit the same way.
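A sketch of the pre-processing step this implies, assuming a triangle mesh with precomputed face normals (the struct and function names here are invented for the example):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };
struct Triangle { int v0, v1, v2; Vec3 faceNormal; };

// For each vertex, accumulate the normals of every triangle that shares it,
// then renormalize. This averaging is the preprocessing vertex lighting needs.
std::vector<Vec3> computeVertexNormals(const std::vector<Vec3>& positions,
                                       const std::vector<Triangle>& tris) {
    std::vector<Vec3> normals(positions.size());
    for (const Triangle& t : tris) {
        const int idxs[3] = { t.v0, t.v1, t.v2 };
        for (int idx : idxs) {                   // accumulate shared face normals
            normals[idx].x += t.faceNormal.x;
            normals[idx].y += t.faceNormal.y;
            normals[idx].z += t.faceNormal.z;
        }
    }
    for (Vec3& n : normals) {                    // averaging + normalizing in one step
        float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
        if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    }
    return normals;
}
```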
These simple methods use shading to achieve their goals. With flat shading, you have the rendering engine set the whole polygon to a single specified color. (Each polygon corresponds to one light intensity, and every point on its surface is displayed with the same value; the result looks flat when rendered, and polygon edges stand out clearly.)
With vertex shading (Gouraud shading), you have the rendering engine assign a specific color to each vertex. As the pixels covered by the polygon's projection are drawn, the vertex colors are interpolated according to each pixel's distance from each vertex. (The Quake III models actually use this method, and it looks amazing.)
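The per-pixel work of Gouraud shading boils down to one weighted blend. A sketch, assuming barycentric weights (w0, w1, w2, summing to 1) have already been computed by the rasterizer:

```cpp
struct Color { float r, g, b; };

// Gouraud shading: light each vertex once, then blend the three vertex
// colors for every pixel inside the triangle. Each weight is proportional
// to how close the pixel is to the corresponding vertex.
Color gouraudPixel(const Color& c0, const Color& c1, const Color& c2,
                   float w0, float w1, float w2) {
    return { c0.r*w0 + c1.r*w1 + c2.r*w2,
             c0.g*w0 + c1.g*w1 + c2.g*w2,
             c0.b*w0 + c1.b*w1 + c2.b*w2 };
}
```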
Then there is Phong shading. Like Gouraud shading, it works across the texture, but instead of interpolating the vertex colors to find each pixel's color, it interpolates the normal vectors of the vertices and performs the lighting calculation for every pixel the polygon's projection covers. So for Gouraud shading you need to know which lights strike each vertex; for Phong shading you need to know that for each pixel.
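The contrast with Gouraud shading is easy to see in code. This sketch reuses the Vec3 helpers and diffuseIntensity from the earlier snippets; instead of blending colors, it blends normals and defers the lighting math to each pixel:

```cpp
// Phong shading: blend the vertex *normals* across the triangle (same
// barycentric weights as before) and run the lighting equation per pixel.
Vec3 interpolateNormal(const Vec3& n0, const Vec3& n1, const Vec3& n2,
                       float w0, float w1, float w2) {
    Vec3 n = { n0.x*w0 + n1.x*w1 + n2.x*w2,
               n0.y*w0 + n1.y*w1 + n2.y*w2,
               n0.z*w0 + n1.z*w1 + n2.z*w2 };
    return normalize(n);   // renormalize: blending shortens the vector
}
// Per pixel, something like:
//   float i = diffuseIntensity(pixelPos,
//                              interpolateNormal(n0, n1, n2, w0, w1, w2),
//                              lightPos, lightRadius);
```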
Not surprisingly, Phong shading gives the smoothest result, because it performs the lighting calculation for every pixel it draws. Flat shading is fast but crude. Phong shading is more expensive to compute than Gouraud shading, but it looks the best and can produce specular ("highlight") effects. As with everything in game development, it's a trade-off.
Different Lights
The next step is light mapping, where you blend a second texture map (the light map) with the existing texture to produce the lighting effect. This works well, but it is essentially a canned effect generated before rendering. If you have dynamic lights (lights that move, or that switch on and off without program intervention), you have to regenerate the light maps every frame, modifying them to follow the dynamic light's motion. Light maps render quickly, but the memory required to store all these light textures is expensive. You can apply compression tricks to make them take less memory, reduce their size, or even make them monochrome (in which case you get no colored lights), and so on. If you do have several dynamic lights in a scene, regenerating the light maps can end up eating expensive CPU cycles.
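As a rough illustration (the struct and function here are invented for the example), the blend itself is just a per-texel multiply of the base texture by the light map:

```cpp
#include <cstdint>

struct RGB { std::uint8_t r, g, b; };

// A light map is a second, usually low-resolution texture whose texels store
// precomputed light values. At render time each base texel is modulated
// (multiplied) by the corresponding light-map texel.
RGB applyLightMap(const RGB& baseTexel, const RGB& lightTexel) {
    return { static_cast<std::uint8_t>(baseTexel.r * lightTexel.r / 255),
             static_cast<std::uint8_t>(baseTexel.g * lightTexel.g / 255),
             static_cast<std::uint8_t>(baseTexel.b * lightTexel.b / 255) };
}
// A monochrome light map stores one intensity byte per texel instead of
// three: the memory-saving (but colorless) trade-off mentioned above.
```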
Many games use a hybrid lighting approach. Quake III, for example, uses light maps for the world and vertex lighting for animated models. Pre-processed lighting doesn't affect an animated model correctly (the whole polygonal model would receive the light's full illumination value), so dynamic lighting is used to get the right effect on models. Hybrid lighting is a compromise that most people never notice; it simply makes things look "right". That's what games are about: doing whatever is necessary to make things look "right", without necessarily being right.
Of course, all of this goes out the window with the new Doom engine, but then seeing everything it can do requires at least a 1GHz CPU and a GeForce 2 graphics card. It's progress, but everything has a price.
Once the scene has been transformed and lit, we perform clipping. Without going into the gory details, clipping determines which triangles are completely inside the scene (the view frustum) and which are only partially inside it. Triangles completely inside the scene are said to be trivially accepted and can be processed directly. For triangles that are only partially inside, the portion outside the frustum is clipped off, and the remaining polygon inside the frustum must be re-closed so that it lies entirely within the visible scene. (For more detail, refer to our 3D pipeline guides.)
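For the curious, here is one classic way the "clip and re-close" step can be done: a Sutherland-Hodgman-style sketch that clips a convex polygon against a single plane. A full frustum clip repeats this for each of the six frustum planes. (This is illustrative; the article doesn't specify which algorithm a given engine uses.)

```cpp
#include <vector>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };   // points with n·p + d >= 0 are "inside"

static float side(const Plane& pl, const Vec3& p) {
    return pl.n.x*p.x + pl.n.y*p.y + pl.n.z*p.z + pl.d;
}

// Clip a convex polygon against one plane (Sutherland-Hodgman). Triangles
// entirely inside every frustum plane are "trivially accepted" and skip this.
std::vector<Vec3> clipAgainstPlane(const std::vector<Vec3>& poly, const Plane& pl) {
    std::vector<Vec3> out;
    for (std::size_t i = 0; i < poly.size(); ++i) {
        const Vec3& a = poly[i];
        const Vec3& b = poly[(i + 1) % poly.size()];
        float da = side(pl, a), db = side(pl, b);
        if (da >= 0) out.push_back(a);              // keep vertices inside
        if ((da >= 0) != (db >= 0)) {               // edge crosses the plane:
            float t = da / (da - db);               // emit the intersection point
            out.push_back({ a.x + t*(b.x - a.x),
                            a.y + t*(b.y - a.y),
                            a.z + t*(b.z - a.z) });
        }
    }
    return out;   // the re-closed polygon, now entirely inside this plane
}
```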
After the scene is clipped, the next stage in the pipeline is triangle generation (also called scan-line conversion), where the scene is mapped to 2D screen coordinates. This is where the rendering (drawing) actually happens.
Textures and MIP Mapping
Textures are hugely important for making 3D scenes look real: they are small images applied to areas of the scene or to the polygons of objects. Lots of textures consume lots of memory, and various techniques exist to help manage their size. Texture compression makes the texture data smaller while retaining the image information. Compressed textures take up less space on the game CD and, more importantly, less space in memory and on the 3D card. Also, the first time you ask the card to display a texture, the compressed (smaller) version travels from PC main memory across the AGP interface to the 3D card, which is faster. Texture compression is a good thing; we'll discuss it more below.
MIP Mapping
Another technique game engines use to reduce texture memory and bandwidth demands is MIP mapping. MIP mapping pre-processes a texture into multiple copies, each successive copy half the size of the previous one. Why? To answer that, you need to know how a 3D card displays a texture. In the worst case, you take a texture, paste it onto a polygon, and shove it out to the screen. Say there is a one-to-one relationship: one texel (texture element) of the original texture map corresponds to one pixel of the textured polygon. If the polygon you display is shrunk to half size, the texels are applied only every other pixel. Usually this is fine, but in some cases it leads to visual weirdness. Take a brick wall. Suppose the original texture is a wall containing many bricks, with the mortar between the bricks only one texel wide. Shrink the polygon to half size, so texels are applied only every other pixel, and all the mortar suddenly vanishes in the shrinking, leaving you with a strange-looking image.
With MIP mapping, you scale the texture yourself before the card ever sees it; since you can pre-process it, you can do a better job, and the mortar doesn't drop out. When the 3D card draws the polygon with the texture, it detects the scale factor and says, "You know, instead of shrinking the most detailed texture, I'll use the smaller one I was given; it will look better." Everybody wins.
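Generating those successive half-size copies is straightforward pre-processing. A minimal sketch, assuming a single-channel image with power-of-two dimensions (the types and names are invented for the example):

```cpp
#include <cstdint>
#include <vector>

struct Image {
    int width, height;
    std::vector<std::uint8_t> gray;   // one channel, for brevity
};

// Build one MIP level: each output texel averages a 2x2 block of input
// texels. Repeating this until 1x1 yields the full MIP chain. Because the
// whole 2x2 block contributes, one-texel features (the mortar lines above)
// fade smoothly instead of vanishing outright.
Image halveImage(const Image& src) {
    Image dst{ src.width / 2, src.height / 2, {} };
    dst.gray.resize(static_cast<std::size_t>(dst.width) * dst.height);
    for (int y = 0; y < dst.height; ++y)
        for (int x = 0; x < dst.width; ++x) {
            int sum = src.gray[(2*y)     * src.width + 2*x]
                    + src.gray[(2*y)     * src.width + 2*x + 1]
                    + src.gray[(2*y + 1) * src.width + 2*x]
                    + src.gray[(2*y + 1) * src.width + 2*x + 1];
            dst.gray[y * dst.width + x] = static_cast<std::uint8_t>(sum / 4);
        }
    return dst;
}
```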
Multiple Textures and Bump Mapping
Single texturing makes a big difference to 3D realism, but using multiple textures can achieve some even more impressive effects. This used to require multiple rendering passes, which ate into fill rate. But with multi-pipeline 3D cards, such as ATI's Radeon, NVIDIA's GeForce 2, and later boards, it can often be done in a single pass. With multitexturing, you draw a polygon once with one texture, then render another texture transparently over the top of it. That lets you make a texture appear to move, or pulse, or even show shadows (as described in the lighting section). Draw the first texture map, then draw a texture that is all black but carries a transparency layer over the top of it: the underlying texture darkens, giving you, in effect, real-time shadowing. The technique is called light mapping (sometimes dark mapping), and until the new Doom it was the traditional way levels were lit in id engines.
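A sketch of that second, transparent pass: a standard "over" blend where the overlay's alpha controls how much of the (mostly black) overlay shows through and darkens the base texture (the structs are invented for the example):

```cpp
struct RGBA { float r, g, b, a; };

// Dark-mapping pass: blend an overlay texture over the already-textured
// pixel. Where the overlay is black with high alpha, the result darkens.
RGBA blendOver(const RGBA& base, const RGBA& overlay) {
    float a = overlay.a;
    return { base.r * (1.0f - a) + overlay.r * a,
             base.g * (1.0f - a) + overlay.g * a,
             base.b * (1.0f - a) + overlay.b * a,
             1.0f };
}
```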
Bump mapping is an old technique that has recently resurfaced. A few years ago, Matrox was the first to popularize various forms of bump mapping in 3D games. The idea is to use a texture to represent how light falls on a surface, conveying small bumps or cracks in it. A bump map doesn't move with the light; it is intended to represent small imperfections on a surface, not large bumps. In a flight simulator, for instance, you can use bump mapping to create what looks like random surface detail on the ground, rather than repeating the same texture over and over, which doesn't look interesting at all.
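One common way to realize the idea (the article doesn't commit to a specific algorithm, so treat this as an assumption) is to derive a per-texel normal from a grayscale height map and feed that to the lighting calculation, so the bumps and cracks catch the light even though the polygon stays flat:

```cpp
#include <cmath>
#include <vector>

struct Normal { float x, y, z; };

// Tilt the surface normal by the height map's gradient at (x, y).
// "strength" exaggerates or softens the apparent bumps.
Normal bumpNormal(const std::vector<float>& height, int w, int h,
                  int x, int y, float strength) {
    auto at = [&](int px, int py) {               // clamp lookups to the edges
        px = px < 0 ? 0 : (px >= w ? w - 1 : px);
        py = py < 0 ? 0 : (py >= h ? h - 1 : py);
        return height[py * w + px];
    };
    float dx = (at(x + 1, y) - at(x - 1, y)) * strength;  // slope across x
    float dy = (at(x, y + 1) - at(x, y - 1)) * strength;  // slope across y
    float len = std::sqrt(dx*dx + dy*dy + 1.0f);
    return { -dx / len, -dy / len, 1.0f / len };  // tilted away from the slope
}
```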
Bump mapping produces fairly convincing surface detail, though it is a clever trick: strictly speaking, the bumps don't change with your viewing angle. Given the per-pixel operations that the newer ATI and NVIDIA cards can perform, that fixed-view limitation isn't necessarily a given anymore. Either way, no game developer has used it much so far; more games could and should use bump mapping.
Cache Thrashing = Bad Things
Texture cache management is vital to making a game engine fast. As with any cache, hits are good and misses are bad. If textures are constantly being swapped in and out of the graphics card's memory, that's texture cache thrashing. When it happens, the API will typically dump every texture, with the result that all of them must be reloaded the next frame, which is time-consuming and wasteful. To the gamer, the frame rate stutters while the API reloads the texture cache.
Various techniques exist to minimize texture cache thrashing, and this is a decisive factor in the speed of any 3D game engine. Texture management is a good thing; it means getting the card to use each texture only once rather than repeatedly. That sounds a bit contradictory, but in effect it means saying to the card, "Look, all these polygons use this single texture; can we just load it once instead of many times?" This stops the API (or the graphics driver software) from uploading a texture to the card more than once. An API like OpenGL usually handles texture cache management itself, meaning that, based on rules such as how often a texture is accessed, the API decides which textures live on the card and which stay in main memory. The real problems are: a) you often can't know the exact rules the API is using, and b) you often try to draw more textures in a frame than there is room for in the card's memory.
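To illustrate the kind of rule an API might apply internally (this is a toy model, not how OpenGL actually does it), here is a least-recently-used texture cache; every miss stands for another upload across the AGP bus:

```cpp
#include <cstddef>
#include <list>
#include <unordered_map>
#include <utility>

// A toy least-recently-used texture cache: when card memory would overflow,
// evict the texture that hasn't been touched for the longest time. Real
// drivers apply rules like this internally -- rules you usually can't see,
// which is exactly the problem described above.
class TextureCache {
    std::size_t capacity_, used_ = 0;
    std::list<int> lru_;                                  // front = most recent
    std::unordered_map<int,
        std::pair<std::list<int>::iterator, std::size_t>> map_;
public:
    explicit TextureCache(std::size_t bytes) : capacity_(bytes) {}

    // Returns true on a hit; on a miss, "uploads" the texture, evicting
    // the stalest entries until it fits.
    bool request(int textureId, std::size_t bytes) {
        auto it = map_.find(textureId);
        if (it != map_.end()) {                           // hit: mark as fresh
            lru_.splice(lru_.begin(), lru_, it->second.first);
            return true;
        }
        while (used_ + bytes > capacity_ && !lru_.empty()) {  // thrash point
            int victim = lru_.back(); lru_.pop_back();
            used_ -= map_[victim].second;
            map_.erase(victim);
        }
        lru_.push_front(textureId);
        map_[textureId] = { lru_.begin(), bytes };
        used_ += bytes;
        return false;                                     // miss: AGP upload
    }
};
```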
Another texture cache management technique is the texture compression we discussed earlier, much as audio waveform files are compressed into MP3 files. Texture compression can't reach the kind of ratio that WAV-to-MP3 compression achieves; the schemes most hardware supports are far more modest, but even so they make a big difference. Better still, during rendering the hardware decompresses the texture on the fly, only as needed. That's great, and we're only scratching the surface of what this will be used for.
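For a sense of the arithmetic: S3TC/DXT1, one widely supported hardware scheme, packs each 4x4 block of texels into 8 bytes, and the card decompresses blocks on the fly at render time. A small sketch of the size calculation:

```cpp
#include <cstddef>
#include <cstdio>

// DXT1 stores each 4x4 texel block in 8 bytes, regardless of content.
std::size_t dxt1Bytes(int width, int height) {
    int blocksX = (width  + 3) / 4;    // round up to whole 4x4 blocks
    int blocksY = (height + 3) / 4;
    return static_cast<std::size_t>(blocksX) * blocksY * 8;
}

int main() {
    int w = 256, h = 256;
    std::size_t raw = static_cast<std::size_t>(w) * h * 4;   // 32-bit RGBA
    std::printf("raw: %zu bytes, DXT1: %zu bytes\n", raw, dxt1Bytes(w, h));
    // raw: 262144 bytes, DXT1: 32768 bytes -- an 8:1 saving for 32-bit art
}
```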
Beyond that, as mentioned above, there is the technique of ensuring the renderer asks the card for each texture only once: make sure all the polygons you want to render (draw) that use the same texture are sent to the card together, rather than doing one model here, another model there, and then coming back to the original texture again. Draw it once, and you transfer it across the AGP interface once. Quake III does this with its shader system. As polygons are processed, they are added to an internal shader list; once all the polygons have been processed, the renderer walks the texture list, sending each texture along with all the polygons that use it at the same time.
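A sketch of that batching idea (the submitBatch function is a stand-in for a real draw call; nothing here is Quake III's actual code):

```cpp
#include <cstdio>
#include <map>
#include <utility>
#include <vector>

struct Polygon { /* vertex data ... */ };

// Stand-in for the real thing: bind the texture once, draw every polygon.
void submitBatch(int textureId, const std::vector<const Polygon*>& polys) {
    std::printf("texture %d: %zu polygons in one batch\n",
                textureId, polys.size());
}

// Bucket polygons by texture as they are processed, then submit each bucket
// in one go -- the texture crosses the AGP bus once, followed by every
// polygon that uses it.
void renderByTexture(const std::vector<std::pair<int, Polygon>>& scene) {
    std::map<int, std::vector<const Polygon*>> batches;   // textureId -> polys
    for (const auto& [textureId, poly] : scene)
        batches[textureId].push_back(&poly);              // build the lists
    for (const auto& [textureId, polys] : batches)
        submitBatch(textureId, polys);                    // one upload each
}
```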
The above process tends to work against using the card's hardware T&L (if it's supported). What you end up with is lots of small groups of polygons all over the screen sharing the same texture but all using different transformation matrices. That means more time spent setting up the card's hardware T&L engine, and more time wasted. It still works fine for the actual models on screen, though, since they tend to use a single uniform texture across the whole model. But world rendering is often hell for the renderer, because many polygons tend to share the same wall textures. It usually isn't too serious because, by and large, world textures don't get that big, so the API's texture cache system handles it for you and keeps the textures on the card for reuse.
On game consoles there is usually no texture cache system (unless you write one). On the PS2 you're better off staying away from the "texture once" approach. On the Xbox it doesn't matter, since it has no dedicated graphics memory (it's a UMA architecture) and all the textures live in main memory all the time anyway.
In fact, trying to push lots of textures across the AGP interface is the second most common bottleneck in today's modern PC FPS games. The biggest bottleneck is the actual geometry processing, the work that makes things appear where they are supposed to appear. In today's 3D FPS games, by far the most time goes into the math that puts each vertex of a model in the correct place. If you don't keep your scene textures within budget, pushing masses of textures across the AGP interface comes a close second. You do have some power to influence this, though. By dropping the top MIP level (remember how the system keeps making successively smaller versions of your textures?), you halve the dimensions of every texture the system tries to send to the card. Your visual quality drops, especially in close-up cinematic sequences, but your frame rate goes up. This approach is especially helpful for online games. In fact, both Soldier of Fortune II and Jedi Knight II: Outcast were designed for graphics cards that aren't even mainstream yet; to see their textures at maximum size, your 3D card needs more memory than most cards on the market carry. Both products are designed with the future in mind.
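Sketching that MIP-level trick (the types are invented for the example): skipping the top level before upload means the largest copy of each texture never crosses the bus at all, cutting both memory and AGP traffic.

```cpp
#include <cstddef>
#include <vector>

struct MipLevel { int width, height; /* texel data ... */ };

// Drop the top (largest) MIP level(s) before upload: every dimension is
// halved per skipped level, so each texture costs roughly a quarter of the
// memory and bandwidth per level skipped, at the price of up-close sharpness.
std::vector<MipLevel> uploadChain(const std::vector<MipLevel>& chain,
                                  int levelsToSkip) {
    std::vector<MipLevel> sent;
    for (std::size_t i = static_cast<std::size_t>(levelsToSkip);
         i < chain.size(); ++i)
        sent.push_back(chain[i]);   // level 0 (and more) never leave main memory
    return sent;
}
```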
That's it for Part 1. In the sections that follow we'll cover many more topics, including memory management, fog, depth testing, anti-aliasing, vertex shading, and APIs.