Game Engine Analysis (III)

Original Author: Jake Simpson
Translator: Xianghai

Part 1: Memory Usage, Special Effects, and APIs

Thoughts on memory usage
Let's think about how 3D card memory gets used, and how it will be used in the future. Today, most 3D cards handle 32-bit color: 8 bits of red, 8 bits of green, 8 bits of blue, and 8 bits of alpha (transparency). The combined red, green, and blue values can represent about 16.7 million colors - all the colors you can see on a monitor.

So why does game design guru John Carmack call for 64-bit color resolution? If we can't see the difference, what's the point? The point is this: suppose a dozen or more lights of different colors hit a model. We take the model's initial color and compute the contribution of one light; the model's color value changes. Then we compute another light, and the color value changes further. The problem is that, because each color component is only 8 bits, after computing four lights those 8 bits no longer give enough resolution to produce a good final color. The lack of resolution is caused by quantization error, which is essentially rounding error that accumulates because there are not enough bits.
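As a rough illustration of that accumulation, here is a minimal, hypothetical sketch (not from the article) comparing a color channel kept in floating point with one that is rounded back to 8-bit storage after every lighting pass; the constants are made up purely to show the drift.

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // Hypothetical example: four dim lights, each scaling a red channel
    // by 0.48. One value keeps full float precision; the other is rounded
    // back to an 8-bit integer after every lighting pass, the way an
    // 8-bit-per-channel frame buffer would store it.
    float   preciseRed = 255.0f;
    uint8_t quantRed   = 255;

    for (int light = 0; light < 4; ++light) {
        preciseRed *= 0.48f;
        // Truncating to a whole number after each pass is where the
        // quantization (rounding) error creeps in.
        quantRed = static_cast<uint8_t>(quantRed * 0.48f);
    }

    // The 8-bit value drifts further from the true value with every pass.
    printf("float result: %.2f\n", preciseRed);
    printf("8-bit result: %u\n", static_cast<unsigned>(quantRed));
    return 0;
}
```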

If you have more bits per component, say 16 or even 32 bits per color channel, you have the extra resolution needed to end up with an appropriate final color. But that kind of color depth quickly consumes a large amount of storage. We should also mention video memory and texture memory as a whole. The point is that every 3D card has only a limited amount of memory, and that memory has to hold the front and back buffers, the Z-buffer, and all of those wonderful textures. The original Voodoo1 card had only 2 MB of video memory; the Riva TNT went up to 16 MB. Then the GeForce and ATI Rage had 32 MB. Today some GeForce 2 through 4 cards and Radeons carry 64 MB to 128 MB. Why does this matter? Well, let's look at some numbers...

Say you want your game to look its best, so you run it at 1280x1024 with a 32-bit screen and a 32-bit Z-buffer. That's 4 bytes per pixel for the screen plus 4 bytes per pixel for the Z-buffer, since both are 32 bits per pixel. We have 1280x1024 pixels, which is 1,310,720 pixels. Counting the bytes for the front buffer and the Z-buffer, we multiply that by 8, giving 10,485,760 bytes. Add a back buffer and it becomes 1280x1024x12, or 15,728,640 bytes - about 15 MB. On a 16 MB card, that leaves only 1 MB for all of the textures. If the original textures are true 32-bit, that is, 4 bytes per texel, we can store 1 MB / 4 bytes per pixel = 262,144 pixels on the card. That's roughly four 256x256 texture pages.
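To make the arithmetic above concrete, here is a small sketch that simply redoes the same byte counting; the numbers are the ones from the paragraph, not measurements from any particular card.

```cpp
#include <cstdio>

int main() {
    // Rough video-memory budget for the configuration in the text:
    // 1280x1024, 32-bit front buffer, 32-bit back buffer, 32-bit Z-buffer.
    const long width  = 1280;
    const long height = 1024;
    const long bytesPerPixel = 4;                        // 32 bits = 4 bytes
    const long pixels = width * height;                  // 1,310,720 pixels

    const long frontAndZ = pixels * bytesPerPixel * 2;   // 10,485,760 bytes
    const long withBack  = pixels * bytesPerPixel * 3;   // 15,728,640 bytes (~15 MB)

    const long cardMemory = 16L * 1024 * 1024;           // a 16 MB card
    const long leftForTextures = cardMemory - withBack;  // ~1 MB

    // 32-bit textures cost 4 bytes per texel, so:
    const long texels   = leftForTextures / 4;           // 262,144 texels
    const long pages256 = texels / (256 * 256);          // about four 256x256 pages

    printf("front + Z buffers: %ld bytes\n", frontAndZ);
    printf("plus back buffer : %ld bytes\n", withBack);
    printf("left for textures: %ld bytes (%ld texels, %ld 256x256 pages)\n",
           leftForTextures, texels, pages256);
    return 0;
}
```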

The example above makes it clear that an old 16 MB card simply does not have enough memory for a modern game to show off its pretty pictures. Obviously we would have to re-upload textures to the card every frame while it draws the picture. In fact, that is exactly the task the AGP bus was designed for. However, AGP is still slower than the 3D card's own frame-buffer memory, so you take some performance hit. Obviously, if you drop textures from 32-bit to 16-bit, you can push twice as many textures, at lower color resolution, across AGP. If your game runs at a lower color depth per pixel, you also have more display memory left over to keep frequently used textures resident (this is called texture caching). But in practice you can never predict how users will set up their systems. If they have a card that can run at high resolution and color depth, chances are that is how they will set it.

Fog
Now let's talk about fog, which is a visual effect in its own right. Most engines today can handle fog, because fog is very handy for fading out the far end of the world, so that models and scene geometry don't suddenly pop into view in the distance as they cross the far clipping plane and come into visual range. There is also a technique called volumetric fog. This kind of fog is not determined by an object's distance from the camera; it is an actual volume in the world that you can see, travel into, and come out the other side of, and while you are passing through it the amount of visible fog changes. Imagine flying through a cloud - a perfect example of volumetric fog. Some good implementations of volumetric fog are the red fog in some Quake III levels, or the GameCube version of LucasArts' new Rogue Squadron II, which has some of the best clouds I have ever seen - about as real as they get.
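For the distance-based fog described above, a common formulation (a sketch, not necessarily what any of the engines mentioned use) is a linear fade between a fog-start and a fog-end distance, blending the object's color toward the fog color:

```cpp
#include <algorithm>

struct Color { float r, g, b; };

// Classic distance (linear) fog: objects fade to the fog color as they
// approach the far end of the fog range. The names and ranges here are
// illustrative, not taken from any particular engine.
Color ApplyLinearFog(const Color& object, const Color& fogColor,
                     float distance, float fogStart, float fogEnd) {
    // f = 1 inside fogStart (no fog), f = 0 at fogEnd (fully fogged).
    float f = (fogEnd - distance) / (fogEnd - fogStart);
    f = std::clamp(f, 0.0f, 1.0f);
    return { f * object.r + (1.0f - f) * fogColor.r,
             f * object.g + (1.0f - f) * fogColor.g,
             f * object.b + (1.0f - f) * fogColor.b };
}
```

Volumetric fog works differently: the fog amount depends on how much of the fog volume the view ray passes through, not simply on distance from the camera.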

While we're discussing fog, this is a good time to briefly introduce alpha testing and texture alpha blending. When the renderer goes to draw a particular pixel on the screen, assuming it has passed the Z-buffer test (defined below), we may end up doing some alpha tests. We may discover that the pixel needs to be drawn translucently so that whatever is behind it shows through. That means we have to fetch the pixel's existing value, blend it with our new value, and write the blended result back to the same location. This is called a read-modify-write operation, and it is far more time-consuming than an ordinary pixel write.

There are different kinds of blending you can do, and these different effects are called blend modes. Straight alpha blending adds a percentage of the background pixel to the inverse percentage of the new pixel. There is also additive blending, which takes a percentage of the old pixel and simply adds a specific amount of the new pixel (rather than a percentage). The result looks brighter and more striking. (Think of the effect of Kyle's lightsaber in Jedi Knight II.)
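Here is a minimal sketch of the two blend modes just described, written per color channel as the read-modify-write the renderer performs; the exact math varies by API and hardware, so treat this as illustrative:

```cpp
#include <cstdint>
#include <algorithm>

// "src" is the new pixel, "dst" is what is already in the frame buffer,
// and "alpha" is the translucency of the new pixel, 0..255.

// Straight alpha blending: a percentage of the source plus the inverse
// percentage of the destination.
uint8_t AlphaBlend(uint8_t src, uint8_t dst, uint8_t alpha) {
    return static_cast<uint8_t>((src * alpha + dst * (255 - alpha)) / 255);
}

// Additive blending: the source (scaled by alpha) is simply added on top
// of the destination, clamped so the channel cannot overflow. This is what
// makes effects like lightsabers and muzzle flashes look so bright.
uint8_t AdditiveBlend(uint8_t src, uint8_t dst, uint8_t alpha) {
    int result = dst + (src * alpha) / 255;
    return static_cast<uint8_t>(std::min(result, 255));
}
```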
 
Every time a vendor ships a new video card, we get hardware support for newer, more complex blend modes, which makes ever more dazzling effects possible. The pixel operations offered by the GF3/4 and the latest Radeon cards push this about as far as it currently goes.

Stencil shadows and depth testing
Using the stencil buffer to produce shadows makes things complex and expensive. I won't go into much detail here (it could be an article of its own), but the idea is to render a view of the model from the perspective of the light source, and then use the resulting polygon texture shape to generate, or project, the shadow onto the affected surfaces.

You are effectively casting the light's view into the scene so that it "falls" onto the other polygons in the field of view. In the end you get what looks like real lighting, with correct perspective built in. But dynamically creating textures and drawing the same scene multiple times is expensive.

There are many different ways to produce shadows, and as is often the case, rendering quality is proportional to the rendering work needed to produce the effect. There are so-called hard shadows and soft shadows, and the latter are preferable because they more accurately mimic how shadows behave in the real world. There are usually a few "good enough" methods favored by game developers. For more about shadows, see Dave Salvator's 3D Pipeline article.

Depth testing
Now let's move on to depth testing. Depth testing discards hidden pixels, and this is where overdraw comes in. Overdraw is simple: it is the number of times you draw the same pixel location in a single frame. It depends on how many elements are stacked in the Z (depth) direction of the 3D scene, and it is also known as depth complexity. If you overdraw often enough, for example with flashy magic visual effects like those in Heretic II, you can ruin your fill rate. Some of the effects initially designed for Heretic II had the same pixels on screen being drawn 40 times in a single frame when people were casting spells at each other! Needless to say, this had to be adjusted, especially for the software renderer, which simply could not handle that load without reducing the game to something like a slideshow. Depth testing is the technique used to determine which objects are in front of other objects at the same pixel location, so that we can avoid drawing the hidden ones.

Look at the scene in front of you and think about what you can't see. In other words, which scene objects are in front of, and hide, other scene objects? That is the determination the depth test makes.

Let me explain a bit more about how depth testing helps fill rate. Imagine a fairly detailed scene with lots of polygons (and therefore pixels) stacked behind one another, with no quick way for the renderer to discard them. By sorting the non-alpha polygons front to back (in Z), you render the polygons closest to you first, filling the screen with the nearest pixels. Then, when you come to render the pixels behind them (as determined by the Z, or depth, test), those pixels are quickly discarded, avoiding the blending steps and saving time. If you drew back to front instead, every hidden object would be fully drawn and then overdrawn by the objects in front of it. The more complex the scene, the worse this gets, so depth testing is a good thing.
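A minimal software sketch of the depth test itself, assuming smaller depth values mean closer to the camera (conventions differ between APIs):

```cpp
#include <cstdint>
#include <vector>

// A simple software Z-buffer, as a sketch of the idea in the text.
struct DepthBuffer {
    int width;
    std::vector<float> depth;   // one depth value per pixel, 1.0 = far plane

    DepthBuffer(int w, int h)
        : width(w), depth(static_cast<size_t>(w) * h, 1.0f) {}

    // Returns true if the new pixel is in front of what is already stored;
    // the stored depth is updated and the caller should shade and write the
    // pixel. Returns false if the pixel is hidden: it is rejected before any
    // expensive texturing or blending is done.
    bool TestAndWrite(int x, int y, float z) {
        float& stored = depth[static_cast<size_t>(y) * width + x];
        if (z >= stored)
            return false;       // behind something already drawn: discard
        stored = z;
        return true;
    }
};
```

When polygons are submitted front to back, most later calls fail the test and return early, which is exactly the fill-rate saving described above.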

Anti-aliasing
Let's take a quick look at anti-aliasing. When rendering an individual polygon, the 3D card takes a careful look at what has already been rendered and softens the edges of the new polygon so that you don't get obviously jagged pixel edges. One of two techniques is usually used. The first works at the individual polygon level and requires you to render the polygons from the back of the view toward the front, so that each polygon can be blended properly with what lies behind it. If you don't render in order, you end up with all sorts of strange artifacts. The second method renders the whole frame at a higher resolution than will actually be displayed; then, when the image is scaled down, the sharp jagged edges disappear. This second approach gives good results, but because the card has to render more pixels than end up in the final frame, it requires a large amount of memory and high memory bandwidth.
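As a sketch of the second method (supersampling), here is a simple 2x downsample that averages each 2x2 block of high-resolution samples into one displayed pixel; real hardware uses more elaborate sample patterns and filters, so this only shows the idea:

```cpp
#include <cstdint>
#include <vector>

struct Pixel { uint8_t r, g, b; };

// "hi" is the scene rendered at twice the output resolution in each
// direction; the result is the outW x outH image actually displayed.
std::vector<Pixel> Downsample2x(const std::vector<Pixel>& hi, int outW, int outH) {
    std::vector<Pixel> out(static_cast<size_t>(outW) * outH);
    const int hiW = outW * 2;
    for (int y = 0; y < outH; ++y) {
        for (int x = 0; x < outW; ++x) {
            int r = 0, g = 0, b = 0;
            // Average the 2x2 block of high-resolution samples.
            for (int sy = 0; sy < 2; ++sy)
                for (int sx = 0; sx < 2; ++sx) {
                    const Pixel& p =
                        hi[static_cast<size_t>(y * 2 + sy) * hiW + (x * 2 + sx)];
                    r += p.r; g += p.g; b += p.b;
                }
            out[static_cast<size_t>(y) * outW + x] = {
                static_cast<uint8_t>(r / 4),
                static_cast<uint8_t>(g / 4),
                static_cast<uint8_t>(b / 4) };
        }
    }
    return out;
}
```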

Most new cards handle this well, but there are still a variety of anti-aliasing modes available to choose from, so you can trade performance against quality. For a more detailed discussion of today's popular anti-aliasing techniques, see Dave Salvator's 3D Pipeline article.

Vertex and pixel shaders
Before we finish our discussion of rendering technology, let's quickly talk about vertex and pixel shaders, which have been getting a lot of attention recently. Vertex shaders are a way of using the hardware features of the card directly, without going through the API. For example, if the card supports hardware T&L, you can program in DirectX or OpenGL and hope your vertices go through the T&L unit (there is no way to be sure, since it is all handled by the driver), or you can go straight at the card's hardware and use vertex shaders. They allow you to write code specifically targeted at the features of the card: your own special code that uses the T&L engine, and whatever else the card has to offer, to your maximum advantage. In fact, NVIDIA and ATI now provide this feature on a large number of their cards.

Unfortunately, the way vertex shaders are specified is not consistent across cards. Unlike with DirectX or OpenGL, you cannot write vertex shader code once and have it run on any card, which is bad news. However, because you are talking directly to the card's hardware, it offers the greatest promise of fast rendering for the effects vertex shaders make possible (as well as cool effects: you can use vertex shaders to affect things in ways the APIs simply don't provide). In fact, vertex shaders are bringing 3D graphics cards back to the coding style of the game consoles: direct access to the hardware, and intimate knowledge of how to get the most out of the system, rather than relying on the API to do everything for you. Some programmers will be surprised by this style of coding, but that is the price of progress.

To elaborate: a vertex shader is a program or routine that computes effects on a vertex, and is run before the vertex is submitted to the card for rendering. You can do this in software on the main CPU, or use a vertex shader on the card. Transforming the mesh of an animated model is a prime candidate for a vertex program.
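The following is a CPU-side sketch of the kind of work a vertex program does, using a made-up wave deformation plus a rotation as the "effect"; a hardware vertex shader would run equivalent per-vertex code on the card instead:

```cpp
#include <vector>
#include <cmath>

struct Vec3 { float x, y, z; };

// Transform each mesh vertex before it is handed to the card. The wave
// deformation and rotation here are purely illustrative.
void DeformAndRotate(std::vector<Vec3>& vertices, float time, float angle) {
    const float s = std::sin(angle);
    const float c = std::cos(angle);
    for (Vec3& v : vertices) {
        // Animated deformation: bob the vertex up and down based on its
        // position and the current time.
        v.y += 0.1f * std::sin(v.x * 4.0f + time);

        // Simple rotation about the Y axis (part of the model transform).
        const float x =  v.x * c + v.z * s;
        const float z = -v.x * s + v.z * c;
        v.x = x;
        v.z = z;
    }
}
```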

Pixel shaders are routines you write that are executed per pixel when a texture is drawn. You effectively override the blend-mode operations the card's hardware would normally perform with your own new routines. This lets you do some very nice pixel effects, such as blurring textures in the distance, adding gun smoke, and creating reflections in water. Once ATI and NVIDIA can agree on a pixel shader version (and DX9's new high-level shading language will help push that along), I wouldn't be surprised to see DirectX and OpenGL go the way of Glide: helpful to get started, but not the best way to push any card to its limits. I for one will be interested to watch what happens.
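As one example of the "blurred textures in the distance" idea, here is a hypothetical per-pixel routine, written as CPU code for clarity, that fades each texel toward a pre-blurred copy of the texture as depth increases. The pre-blurred copy and the depth range are assumptions made purely for illustration; a real pixel shader would do this work on the card.

```cpp
#include <cstdint>
#include <algorithm>

struct Texel { uint8_t r, g, b; };

// Blend between the sharp texel and a pre-blurred texel based on depth:
// 0 = fully sharp (near), 1 = fully blurred (far).
Texel ShadePixel(const Texel& sharp, const Texel& blurred,
                 float depth, float blurStart, float blurEnd) {
    float t = (depth - blurStart) / (blurEnd - blurStart);
    t = std::clamp(t, 0.0f, 1.0f);
    return {
        static_cast<uint8_t>(sharp.r + t * (blurred.r - sharp.r)),
        static_cast<uint8_t>(sharp.g + t * (blurred.g - sharp.g)),
        static_cast<uint8_t>(sharp.b + t * (blurred.b - sharp.b)) };
}
```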

In closing...
In the end, the renderer is where game programmers get judged the most. Visual flash counts for a great deal in this business, so it pays to know what you are doing. One of the worst things about being a renderer programmer is the speed at which the 3D card industry changes. One day you are trying to get translucent images to work correctly; the next day NVIDIA is giving presentations on vertex shader programming. And it moves very fast: by and large, code written four years ago for the 3D cards of that era is now outdated and needs to be completely rewritten. Even John Carmack has described it this way: he knows that the fine code he wrote four years ago to wring every ounce of performance out of the cards of that period is merely ordinary now, hence his desire to completely rewrite the renderer for each new id project. Epic's Tim Sweeney agrees; here is a comment he gave me late last year:

We have spent nine months replacing all of the rendering code. The original Unreal was designed for software rendering and later extended for hardware rendering. The next-generation engine is designed for GeForce-class and better graphics cards, with polygon throughput 100 times that of Unreal Tournament.

This requires replacing the renderer wholesale. Fortunately, the engine is modular enough that we have been able to keep the rest of the engine - the editor, physics, AI, and networking - largely intact, though we have been improving those parts in many ways as well.

Sidebar: APIs - blessing and curse
So what is an API? It is an Application Programming Interface: it presents a consistent front end over an inconsistent back end. For example, every kind of 3D card implements 3D in its own quite different way. Yet they all present a consistent front end to the end user or the programmer, so that code written for 3D card X will produce the same results on 3D card Y. Well, so much for the theory. That might have been a fairly true statement about three years ago, but since then, with NVIDIA leading the way, things in the 3D card industry have changed.

Today in the PC world, unless you are planning to build your own software rasterizer and use the CPU to draw all of your sprites, polygons, and particles - and people still do this; like Unreal, Age of Empires II: Age of Kings had an excellent software renderer - you will use one of the two possible graphics APIs, OpenGL or DirectX. OpenGL is a truly cross-platform API (software written with it can run on Linux, Windows, and MacOS), it has many years of history behind it and is well understood, but it is also starting to show its age. Until about four years ago, implementing the OpenGL driver feature set was the target every card vendor was working toward.

However, once that goal was reached, there was no predefined roadmap for where the feature set should go next. That is when the graphics developers all started to diverge from the common feature set, using OpenGL extensions.
 
3dfx created the T-buffer. NVIDIA pushed hardware transform and lighting. Matrox pushed bump mapping. And so on. My earlier statement that "things have changed in the 3D card world over the past few years" puts it politely.

At any rate, the other API of choice is DirectX. It is controlled by Microsoft and superbly supported on the PC and the Xbox. For obvious reasons, DirectX has no Apple or Linux version. Because Microsoft controls DirectX, it is generally better integrated into Windows.

The basic difference between OpenGL and DirectX is that the former is owned by the "community" while the latter is owned by Microsoft. If you want DirectX to support a new feature of your 3D card, you have to lobby Microsoft to accept your wish and then wait for a new DirectX release. With OpenGL, because the card maker supplies the driver for the 3D card, you can get at the card's new features immediately through OpenGL extensions. That is nice, but as a game developer, when you code your game you cannot count on those extensions being widespread. They might speed your game up by 50%, but you cannot require everyone to own a GeForce 3 just to run your game. Well, you can, but if you want to stay in this business for more than a year or two, it is a pretty silly idea.
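In practice that means checking at run time whether an extension is present and falling back if it isn't. Here is a minimal sketch of the classic check; it assumes a valid OpenGL context already exists, header paths vary by platform, and a robust version should match whole extension tokens rather than substrings:

```cpp
#include <cstring>
#include <GL/gl.h>

// Returns true if the driver advertises the named OpenGL extension.
bool HasGLExtension(const char* name) {
    const char* extensions =
        reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return extensions != nullptr && std::strstr(extensions, name) != nullptr;
}

// Usage sketch: pick a code path based on what the driver exposes rather
// than assuming every user's card supports it.
//
//   if (HasGLExtension("GL_ARB_multitexture")) {
//       // use the faster multitexture path
//   } else {
//       // fall back to multiple rendering passes
//   }
```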

This is a great simplification of the issue, and there are exceptions to just about everything I have described, but the general idea holds true. With DirectX you can easily know exactly which features you will get from the card at any given time; if a feature is not available, DirectX will emulate it in software (which is not always a good thing either, because sometimes it is painfully slow, but that is another story). With OpenGL you can get closer to the card's actual features, but at the cost of not being certain exactly which features you will get.
