Original Author: Jake Simpson
Translator: Xianghai
Part 5: Physics, Motion, and Effects
World Building
When creating a game with any 3D component, you are trying to build a 3D environment in which the game will take place. Somehow the developer has to provide a way to build such an environment, one that is easy to modify, efficient, and low in polygon count, so that it is both easy to render and easy to apply game physics to. Simple, right? Sure, and I'll do it with my left hand while I'm at it. Yeah. Right.
Although there are many 3D construction programs, from CAD/CAM packages to 3D Studio MAX, building a game world is different from building interior or exterior models. You have a triangle-count problem: any given renderer can only push so many polygons per frame, and that number is never enough for an inspired level designer. On top of that, you can only store a predetermined number of polygons per level, so even if your renderer can handle 250,000 polygons in view at once, if you can only fit 500,000 polygons per level in a reasonable amount of space, then depending on how you handle it your levels could end up as small as two rooms. Not good.
Either way, the developer needs to come up with a creation tool, ideally one flexible enough to support the various things the game engine requires: placing objects in the world, previewing the level before entering the game, accurate lighting previews, and so on. These capabilities let level designers see how a level will look in the game before spending three hours preprocessing it into an 'engine-digestible' format. Developers need statistics about the level: polygon counts, mesh counts, and the like. They need a friendly way to apply textures to the world's surfaces, easy access to polygon-reduction tools, and more. The list goes on.
You might find this functionality in an existing tool. Many developers use MAX or Maya to build their levels, but even 3D Studio MAX needs task-specific extensions to handle level building efficiently. You might even use a dedicated level-building tool, such as QERadiant, and post-process its output into a format your engine can interpret.
Can't see it? Then don't bother...
Looking back at the BSP (Binary Space Partitioning) trees we discussed in the first part, you may also have heard the term "Potentially Visible Set (PVS)" discussed everywhere. Without getting into the complicated mathematics involved, both have the same goal: to break the world down into the smallest possible subset of walls that can be seen from any given position in it. When implemented, they return only the walls you can actually see, not the ones hidden behind other walls that may be occluding them. You can imagine the benefit this brings to a software renderer, where every pixel it renders (and, quite possibly, re-renders) is precious. They also return the walls in back-to-front order, which is very convenient during rendering, because you can determine exactly where an object falls in the rendering order.
In general, BSP trees have fallen out of favor recently, partly because of their quirks, and because with the pixel throughput we get from today's 3D cards, combined with Z-buffer pixel testing, a BSP is often a redundant step. They are handy for working out exactly where you are in the world and what geometry is around you, but there are often better, more intuitive ways to store that information than a BSP tree.
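To make the back-to-front property concrete, here is a minimal sketch of a BSP node and a traversal that yields polygons farthest-first from the camera. All names and the plane representation are invented for illustration; this is not any particular engine's code.

```python
# Hypothetical sketch of a BSP tree and back-to-front traversal.

class BSPNode:
    def __init__(self, plane, polygons, front=None, back=None):
        self.plane = plane        # (normal, d): points x with dot(normal, x) == d
        self.polygons = polygons  # polygons lying on this node's splitting plane
        self.front = front        # subtree on the positive side of the plane
        self.back = back          # subtree on the negative side

def side_of(plane, point):
    """Signed distance of a point from a node's splitting plane."""
    (nx, ny, nz), d = plane
    px, py, pz = point
    return nx * px + ny * py + nz * pz - d

def back_to_front(node, camera):
    """Yield polygons in back-to-front order relative to the camera."""
    if node is None:
        return
    if side_of(node.plane, camera) >= 0:
        # Camera is on the front side: visit the far (back) subtree first.
        yield from back_to_front(node.back, camera)
        yield from node.polygons
        yield from back_to_front(node.front, camera)
    else:
        yield from back_to_front(node.front, camera)
        yield from node.polygons
        yield from back_to_front(node.back, camera)
```

This is exactly what made BSP trees attractive to software renderers: the draw order falls out of a simple recursive walk, with one plane test per node and no sorting.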
The potentially visible set is pretty much what it sounds like: a method that, at any given time, given your position in the world, determines which surfaces of the world, and which objects in it, are actually visible. Objects are then culled before rendering, and they are also culled to cut down AI and animation processing. After all, if you can't see them, why bother? It really doesn't matter much whether a non-player character (NPC) is playing an animation, or even whether the AI driving it is running at all.
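A sketch of how a precomputed PVS might be consulted at runtime. The leaf names, the table, and the entity representation are all made up for the example; the point is only that one set lookup lets you skip rendering, animation, and AI for anything outside the camera's potentially visible set.

```python
# Hypothetical precomputed PVS: each leaf of the world maps to the set
# of leaves potentially visible from anywhere inside it.
pvs = {
    "hall":  {"hall", "lobby"},
    "lobby": {"hall", "lobby", "vault"},
    "vault": {"lobby", "vault"},
}

def visible_entities(camera_leaf, entities):
    """Return only the entities worth processing this frame.

    `entities` is a list of (name, leaf) pairs; anything whose leaf is
    outside the camera leaf's PVS is skipped entirely: no draw call,
    no animation tick, no AI think.
    """
    visible = pvs[camera_leaf]
    return [name for name, leaf in entities if leaf in visible]
```

The same precomputed table serves the renderer, the animation system, and the AI scheduler, which is why the PVS pays for itself well beyond pure rendering.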
Game Physics
Now that we have the world's structure in memory, we must keep our characters from falling out of it, and handle floors, slopes, walls, doors, and moving platforms. We must also correctly handle gravity, changes in velocity, inertia, and collisions with other objects placed in the world. This is what's regarded as game physics. Before going any further, I want to dispel a myth right here: any time you see physics in a game world, or anyone claims "real physics" in a complex game environment, well, it's BS. Over 80% of the effort in building an effective game physics system goes into simplifying the real equations used to handle objects in the world. Even then, you often ignore what is "real" and create something that is "fun", because that, after all, is the goal.
Games often ignore real-world Newtonian physics and implement their own, more entertaining version of reality. In Quake II, for example, you can accelerate from 0 to 35 mph instantly and stop just as fast. There is no friction, and slopes don't impose the gravity problems that real slopes do. Bodies don't have gravity acting on all their joints; you never see a body slump over a table or an edge the way it would in real life. Gravity itself may even be variable. Let's face it, in the real world spacecraft don't handle the way WWII fighters do in the atmosphere; in space it is all force and counter-force, with forces acting around the center of mass, and so on. Not like Luke Skywalker's X-Wing. But the X-Wing way is a lot more fun!
As game developers, whatever else we do, we need to detect walls, detect floors, and handle collisions with other objects in the world. These are essential to a modern game engine; what we decide to do beyond that depends on us and on what our game needs.
Effects System
Most game engines built today include some sort of effects generator, which lets us show off all the cute eye candy that discerning gamers have come to expect. What happens behind the scenes of the effects system, however, can greatly affect the frame rate, so it is something we need to be particularly concerned about. These days we have great 3D cards; we can throw vast numbers of triangles at them and they will still ask for more. Not always true. In Heretic II, Raven ran into a very serious overdraw problem in its lovely software rendering mode because of its pretty spell effects. Recall that overdraw happens when you draw the same pixel on screen more than once. When you have many effects in progress you have, by their nature, many triangles, and multiple triangles can stack on top of one another. The result is pixels drawn over and over again. Add alpha blending to that and it means reading the old pixel and blending it with the new one before writing it back, which eats even more CPU time.
Some of Heretic II's effects illustrate this: at one point we were effectively redrawing the entire screen forty times over in a single frame. Surprising, right? So they implemented a system in the effects engine that sampled the frame rate over the preceding thirty frames. If the speed started to drop, it automatically reduced the number of triangles drawn for any given effect. That keeps the main work done and the frame rate up, even if some effects end up looking ugly.
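A sketch of that style of frame-rate-driven throttling. The window size, target, and minimum scale are invented numbers, and this is loosely modeled on the scheme described above, not Raven's actual code.

```python
from collections import deque

# Hypothetical frame-rate-driven effect throttle: keep a rolling sample
# of recent frame times and scale down effect triangle counts when the
# measured frame rate falls below the target.

class EffectThrottle:
    def __init__(self, target_fps=30.0, window=30):
        self.target_fps = target_fps
        self.frame_times = deque(maxlen=window)  # rolling frame-time sample

    def record_frame(self, dt):
        """Call once per frame with the frame's duration in seconds."""
        self.frame_times.append(dt)

    def detail_scale(self):
        """Fraction of each effect's triangles to actually draw (0.25..1.0)."""
        if not self.frame_times:
            return 1.0
        fps = len(self.frame_times) / sum(self.frame_times)
        if fps >= self.target_fps:
            return 1.0
        return max(0.25, fps / self.target_fps)
```

An effect that wants to emit 400 particles would multiply by `detail_scale()` first, so a half-speed frame rate halves the particle load instead of dragging the whole game down.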
Either way, because most of today's effects tend to use lots of small particles to simulate fire, smoke, and the like, you end up handling a great many triangles in the effects code each frame. You have to move them from one frame to the next, decide whether they are finished, and even apply physics to them so they bounce properly off the floor. This is expensive on the PC, so you have to place practical limits on what you can do. Generating a fire out of single-pixel particles may look good, for example, but don't expect to do much else on screen while it's going.
Particles are defined as very small, drawable objects that have a position and a velocity of their own. They are distinct from sprites, which are oriented polygons; larger effects use sprites, a puff of smoke, for example. Sprites rotate, scale, and change their transparency automatically, and typically face the camera, so they can fade out over time. We can afford to show huge numbers of particles, but we limit the number of sprites, although the real difference between the two is blurring nowadays. In the future, especially once DX9 and more advanced graphics hardware surface, we may well see more people using pixel shaders to produce effects similar to, or better than, particle systems, creating amazing animated effects.
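The per-frame particle bookkeeping described above can be sketched in a few lines. The field names and the linear alpha fade are invented for illustration; a real system would batch this work far more aggressively.

```python
# Minimal particle sketch: integrate motion, age each particle, fade
# its alpha out over its lifetime, and drop it when it expires.

class Particle:
    def __init__(self, pos, vel, lifetime):
        self.pos = list(pos)        # (x, y, z) position
        self.vel = list(vel)        # (x, y, z) velocity per second
        self.age = 0.0
        self.lifetime = lifetime    # seconds until the particle dies
        self.alpha = 1.0            # transparency, fades to 0 with age

    def update(self, dt):
        """Advance one frame; return False once the particle has expired."""
        self.age += dt
        if self.age >= self.lifetime:
            return False
        for i in range(3):
            self.pos[i] += self.vel[i] * dt
        self.alpha = 1.0 - self.age / self.lifetime  # linear fade-out
        return True

def update_particles(particles, dt):
    """Advance every live particle, discarding the dead ones."""
    return [p for p in particles if p.update(dt)]
```

Even this trivial loop touches every particle every frame, which is why particle counts, not triangle size, are usually the budget that matters.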
When talking about effects systems, you may hear the word 'primitive'. A primitive is the lowest level of physical representation that your effects system deals with. Going further down, to the renderer a triangle is a primitive; that is what most engines ultimately draw at the bottom: lots and lots of triangles. As you move up the system, the definition of a primitive changes. A high-level game programmer, for example, doesn't want to think about individual triangles. He just wants to say "this effect happens here" and have the system handle it as a black box. To him, a primitive is "make a bunch of particles persist in the world for this long, with this much gravity applied". Inside the effects system, a primitive may be each individual effect it spawns, a group of triangles that all follow the same physics rules; it then hands all those separate triangles to the renderer for drawing, so at the renderer level the primitive is an individual triangle. A little confusing, but you get the general idea.
That concludes the fifth part. The next part covers the sound system: the variety of different audio APIs, 3D audio effects, occlusion and obstruction handling, the effect of different materials on sound, audio mixing, and so on.