Game Engine Analysis (I)

Original Author: Jake Simpson
Translator: Xianghai

Part 1: Introduction to game engines, rendering and constructing 3D worlds

Introduction
We have come a long way since the days of Doom. Doom was not just a great game; it also established a new model for game programming: the game "engine". This modular, extensible design lets players and programmers dig into the core of a game and create new games with new models, scenery, and sounds, or add new material to an existing game. A large number of new games are built on licensed engines, most of them on id Software's Quake engine; these include Counter-Strike, Team Fortress, TacOps, Strike Force, and Quake Soccer. TacOps and Strike Force both use the Unreal Tournament engine. In fact, "game engine" has become a standard term among players, but where does the engine end and the game begin? Pixels being rendered, sounds playing, monsters thinking, game events triggering: what lies behind all these scenes in a game? If you have wondered about these things and want to know more about what drives a game, this article will tell you. It examines the core of a game engine in several parts, with particular attention to the Quake engine, because Raven Software, the company where I have been working recently, has developed several games based on it, including the well-known Soldier of Fortune.

Start
Let's first look at the main difference between a game engine and the game itself. Many people confuse the engine with the whole game, which is a bit like confusing a car's engine with the whole car. You can take the engine out of the car, build another shell around it, and use the engine again. Games are like that too. A game engine can be defined as all the non-game-specific technology. The game part is all the content known as "assets" (models, animations, sounds, AI data, and physics data) plus the program code specifically needed to make that particular game run or to control how it runs, such as the AI code.

For those who have looked at Quake's structure, the game engine is quake.exe, while the game itself is qagame.dll and cgame.dll. It doesn't matter if you don't know what that means; I didn't either until someone explained it to me, but by the end you will understand it fully. This game engine guide is divided into eleven parts. Yes, count them: eleven parts in total! Each part runs to about 3,000 words. Now let's begin our exploration with part one and dive into the kernel of the games we play. Here we will cover some basics that pave the way for the later chapters...

Renderer
Let's start our tour of game engine design with the Renderer, and discuss it from the perspective of a game developer (the author's background). In fact, throughout this article we will often take the game developer's point of view, and get you thinking about the problems the way we do!

What is a Renderer, and why is it so important? Well, without it you would see nothing. It visualizes the game scene so that players and viewers can see it, and so that players can make appropriate decisions based on what appears on screen. Some of the discussion below may look a little intimidating to newcomers; don't worry about that for now. What does the Renderer do? Why is it necessary? We will explain these important questions.

When building a game engine, the first thing you usually want to build is the Renderer, because if you can't see anything, how do you know your code is working? More than 50% of CPU processing time is typically spent in the Renderer, and it is usually the part by which game developers are judged most harshly. If we perform poorly here, our technology, our games, and our company can become industry jokes within ten days. It is also where we depend most on outside vendors and outside forces, and where those forces have the greatest potential impact on our goals. Writing a Renderer is not as glamorous as it might sound, but without a good Renderer a game will probably never make a top-ten list.

Nowadays, getting pixels onto the screen involves 3D accelerator cards, APIs, 3D space mathematics, and an understanding of how 3D hardware works. Console games require the same kinds of knowledge, but at least with a console you are not trying to hit a moving target: a console's hardware configuration is a fixed snapshot in time and, unlike a PC's, does not change over the console's lifetime.

In a general sense, the Renderer's job is to create the visual flash of the game, and actually achieving that takes a great deal of skill. 3D graphics is essentially the art of creating the maximum effect with the minimum effort, because extra 3D processing is extremely expensive in both processor time and memory bandwidth. It is also a budgeting exercise: you have to figure out where you want to spend processor time and where you would rather save it, to achieve the best overall effect. Next we will introduce some of the tools of this trade and how to use them to make a game engine work better.

Build a 3D world
Recently, when I talked with someone who has worked in computer graphics for years, she confided that when she first saw real-time 3D computer graphics being manipulated, she had no idea how it was done, or how a computer could store a 3D image. The same is probably true today for the average person on the street, even one who regularly plays PC, console, or arcade games.

Next we will discuss some details of creating a 3D world from the perspective of a game designer. You should also take a look at Dave Salvator's introduction to the 3D pipeline, to get a general picture of how a 3D image is generated.

3D objects are stored as a series of points in the 3D world (called vertices) with a relationship to each other, so the computer knows how to draw lines between those points, or to fill in the surfaces between them. A cube has eight vertices, one for each corner, and six surfaces, one for each face. This is the basis of storing 3D objects. For more complex 3D objects, such as a Quake level, there will be thousands (sometimes hundreds of thousands) of vertices and thousands of polygonal surfaces.
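To make that concrete, here is a minimal C++ sketch (my illustration, not code from the original article) of a cube stored exactly this way: eight vertices, plus twelve index triples describing its six faces as triangles:

    #include <cstdio>

    // A point in 3D space: the basic unit a renderer works with.
    struct Vec3 { float x, y, z; };

    int main() {
        // The eight vertices (corners) of a unit cube.
        Vec3 vertices[8] = {
            {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0},   // four back corners
            {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1}    // four front corners
        };

        // Each of the six faces is split into two triangles, so the
        // surface is twelve triples of indices into the vertex list.
        int triangles[12][3] = {
            {0,1,2},{0,2,3}, {4,6,5},{4,7,6},   // back, front
            {0,4,5},{0,5,1}, {3,2,6},{3,6,7},   // bottom, top
            {0,3,7},{0,7,4}, {1,5,6},{1,6,2}    // left, right
        };

        // A renderer only ever sees data like this: points, plus
        // index lists saying which points form which surfaces.
        for (int i = 0; i < 12; ++i) {
            Vec3 v = vertices[triangles[i][0]];
            std::printf("triangle %d: v%d v%d v%d, starts at (%g, %g, %g)\n",
                        i, triangles[i][0], triangles[i][1], triangles[i][2],
                        v.x, v.y, v.z);
        }
    }

A Quake level is the same idea scaled up: far more vertices and index lists, but still just points and surfaces.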

See the wireframe representation (note: the original text has an image here). It is essentially like the cube example above, just a complex scene composed of many small polygons. How models and the world are stored is the Renderer's concern, not the application/game's. The game logic does not need to know how objects are represented in memory, or how the Renderer will display them. The game only needs to know that the Renderer will represent objects using the correct field of view and display the correct models in the correct animation frames.

In a good engine, the Renderer should be completely replaceable by a new one without changing a single line of the game's code. Many cross-platform engines, and many in-house console engines, are built this way; the Unreal engine, for example, can swap in a GameCube Renderer.
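As a sketch of what "replaceable" means in practice (my own illustration; the class names are hypothetical and not from any shipping engine), the game can talk to an abstract interface while each platform supplies its own implementation behind it:

    #include <cstdio>
    #include <memory>

    // The game only sees this interface; it never knows how a model
    // is stored internally or which API actually draws it.
    class IRenderer {
    public:
        virtual ~IRenderer() = default;
        virtual void DrawModel(int modelId, float x, float y, float z) = 0;
    };

    // One possible backend. A console port supplies another class
    // with the same interface, and the game code is untouched.
    class GLRenderer : public IRenderer {
    public:
        void DrawModel(int modelId, float x, float y, float z) override {
            std::printf("GL: model %d at (%.1f, %.1f, %.1f)\n", modelId, x, y, z);
        }
    };

    int main() {
        std::unique_ptr<IRenderer> renderer = std::make_unique<GLRenderer>();
        renderer->DrawModel(42, 1.0f, 2.0f, 3.0f);  // a game-side call
    }

Porting then means writing a new class behind the interface, not touching the game logic.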

Let's take a look at internal representation. Beyond a plain coordinate system, there are other ways to represent points in space in computer memory: in mathematics you can describe a line or curve with an equation and derive a polygon from it. Almost all 3D display cards use polygons as their final rendering primitive. A primitive is the lowest-level rendering unit you can use on any display card, and almost all hardware works with three-vertex polygons (triangles). The newer generation of nVidia and ATI cards can let you render directly from the mathematical form (known as higher-order surfaces), but since this is not standard across all graphics cards, you cannot yet rely on it as a rendering strategy.

From a computational point of view this is usually expensive, but it is often the basis of new experimental techniques, such as terrain rendering or softening the hard edges of objects. We will say more about these higher-order surfaces in a later section.

Culling Overview
Here the problem arises. I now have a world described by hundreds of thousands of vertices and polygons, and I am standing on one side of this 3D world in first-person view. Some of the world's polygons are in my field of view, while others are not, because objects, such as walls, block them. Even the best game coders cannot push 300,000 triangles per view through a current 3D card and still hold 60fps (a major goal). The card simply cannot handle it, so we must write code that removes invisible polygons before handing anything to the card. That process is called culling.

There are many different culling approaches. Before going deeper, let's discuss why the graphics card cannot handle ultra-high polygon counts. I mean, doesn't the latest card process millions of polygons per second? Shouldn't it cope? First, you have to distinguish between marketed, claimed polygon rates and real-world polygon rates. The claimed rate is the rate the card can achieve in theory.

If all the polygons are already on screen, the same size, using the same texture, and the application feeding polygons to the card is doing nothing else at all, then the card can push through the number of polygons the graphics chip manufacturer advertises.

In a real game scene, however, the application is doing many other things at the same time: transforming 3D polygons, computing lighting, copying large numbers of textures into card memory, and so on. Not only textures have to be sent to the card, but also the details of every polygon. Some newer cards let you store model/world geometry in card memory itself, but this can be costly and eats space that textures would normally use, so you had better be sure you are using those model vertices every frame, otherwise you are just wasting storage on the card. We'll leave it there. The important point is that what you actually get out of a card rarely matches the figures you see on its box, and if you have a relatively slow CPU, or not enough memory, the gap is especially wide.

Basic Culling Methods
The simplest culling method is to divide the world into regions, where each region holds a list of the other regions visible from it. That way, you only need to display what is visible from any given point. Generating the list of visible regions is the tricky part, and there are many ways to do it, such as BSP trees and portals.
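Here is a hedged C++ sketch of that region-list idea (all names are mine; real engines precompute these lists offline, into what is often called a potentially visible set):

    #include <cstdio>
    #include <vector>

    // Each region stores the indices of the regions visible from it,
    // computed ahead of time rather than every frame.
    struct Region {
        const char* name;
        std::vector<int> visibleRegions;
    };

    int main() {
        std::vector<Region> world = {
            {"hall",   {0, 1}},     // from the hall you see the hall and room A
            {"room A", {1, 0, 2}},  // room A sees itself, the hall, and room B
            {"room B", {2, 1}},
        };

        int cameraRegion = 1;  // the region the camera is standing in
        for (int idx : world[cameraRegion].visibleRegions)
            std::printf("draw region: %s\n", world[idx].name);
        // Everything else is culled before the video card ever sees it.
    }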

You have surely heard the term BSP mentioned in connection with Doom or Quake. It stands for Binary Space Partitioning.

BSP is a way of dividing the world into small regions by organizing the world's polygons so that it is easy to determine which regions are visible and which are not, which is handy for software-based Renderers that want to do as little drawing work as possible. It also gives you an efficient way to find out where you are in the world.
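Here is a minimal sketch of the core BSP operation, using my own simplified types (not the Quake data format): every node splits space with a plane, and classifying a point against successive planes walks you down to the leaf region containing it:

    #include <cstdio>

    struct Vec3 { float x, y, z; };

    // A BSP node: a splitting plane (normal + distance) and two children.
    // Child values < 0 mark leaves, encoding a region id as ~regionId.
    struct BspNode {
        Vec3 normal; float dist;
        int front, back;
    };

    // Walk the tree: which side of each splitting plane is the point on?
    int FindRegion(const BspNode* nodes, int node, Vec3 p) {
        while (node >= 0) {
            const BspNode& n = nodes[node];
            float side = n.normal.x*p.x + n.normal.y*p.y + n.normal.z*p.z - n.dist;
            node = (side >= 0.0f) ? n.front : n.back;
        }
        return ~node;  // decode the leaf's region id
    }

    int main() {
        // One vertical wall at x = 5 splitting the world into two regions.
        BspNode nodes[] = {{{1,0,0}, 5.0f, ~0, ~1}};
        Vec3 camera = {2.0f, 0.0f, 0.0f};
        std::printf("camera is in region %d\n", FindRegion(nodes, 0, camera));
    }

The same walk that locates the camera also drives visibility: each leaf corresponds to a region whose visible set can be looked up as above.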

In a portal-based engine (first introduced to the gaming world by the canceled Prey project from 3D Realms), each region (or room) is built as its own model, and you see into a neighboring region through a doorway (or portal) in the region you are in. The Renderer draws each region separately, as an independent scene. That is the general principle, and visibility determination of this kind is an essential and important part of any Renderer.
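The portal idea can be sketched recursively (again my own toy illustration; a real engine clips the view frustum to each portal's shape, where here a simple flag stands in for that test):

    #include <cstdio>
    #include <vector>

    struct Portal { int toRegion; bool visibleThisFrame; };

    struct Region {
        const char* name;
        std::vector<Portal> portals;  // doorways into neighboring regions
    };

    // Draw a region, then recurse through every portal we can see.
    void DrawRegion(const std::vector<Region>& world, int id, int depth) {
        if (depth > 8) return;  // guard against cycles in this toy version
        std::printf("draw region: %s\n", world[id].name);
        for (const Portal& p : world[id].portals)
            if (p.visibleThisFrame)
                DrawRegion(world, p.toRegion, depth + 1);
    }

    int main() {
        std::vector<Region> world = {
            {"hall",   {{1, true}, {2, false}}},  // door to room A is on screen
            {"room A", {}},
            {"room B", {}},                       // its doorway is off screen
        };
        DrawRegion(world, 0, 0);  // room B is never drawn this frame
    }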

Although the details differ, these techniques are all forms of occlusion culling, and they all have the same purpose: to eliminate unnecessary work as early as possible. For an FPS (first-person shooter), where there are often many triangles in view and the player takes control of the viewpoint, discarding or culling invisible triangles is absolutely necessary. The same goes for space simulations, where you can see a very long way: culling things beyond the visual range is very important. For games with a more constrained view, such as an RTS (real-time strategy game), this is usually easier to implement. This part of the Renderer is usually done in software rather than by the video card, though it is only a matter of time before the card takes over this work too.

Basic Graphics Pipeline Flow
As a simple example, the flow of the graphics pipeline, from the game through to the polygons being drawn, goes roughly like this:
· The game determines which objects are in the scene, their models, the textures they use, which animations they may be in, and their locations in the game world. The game also determines the camera's position and orientation.

· The game passes this information to the Renderer. Taking a model as an example, the Renderer first looks at the model's size and the camera's position, then decides whether the model is fully visible on screen, off to the left or right of the observer (the camera view), behind the observer, or too far away to be seen. It may even use some form of world-level determination to work out whether the model is visible (see below).

· The world visibility system determines where the camera is in the world, and which regions/polygons of the world are visible from it. There are many ways to do this, from brute-force approaches that divide the world into many regions, where each region records "from region D I can see regions A, B, and C," up to the more refined BSP (Binary Space Partitioning) worlds. All polygons that survive these culling tests are passed to the polygon Renderer for drawing.

· For each polygon passed to it, the Renderer applies the local math (the model animation) and the world math (the position relative to the camera) to transform the polygon, and checks whether the polygon is back-facing (facing away from the camera). Back-facing polygons are discarded. Front-facing polygons are lit by the Renderer according to the nearby lights it finds. The Renderer then looks at the texture the polygon uses and makes sure the API/graphics card is using that texture as its rendering basis. At this point the polygons are fed to the rendering API and on to the card. A minimal sketch of the back-face test follows this list.
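Here is that sketch, a generic back-face test under my own minimal vector types (the winding convention is an assumption; real APIs let you choose it):

    #include <cstdio>

    struct Vec3 { float x, y, z; };

    Vec3 Sub(Vec3 a, Vec3 b)   { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
    float Dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }
    Vec3 Cross(Vec3 a, Vec3 b) {
        return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    }

    // A triangle faces away from the camera if its normal points in
    // roughly the same direction as the view ray toward the triangle.
    bool IsBackFacing(Vec3 v0, Vec3 v1, Vec3 v2, Vec3 cameraPos) {
        Vec3 normal     = Cross(Sub(v1, v0), Sub(v2, v0));
        Vec3 toTriangle = Sub(v0, cameraPos);
        return Dot(normal, toTriangle) >= 0.0f;  // these get discarded
    }

    int main() {
        Vec3 cam = {0, 0, 5};
        // Triangle in the z=0 plane whose normal (+z) points at the camera.
        Vec3 a = {0,0,0}, b = {1,0,0}, c = {0,1,0};
        std::printf("back-facing: %s\n",
                    IsBackFacing(a, b, c, cam) ? "yes" : "no");
    }

On average this throws away about half of a closed model's polygons before any per-pixel work is done.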

Obviously, this is simplified, but you get the idea. The outline below is taken from Dave Salvator's 3D pipeline article and adds some detail:

3D Pipeline
- High-level overview
1. Application/Scene
· Scene/geometry database traversal
· Object motion, camera motion and aiming
· Animation of object models
· Description of the 3D world's contents
· Object visibility checks, including possible occlusion culling
· Level-of-detail (LOD) selection

2. Geometry
· Transform (rotation, translation, scaling)
· Transformation from model space to world space (Direct3D)
· Transformation from world space to view space
· View projection
· Trivial accept/reject culling
· Back-face culling (can also be done later, in screen space)
· Lighting
· Perspective divide: transform to clip space
· Clipping
· Transform to screen space

3. Triangle generation
· Back-face culling (or done earlier, in view space, before the lighting calculation)
· Slope/delta calculations
· Scan-line conversion

4. Rendering/Rasterization
· Shading
· Texturing
· Fog
· Alpha translucency tests
· Depth buffering
· Anti-aliasing (optional)
· Display

Generally, you put all your polygons into a list and sort that list by texture (so you only send each texture to the card once, rather than once per polygon), and so on. Polygons used to be sorted by their distance from the camera, drawing those farthest away first, but with the arrival of the Z-buffer this matters much less. The exception is transparent polygons: they must be drawn after all the non-translucent polygons, so that everything behind them shows up correctly in the scene, and they themselves must, of course, be drawn back to front. In any given FPS scene, though, there are usually not that many transparent polygons. It may look as if there are, but their proportion is actually quite low compared to the opaque ones.
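A sketch of that sorting in C++ (my own simplified structures, not engine code): opaque polygons grouped by texture so each texture is bound once, translucent ones pushed to the end of the list and ordered back to front:

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct Poly {
        int   textureId;
        bool  translucent;
        float depth;       // distance from the camera
    };

    int main() {
        std::vector<Poly> frame = {
            {2, false, 10}, {1, false, 4}, {2, false, 7},
            {3, true,  6},  {1, false, 9}, {3, true, 12},
        };

        // Opaque polygons: sort by texture so the card switches textures
        // as rarely as possible (the Z-buffer handles their depth order).
        // Translucent polygons: draw last, sorted far-to-near for blending.
        std::sort(frame.begin(), frame.end(), [](const Poly& a, const Poly& b) {
            if (a.translucent != b.translucent) return !a.translucent;
            if (!a.translucent) return a.textureId < b.textureId;
            return a.depth > b.depth;  // back to front
        });

        for (const Poly& p : frame)
            std::printf("draw poly: texture %d%s (depth %.0f)\n",
                        p.textureId, p.translucent ? " [translucent]" : "",
                        p.depth);
    }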

Once the application hands the scene to the API, the API can take advantage of hardware-accelerated transform and lighting (T&L), which is common in today's 3D hardware. We won't discuss the matrix math involved here (see Dave's introduction to the 3D pipeline), but geometry transformations are what allow the 3D card to draw a polygon in the world at the correct angle and position, given the camera's location and orientation at any moment.

A large amount of computation happens per vertex, including clipping operations to determine whether any given polygon is actually visible, completely off screen, or only partially visible. Lighting calculations work out how bright the texture should be, depending on how the world's lights fall on the vertex. In the past the CPU handled these computations, but now modern graphics hardware can do them for you, which means your CPU can get on with other things. Obviously this is a Good Thing (TM), but since you cannot count on T&L being present on every 3D card out there, you will have to write all these routines yourself anyway (speaking from the game developer's perspective again). You will see "Good Thing (TM)" in various places in this article; I use it for features that contribute very effectively to making a game look better. Unsurprisingly, you will also see its opposite, which, as you have guessed, is "Bad Thing (TM)". I am trying to get these phrases copyrighted; using them will cost you a small fee.
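The per-vertex lighting step can be illustrated with the standard diffuse (Lambert) term. This is a generic sketch of the kind of computation fixed-function T&L performs, not any particular card's implementation:

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    float Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    Vec3 Normalize(Vec3 v) {
        float len = std::sqrt(Dot(v, v));
        return {v.x/len, v.y/len, v.z/len};
    }

    // Classic diffuse term: brightness falls off with the angle between
    // the vertex normal and the direction toward the light.
    float DiffuseAtVertex(Vec3 normal, Vec3 vertexPos, Vec3 lightPos) {
        Vec3 toLight = Normalize({lightPos.x - vertexPos.x,
                                  lightPos.y - vertexPos.y,
                                  lightPos.z - vertexPos.z});
        return std::max(0.0f, Dot(Normalize(normal), toLight));
    }

    int main() {
        Vec3 vertex = {0, 0, 0}, normal = {0, 1, 0}, light = {0, 5, 5};
        // 0.0 = unlit, 1.0 = fully lit; this scales the texture's brightness.
        std::printf("diffuse = %.2f\n", DiffuseAtVertex(normal, vertex, light));
    }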

Curved Surfaces (Higher-Order Surfaces)
Beyond triangles, the use of curved surfaces is becoming more common. Curved surfaces (another name for higher-order surfaces) are attractive because they can describe geometry with a mathematical expression (usually one involving some kind of curve), rather than simply listing a huge number of polygons and their positions in the game world. This lets you build (and deform) the polygon mesh dynamically from the equation, and decide how many polygons you actually want to derive from the surface. So you can, for example, describe a pipe once and then place many instances of that pipe in the world. In a room where you are already showing 10,000 polygons, you can say, "Since we are already displaying a lot of polygons and any more will slow us down, this pipe gets only 100 polygons." But in another room with only 5,000 polygons in view, you can say, "Since we haven't used up our polygon budget, this pipe now gets 500 polygons." Very nice stuff, but you have to know all this up front and build the mesh accordingly, which is not trivial. It really is cheaper to send an object's surface equation across the AGP bus than the great pile of vertices it expands to. SOF2 uses a variant of this approach to build its terrain system.
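The budget idea can be sketched with a quadratic Bezier curve standing in for the pipe (a simplification of my own; real engines tessellate full surface patches): the same equation yields a coarse or fine mesh depending on how many polygons the scene can afford:

    #include <cstdio>

    struct Vec3 { float x, y, z; };

    // Evaluate a quadratic Bezier curve at parameter t in [0, 1].
    Vec3 Bezier(Vec3 p0, Vec3 p1, Vec3 p2, float t) {
        float u = 1.0f - t;
        return {u*u*p0.x + 2*u*t*p1.x + t*t*p2.x,
                u*u*p0.y + 2*u*t*p1.y + t*t*p2.y,
                u*u*p0.z + 2*u*t*p1.z + t*t*p2.z};
    }

    // Emit `segments` pieces from one equation: the polygon count is
    // chosen at run time from the scene's remaining polygon budget.
    void Tessellate(Vec3 p0, Vec3 p1, Vec3 p2, int segments) {
        std::printf("tessellating into %d segments:\n", segments);
        for (int i = 0; i <= segments; ++i) {
            Vec3 v = Bezier(p0, p1, p2, (float)i / segments);
            std::printf("  vertex (%.2f, %.2f, %.2f)\n", v.x, v.y, v.z);
        }
    }

    int main() {
        Vec3 a = {0,0,0}, b = {1,2,0}, c = {2,0,0};
        Tessellate(a, b, c, 4);   // crowded scene: the cheap version
        Tessellate(a, b, c, 16);  // light scene: the smoother version
    }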

In fact, current ATI cards include TruForm, which can take a triangle-based model, convert it into a model based on higher-order surfaces, smooth it, and then convert it back into a triangle-based model with perhaps ten times as many triangles (a process called retessellation). The model is then sent down the rest of the pipeline for further processing. In effect, ATI simply adds a stage in front of the T&L engine to do this. The drawback is that you need control over which models get smoothed and which do not: edges you want sharp, such as a nose, can otherwise be smoothed away inappropriately. It is still a nice technique, and I expect it will be used more in the future.

That's it for part one. In part two we will cover lighting and texturing, and the later sections will go deeper still.
