[Android game development] Building a game framework


The basic framework for game development typically includes the following modules:

Window management: responsible for creating the game window on the Android platform and handling its life cycle (creation, pause, resume).

Input module: closely related to the window management module; it monitors user input (touch events, key events, accelerometer events, and so on).

File input/output (File I/O): reads asset files, audio, and other resources, and writes files such as saved settings and progress.

Graphics module: in a real game this may be the most complex part; it is responsible for loading images and drawing them to the screen.

Audio module: responsible for loading and playing audio in the different game screens.

Networking: required if the game offers multiplayer features.

Game framework: ties all of the above modules together into an easy-to-use framework in which our games can be implemented.

Each module is described in detail below.

1. Window Management

We can think of the game's window as a canvas on which the game's content is drawn. The window management module is responsible for setting up the window, adding the various UI components, and receiving the user's input events. These UI components may be hardware accelerated, for example by the GPU through OpenGL ES.

This module does not get its own interface; it is integrated directly into the game framework, and the relevant code appears later. What we need to remember is that the application state and window events are things the module must handle:

Create: the method called when the window is created.

Pause: the method called when the application is paused for some reason.

Resume: the method called when the application resumes to the foreground.

2. Input module

In most operating systems, input events (such as touch events and key events) are dispatched through the current window, which then forwards them to the currently focused component. So we only need to care about the events of that component. The UI APIs provided by the operating system offer an event-dispatch mechanism with which we can easily register for and listen to events; this is the primary responsibility of the input module. There are two ways to handle events:

Polling: we simply check the current state of the input device; earlier states are not recorded. This approach is suitable for things like reading the current touch position or checking whether a key is held down, but not for text input, because the order of key events is lost.

Event-based handling: this mechanism records events in order (it has a "memory"), and is therefore better suited to text input and other operations where the ordering of key events matters.

On the Android platform there are three main kinds of input events: touch events, key events, and accelerometer events. The first two can be handled either by polling or by event-based handling; accelerometer events are usually polled.

There are three types of touch events:

Touch down: occurs when a finger touches the screen.

Touch drag: occurs while a finger is dragged across the screen; it is always preceded by a touch down event.

Touch up: occurs when the finger is lifted.

Each touch event carries auxiliary information: the position on the screen and a pointer index (used to track and distinguish the individual touch points during multi-touch).

Key events come in two types:

Key down: triggered when a key is pressed.

Key up: triggered when a key is released.

Each key event also carries auxiliary information: the key down event stores the key code, and the key up event stores the key code together with the actual Unicode character.

Accelerometer events: the system continuously polls the accelerometer state and reports it as coordinates on three axes.

Based on the above, the interface of the input module is defined below; it supports polling touch events, key events, and accelerometer values, as well as event-based access. The code is as follows:

Input.java

package com.badlogic.androidgames.framework;

import java.util.List;

public interface Input {

    public static class KeyEvent {
        public static final int KEY_DOWN = 0;
        public static final int KEY_UP = 1;

        public int type;
        public int keyCode;
        public char keyChar;
    }

    public static class TouchEvent {
        public static final int TOUCH_DOWN = 0;
        public static final int TOUCH_UP = 1;
        public static final int TOUCH_DRAGGED = 2;

        public int type;
        public int x, y;
        public int pointer;
    }

    public boolean isKeyPressed(int keyCode);

    public boolean isTouchDown(int pointer);

    public int getTouchX(int pointer);

    public int getTouchY(int pointer);

    public float getAccelX();

    public float getAccelY();

    public float getAccelZ();

    public List<KeyEvent> getKeyEvents();

    public List<TouchEvent> getTouchEvents();
}

The definition above contains two static classes, KeyEvent and TouchEvent, each of which defines constants for its event types. KeyEvent also defines the variables that store the event information: the type, the key code (keyCode), and the Unicode character (keyChar). TouchEvent stores the position (x, y) and the ID (pointer index) of a touch point. The first finger that touches the screen gets ID 0 and the second gets ID 1; if both fingers are down and finger 0 is lifted while finger 1 stays down, the next finger that touches the screen reuses the freed ID 0.

Next come the polling methods of the Input interface. Input.isKeyPressed() takes a key code and returns whether the corresponding key is currently pressed. Input.isTouchDown(), Input.getTouchX(), and Input.getTouchY() return whether the given pointer index is down and its x and y coordinates; note that the coordinates are undefined when that pointer index is not currently touching the screen. Input.getAccelX(), Input.getAccelY(), and Input.getAccelZ() return the accelerometer values for the respective axes. The last two methods, getKeyEvents() and getTouchEvents(), serve the event-based mechanism: they return the KeyEvent and TouchEvent instances recorded since the last call, with the newest events at the end of the list.
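To make the two styles concrete, here is a minimal sketch of how a game screen might consume this interface once per frame. It is only an illustration: the InputDemoScreen class, the fireBullet()/movePlayerTo()/steerShip() helpers, and the way the Input instance is handed in are all hypothetical, not part of the framework defined above.

import java.util.List;
import com.badlogic.androidgames.framework.Input;
import com.badlogic.androidgames.framework.Input.TouchEvent;

public class InputDemoScreen {                           // hypothetical class
    public void handleInput(Input input, float deltaTime) {
        // Event-based handling: consume everything that happened since the last frame.
        List<TouchEvent> touchEvents = input.getTouchEvents();
        for (int i = 0; i < touchEvents.size(); i++) {
            TouchEvent event = touchEvents.get(i);
            if (event.type == TouchEvent.TOUCH_UP)
                fireBullet(event.x, event.y);            // hypothetical game logic
        }

        // Polling: only the current state matters, no history involved.
        if (input.isTouchDown(0))
            movePlayerTo(input.getTouchX(0), input.getTouchY(0));

        float tilt = input.getAccelX();                  // the accelerometer is always polled
        steerShip(tilt);                                 // hypothetical game logic
    }

    private void fireBullet(int x, int y) { /* ... */ }
    private void movePlayerTo(int x, int y) { /* ... */ }
    private void steerShip(float tilt) { /* ... */ }
}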

With these simple interfaces and classes our input module is complete. The next section looks at file handling (File I/O).

3. File input/output (File I/O)

Reading and writing files is a very important part of game development. In Java we mainly work with InputStream and OutputStream instances, the standard mechanism for reading and writing files. In game development, reading is the more common operation — configuration files, images, audio files, and so on — while writing is generally used for saving the player's progress and settings.

Here is the interface for file read and write:

FileIO.java

package com.badlogic.androidgames.framework;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public interface FileIO {
    public InputStream readAsset(String fileName) throws IOException;
    public InputStream readFile(String fileName) throws IOException;
    public OutputStream writeFile(String fileName) throws IOException;
}

In the code above, we pass a file name as a parameter and receive a stream in return; the actual behavior is left to the implementation of the interface. The methods throw an IOException so that errors during reading or writing are reported. When we have finished reading or writing, we must close the stream ourselves. Asset files are read directly from the application's APK file, while other files are generally read from and written to internal storage or the SD card; these cases correspond to the three methods above.
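On Android, a straightforward implementation might wrap the AssetManager for asset reads and use external storage for regular file reads and writes. The following is only a sketch under those assumptions — the class name, package, and storage location are illustrative, not the framework's definitive implementation:

package com.badlogic.androidgames.framework.impl;    // hypothetical package

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import android.content.res.AssetManager;
import android.os.Environment;

import com.badlogic.androidgames.framework.FileIO;

public class AndroidFileIO implements FileIO {       // illustrative name
    private final AssetManager assets;
    private final String externalStoragePath;

    public AndroidFileIO(AssetManager assets) {
        this.assets = assets;
        this.externalStoragePath = Environment.getExternalStorageDirectory()
                .getAbsolutePath() + File.separator;
    }

    public InputStream readAsset(String fileName) throws IOException {
        return assets.open(fileName);                // read directly from the APK
    }

    public InputStream readFile(String fileName) throws IOException {
        return new FileInputStream(externalStoragePath + fileName);
    }

    public OutputStream writeFile(String fileName) throws IOException {
        return new FileOutputStream(externalStoragePath + fileName);
    }
}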

4. Audio module

Audio programming has always been a complex topic. We do not intend to use any advanced audio-processing techniques here; mainly we want to play background music and sound effects. Before writing the code, let's review some audio basics.

Sample rate: the number of samples per second taken from the continuous signal to form the discrete signal. The higher the sample rate, the better the sound quality; it is measured in hertz (Hz), and CDs commonly use 44.1 kHz. Each sample is stored with a certain number of bits, known as the sample resolution or bit depth, which expresses the amplitude of the wave. Every additional bit doubles the number of representable amplitude levels and adds about 6 dB of dynamic range: a 12-bit system can express 4,096 levels, roughly 72 dB of dynamic range; 16 bits give 65,536 levels and 24 bits about 16.7 million. Dynamic range is the span from the quietest to the loudest sound, and the human ear typically hears frequencies from 20 Hz to 20 kHz. Higher sample rates and bit depths mean more storage: 60 seconds of sound at 8 kHz and 8 bits takes about 0.5 MB; at 44.1 kHz and 16 bits it exceeds 5 MB; an ordinary three-minute pop song comes to more than 15 MB.
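To make the arithmetic concrete, here is a small sketch that reproduces the numbers above (mono audio is assumed; the class and method names are illustrative):

public class AudioSizeCalc {                         // illustrative helper
    // Uncompressed PCM size = seconds * sampleRate * (bitsPerSample / 8) * channels
    static long pcmBytes(int seconds, int sampleRateHz, int bitsPerSample, int channels) {
        return (long) seconds * sampleRateHz * (bitsPerSample / 8) * channels;
    }

    public static void main(String[] args) {
        System.out.println(pcmBytes(60, 8000, 8, 1));     //    480,000 bytes, ~0.46 MB
        System.out.println(pcmBytes(60, 44100, 16, 1));   //  5,292,000 bytes, ~5 MB
        System.out.println(pcmBytes(180, 44100, 16, 1));  // 15,876,000 bytes, ~15 MB (3-minute song)
    }
}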

To save space without sacrificing too much quality, many compression methods have been developed; MP3 and Ogg, for example, are popular compressed formats on the web.

As you can see, even a three-minute song takes up a lot of space. When playing the game's background music we should therefore stream the audio from disk rather than preload it entirely into memory. Usually only one piece of background music plays at a time, so only a single stream needs to be read from disk.

For short sounds, such as explosions or gunshots, the situation is different. These short sounds are often triggered many times, sometimes simultaneously, and streaming every instance from disk is not a good idea. Fortunately, short sounds do not take up much memory, so we simply read them into memory in advance and then play them directly, even several at once.

So our code needs to provide the following features:

We need a way to load audio files for streamed playback (music) and for in-memory playback (sound), and a way to control playback.

There are three corresponding interfaces: Audio, Music, and Sound. The code follows.

Audio Interface Audio.java

package com.badlogic.androidgames.framework;

public interface Audio {
    public Music newMusic(String filename);
    public Sound newSound(String filename);
}

The Audio interface creates new Music and Sound instances. A Music instance represents a streamed audio file, while a Sound instance represents a short sound effect kept in memory. The methods Audio.newMusic() and Audio.newSound() both take a file name as their argument and report an error if loading fails (for example, when the file does not exist or is corrupt).

Music interface Music.java

package com.badlogic.androidgames.framework;

public interface Music {

    public void play();

    public void stop();

    public void pause();

    public void setLooping(boolean looping);

    public void setVolume(float volume);

    public boolean isPlaying();

    public boolean isStopped();

    public boolean isLooping();

    public void dispose();
}

The Music interface is a bit more involved: it contains methods for playing, pausing, and stopping the stream, for looping, and for volume control (a floating-point value from 0 to 1), plus a few getter methods that report the current state of the Music instance. When we no longer need a Music instance we destroy it with the dispose() method, which releases the underlying system resources, such as the streamed audio file.

Sound interface Sound.java

package com.badlogic.androidgames.framework;

public interface Sound {

    public void play(float volume);

    public void dispose();

}

The Sound interface is comparatively simple, containing only play() and dispose(). The former takes the playback volume as its parameter and plays the effect whenever we need it; the latter is called when we no longer need the Sound instance, freeing the memory it occupies.
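Putting the three interfaces together, a screen might use them roughly like this. This is only a sketch: the audio variable (an Audio implementation supplied by the framework), the file names, and the timing of the calls are all illustrative.

// Loading, typically done once when a screen is created.
Music backgroundMusic = audio.newMusic("menu.ogg");      // streamed from disk
Sound explosionSound  = audio.newSound("explosion.ogg"); // decoded into memory

backgroundMusic.setLooping(true);
backgroundMusic.setVolume(0.8f);
backgroundMusic.play();

// Later, whenever an explosion happens in the game:
explosionSound.play(1.0f);

// When the screen is disposed:
backgroundMusic.dispose();
explosionSound.dispose();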

5. Image Module (graphics)

Next is the graphics module, which is used to draw images to the screen. To draw images with good performance, however, we need some basic knowledge of graphics programming. Let's start with drawing 2D images. The first question is: how does an image actually get drawn to the screen? The full answer is quite involved, and we do not need to know every detail.

Raster, pixels, and the framebuffer

Today's displays are raster based: the raster is a two-dimensional grid of picture elements, and the width and height of the grid are measured in pixels. If you look closely at a monitor (or use a magnifying glass), you can see this grid of pixels. The position of each pixel can be expressed with coordinates, so a two-dimensional coordinate system with integer values is used. The display receives a stream of pixel colors from the graphics processor (set by a program or by the operating system), decodes it, and draws it to the screen. The display refreshes several times per second; the refresh rate is measured in hertz (Hz) — many monitors refresh at 85 Hz, for example.

The graphics processor reads the pixel data it sends to the display from a special area of memory called video memory, or VRAM. The region holding the pixels of one complete screen image is called the framebuffer, and one full screen image is called a frame. Each pixel of the display raster has a corresponding memory location in the framebuffer; to change what is shown on screen, we simply change the contents of the framebuffer.

(Figure: a simple display raster and the corresponding framebuffer.)

Vertical Sync and double buffering

With naive drawing, when the scene to be drawn is complex — especially when it contains bitmaps — the screen updates slowly; moving objects appear to stutter and the screen may even flicker. The solution is double buffering (using two framebuffers). The principle can be explained like this: think of the screen as a blackboard. First we create a "virtual" blackboard in memory and draw the complete, complex scene onto it; only when everything has been drawn do we copy the finished image in one step onto the real blackboard (the screen). This improves both the drawing speed and the visual result. (Figure: schematic of double buffering.)
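On Android, for example, the SurfaceView/SurfaceHolder API already gives you this double-buffered behavior: you draw a complete frame into an off-screen Canvas and then post it to the screen in one step. A minimal sketch of such a render loop follows; the running flag, the holder field (the SurfaceHolder obtained from a SurfaceView via getHolder()), and the drawFrame() helper are illustrative.

// Inside a rendering thread of a hypothetical FastRenderView.
public void run() {
    while (running) {
        if (!holder.getSurface().isValid())
            continue;                                // surface not created yet

        Canvas canvas = holder.lockCanvas();         // obtain the back buffer to draw into
        drawFrame(canvas);                           // draw the complete frame off-screen
        holder.unlockCanvasAndPost(canvas);          // post the finished frame to the display
    }
}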

To understand vertical synchronization, you first need to know how a monitor works. The image is drawn line by line starting at the top, governed by two synchronization signals — horizontal sync and vertical sync — whether the display is interlaced or progressive. The horizontal sync signal determines how long it takes to draw one line across the screen; the vertical sync signal determines how long it takes to draw from the top of the screen to the bottom and return to the starting position. The vertical sync frequency is precisely the display's refresh rate.

The operating system typically drives the display at a fixed refresh rate, say 85 Hz: the video card emits a vertical sync signal 85 times per second, and the interval between two signals is the time available to output one full screen image at the current resolution.

With vertical sync enabled: a powerful graphics card may finish rendering a frame very quickly, but it cannot start drawing the next frame until the next vertical sync signal arrives. The frame rate is therefore capped by the refresh rate: if your game could render faster than the display refreshes, its FPS is limited to the refresh rate; if it renders slower, the larger the gap between the achievable FPS and the refresh rate, the more noticeable the dropped frames. For high-performance graphics cards, enabling vertical sync generally gives a better-looking result, because it prevents tearing in fast-moving scenes (a football game, for example).

With vertical sync disabled: after finishing a frame, the graphics card and display do not wait for the vertical sync signal; the card can immediately begin drawing the next frame, so its full power can be used.

However, do not forget that it is precisely vertical synchronization that keeps rendering in step with the display's refresh, making the picture smooth and stable. Without the vertical sync signal, frames may be delivered faster, but the continuity of the image inevitably suffers; this is the theoretical reason why the picture can look torn or discontinuous when vertical sync is turned off.

Image format

The two most popular image formats are JPEG and PNG. JPEG is a lossy compression format; PNG is lossless, so a PNG can reproduce the original image exactly, while a lossy format typically uses less disk space. Which format we use depends largely on how much disk space we have. As with audio, an image must be fully decompressed when it is loaded into memory, so even if a compressed image occupies only 20 KB on disk, it still needs width x height x color depth bytes of RAM.

Image Overlay

Suppose we have a framebuffer we can render to and several images loaded into RAM; we now need to draw those in-memory images into the framebuffer — for example, a background image and a foreground image:

This process is called compositing: combining different images into a final output image. The drawing order matters, because an image drawn later always covers whatever was drawn before it.

There is a problem with the composition above: the white background of the second image covers part of the first (background) image. How do we get rid of that white background? The answer is alpha blending: a method that combines the color values of source pixels with the color values of destination pixels according to a formula, producing a transparency effect.

The RGB values of the final composited pixel are computed with the following formulas:

red = src.red * src.alpha + dst.red * (1 - src.alpha)
green = src.green * src.alpha + dst.green * (1 - src.alpha)
blue = src.blue * src.alpha + dst.blue * (1 - src.alpha)

src and dst are the source and destination pixels we want to blend (the source corresponds to the foreground image, the destination to the background). Here is an example:

src = (1, 0.5, 0.5), src.alpha = 0.5, dst = (0, 1, 0)
red = 1 * 0.5 + 0 * (1 - 0.5) = 0.5
green = 0.5 * 0.5 + 1 * (1 - 0.5) = 0.75
blue = 0.5 * 0.5 + 0 * (1 - 0.5) = 0.25

(Figure: the result of blending the two pixels.)

Each channel in the formulas above requires two multiplications. Multiplication is relatively expensive, so the formula can be rearranged to use only one:

red = (src.red - dst.red) * src.alpha + dst.red

Alpha is a floating-point number, but we can work with integers instead: since each color channel uses at most 8 bits (256 values), we can express alpha as an integer and divide by 256 after the multiplication:

red = (src.red - dst.red) * src.alpha / 256 + dst.red

Here, alpha is an integer value from 0 to 256.
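A small sketch of the integer version of the blend, first for a single channel and then for a packed ARGB pixel (8-bit channels are assumed; the method names are illustrative):

// Blend one 8-bit channel; alpha is an integer in the range 0..256.
static int blendChannel(int src, int dst, int alpha) {
    return (src - dst) * alpha / 256 + dst;
}

// Blend a 0xAARRGGBB source pixel over an opaque destination pixel.
static int blendPixel(int srcPixel, int dstPixel) {
    int alpha = srcPixel >>> 24;                 // 0..255, used as the blend factor
    int r = blendChannel((srcPixel >> 16) & 0xFF, (dstPixel >> 16) & 0xFF, alpha);
    int g = blendChannel((srcPixel >> 8)  & 0xFF, (dstPixel >> 8)  & 0xFF, alpha);
    int b = blendChannel( srcPixel        & 0xFF,  dstPixel        & 0xFF, alpha);
    return 0xFF000000 | (r << 16) | (g << 8) | b;
}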

For our example, we simply set the alpha value of every white pixel in the source image to 0. The final result is as follows:

Interface code of the image module

With the above background we can begin to design the interface of our graphics module. It must provide the following functionality:

    • Load images from disk into memory, ready to be drawn to the screen later.
    • Clear the framebuffer with a specific color.
    • Draw a pixel at a specified position in the framebuffer with a specified color.
    • Draw lines and rectangles on the framebuffer.
    • Draw the in-memory images to the framebuffer, either in whole or in part, with alpha blending.
    • Get the width and height of the framebuffer.

This is achieved with two interfaces, Graphics and Pixmap. Here is the Graphics interface:

package com.badlogic.androidgames.framework;

public interface Graphics {

    public static enum PixmapFormat {
        ARGB8888, ARGB4444, RGB565
    }

    public Pixmap newPixmap(String fileName, PixmapFormat format);

    public void clear(int color);

    public void drawPixel(int x, int y, int color);

    public void drawLine(int x, int y, int x2, int y2, int color);

    public void drawRect(int x, int y, int width, int height, int color);

    public void drawPixmap(Pixmap pixmap, int x, int y, int srcX, int srcY,
            int srcWidth, int srcHeight);

    public void drawPixmap(Pixmap pixmap, int x, int y);

    public int getWidth();

    public int getHeight();

}

The enumeration PixmapFormat encodes the pixel formats (including transparency) supported by the game. In ARGB8888, for example, A stands for alpha (transparency), R for red, G for green, and B for blue, and each of them uses 8 bits, giving 256 possible values per channel. Now for the interface's methods:

    • The Graphics.newPixmap() method loads an image in the given format.
    • The Graphics.clear() method clears the framebuffer with a specific color.
    • The Graphics.drawPixel() method draws a pixel of the given color at the specified position in the framebuffer.
    • The Graphics.drawLine() and Graphics.drawRect() methods draw lines and rectangles in the framebuffer.
    • The Graphics.drawPixmap() method draws an image to the framebuffer. The (x, y) coordinates specify where in the framebuffer the image is placed; srcX and srcY specify the top-left corner of the portion of the source image to draw, and srcWidth and srcHeight specify that portion's width and height.
    • The Graphics.getWidth() and Graphics.getHeight() methods return the width and height of the framebuffer.
Next we look at the Pixmap interface:
Pixmap interface

package com.badlogic.androidgames.framework;

import com.badlogic.androidgames.framework.Graphics.PixmapFormat;

public interface Pixmap {

    public int getWidth();

    public int getHeight();

    public PixmapFormat getFormat();

    public void dispose();

}
    • The Pixmap.getWidth() and Pixmap.getHeight() methods return the width and height of the image.
    • Pixmap.getFormat() returns the pixel format of the image.
    • Pixmap.dispose(): a Pixmap instance uses memory and potentially other system resources; when we no longer need it we should free those resources, which is what this method does.
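Taken together, here is a small sketch of how a screen might use Graphics and Pixmap each frame. The graphics reference, the file names, and the playerX/playerY variables are illustrative, not part of the framework code above.

// Loading, typically done once per screen.
Pixmap background = graphics.newPixmap("background.png", Graphics.PixmapFormat.RGB565);
Pixmap player     = graphics.newPixmap("player.png", Graphics.PixmapFormat.ARGB4444);

// Each frame:
graphics.clear(0xFF000000);                          // clear to opaque black
graphics.drawPixmap(background, 0, 0);               // full background, no blending needed
graphics.drawPixmap(player, playerX, playerY);       // alpha-blended sprite
graphics.drawRect(0, 0, graphics.getWidth(), 16, 0xFF00FF00); // simple HUD bar

// When the screen is disposed:
background.dispose();
player.dispose();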

6. Game Framework

After all the basic work has been done, let's finally look at the game framework itself. Here is what is still needed to run our game:

    • The game is divided into different screens (Screen), each of which performs the same kind of work: evaluating user input and rendering itself accordingly. Some screens may not need any user input at all and simply switch to the next screen after a period of time (a splash screen, for example).
    • The screens need to be managed in some way (for example, we must keep track of the current screen and be able to switch to the next screen at any time).
    • The game must give the screens access to the different modules (the graphics, audio, input, and file I/O modules) so that a screen can load resources, read user input, play sounds, render to the framebuffer, and so on. Because our games are real-time, the current screen must be updated as often as possible, which calls for a main loop; the loop ends when the game exits. One iteration of the loop is called a frame, and the number of frames per second is the frame rate (FPS).
    • The game needs to track the window state (such as pause and resume) and notify the current screen of the corresponding events.
    • The game framework needs to handle the creation of the window, the UI components, and so on.

Here's a look at some of the code:

createWindowAndUIComponent();

Input input = new Input();
Graphics graphics = new Graphics();
Audio audio = new Audio();
Screen currentScreen = new MainMenu();
float lastFrameTime = currentTime();

while (!userQuit()) {
    float deltaTime = currentTime() - lastFrameTime;
    lastFrameTime = currentTime();

    currentScreen.updateState(input, deltaTime);
    currentScreen.present(graphics, audio, deltaTime);
}

cleanupResources();

The code first creates the game's window and UI components (the createWindowAndUIComponent() method). Then we instantiate the basic modules that provide the game's low-level functionality, instantiate the start screen and make it the current screen, and finally record the current time.

Then we enter the main loop, which runs until the user decides to quit. Inside the loop we compute the time elapsed between the previous frame and the current one (from which the FPS can also be derived), and then update and present the current screen. The updateState() method works from the elapsed time and the input state; the present() method renders the screen's state to the framebuffer, plays audio, and so on, and it too needs to know the time elapsed since its previous call.

When the main loop is over, we need to clean up and release all the resources.

This is how the game works: process the user's input, update the state, and present the result to the user.
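The deltaTime value is what keeps the game running at the same speed regardless of frame rate. A minimal sketch of frame-independent movement inside a screen's update method; the speed constant and the playerX field are made up, and deltaTime is assumed to be in seconds:

// Inside updateState(Input input, float deltaTime):
float PLAYER_SPEED = 120.0f;              // illustrative: 120 pixels per second
playerX += PLAYER_SPEED * deltaTime;      // same distance per second at 30 FPS or 60 FPS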

The Game and Screen interfaces

Here is what the game framework must take care of at runtime:

    • Create the window and UI and set up the appropriate event handling.
    • Run the game's main loop.
    • Keep track of the current screen and update and present it on every iteration of the main loop.
    • Forward input events from the UI thread to the main loop thread and hand them to the current screen, keeping the two in sync.
    • Give the screens access to all of the game's basic modules: Input, FileIO, Graphics, and Audio.

Here is the code for the Game interface:

package com.badlogic.androidgames.framework;

public interface Game {

    public Input getInput();

    public FileIO getFileIO();

    public Graphics getGraphics();

    public Audio getAudio();

    public void setScreen(Screen screen);

    public Screen getCurrentScreen();

    public Screen getStartScreen();

}

As shown above, the interface consists mostly of getter methods that return instances of the various modules.

The Game.getCurrentScreen() method returns the currently active screen. An abstract class, AndroidGame, will later implement this interface, providing every method except Game.getStartScreen(). To build an actual game we extend AndroidGame and override Game.getStartScreen(), returning an instance of the first screen to be displayed.

To show how simple it is to build a game this way, here is an example (assuming we have already implemented the AndroidGame class):

public class MyAwesomeGame extends AndroidGame {
    public Screen getStartScreen() {
        return new MySuperAwesomeStartScreen(this);
    }
}

Simple, isn't it? All we have to do is supply the start screen of our game; the AndroidGame class we inherit from takes care of the rest. From that point on, AndroidGame will ask MySuperAwesomeStartScreen to update and render itself in the main loop. Note that we pass the MyAwesomeGame instance to MySuperAwesomeStartScreen's constructor.

Finally, here is the Screen class. It is an abstract class rather than an interface because we can implement some common behavior in it in advance (such as storing the Game reference), reducing the work each subclass has to do. The code is as follows:

The Screen class: Screen.java

package com.badlogic.androidgames.framework;

public abstract class Screen {

    protected final Game game;

    public Screen(Game game) {
        this.game = game;
    }

    public abstract void update(float deltaTime);

    public abstract void present(float deltaTime);

    public abstract void pause();

    public abstract void resume();

    public abstract void dispose();
}

The constructor receives a Game instance and stores it in a final field that all subclasses can access. Through this mechanism we can accomplish two things:

Through the Game instance we can play audio, draw to the screen, read user input, and read and write files.

At the right moment we can install a new current screen by calling Game.setScreen().

The Screen.update() and Screen.present() methods update the screen's state and present it to the user; the Game instance calls them once per iteration of the main loop.

The Screen.pause() and Screen.resume() methods are called when the game is paused or resumed; they too are invoked by the Game instance, which notifies the current screen.

The Screen.dispose() method is called by the Game instance when Game.setScreen() replaces the current screen. Through it the Game instance disposes of the old screen, freeing all of its system resources to make as much memory as possible available to the new screen. The dispose() method is also a screen's last chance to persist any state it wants to keep.
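To close the loop, here is a minimal sketch of what a concrete screen such as MySuperAwesomeStartScreen might look like. The drawing and screen-switching logic is purely illustrative, and the MainMenuScreen class is hypothetical.

package com.badlogic.androidgames.framework;     // hypothetical placement

public class MySuperAwesomeStartScreen extends Screen {

    public MySuperAwesomeStartScreen(Game game) {
        super(game);                             // store the Game reference
    }

    public void update(float deltaTime) {
        // Illustrative: tap anywhere to move on to the next screen.
        if (!game.getInput().getTouchEvents().isEmpty())
            game.setScreen(new MainMenuScreen(game));    // hypothetical next screen
    }

    public void present(float deltaTime) {
        Graphics g = game.getGraphics();
        g.clear(0xFF000000);                     // clear to black
        // g.drawPixmap(logo, 0, 0);             // draw whatever this screen shows
    }

    public void pause() { }
    public void resume() { }
    public void dispose() { }
}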
