Scaling the video buffer to the display size with an OpenGL ES hybrid mode (Android)

When developing a software-rendered game, scaling the video buffer to fit the display is one of the hardest problems to solve. When you face many different resolutions (as on an open platform such as Android), the problem gets even thornier, and as developers we have to find the best balance between performance and display quality. As we saw in Chapter 2, there are three ways to scale the video buffer, from slowest to fastest:

Software-only scaling: the slowest of the three but the easiest to implement, and the best choice for older devices that have no GPU. Most smartphones today support hardware acceleration, however.
Hybrid (mixed) mode: this approach combines software rendering (the game draws into an image buffer) with hardware rendering (OpenGL ES scales that buffer to the display). It is fast and can render the image on any screen with a resolution greater than 256x256.
Hardware-only mode: the fastest of the three but the hardest to implement. It depends on the complexity of the game and requires a more powerful GPU. With good hardware this method can produce stunning quality and effects, but it is a difficult choice on a fragmented platform such as Android.

Here we choose the second option, which is the best choice on a fragmented platform: you have a software renderer and want to fit the game to a screen of any resolution. This method is well suited to emulators, arcade games, simple shooters, and the like, and it behaves well on low-end, mid-range, and high-end devices alike.

Let's begin by looking at why this hybrid model is the most practical, and then study its implementation in depth, including how to initialize the surface and how to draw into the texture so the actual scaling takes place.
1. Why use mixed scaling
The rationale behind this scaling technique is simple:
Your game draws into an image buffer of a given size, usually in RGB565 pixel format (the most common format on mobile devices). A typical example is 320x240, the usual emulator resolution.
When that 320x240 image has to be scaled up to a tablet-sized display (1024x768) or any other screen, we could scale it in software, but that would be unbearably slow. With hybrid scaling you instead create an OpenGL ES texture, upload the 320x240 image to it, and render it onto a GL quad.
The texture is then scaled by the GPU to fit the screen size (1024x768), and your game's performance improves dramatically.
From an implementation point of view, the process can be described as follows:
Initialize the OpenGL ES texture: the hardware surface must be created during the game's video initialization phase. It consists of a simple texture onto which the video frame to be displayed is rendered (see Listings 1 and 2).
Draw the image buffer into the texture: at the end of the game loop, the video frame is rendered into the texture, which is automatically scaled to fit the screen size (see Listing 3).
Code Listing 1: Creating an empty texture in RGB565 format

// needed headers (not shown in the original excerpt)
#include <stdlib.h>
#include <string.h>
#include <GLES/gl.h>

// texture ID
static unsigned int mTextureID;

// x, y offsets of the image drawn on the texture
static int xoffset;
static int yoffset;

/**
 * Create an empty texture in RGB565 format
 * Parameters: (w, h)               texture width and height
 *             (x_offset, y_offset) x, y offsets of the image drawn on the texture
 */
static void CreateEmptyTextureRGB565(int w, int h, int x_offset, int y_offset)
{
    int size = w * h * 2;
    xoffset = x_offset;
    yoffset = y_offset;

    // pixel buffer
    unsigned short *pixels = (unsigned short *)malloc(size);
    memset(pixels, 0, size);

    // initialize the GL state
    glDisable(GL_DITHER);
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_FASTEST);
    glClearColor(0.5f, 0.5f, 0.5f, 1);
    glShadeModel(GL_SMOOTH);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);

    // create the texture
    glGenTextures(1, &mTextureID);
    glBindTexture(GL_TEXTURE_2D, mTextureID);

    // texture parameters
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // texture in RGB565 format
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_RGB,
                 GL_UNSIGNED_SHORT_5_6_5, pixels);
    free(pixels);
}

Listing 1 shows the implementation of CreateEmptyTextureRGB565, which creates an empty RGB565 texture to draw into, with the following parameters:
w and h: the dimensions of the video frame to display.
x_offset and y_offset: the x and y offsets at which the video frame will be rendered into the texture. Why do we need these parameters? Read on.
To create textures in OpenGL, we only need to call:

glGenTextures(1, &mTextureID);
glBindTexture(GL_TEXTURE_2D, mTextureID);

Here mTextureID is an integer variable that stores the ID of the texture. You then need to set the following texture parameters:
GL_TEXTURE_MIN_FILTER: specifies how the texture is minified when a pixel maps to an area larger than one texture element. Here we use GL_NEAREST, which returns the value of the texture element nearest (in Manhattan distance) to the center of the pixel being textured.
GL_TEXTURE_MAG_FILTER: specifies how the texture is magnified when a pixel maps to an area smaller than or equal to one texture element. Here we use GL_LINEAR, which returns the weighted average of the four texture elements nearest to the center of the pixel being textured.
GL_TEXTURE_WRAP_S: sets the wrap mode for the s axis of texture coordinates. GL_CLAMP_TO_EDGE clamps texture coordinates to the [0,1] range, which effectively prevents wrap-around artifacts when mapping a single image onto an object.
GL_TEXTURE_WRAP_T: sets the wrap mode for the t axis of texture coordinates, also GL_CLAMP_TO_EDGE.
Finally, we specify the two-dimensional texture with the glTexImage2D function and the following parameters:
GL_TEXTURE_2D: the target texture type, a two-dimensional texture.
Level: the level of detail of the texture image; 0 is the base image level.
Internal format: the color components of the texture, in this case GL_RGB.
Width and height: the size of the texture, which must be a power of two.
Format: the format of the pixel data, which must match the internal format.
Type: the data type of the pixel data, here GL_UNSIGNED_SHORT_5_6_5 for the RGB565 (16-bit) format.
Pixels: a pointer to the image data in memory, which must be encoded as RGB565.
Note: the texture size must be a power of two, such as 256, 512, 1024, and so on, but the video frame to be displayed can be any size. This means the texture dimensions must be powers of two greater than or equal to the dimensions of the video frame; for example, a 320x240 frame needs a 512x256 texture. We will come back to this in detail later.
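As an illustration (this helper is not part of the original listings), a dimension can be rounded up to the next power of two with a simple doubling loop; getBestTexSize in Listing 2 does the equivalent inline:

// Sketch: round a dimension up to the next power of two, e.g. 320 -> 512, 240 -> 256.
static int nextPowerOfTwo(int n)
{
    int pot = 1;
    while (pot < n)
        pot *= 2;
    return pot;
}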
Now let's look at the actual implementation of hybrid video scaling. The next two sections show how to initialize the surface for scaling and how to implement the actual drawing.
2. Initializing the surface
For scaling to work, the texture must be at least as large as the video frame you want to display; otherwise you will see a white or black screen when the image is rendered. In Listing 2, the JNI_RGB565_SurfaceInit function makes sure a valid texture size is produced. It takes the width and height of the frame as parameters, calls getBestTexSize to obtain the closest suitable texture size, and finally creates an empty texture by calling CreateEmptyTextureRGB565. Note that if the frame is smaller than the texture, x and y offsets are computed so that the frame is centered on the screen.
Code Listing 2: Initializing the surface

// get the next POT texture size that is >= the image size (w, h)
static void getBestTexSize(int w, int h, int *tw, int *th)
{
    int width = 256, height = 256;
#define MAX_WIDTH  1024
#define MAX_HEIGHT 1024
    while (width < w && width < MAX_WIDTH)    { width *= 2; }
    while (height < h && height < MAX_HEIGHT) { height *= 2; }
    *tw = width;
    *th = height;
}

/**
 * Initialize the RGB565 surface
 * Parameters: (w, h) image width and height
 */
void JNI_RGB565_SurfaceInit(int w, int h)
{
    // minimum texture width and height
    int texw = 256;
    int texh = 256;

    // get a texture size (must be POT) >= w x h
    getBestTexSize(w, h, &texw, &texh);

    // center the image on the screen?
    int offx = texw > w ? (texw - w) / 2 : 0;
    int offy = texh > h ? (texh - h) / 2 : 0;

    if (w > texw || h > texh)
        printf("ERROR: Invalid surface size %dx%d", w, h);

    // create the OpenGL texture used for rendering
    CreateEmptyTextureRGB565(texw, texh, offx, offy);
}

3. Drawing into the texture
Finally, to display the image on the screen (also known as flipping the surface), we call the JNI_RGB565_Flip function with the array of pixels (encoded as RGB565) and the size of the image. JNI_RGB565_Flip draws the image into the texture by calling DrawIntoTextureRGB565 and then swaps the buffers. Note that the buffer swap itself is coded in Java, not in C, so we need a way to invoke that Java method from native code; this is done with a JNI callback (see Listing 3, and the sketch after the discussion of the listing).
Code Listing 3: Drawing the image buffer into the texture and rendering it with a quad

// x, y, z coordinates of the quad vertices
static const float vertices[] = {
    -1.0f, -1.0f, 0,
     1.0f, -1.0f, 0,
     1.0f,  1.0f, 0,
    -1.0f,  1.0f, 0
};

// quad texture coordinates (0..1)
static const float coords[] = {
    0.0f, 1.0f,
    1.0f, 1.0f,
    1.0f, 0.0f,
    0.0f, 0.0f,
};

// quad vertex indices
static const unsigned short indices[] = { 0, 1, 2, 3 };

/**
 * Draw an array of pixels (RGB565 unsigned shorts) full screen using a quad
 */
static void DrawIntoTextureRGB565(unsigned short *pixels, int w, int h)
{
    // clear the screen
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // enable vertex and texture coordinate arrays
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, mTextureID);
    glTexSubImage2D(GL_TEXTURE_2D, 0, xoffset, yoffset, w, h, GL_RGB,
                    GL_UNSIGNED_SHORT_5_6_5, pixels);

    // draw the quad
    glFrontFace(GL_CCW);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glEnable(GL_TEXTURE_2D);
    glTexCoordPointer(2, GL_FLOAT, 0, coords);
    glDrawElements(GL_TRIANGLE_FAN, 4, GL_UNSIGNED_SHORT, indices);
}

// flip the surface (draw into the texture)
void JNI_RGB565_Flip(unsigned short *pixels, int width, int height)
{
    if (!pixels) {
        return;
    }
    DrawIntoTextureRGB565(pixels, width, height);

    // the GLES buffers must be swapped here (done in Java via JNI)
    jni_swap_buffers();
}

Rendering into the texture with OpenGL ES proceeds as follows:
(1) Clear the color and depth buffers with glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT).
(2) Enable the client states needed by glDrawElements: the vertex array and the texture coordinate array.
(3) Select the active texture unit with glActiveTexture; the initial value is GL_TEXTURE0.
(4) Bind the generated texture to its target. GL_TEXTURE_2D (a two-dimensional texture) is the default texture target, and mTextureID is the ID of the texture.
(5) Upload the video frame with glTexSubImage2D, which specifies a two-dimensional texture subimage with the following parameters:
GL_TEXTURE_2D: the target texture type.
Level: the level of detail of the image; 0 is the base image level.
Xoffset: the x offset, in texels, within the texture array.
Yoffset: the y offset, in texels, within the texture array.
Width: the width of the texture subimage.
Height: the height of the texture subimage.
Format: the format of the pixel data.
Type: the data type of the pixel data.
Data: a pointer to the image data in memory.
(6) Draw the quad by passing its vertices, texture coordinates, and indices to the following functions:
glFrontFace: defines counterclockwise (GL_CCW) polygons as front-facing.
glVertexPointer: defines the quad's vertex data array; each vertex has 3 components of type GL_FLOAT, and the stride between consecutive vertices in the array is 0.
glTexCoordPointer: defines the quad's texture coordinate array; each coordinate has 2 components of type GL_FLOAT, with a stride of 0.
glDrawElements: renders the quad as a triangle fan (GL_TRIANGLE_FAN) from 4 indices of type GL_UNSIGNED_SHORT, plus a pointer to the index array.
Note from Listing 3 that the quad's vertex coordinates lie in the [-1, 1] range on both axes. This is because OpenGL's coordinate system runs from -1 to 1, with the origin (0, 0) at the center of the window.
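The jni_swap_buffers() call at the end of Listing 3 is only referenced there; the actual buffer swap lives on the Java side. The following is a minimal sketch of how such a callback could look, assuming a cached JavaVM pointer, a global reference to the Java-side renderer object, and a Java method named swapBuffers(); none of these names come from the original listings:

#include <jni.h>

static JavaVM *g_vm;        // cached in JNI_OnLoad (assumption)
static jobject g_renderer;  // global reference to the Java renderer object (assumption)

static void jni_swap_buffers(void)
{
    JNIEnv *env = NULL;
    // attach the calling native thread to the VM so we can call into Java
    if ((*g_vm)->AttachCurrentThread(g_vm, &env, NULL) != JNI_OK || env == NULL)
        return;

    jclass cls = (*env)->GetObjectClass(env, g_renderer);
    // hypothetical Java method: void swapBuffers()
    jmethodID mid = (*env)->GetMethodID(env, cls, "swapBuffers", "()V");
    if (mid != NULL)
        (*env)->CallVoidMethod(env, g_renderer, mid);
}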

In an ideal world we wouldn't have to worry much about the size of the video buffer (especially with a pure software scaler/renderer). That is not the case when scaling video with OpenGL on Android: here the size of the buffer is critical. Next you'll learn how to deal with video sizes that don't map neatly onto OpenGL textures.
4. What happens when the size of the image is not a power of 2
As mentioned earlier, hybrid scaling works perfectly when the image size is a power of two. Often, however, the image buffer is not a power of two; for example, the engine discussed earlier uses a 320x240 video size. In that case the image is still scaled, but only to a percentage of the texture size. You can see this effect in Figures 2 and 3.

In Figure 2, there are the following dimensions:
Device Monitor: 859x480
Texture: 512x256
Image: 320x240
As we can see, the image is scaled to about 62% of the texture width (320/512) and about 94% of its height (240/256). Therefore, on any device with a resolution greater than 256, the image is displayed at roughly 62% x 94% of the device resolution. Now let's look at Figure 3.

Figure 3: Image scaled to a power-of-two size
In Figure 3, there are the following dimensions:
Device Monitor: 859x480
Texture: 512x256
Image: 512x256
Scaling and drawing
In Figure 3 we see that the image scales to 100% of the device resolution, which is exactly what we want. But what do we do when the image is not a power of two? To solve this problem, we:
(1) Scale the 320x240 image in software up to the nearest power of two (512x256 in this case).
(2) Convert the scaled surface to an RGB565 image so it is compatible with DrawIntoTextureRGB565, described earlier.
(3) Draw it into the texture, letting the hardware scale it to the display resolution.
This solution is slower than the method described before, but it is still faster than pure software scaling, especially when running on high-resolution devices such as tablets.
Listing 4 shows how to use the popular SDL_gfx library to scale an SDL surface.
Code Listing 4: Scaling an image with the SDL_gfx library

void JNI_Flip(SDL_Surface *surface)
{
    if (zoom) {
        // an 8-bit surface is zoomed as 8-bit; otherwise the zoomed surface is 32-bit RGBA!
        SDL_Surface *sized = zoomSurface(surface, zoomx, zoomy, SMOOTHING_OFF);
        JNI_FlipByBPP(sized);

        // must be cleaned up!
        SDL_FreeSurface(sized);
    }
    else {
        JNI_FlipByBPP(surface);
    }
}

Scaling and drawing implementation
To enlarge or shrink an SDL surface, you simply call zoomSurface from the SDL_gfx library with:
(1) The SDL surface.
(2) The horizontal zoom factor.
(3) The vertical zoom factor.
(4) SMOOTHING_OFF: anti-aliasing is disabled so the drawing stays fast.
Next, let's flip the SDL surface according to its color depth (bits per pixel). Listing 5 shows how this is done for an 8-bit RGB surface.
Code Listing 5: Flipping an SDL surface according to its color depth

/**
 * Flip an SDL surface according to its bits per pixel
 */
static void JNI_FlipByBPP(SDL_Surface *surface)
{
    int bpp = surface->format->BitsPerPixel;
    switch (bpp) {
    case 8:
        JNI_Flip8Bit(surface);
        break;
    case 16:
        JNI_Flip16Bit(surface);   // 16-bit flip (not shown in this excerpt)
        break;
    case 32:
        JNI_Flip32Bit(surface);   // 32-bit flip (not shown in this excerpt)
        break;
    default:
        printf("Invalid depth %d for surface of size %dx%d", bpp, surface->w,
               surface->h);
    }
}

/**
 * Flip an 8-bit SDL surface
 */
static void JNI_Flip8Bit(SDL_Surface *surface)
{
    int i;
    int size = surface->w * surface->h;
    unsigned short pixels[size];  // RGB565

    SDL_Color *colors = surface->format->palette->colors;

    for (i = 0; i < size; i++) {
        unsigned char pixel = ((unsigned char *)surface->pixels)[i];
        pixels[i] = ((colors[pixel].r >> 3) << 11)
                  | ((colors[pixel].g >> 2) << 5)
                  |  (colors[pixel].b >> 3);   // RGB565
    }
    DrawIntoTextureRGB565(pixels, surface->w, surface->h);
    jni_swap_buffers();
}

Given an SDL surface, we first check its pixel format (surface->format->BitsPerPixel) and, based on that value, build an array of RGB565 pixels that DrawIntoTextureRGB565 can use:

for (i = 0; i < size; i++) {
    unsigned char pixel = ((unsigned char *)surface->pixels)[i];
    // RGB565
    pixels[i] = ((colors[pixel].r >> 3) << 11)
              | ((colors[pixel].g >> 2) << 5)
              |  (colors[pixel].b >> 3);
}

Extract the red, green, and blue values contained in each pixel from the surface palette:

SDL_Color *colors = surface->format->palette->colors;
// red:   colors[pixel].r
// green: colors[pixel].g
// blue:  colors[pixel].b

To build RGB565 pixels, you need to discard the least significant bits from each color component:

colors[pixel].r >> 3   (8 - 3 = 5 bits)
colors[pixel].g >> 2   (8 - 2 = 6 bits)
colors[pixel].b >> 3   (8 - 3 = 5 bits)

Then shift each component into its position within the 16-bit value (5 + 6 + 5 = 16 bits, hence RGB565):

pixels[i] = (red << 11) | (green << 5) | blue;
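For illustration only, the same packing can be wrapped in a small helper function (this helper is not part of the original listings):

// Sketch: pack one 8-bit-per-channel color into an RGB565 unsigned short.
static unsigned short PackRGB565(unsigned char r, unsigned char g, unsigned char b)
{
    return (unsigned short)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}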

Finally, the new array, together with the image width and height, is passed to DrawIntoTextureRGB565. One last question remains: we need a way to tell whether the surface requires scaling at all. This is decided when the surface is first created, during video initialization. Listing 6 shows how to create a software surface with SDL.
Code Listing 6: Initializing a surface that may need scaling

// should we scale?
static char zoom = 0;

// zoom factors
static double zoomx = 1.0;
static double zoomy = 1.0;

/**********************************************************
 * Surface constructor.
 * The image must be a power of two (256x256, 512x256, ...)
 * to be rendered full screen with an OpenGL texture. If the
 * image is not POT (e.g. 320x240), it will be scaled.
 **********************************************************/
SDL_Surface *JNI_SurfaceNew(int width, int height, int bpp, int flags)
{
    Uint32 rmask = 0, gmask = 0, bmask = 0, amask = 0;

    // texture dimensions and offsets
    int realw = 256, realh = 256, offx = 0, offy = 0;

    // the image must be a power of two so OpenGL can scale it
    if (width > 512) {
        Sys_Error("ERROR: Invalid image width %d (max POT 512x512)", width);
    }

    // the real w/h must be the closest POT values >= w/h;
    // we will zoom to 512x256 (256 would work too, but 512 gives
    // higher resolution at the cost of speed)
    if (width > 256) realw = 512;

    // if the size is not POT, zoom to the closest POT size; the options are:
    // 256x256 (fast / low resolution), 512x256 (higher resolution / slower),
    // 512x512 (slowest)
    if ((width != 512 && width != 256) || (height != 256)) {
        zoom = 1;
        zoomx = realw / (float)width;
        zoomy = realh / (float)height;
        offx = offy = 0;
        printf("WARNING: texture of size %dx%d is scaled to %dx%d zoomx=%.3f zoomy=%.3f",
               width, height, realw, realh, zoomx, zoomy);
    }

    // create the OpenGL texture used by the renderer
    CreateEmptyTextureRGB565(realw, realh, offx, offy);

    // this is the real surface used by the client side to render the video
    return SDL_CreateRGBSurface(SDL_SWSURFACE, width, height, bpp,
                                rmask, gmask, bmask, amask);
}

If the image size is not a power of two, the zoom flag is set to 1 and the horizontal and vertical scaling factors are computed. Then CreateEmptyTextureRGB565 creates an empty texture from the texture width, height, and x, y offsets. Finally, SDL_CreateRGBSurface creates the SDL surface with the following parameters (a short usage sketch follows the list):
SDL_SWSURFACE: tells SDL to create a software surface.
Width and height: the dimensions of the surface.
BPP: the bits per pixel (color depth) of each pixel in the surface (8, 16, 24, or 32).
Rmask, Gmask, Bmask, and Amask: the masks for the red, green, blue, and alpha (transparency) components of the pixel format. Setting them to 0 lets SDL pick default masks for the given depth.
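For context, here is a minimal usage sketch; the 320x240 size, the 8-bit depth, and the flags value are illustrative assumptions, not taken from the original text:

// Sketch: create the (possibly scaled) software surface at video-initialization time.
SDL_Surface *screen = JNI_SurfaceNew(320, 240, 8, SDL_SWSURFACE);
if (screen == NULL) {
    printf("Unable to create the SDL surface\n");
}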
Rules of thumb for mixed scaling
All in all, when using mixed scaling like this in a game, keep the following rules of thumb in mind:
If you can, always set the video size to a power of two: 256x256 or 512x256. Anything above 512 is too expensive for this technique.
If you cannot control the video size but want full-screen display, scale the image in software with SDL to the nearest power of two, as described earlier, and then let the hardware scale it to the display.
If the video size is greater than 512x512, hybrid scaling may not be worthwhile because of the performance cost.
