Use OpenGL ES hybrid scaling to adapt the video buffer to the display size


When developing a software-rendered game, scaling the video buffer to fit the display is one of the toughest issues. On a platform with a wide variety of resolutions, such as the open Android ecosystem, the problem becomes even more troublesome. As developers, we must find the best balance between performance and display quality. As we saw in Chapter 2, there are three ways to scale a video buffer, listed here from slowest to fastest.

Software scaling: the slowest of the three, but the easiest to implement. It is the best choice for older devices without a GPU, although most smartphones now support hardware acceleration.
Hybrid scaling: this method combines software rendering (creating the image buffer) with hardware rendering (drawing it to the display). It is fast and can render an image buffer to any screen with a resolution greater than 256x256.
Hardware-accelerated scaling: the fastest of the three, but the most difficult to implement. Depending on the complexity of the game, it also requires a more powerful GPU. With good hardware, this method can produce amazing quality and results, but on a platform with fragmented hardware, such as Android, it is a difficult choice.

Here we choose the second method, which is also the best choice on a fragmented platform: you have a software renderer and want to adapt the game to a display of any resolution. This method is well suited to emulators, arcade games, and simple shooters, and it performs well on low-end, mid-range, and high-end devices alike.

Next we introduce hybrid scaling and discuss why it is the most practical option. Then we study its implementation in detail, including how to initialize the surface and how to draw the scaled image through a texture.
1. Why use hybrid scaling?
The principle behind this scaling technique is simple:
Your game creates an image buffer of a given size (usually in the RGB565 pixel format, the most common format on mobile devices), for example 320x240, a typical emulator size.
When that 320x240 image has to be scaled to a tablet-sized display (1024x768) or to any other resolution, we could scale it in software, but that would be intolerably slow. To scale in hybrid mode, we instead create an OpenGL ES texture, copy the 320x240 image into it, and render the texture onto a GL quad.
The texture is then scaled to the display size (1024x768) by the hardware, which significantly improves game performance.
From the implementation perspective, the process looks like this:
Initialize the OpenGL ES texture: a hardware surface must be created when the game's video is initialized. It consists of a simple texture into which the video image will be drawn (see Code Lists 1 and 2).
Draw the image buffer into the texture: at the end of each game-loop iteration, draw the video image into the texture, which is then scaled by the hardware to the display size (see Code List 3); a rough sketch of how both steps fit into a game loop follows.
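The following minimal sketch is not part of the original code: the JNI_* functions are the ones defined in the code lists below, while the buffer size, game_render_frame, and the two callbacks are hypothetical names used only to show where each step belongs.

// Hypothetical glue code: only the JNI_* functions come from the code lists below.
extern void JNI_RGB565_SurfaceInit(int w, int h);
extern void JNI_RGB565_Flip(unsigned short *pixels, int width, int height);
// Assumed software renderer that fills an RGB565 buffer (not part of this article)
extern void game_render_frame(unsigned short *buf, int w, int h);

#define GAME_W 320
#define GAME_H 240

static unsigned short framebuffer[GAME_W * GAME_H]; // RGB565 image buffer

// Called once, when the GL context is ready (video initialization)
void on_video_init(void)
{
    JNI_RGB565_SurfaceInit(GAME_W, GAME_H);
}

// Called at the end of every game-loop iteration
void on_frame(void)
{
    game_render_frame(framebuffer, GAME_W, GAME_H); // software rendering
    JNI_RGB565_Flip(framebuffer, GAME_W, GAME_H);   // texture upload + hardware scaling
}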
Code List 1: create an empty texture in RGB565 format

// Texture ID
static unsigned int mTextureID;

// X and Y offsets at which the image is drawn into the texture
static int xoffset;
static int yoffset;

/**
 * Create an empty texture in RGB565 format.
 * Parameters: (w, h) texture width and height
 *             (x_offset, y_offset) X and Y offsets at which the image is drawn into the texture
 */
static void CreateEmptyTextureRGB565(int w, int h, int x_offset, int y_offset)
{
    int size = w * h * 2;
    xoffset = x_offset;
    yoffset = y_offset;

    // Temporary buffer used to initialize the texture
    unsigned short *pixels = (unsigned short *)malloc(size);
    memset(pixels, 0, size);

    // Initialize GL state
    glDisable(GL_DITHER);
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_FASTEST);
    glClearColor(0.5f, 0.5f, 0.5f, 1);
    glShadeModel(GL_SMOOTH);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);

    // Create the texture
    glGenTextures(1, &mTextureID);
    glBindTexture(GL_TEXTURE_2D, mTextureID);

    // Texture parameters
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // Allocate the texture in RGB565 format
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_RGB,
                 GL_UNSIGNED_SHORT_5_6_5, pixels);

    free(pixels);
}

Code List 1 shows the implementation of CreateEmptyTextureRGB565, which creates an empty texture in RGB565 format to draw into. Its parameters are as follows:
w and h: the width and height of the texture to create.
x_offset and y_offset: the X and Y offsets at which the video image is drawn into the texture. Why are these parameters needed? Keep reading.
To create a texture in OpenGL, you only need to call:

glGenTextures(1, &mTextureID);
glBindTexture(GL_TEXTURE_2D, mTextureID);

Here mTextureID is an integer variable that stores the texture ID. You then set the following texture parameters:
GL_TEXTURE_MIN_FILTER: specifies the minification filter, used when the textured pixel maps to an area larger than a single texture element. GL_NEAREST returns the value of the texture element nearest (in Manhattan distance) to the center of the pixel being textured.
GL_TEXTURE_MAG_FILTER: specifies the magnification filter, used when the textured pixel maps to an area smaller than or equal to a single texture element. GL_LINEAR returns the weighted average of the four texture elements nearest to the center of the pixel being textured.
GL_TEXTURE_WRAP_S: sets the wrap mode for the S axis of texture coordinates to GL_CLAMP_TO_EDGE, clamping the coordinates to the range [0, 1]. When a single image is mapped onto an object, this prevents the image from wrapping around at the edges.
GL_TEXTURE_WRAP_T: sets the wrap mode for the T axis of texture coordinates to GL_CLAMP_TO_EDGE.
Finally, we specify a two-dimensional texture with the glTexImage2D function and the following parameters:
GL_TEXTURE_2D: specifies that the target is a two-dimensional texture.
Level: specifies the level of detail; 0 is the base image level.
Internal format: specifies the color components of the texture; RGB in this example.
Width and height: the size of the texture, which must be a power of 2.
Format: specifies the format of the pixel data; it must match the internal format.
Type: specifies the data type of the pixel data; here GL_UNSIGNED_SHORT_5_6_5, the 16-bit RGB565 format.
Pixels: a pointer to the image data in memory, which must be encoded as RGB565.
Note: the texture size must be a power of 2, such as 256, 512, or 1024, but the video image to be displayed can be any size. This means the texture size must be the nearest power of 2 greater than or equal to the size of the video image. We will come back to this in detail later; a small illustration of the rounding rule follows below.
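As a small, self-contained illustration of that rounding rule (not part of the original code; getBestTexSize in Code List 2 applies the same idea with an upper bound):

#include <stdio.h>

/* Round n up to the next power of two, starting from the 256 minimum
 * used throughout this article. */
static int next_pot(int n)
{
    int p = 256;
    while (p < n)
        p *= 2;
    return p;
}

int main(void)
{
    /* A 320x240 image therefore needs a 512x256 texture. */
    printf("320x240 -> %dx%d\n", next_pot(320), next_pot(240));
    return 0;
}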
Now let's look at the actual implementation of hybrid video scaling. The next two sections show how to initialize the scaled surface and how to implement the actual drawing.
2. Initialize the surface
To scale correctly, you must make sure the texture size is greater than or equal to the size of the video image; otherwise you will see only a white or black screen when the image is rendered. In Code List 2, the JNI_RGB565_SurfaceInit function guarantees a valid texture size: given the image width and height, it calls getBestTexSize to obtain the closest texture size that will fit, and then calls CreateEmptyTextureRGB565 to create the empty texture. Note that if the image is smaller than the texture, X and Y offsets are computed so that the image is centered on the screen.
Code List 2: initialize the surface

// Get the next POT texture size >= the image size (w, h)
static void getBestTexSize(int w, int h, int *tw, int *th)
{
    int width = 256, height = 256;

#define MAX_WIDTH 1024
#define MAX_HEIGHT 1024

    while (width < w && width < MAX_WIDTH) { width *= 2; }
    while (height < h && height < MAX_HEIGHT) { height *= 2; }

    *tw = width;
    *th = height;
}

/**
 * Initialize the RGB565 surface.
 * Parameters: (w, h) image width and height
 */
void JNI_RGB565_SurfaceInit(int w, int h)
{
    // Minimum texture width and height
    int texw = 256;
    int texh = 256;

    // Get the texture size (must be POT) >= w x h
    getBestTexSize(w, h, &texw, &texh);

    // Center the image on the screen?
    int offx = texw > w ? (texw - w) / 2 : 0;
    int offy = texh > h ? (texh - h) / 2 : 0;

    if (w > texw || h > texh)
        printf("Error: Invalid surface size %dx%d", w, h);

    // Create the OpenGL texture used to render the video
    CreateEmptyTextureRGB565(texw, texh, offx, offy);
}

3. Draw into the texture
Finally, to display the image on the screen (also known as flipping the surface), we call the JNI_RGB565_Flip function with a pixel array (encoded as RGB565) and the size of the image. JNI_RGB565_Flip draws the image into the texture by calling DrawIntoTextureRGB565 and then swaps the buffers. Note that the buffer swap is implemented in Java, not in C, so we need a way to call the Java swap function. We can invoke a Java method through JNI to perform the swap (see Code List 3; a sketch of such a JNI call appears after the code walkthrough below).
Code List 3: draw the image buffer into the texture using a quad

// X, Y, Z coordinates of the quad vertices
static const float vertices[] = {
    -1.0f, -1.0f, 0,
     1.0f, -1.0f, 0,
     1.0f,  1.0f, 0,
    -1.0f,  1.0f, 0
};

// Texture coordinates of the quad (0-1)
static const float coords[] = {
    0.0f, 1.0f,
    1.0f, 1.0f,
    1.0f, 0.0f,
    0.0f, 0.0f,
};

// Quad vertex indices
static const unsigned short indices[] = { 0, 1, 2, 3 };

/**
 * Draw an array of pixels (RGB565 unsigned shorts) to the full screen
 * using a quad.
 */
static void DrawIntoTextureRGB565(unsigned short *pixels, int w, int h)
{
    // Clear the screen
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Enable vertex and texture-coordinate arrays
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, mTextureID);

    glTexSubImage2D(GL_TEXTURE_2D, 0, xoffset, yoffset, w, h, GL_RGB,
                    GL_UNSIGNED_SHORT_5_6_5, pixels);

    // Draw the quad
    glFrontFace(GL_CCW);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glEnable(GL_TEXTURE_2D);
    glTexCoordPointer(2, GL_FLOAT, 0, coords);
    glDrawElements(GL_TRIANGLE_FAN, 4, GL_UNSIGNED_SHORT, indices);
}

// Flip the surface (draw into the texture)
void JNI_RGB565_Flip(unsigned short *pixels, int width, int height)
{
    if (!pixels) {
        return;
    }
    DrawIntoTextureRGB565(pixels, width, height);

    // The GLES buffers must be swapped here
    jni_swap_buffers();
}

Rendering the texture with OpenGL works as follows:
(1) Clear the color and depth buffers with glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT).
(2) Enable the client states for the vertex array and the texture-coordinate array, so they are used when glDrawElements is called.
(3) Select the active texture unit with glActiveTexture; the initial value is GL_TEXTURE0.
(4) Bind the previously created texture to the texturing target. GL_TEXTURE_2D (a two-dimensional texture) is the binding target, and mTextureID is the texture ID.
(5) Upload a two-dimensional texture subimage with glTexSubImage2D. Its parameters are:
GL_TEXTURE_2D: specifies the target texture type.
Level: specifies the level of detail; 0 is the base image level.
xoffset: the X offset within the texture array at which the subimage is placed.
yoffset: the Y offset within the texture array at which the subimage is placed.
Width: the width of the texture subimage.
Height: the height of the texture subimage.
Format: specifies the format of the pixel data.
Type: specifies the data type of the pixel data.
Data: a pointer to the image data in memory.
(6) Draw the quad from its vertices, texture coordinates, and indices by calling:
glFrontFace: defines the winding of front-facing polygons (counterclockwise here).
glVertexPointer: defines the quad's vertex array; each vertex has 3 components of type GL_FLOAT, and the stride between consecutive vertices is 0.
glTexCoordPointer: defines the quad's texture-coordinate array; each coordinate has 2 components of type GL_FLOAT, and the stride is 0.
glDrawElements: renders the polygon as a triangle fan (GL_TRIANGLE_FAN) with 4 vertices, using indices of type GL_UNSIGNED_SHORT and a pointer to the index array.
Note: in Code List 3, the quad's vertex coordinates lie in the range [−1, 1] on both axes. This is because OpenGL's default coordinate system runs from −1 to 1 on each axis, with the origin (0, 0) at the center of the window.
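The jni_swap_buffers() call at the end of JNI_RGB565_Flip is not shown in this article. Below is a minimal sketch of how such a call into Java might look via JNI; the cached JavaVM, the class, and the static swapBuffers() method are assumptions for illustration, not the article's actual implementation.

#include <jni.h>

/* Hypothetical sketch: g_vm, g_viewClass, and g_swapMethod would be cached
 * in JNI_OnLoad (not shown); the Java side performs the actual buffer swap. */
static JavaVM   *g_vm;
static jclass    g_viewClass;   /* global reference to the GL view class */
static jmethodID g_swapMethod;  /* static void swapBuffers() */

void jni_swap_buffers(void)
{
    JNIEnv *env = NULL;
    if ((*g_vm)->AttachCurrentThread(g_vm, &env, NULL) != JNI_OK || env == NULL)
        return;
    /* Call back into Java, where the GL surface swaps its buffers. */
    (*env)->CallStaticVoidMethod(env, g_viewClass, g_swapMethod);
}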

In an ideal world we would not have to worry much about the size of the video buffer, especially with a purely software renderer. That is not the case when OpenGL is used to scale video on Android: here the buffer size is crucial. Next, you will learn how to deal with a video buffer of arbitrary size, something OpenGL does not handle gracefully on its own.
4. What happens when the image size is not a power of 2
As mentioned above, hybrid scaling works perfectly when the image size is a power of 2. However, the image buffer may well not be a power of 2; the demo engine discussed in this chapter, for example, uses a 320x240 video buffer. In that case the image is still scaled, but only to a percentage of the texture size. You can see this effect in Figures 2 and 3.

In Figure 2, the sizes involved are:
Device display: 859x480
Texture: 512x256
Image: 320x240
As we can see, the image is scaled to 62% of the texture width (320/512 * 100) and 93% of its height (240/256 * 100). As a result, on any device the image ends up covering only about 62% x 93% of the resolution the device provides, whatever that may be. Now let's look at Figure 3.

Figure 3: scaling an image whose size is a power of 2
In Figure 3, the sizes involved are:
Device display: 859x480
Texture: 512x256
Image: 512x256
In Figure 3 the image is scaled to 100% of the resolution provided by the device, which is exactly what we want. But what if the image is not a power of 2? To solve this problem, we should:
(1) Use software scaling to zoom the 320x240 image up to the nearest power-of-2 size (512x256).
(2) Convert the scaled surface to an RGB565 image, compatible with the DrawIntoTextureRGB565 function described earlier.
(3) Draw the texture, letting the hardware scale it to the resolution of the display.
This solution is a little slower than the method described earlier, but it is still much faster than scaling purely in software, especially on high-resolution devices such as tablets.
Code List 4 shows how to scale an SDL surface using the popular SDL_gfx library.
Code List 4: scale the image with the SDL_gfx library

void JNI_Flip(SDL_Surface *surface)
{
    if (zoom) {
        // The zoomed surface is 8-bit if the source is 8-bit; otherwise it is 32-bit RGBA!
        SDL_Surface *sized = zoomSurface(surface, zoomx, zoomy, SMOOTHING_OFF);
        JNI_FlipByBPP(sized);

        // Must be freed!
        SDL_FreeSurface(sized);
    }
    else {
        JNI_FlipByBPP(surface);
    }
}

Zoom and draw
To zoom the SDL surface, you simply call zoomSurface from the SDL_gfx library with:
(1) An SDL surface.
(2) The horizontal zoom factor.
(3) The vertical zoom factor.
(4) SMOOTHING_OFF, which disables anti-aliasing so that drawing stays fast.
Next, we flip the SDL surface according to its color depth (bits per pixel). Code List 5 shows how this is done for an 8-bit RGB surface.
Code List 5: flip the SDL surface according to its color depth

/**
 * Flip the SDL surface according to its bits per pixel.
 */
static void JNI_FlipByBPP(SDL_Surface *surface)
{
    int bpp = surface->format->BitsPerPixel;

    switch (bpp) {
    case 8:
        JNI_Flip8Bit(surface);
        break;
    case 16:
        // Flip 16-bit RGB (surface);
        break;
    case 32:
        // Flip 32-bit RGB (surface);
        break;
    default:
        printf("Invalid depth %d for surface of size %dx%d", bpp, surface->w,
               surface->h);
    }
}

/**
 * Flip an 8-bit SDL surface.
 */
static void JNI_Flip8Bit(SDL_Surface *surface)
{
    int i;
    int size = surface->w * surface->h;
    unsigned short pixels[size]; // RGB565
    SDL_Color *colors = surface->format->palette->colors;

    for (i = 0; i < size; i++) {
        unsigned char pixel = ((unsigned char *)surface->pixels)[i];
        pixels[i] = ((colors[pixel].r >> 3) << 11)
                  | ((colors[pixel].g >> 2) << 5)
                  | (colors[pixel].b >> 3); // RGB565
    }
    DrawIntoTextureRGB565(pixels, surface->w, surface->h);
    jni_swap_buffers();
}

Given an SDL surface, we check its pixel format via surface->format->BitsPerPixel and, based on that value, build an RGB565 pixel array that DrawIntoTextureRGB565 can consume:

for (i = 0; i < size; i++) {
    unsigned char pixel = ((unsigned char *)surface->pixels)[i];
    // RGB565
    pixels[i] = ((colors[pixel].r >> 3) << 11)
              | ((colors[pixel].g >> 2) << 5)
              | (colors[pixel].b >> 3);
}

The red, green, and blue values of each pixel come from the surface palette:

SDL_Color *colors = surface->format->palette->colors;

RED:   colors[pixel].r
GREEN: colors[pixel].g
BLUE:  colors[pixel].b

To build an RGB565 pixel, we discard the least-significant bits of each color component:

colors[pixel].r >> 3   (8 - 3 = 5 bits)
colors[pixel].g >> 2   (8 - 2 = 6 bits)
colors[pixel].b >> 3   (8 - 3 = 5 bits)

Then each component is shifted into its position within the 16-bit value (5 + 6 + 5 = 16, hence RGB565):

pixels[i] = (RED << 11) | (GREEN << 5) | BLUE;
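As a standalone sanity check of this packing (a minimal sketch, not part of the original code), the following program prints the RGB565 values for pure red, green, blue, and white:

#include <stdio.h>

/* Pack an 8-bit-per-channel color into RGB565: drop the low bits of each
 * component and shift it into place (5 + 6 + 5 = 16 bits). */
static unsigned short rgb888_to_rgb565(unsigned char r, unsigned char g, unsigned char b)
{
    return (unsigned short)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

int main(void)
{
    printf("red=%04x green=%04x blue=%04x white=%04x\n",
           rgb888_to_rgb565(255, 0, 0),     /* 0xf800 */
           rgb888_to_rgb565(0, 255, 0),     /* 0x07e0 */
           rgb888_to_rgb565(0, 0, 255),     /* 0x001f */
           rgb888_to_rgb565(255, 255, 255)  /* 0xffff */);
    return 0;
}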

Finally, the new array is sent to DrawIntoTextureRGB565 along with the image width and height. One question remains: how do we know whether the surface needs to be scaled at all? We decide this when the surface is first created, during video initialization. Code List 6 shows how the software surface is created with SDL.
Code List 6: initialize the surface, scaling if needed

// Should the surface be scaled?
static char zoom = 0;

// Zoom factors
static double zoomx = 1.0;
static double zoomy = 1.0;

/***********************************************************
 * Surface constructor
 * The image is expected to be a power of 2 (256x256, 512x256, ...)
 * so that OpenGL textures can be used for full-screen rendering.
 * If the image is not POT (e.g. 320x240), it will be scaled.
 ***********************************************************/
SDL_Surface *JNI_SurfaceNew(int width, int height, int bpp, int flags)
{
    Uint32 rmask = 0, gmask = 0, bmask = 0, amask = 0;

    // Texture size and offsets
    int realw = 256, realh = 256, offx = 0, offy = 0;

    // The image must be a power of 2 so that OpenGL can scale it
    if (width > 512) {
        Sys_Error("ERROR: invalid image width %d (max POT 512x512)", width);
    }

    // The real w/h must be the POT values closest to w/h,
    // e.g. scale up to 512x256.
    // 256 would also work, but 512 gives higher resolution (and is slower).
    if (width > 256) realw = 512;

    // If the size is not POT, scale it to the closest POT. The options are:
    // 256x256 (fast / low resolution), 512x256 (higher resolution / slower),
    // 512x512 (slowest)
    if ((width != 512 && width != 256) || (height != 256)) {
        zoom = 1;
        zoomx = realw / (float)width;
        zoomy = realh / (float)height;
        offx = offy = 0;

        printf("WARNING Texture of size %dx%d will be scaled to %dx%d zoomx=%.3f zoomy=%.3f",
               width, height, realw, realh, zoomx, zoomy);
    }

    // Create the OpenGL texture used by the renderer
    CreateEmptyTextureRGB565(realw, realh, offx, offy);

    // This is the real surface used by the client to render the video
    return SDL_CreateRGBSurface(SDL_SWSURFACE, width, height, bpp, rmask,
                                gmask, bmask, amask);
}

If the image size is not a power of 2, the zoom flag is set to 1 and the horizontal and vertical zoom factors are computed. CreateEmptyTextureRGB565 is then called with the texture width, height, and X and Y offsets to create the empty texture. Finally, SDL_CreateRGBSurface creates the SDL surface with:
SDL_SWSURFACE: tells SDL to create a software surface.
Width and height: define the surface size.
bpp: the number of bits per pixel (8, 16, 24, or 32) of the surface, that is, its color depth.
rmask, gmask, bmask, and amask: the bit masks for the red, green, blue, and alpha (transparency) components of each pixel. Setting them to 0 lets SDL pick suitable defaults for the given color depth.
Rules of thumb for hybrid scaling
All in all, when using hybrid scaling like this in a game, keep the following rules of thumb in mind:
If you can, always use a video size that is a power of 2: 256x256 or 512x256. Anything above 512 is too costly for this technique.
If you cannot control the video size but still want full-screen display, use SDL software scaling to zoom to the nearest power of 2, as shown earlier, and then let the hardware do the final scaling.
If the video size is greater than 512x512, hybrid scaling may not deliver the performance you need.
