Texture compression and texture compression formats
CCImage is the core image-processing class of cocos2d-x: it touches a wide range of low-level API calls and texture-format operations, which makes it a natural starting point for extending the cocos2d-x engine.
#define CC_GL_ATC_RGB_AMD 0x8C92
#define CC_GL_ATC_RGBA_EXPLICIT_ALPHA_AMD 0x8C93
#define CC_GL_ATC_RGBA_INTERPOLATED_ALPHA_AMD 0x87EE
CCImage opens with these three macro definitions, which are the texture types of the ATITC format:
- ATC_RGB_AMD (RGB textures)
- ATC_RGBA_EXPLICIT_ALPHA_AMD (RGBA textures using explicit alpha encoding)
- ATC_RGBA_INTERPOLATED_ALPHA_AMD (RGBA textures using interpolated alpha encoding)
Before going further, a few words on texture compression. Most games today rely heavily on textures to enrich their scenes, and without compression textures occupy video memory byte for byte. Common texture depths are 16, 24, and 32 bits; even a 1024×1024 16-bit texture takes a full 2 MB of video memory. To speed up rendering and reduce aliasing, Mipmaps can be used, turning a texture into a series of precomputed, progressively smaller filtered images; of course, mipmaps in turn cost additional memory.
Our common image file formats are:
- BMP: the standard Windows image file format. A bitmap storage format with selectable color depth (1-bit, 4-bit, 8-bit, or 24-bit), essentially without compression;
- TGA: common for digital photos and for high-quality images produced by ray tracing. It supports both uncompressed storage and lossless run-length-encoded (RLE) compression, and it can carry an alpha channel, which allows irregularly shaped images. It combines image quality close to BMP with a file-size advantage approaching JPEG;
- JPG: stores a single 24-bit color bitmap in a platform-independent format with a high degree of lossy compression. Working in a YUV-style color space, it compresses regions of similar tone very well, but it handles areas of sharp brightness contrast and solid color poorly;
- GIF: a lossless format based on the variable-length LZW compression algorithm;
- PNG: a lossless bitmap storage format at 8, 24, or 32 bits per pixel.
To display a JPG image, it must first be loaded and decoded (still a noticeable battery cost on handheld devices), decompressed into a raw pixel format, and then uploaded to the video card. Without hardware support on the graphics card, storing textures in such compressed formats is of questionable value. Precisely because modern games lean so heavily on textures, the pressure on the display bus is enormous, so many vendors provide hardware real-time decompression. Unfortunately, no single compressed format is supported across all vendors. OpenGL ES defines only a standard interface:
GL_API void GL_APIENTRY glCompressedTexImage2D (GLenum target, GLint level, GLenum internalformat, GLsizei width, GLsizei height, GLint border, GLsizei imageSize, const GLvoid* data);
However, there is still no uniform standard for the compressed texture data itself, so once compressed textures are used, cross-platform portability is lost.
The earlier discussion of texture optimization introduced common texture formats such as RGB565, RGBA4444, RGBA5551, RGB888, and RGBA8888. At this point the difference between a file format and a texture format should be clear. A file format is a special encoding an image uses to store its information on disk or in memory; the GPU cannot consume it directly, because hardware built for vector math is ill-suited to the complex sequential decoding these formats require. When a game reads such files, the CPU still has to decode them into a pixel format before handing them to the GPU. A texture format, by contrast, is a pixel format the GPU understands natively and can address and sample quickly. For example, DDS, a file format widely used in game development, can contain textures in RGBA4444, RGBA8888, or DXT1 format; the DDS file acts as a kind of container.
One might wonder why a method similar to image file compression cannot simply be applied to texture maps. The reason is that when the display chip samples a texture, it performs random access: it may need texels in any order. General-purpose compression methods such as JPEG use run-length-style encodings and, simply put, must be decoded in sequence, so they cannot be used for textures.
Texture compression methods are divided into two types:
- Changing the color space: for example, 3dfx's YAB format, in which each pixel needs only 8 bits yet approaches 16-bit quality. Inevitably, though, this reduces the number of representable colors, so color variation across the texture is limited.
- Color palette: an indexed approach, similar to OpenGL's paletted textures. With a 256-color palette, a texture can be stored at 8 bits per pixel. Although each palette entry spans a large color space (24 or 32 bits), the total number of distinct colors cannot exceed 256, so its applicability is likewise limited.
Block-based texture compression is what is commonly used today. The texture is cut into many small blocks and each block is compressed independently; S3TC, for example, uses 4×4 blocks. Each block can then be processed on its own (typically with vector quantization or a similar transform), while the display chip retains random access to individual blocks, which is exactly what textures require.
The block size affects the compression result. In general, larger blocks allow higher compression ratios, but they also carry more overhead: since the display chip reads texture data a block at a time, the larger the block, the more data must be fetched and decoded per access. The block size therefore cannot be increased arbitrarily.
Texture compression has four characteristics that set it apart from other image compression techniques:
- Decompression speed: since compressed textures are ideally rendered from directly, decompression must be as fast as possible to minimize the performance impact.
- Random access: because the order in which texels will be accessed is almost impossible to predict, any texture compression algorithm must allow random access to texels. Hence almost all such algorithms compress and store texels in blocks; when a texel is accessed, only the few texels in the same block are read and decompressed. This requirement rules out many high-ratio image compression methods, such as JPEG and run-length encoding.
- Compression ratio and image quality: because human vision is forgiving of small errors, image rendering tolerates lossy compression better than most other application areas.
- Encoding speed: texture compression places little demand on compression speed, since in most cases a texture only needs to be compressed once.
Because the data access pattern is known in advance, texture compression is often built into the drawing pipeline itself, with compressed data decoded on the fly during rendering; conversely, the pipeline can exploit texture compression to reduce bandwidth and storage requirements. In texture mapping, compressed textures behave essentially like uncompressed ones: they can hold color data or other data such as bump maps or normal maps, and they work together with mipmapping and anisotropic filtering.
Texture format support on mainstream mobile processors (an ARM mobile processor is really an SoC, a system on a chip integrating the CPU, the GPU, and often the modem, so when discussing game compatibility the CPU effectively determines the GPU):
| CPU (SoC) | GPU |
| --- | --- |
| Texas Instruments | PowerVR series |
| Samsung Orion | Mali series |
| Qualcomm | Adreno series |
| NVIDIA | GeForce series |
| HiSilicon K3V2 | Vivante GC |
- Imagination PowerVR: PVRTC, ETC1
- Representative models: Apple iPhone and iPad, Samsung I9000 and P3100
- Qualcomm's Adreno series:
- Adreno 2xx series: 3Dc and ATITC (based on ATI)
- Adreno 320: ETC, 3Dc and ATITC (based on ATI), ETC2
- Representative Models: HTC G10, G14, Xiaomi 1, and Xiaomi 2
- ARM Mali series: ETC
- Mali-300/400 series: ETC1
- Mali-T600 series: adds ASTC
- Representative models: Samsung Galaxy SII, Galaxy SIII, Galaxy Note 1, and Galaxy Note 2 (some variants)
- NVIDIA Geforce series: ETC, S3TC (DXT1, DXT3, and DXT5)
- Tegra 2: ATITC
- Tegra K1: ASTC
- Representative models: Google Nexus 7, HTC One X
- Vivante GC series: ETC, S3TC
Mainstream texture compression standards:
- ETC1: the baseline texture compression standard of OpenGL ES 2.0, supported by most mobile GPUs. It has no alpha channel, so it can only compress opaque textures; nearly every Android device offers GPU-accelerated ETC1 support;
- ETC2: the texture compression format introduced with OpenGL ES 3.0, still gaining adoption; apart from Qualcomm's Adreno 320, few mobile GPUs support it yet. It fixes ETC1's missing alpha channel and supports high-quality RGBA (RGB + alpha) compression;
- PVRTC: Imagination's compression format for PowerVR GPUs, offering 2 and 4 bits-per-pixel modes with alpha support; it is the standard compressed texture format on iOS devices;
- S3TC: also called DXTn or DXTC, the compression commonly found inside DDS files. It offers a good balance of compression speed and ratio, is GPU-accelerated, and is the usual compressed format on desktop graphics cards;
- EAC: mainly used for one- and two-channel data;
- ASTC: compression speed and quality are better than S3TC;
To be continued...
Copyright Disclaimer: This article is an original article by the blogger and cannot be reproduced without the permission of the blogger.