YUV/RGB Formats and a Fast Conversion Algorithm

Source: Internet
Author: User
Tags: color representation

1 Preface
The colors of nature are ever changing. To measure color quantitatively, we need color space models that describe colors numerically. Because human perception of color is a complex combination of physiological and psychological effects, various color space models have been devised to better and more accurately serve different application fields. The ones we see most often are RGB, CMYK, YIQ, YUV, and HSI.

In the digital multimedia field, the color spaces we most often encounter are RGB and YUV (each is really a family of concrete color representations and models, such as sRGB, Adobe RGB, YUV422, and YUV420). RGB describes color through the additive mixing of three primary lights, while YUV describes color in terms of luminance and color difference.

Even restricted to just the RGB and YUV color spaces, the underlying theory is rich and complex, and I do not claim deep expertise in it. This article therefore focuses on the engineering and algorithmic side.

2 YUV-related Color Space Models
2.1 YUV, YIQ, and YCrCb
We often confuse the YUV model with the YIQ and YCrCb models.

In fact, the YUV model is used in the PAL TV system. Y denotes luminance; U and V are not abbreviations of any words.

The YIQ model, used in the NTSC TV system, is similar to YUV. The I and Q components of the YIQ color space correspond to the U and V components of the YUV space rotated by 33 degrees.

The YCbCr color space is derived from the YUV color space and is mainly used in digital TV systems. In the conversion from RGB to YCbCr, both input and output are 8-bit binary values.

The conversion equations between the three models and RGB are as follows:

RGB -> YUV:

Y = 0.299R + 0.587G + 0.114B
U = -0.147R - 0.289G + 0.436B
V = 0.615R - 0.515G - 0.100B

which is equivalent to:

Y = 0.299R + 0.587G + 0.114B, U = 0.493(B - Y), V = 0.877(R - Y)

RGB -> YIQ:

Y = 0.299R + 0.587G + 0.114B
I = 0.596R - 0.274G - 0.322B
Q = 0.211R - 0.523G + 0.312B

RGB -> YCrCb:

Y  = 0.299R + 0.587G + 0.114B
Cb = -0.169R - 0.331G + 0.500B + 128
Cr = 0.500R - 0.419G - 0.081B + 128
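A concrete C sketch of the standard full-range RGB-to-YCbCr conversion (the coefficients are the common BT.601 full-range values; the function name, rounding style, and pointer interface are illustrative choices, not from any standard API):

```c
#include <stdint.h>

/* Sketch: convert one 8-bit RGB pixel to full-range BT.601 YCbCr.
 * Rounds to nearest; no clamping is needed for in-range RGB input. */
static void rgb_to_ycbcr(uint8_t r, uint8_t g, uint8_t b,
                         uint8_t *y, uint8_t *cb, uint8_t *cr)
{
    double yf  =  0.299 * r + 0.587 * g + 0.114 * b;
    double cbf = -0.169 * r - 0.331 * g + 0.500 * b + 128.0;
    double crf =  0.500 * r - 0.419 * g - 0.081 * b + 128.0;

    *y  = (uint8_t)(yf  + 0.5);  /* round to nearest */
    *cb = (uint8_t)(cbf + 0.5);
    *cr = (uint8_t)(crf + 0.5);
}
```

Note that a neutral gray input maps to Cb = Cr = 128, the zero point of the offset chroma axes.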

From these formulas we can see that the U/V (or Cb/Cr) signals are actually the blue and red difference signals; they indirectly represent the intensity of blue and red. Understanding this helps greatly in understanding the color transformation process.

The "YUV format" we speak of in the digital multimedia field is, strictly speaking, a family of storage formats built on the YCrCb color space model (including YUV444, YUV422, YUV420, and YUV420P), not the traditional YUV model used in PAL analog TV. The main difference between these YUV formats lies in how the U and V data are sampled and stored, which is not covered here.

In camera sensors, the most common YUV model is the YUV422 format, because it uses four bytes to describe two pixels and is thus bandwidth-compatible with the RGB565 model. This simplifies the hardware and software interface design between the camera sensor and the camera controller.

3 YUV2RGB Fast Algorithm Analysis
Here, YUV actually refers to YCrCb. The YUV-to-RGB conversion formula is very simple, but it involves floating-point arithmetic. A fast algorithm therefore does not change the structure of the computation itself; it mainly replaces floating-point operations with integer arithmetic or table lookups.
First, we can derive the conversion formula as follows:

R = Y + 1.4075 * (V - 128)
G = Y - 0.3455 * (U - 128) - 0.7169 * (V - 128)
B = Y + 1.779 * (U - 128)
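For reference, a direct floating-point implementation of these formulas with clamping might look like this (the function name and the clamp_u8 helper are illustrative):

```c
#include <stdint.h>

/* Clamp an intermediate result to the valid 8-bit range 0..255. */
static uint8_t clamp_u8(int x)
{
    return (uint8_t)(x < 0 ? 0 : (x > 255 ? 255 : x));
}

/* Reference (slow) floating-point YCrCb -> RGB conversion,
 * applying the three formulas above per pixel. */
static void ycbcr_to_rgb(uint8_t y, uint8_t u, uint8_t v,
                         uint8_t *r, uint8_t *g, uint8_t *b)
{
    *r = clamp_u8((int)(y + 1.4075 * (v - 128)));
    *g = clamp_u8((int)(y - 0.3455 * (u - 128) - 0.7169 * (v - 128)));
    *b = clamp_u8((int)(y + 1.779  * (u - 128)));
}
```

This is the baseline the fast variants below are measured against.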

3.1 Integer Algorithm
Replacing the floating-point operations with integer operations (which of course requires shifts), we can easily obtain the following algorithm:

u = yuvdata[upos] - 128;
v = yuvdata[vpos] - 128;

rdif = v + ((v * 103) >> 8);
invgdif = ((u * 88) >> 8) + ((v * 183) >> 8);
bdif = u + ((u * 198) >> 8);

r = yuvdata[ypos] + rdif;
g = yuvdata[ypos] - invgdif;
b = yuvdata[ypos] + bdif;

To prevent overflow, we must also check that each result lies within the range 0 to 255, with clamping such as:

if (r > 255)
    r = 255;
if (r < 0)
    r = 0;

To convert the data from RGB24 to RGB565, we also need shift and OR operations:

rgbdata[1] = (r & 0xf8) | (g >> 5);
rgbdata[0] = ((g & 0x1c) << 3) | (b >> 3);
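Putting the integer steps of this section together, a per-pixel routine might look like this (the function name, argument order, and returning a packed 16-bit value instead of writing two bytes are my own choices; the code relies on arithmetic right shift for negative operands, which is the usual behavior on common compilers):

```c
#include <stdint.h>

/* Clamp an intermediate result to 0..255. */
static int clamp255(int x)
{
    if (x < 0)   return 0;
    if (x > 255) return 255;
    return x;
}

/* One YCrCb pixel -> RGB565, using the integer approximations above:
 * 103/256 ~ 0.4075, 88/256 ~ 0.3455, 183/256 ~ 0.7169, 198/256 ~ 0.779. */
static uint16_t yuv_to_rgb565(uint8_t y, uint8_t cu, uint8_t cv)
{
    int u = cu - 128;
    int v = cv - 128;

    int rdif    = v + ((v * 103) >> 8);
    int invgdif = ((u * 88) >> 8) + ((v * 183) >> 8);
    int bdif    = u + ((u * 198) >> 8);

    int r = clamp255(y + rdif);
    int g = clamp255(y - invgdif);
    int b = clamp255(y + bdif);

    /* Pack as RRRRRGGG GGGBBBBB. */
    return (uint16_t)(((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3));
}
```

A neutral gray (128, 128, 128) packs to 0x8410, i.e. mid-scale in all three 565 fields.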

3.2 Partial Lookup Method
The first idea for a lookup method is to use tables to replace the multiplications in the integer algorithm above:

rdif = fac_1_4075[v];
invgdif = fac_m_0_3455[u] + fac_m_0_7169[v];
bdif = fac_1_779[u];

This requires four one-dimensional arrays, each indexed from 0 to 255, occupying roughly 1 KB of memory in total. U and V no longer need to be reduced by 128, because the subtraction is folded into the precomputed table values.

For each pixel, the partial lookup method uses table lookups to replace two subtractions, four multiplications, and four shifts. However, it still requires several additions, six comparisons, and possible clamping assignments, so it is not dramatically faster than the first method.
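The four tables can be precomputed once at startup; a sketch (written here with cleaned-up table names; the int16_t entry type is my choice, since the values are signed and can exceed one byte):

```c
#include <stdint.h>

/* Partial-lookup tables: subtracting 128 is folded into the stored
 * values, so the raw U/V bytes index the tables directly. */
static int16_t fac_1_4075[256];
static int16_t fac_m_0_3455[256];
static int16_t fac_m_0_7169[256];
static int16_t fac_1_779[256];

static void init_partial_tables(void)
{
    for (int i = 0; i < 256; i++) {
        fac_1_4075[i]   = (int16_t)(1.4075 * (i - 128));
        fac_m_0_3455[i] = (int16_t)(0.3455 * (i - 128));
        fac_m_0_7169[i] = (int16_t)(0.7169 * (i - 128));
        fac_1_779[i]    = (int16_t)(1.779  * (i - 128));
    }
}
```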

3.3 Full Lookup Method
Can we look up the final RGB values directly from YUV? At first glance it seems impossible. Take the most complex component, G, as an example: since G depends on all of Y, U, and V, an algorithm like G = yuv2g[y][u][v] would need a 256x256x256 array, occupying about 2^24 = 16 MB of space, which is absolutely unacceptable. That is why most implementations adopt the partial lookup method.

However, careful analysis shows that we do not actually need a three-dimensional array for G: the result depends on Y and on the combined U/V term, not on U and V individually. We can therefore use two chained lookups to reduce the G computation to two-dimensional array accesses:

G = yig2g_table[y][uv2ig_table[u][v]];

R and B depend only on (Y, V) and (Y, U) respectively. So in total we need four 2^8 x 2^8 two-dimensional tables, occupying 4 x 2^16 = 256 KB of memory. That is basically acceptable, but still a bit large for embedded applications such as mobile phones.

Analyzing further: in mobile phones and other embedded applications, we ultimately convert the data to RGB565 and send it to the LCD screen for display, so we do not need 8 bits of precision for RGB at all. For uniformity and simplicity, 6 bits per component is enough, so the tables can be shrunk to four 2^6 x 2^6 two-dimensional tables, occupying only 16 KB of memory! The final overflow clamping can be folded into the precomputed table elements. The final algorithm is as follows:

y = yuvdata[y1pos] >> 2;
u = yuvdata[upos] >> 2;
v = yuvdata[vpos] >> 2;

r = yv2r_table[y][v];
g = yig2g_table[y][uv2ig_table[u][v]];
b = yu2b_table[y][u];

rgbdata[1] = (r & 0xf8) | (g >> 5);
rgbdata[0] = ((g & 0x1c) << 3) | (b >> 3);

Compared with the partial lookup method, this adds three shift operations but eliminates four additions and six comparison/clamping operations.

When computing the element values of the tables, rounding and offsets must be taken into account so that all intermediate results used as array subscripts are non-negative.
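One possible way to build the four 6-bit tables, with rounding, the non-negativity offset for the combined chroma index, and overflow clamping folded in (table names follow the text; the exact quantization of the combined U/V term to 6 bits is my own assumption):

```c
#include <stdint.h>

/* 6-bit full-lookup tables: indices are the top 6 bits of Y/U/V.
 * uv2ig_table quantizes the combined chroma term for G, with a
 * +128 offset so the stored index stays non-negative. */
static uint8_t yv2r_table[64][64];
static uint8_t yu2b_table[64][64];
static uint8_t uv2ig_table[64][64];
static uint8_t yig2g_table[64][64];

static int clamp8(double x)            /* round and clamp to 0..255 */
{
    if (x < 0.0)   return 0;
    if (x > 255.0) return 255;
    return (int)(x + 0.5);
}

static void init_full_tables(void)
{
    for (int y = 0; y < 64; y++)
        for (int c = 0; c < 64; c++) {
            /* (x << 2) restores an approximate 8-bit scale. */
            yv2r_table[y][c] = (uint8_t)clamp8((y << 2) + 1.4075 * ((c << 2) - 128));
            yu2b_table[y][c] = (uint8_t)clamp8((y << 2) + 1.779  * ((c << 2) - 128));
        }

    for (int u = 0; u < 64; u++)
        for (int v = 0; v < 64; v++) {
            double ig = 0.3455 * ((u << 2) - 128) + 0.7169 * ((v << 2) - 128);
            int idx = ((int)ig + 128) >> 2;       /* offset, then 6-bit quantize */
            uv2ig_table[u][v] = (uint8_t)(idx < 0 ? 0 : (idx > 63 ? 63 : idx));
        }

    for (int y = 0; y < 64; y++)
        for (int idx = 0; idx < 64; idx++)
            yig2g_table[y][idx] = (uint8_t)clamp8((y << 2) - ((idx << 2) - 128));
}
```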

Compared with the first algorithm, the full lookup method noticeably improves the final conversion speed; the exact gain depends on the platform's ratio of CPU speed to memory access speed. The faster the memory access, the more obvious the improvement from the lookup method. In my tests, performance improved by about 35% on a PC, but only by about 15% on an ARM platform.

3.4 Further Thoughts
In fact, in the final packing step above:

rgbdata[1] = (r & 0xf8) | (g >> 5);
rgbdata[0] = ((g & 0x1c) << 3) | (b >> 3);

the (r & 0xf8) and (b >> 3) operations can also be precomputed into the tables. Moreover, the possible Y/U and Y/V combinations cannot actually cover the full 2^6 x 2^6 range; some entries in the middle never occur as input, so 2^5 x 2^5 tables may even suffice for the R and B operations. These ideas may further increase the computation speed and reduce the table size.

In addition, in embedded applications, placing the tables in high-speed memory such as SRAM rather than SDRAM should better exploit the advantage of the lookup method.

4 RGB2YUV?
For the reverse RGB-to-YUV conversion, there is currently no similar way to simplify a three-dimensional lookup into two-dimensional tables; we can only use partial lookups to replace the multiplications.

Moreover, in most cases what we need is the YUV2RGB conversion, because the data obtained from the sensor is usually YUV data, and JPEG and MPEG are in fact encoded on the basis of the YUV format; therefore, displaying decoded data requires the YUV2RGB operation.

The following is extracted from an article about DirectShow:
The color display principle of a computer monitor is the same as that of a color TV set: the additive mixing of R (red), G (green), and B (blue). The red, green, and blue phosphors on the inside of the screen glow under electron beams of three different intensities. This color representation method is called the RGB color space representation (it is also the most commonly used color space in multimedia computing).
According to the tri-chromatic principle, any colored light F can be mixed from different amounts of R, G, and B:

F = r [R] + g [G] + b [B]

Here r, g, and b are the coefficients of the three primaries in the mixture. When all three are 0 (weakest), the mix is black; when all three are at their maximum k (strongest), the mix is white. By adjusting the values of r, g, and b, we can mix any color between black and white.
So where does YUV come from? In modern color TV systems, a three-tube color camera or a color CCD camera records the scene; the color image signal is then color-separated, amplified, and corrected to obtain RGB. A matrix conversion circuit produces the luminance signal Y and the two color-difference signals B-Y (i.e. U) and R-Y (i.e. V). Finally, the transmitter encodes the luminance and color-difference signals separately and sends them over the same channel. This color representation is the so-called YUV color space.
The importance of the YUV color space is that its luminance signal Y is separated from the chrominance signals U and V. If only the Y component is used, without U and V, the image is black-and-white grayscale. Color TV uses the YUV space precisely to solve compatibility with black-and-white TV through the luminance signal Y, so that a black-and-white set can still receive a color TV signal.
The conversion formulas between YUV and RGB are as follows (RGB values range from 0 to 255):

Y = 0.299R + 0.587G + 0.114B
U = -0.147R - 0.289G + 0.436B
V = 0.615R - 0.515G - 0.100B

R = Y + 1.14V
G = Y - 0.39U - 0.58V
B = Y + 2.03U

In DirectShow, common RGB formats include RGB1, RGB4, RGB8, RGB565, RGB555, RGB24, RGB32, and ARGB32; common YUV formats include YUY2, YUYV, YVYU, UYVY, AYUV, Y41P, Y411, Y211, IF09, IYUV, YV12, YVU9, YUV411, and YUV420. As subtypes of the video media type, their corresponding GUIDs are shown in Table 2.3.

Table 2.3 Common RGB and YUV formats

GUID                     Format description
MEDIASUBTYPE_RGB1        2 colors; 1 bit per pixel; requires a palette
MEDIASUBTYPE_RGB4        16 colors; 4 bits per pixel; requires a palette
MEDIASUBTYPE_RGB8        256 colors; 8 bits per pixel; requires a palette
MEDIASUBTYPE_RGB565      16 bits per pixel; the R, G, B components use 5, 6, and 5 bits respectively
MEDIASUBTYPE_RGB555      16 bits per pixel; each RGB component uses 5 bits (the remaining 1 bit is unused)
MEDIASUBTYPE_RGB24       24 bits per pixel; each RGB component uses 8 bits
MEDIASUBTYPE_RGB32       32 bits per pixel; each RGB component uses 8 bits (the remaining 8 bits are unused)
MEDIASUBTYPE_ARGB32      32 bits per pixel; each RGB component uses 8 bits (the remaining 8 bits hold the alpha channel)
MEDIASUBTYPE_YUY2        YUY2 format, packed
MEDIASUBTYPE_YUYV        YUYV format (actually identical to YUY2)
MEDIASUBTYPE_YVYU        YVYU format, packed
MEDIASUBTYPE_UYVY        UYVY format, packed
MEDIASUBTYPE_AYUV        4:4:4 YUV format with an alpha channel
MEDIASUBTYPE_Y41P        Y41P format, packed
MEDIASUBTYPE_Y411        Y411 format (actually identical to Y41P)
MEDIASUBTYPE_Y211        Y211 format
MEDIASUBTYPE_IF09        IF09 format
MEDIASUBTYPE_IYUV        IYUV format
MEDIASUBTYPE_YV12        YV12 format
MEDIASUBTYPE_YVU9        YVU9 format

The following describes various RGB formats.

RGB1, RGB4, and RGB8 are palettized RGB formats. In the format details of these media types, the BITMAPINFOHEADER data structure is followed by a palette (defining a series of colors). The image data are not true color values but indices of each pixel's color in the palette. Take RGB1 (a 2-color bitmap) as an example: if the two colors defined in the palette are 0x000000 (black) and 0xFFFFFF (white), then image data 001101010111... (one bit per pixel) means the pixels are black, black, white, white, black, white, black, white, black, white, white, white...
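For example, reading a single pixel index from an RGB1 scanline can be sketched as follows (in Windows DIBs the leftmost pixel occupies the most significant bit of each byte; the helper name is my own):

```c
#include <stdint.h>

/* Return the palette index (0 or 1) of pixel i in an RGB1 scanline;
 * bit 7 of each byte holds the leftmost of its 8 pixels. */
static int rgb1_pixel(const uint8_t *row, int i)
{
    return (row[i >> 3] >> (7 - (i & 7))) & 1;
}
```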

RGB565 uses 16 bits to represent a pixel: 5 of the 16 bits are used for R, 6 bits for G, and 5 bits for B. A program usually reads a pixel as one word (a word equals two bytes); the bits of that word are laid out as follows:

High byte: R R R R R G G G    Low byte: G G G B B B B B

The value of each RGB component can be obtained by masking and shifting:

#define RGB565_MASK_RED   0xF800
#define RGB565_MASK_GREEN 0x07E0
#define RGB565_MASK_BLUE  0x001F
R = (wPixel & RGB565_MASK_RED) >> 11;   // value range: 0-31
G = (wPixel & RGB565_MASK_GREEN) >> 5;  // value range: 0-63
B = wPixel & RGB565_MASK_BLUE;          // value range: 0-31
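To go back from the 5/6-bit components to full 8-bit values, bit replication is a common technique (this expansion is my addition, not part of the quoted article):

```c
#include <stdint.h>

/* Expand RGB565 components to 8 bits by replicating the high bits
 * into the low bits, so 0 maps to 0 and the maximum maps to 255. */
static uint8_t expand5(uint8_t c) { return (uint8_t)((c << 3) | (c >> 2)); }
static uint8_t expand6(uint8_t c) { return (uint8_t)((c << 2) | (c >> 4)); }
```

A plain left shift would map the maximum 5-bit value 31 to 248 rather than 255; replication fixes that.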

RGB555 is another 16-bit RGB format; each RGB component uses 5 bits (the remaining 1 bit is unused). After reading a pixel as a word, the bits are laid out as follows:

High byte: X R R R R R G G    Low byte: G G G B B B B B    (X is unused and can be ignored)

The value of each RGB component can be obtained by masking and shifting:

#define RGB555_MASK_RED   0x7C00
#define RGB555_MASK_GREEN 0x03E0
#define RGB555_MASK_BLUE  0x001F
R = (wPixel & RGB555_MASK_RED) >> 10;   // value range: 0-31
G = (wPixel & RGB555_MASK_GREEN) >> 5;  // value range: 0-31
B = wPixel & RGB555_MASK_BLUE;          // value range: 0-31

RGB24 uses 24 bits to represent a pixel, with 8 bits for each RGB component, giving a value range of 0-255. Note that in memory the component order is BGR BGR BGR.... A pixel is usually manipulated through the RGBTRIPLE data structure, defined as:

typedef struct tagRGBTRIPLE {
    BYTE rgbtBlue;   // blue component
    BYTE rgbtGreen;  // green component
    BYTE rgbtRed;    // red component
} RGBTRIPLE;

RGB32 uses 32 bits to represent a pixel, with 8 bits for each RGB component; the remaining 8 bits serve as the alpha channel or are unused. (ARGB32 is RGB32 with the alpha channel.) Note that in memory the component order is BGRA BGRA BGRA.... A pixel is usually manipulated through the RGBQUAD data structure, defined as:

typedef struct tagRGBQUAD {
    BYTE rgbBlue;      // blue component
    BYTE rgbGreen;     // green component
    BYTE rgbRed;       // red component
    BYTE rgbReserved;  // reserved byte (alpha channel or ignored)
} RGBQUAD;

The following describes the various YUV formats. YUV formats fall into two categories: packed and planar. The former stores the Y, U, and V components in the same array, with several adjacent pixels usually forming a macropixel; the latter stores the three components in three separate arrays, like three stacked planes. In Table 2.3, YUY2 through Y211 are packed formats, while IF09 through YVU9 are planar formats. (Note: in the format descriptions below, each YUV component carries a subscript: Y0, U0, V0 denote the components of the first pixel, Y1, U1, V1 those of the second, and so on.)

The YUY2 (and YUYV) format keeps a Y component for every pixel, while the U and V components are sampled every two pixels horizontally. A macropixel is 4 bytes and represents 2 pixels (that is, one macropixel contains two Y components, one U component, and one V component). The component order in the image data is:
Y0 U0 Y1 V0  Y2 U2 Y3 V2 ...
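The byte order above can be made concrete by decoding one macropixel (the struct and function names are illustrative):

```c
#include <stdint.h>

/* Decode one 4-byte YUY2 macropixel (Y0 U0 Y1 V0) into two (Y,U,V)
 * triples; the two pixels share the same U and V samples. */
typedef struct { uint8_t y, u, v; } YuvPixel;

static void yuy2_unpack(const uint8_t m[4], YuvPixel out[2])
{
    out[0].y = m[0];  out[0].u = m[1];  out[0].v = m[3];
    out[1].y = m[2];  out[1].u = m[1];  out[1].v = m[3];
}
```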

The YVYU format is similar to YUY2, except that the component order in the image data differs:
Y0 V0 Y1 U0  Y2 V2 Y3 U2 ...

The UYVY format is also similar to YUY2, with yet another component order:
U0 Y0 V0 Y1  U2 Y2 V2 Y3 ...

The AYUV format has an alpha channel and keeps full YUV components for every pixel. The data layout is:
A0 Y0 U0 V0  A1 Y1 U1 V1 ...

The Y41P (and Y411) format keeps a Y component for every pixel, while the U and V components are sampled every 4 pixels horizontally. A macropixel is 12 bytes and represents 8 pixels. The component order in the image data is:
U0 Y0 V0 Y1  U4 Y2 V4 Y3  Y4 Y5 Y6 Y7 ...

In the Y211 format, the Y component is sampled every two pixels horizontally, and the U and V components every four pixels. A macropixel is 4 bytes and represents 4 pixels. The component order in the image data is:
Y0 U0 Y2 V0  Y4 U4 Y6 V4 ...

The YVU9 format keeps a Y component for every pixel; for chrominance, the image is divided into 4x4 blocks, and each block yields one U component and one V component. In storage, the Y plane of the whole image comes first, followed by the V plane and then the U plane. The IF09 format is similar to YVU9.

The IYUV format keeps a Y component for every pixel; for chrominance, the image is divided into 2x2 blocks, and each block yields one U component and one V component. The YV12 format is similar to IYUV (with the order of the U and V planes swapped).
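The planar layouts can be made concrete by computing plane offsets. For a width x height frame with even dimensions and no row padding (a simplifying assumption), IYUV and YV12 differ only in the order of the two chroma planes:

```c
#include <stddef.h>

/* Plane offsets within a contiguous IYUV buffer (Y, then U, then V);
 * for YV12, simply swap u_off and v_off. Assumes even w/h, no padding. */
typedef struct { size_t y_off, u_off, v_off, total; } PlanarLayout;

static PlanarLayout iyuv_layout(size_t w, size_t h)
{
    size_t ysz = w * h;                 /* full-resolution Y plane   */
    size_t csz = (w / 2) * (h / 2);     /* quarter-size chroma plane */
    PlanarLayout l = { 0, ysz, ysz + csz, ysz + 2 * csz };
    return l;
}
```

Note the total is 1.5 bytes per pixel, half the size of the 16-bit packed formats.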

The YUV411 and YUV420 formats are mostly used in DV data; the former in NTSC, the latter in PAL. YUV411 keeps a Y component for every pixel, while U and V are sampled every 4 pixels horizontally. YUV420 does not mean the V component sampling is 0; compared with YUV411, it doubles the chrominance sampling frequency horizontally and halves it vertically by alternating U and V lines (see Figure 2.12 of the original article, not reproduced here).

Link: http://hi.baidu.com/freedom%5Fasic/blog/item/8e2027eafbce87d5d539c9fa.html

 
