GL_DITHER Dithering Algorithm


On systems with few available colors, dithering lets you increase the apparent number of colors at the cost of resolution. The dithering operation is hardware dependent, and OpenGL allows the programmer to enable or disable it; if the machine's color resolution is already quite high, enabling dithering is pointless. To turn dithering on or off, call glEnable(GL_DITHER) or glDisable(GL_DITHER). Dithering is enabled by default.


Dithering Algorithm

At a friend's request, I wrote this article explaining the principle of the dithering algorithm. I hesitated to take it on: my own level is not high, and work keeps me busy with little time to study this material. In any case, I finally wrote it, and criticism is welcome.

 

My friend's question was about dithering color images. For ease of explanation, we will start with grayscale images and then move on to color. I also found a good article on the Internet and have reposted part of it here to help with the explanation.

 

When discussing dithering algorithms, the patterning method is usually mentioned first.

Patterning means expressing a gray level with a certain proportion of black and white dots, so that the image as a whole conveys the desired gray scale. The arrangement of the black and white dots is shown in the figure.

 

An important indicator of computer displays, printers, scanners, and other devices is resolution. Its unit is DPI (dots per inch), the number of dots per inch: the more dots, the higher the resolution and the clearer the image. Let's work out how high a computer display's resolution is. Take a 15 inch display (measured along the diagonal) showing at most 1280 × 1024 points. Because the aspect ratio is 4:3, the width is 12 inches and the height 9 inches, so the horizontal resolution is about 106 DPI and the vertical resolution about 113.8 DPI. A typical laser printer has a resolution of 300 DPI × 300 DPI, 600 DPI × 600 DPI, or 720 DPI × 720 DPI. Therefore an image printed by a laser printer is much sharper than the same image shown on the display. Likewise, the resolution of a scanner is higher than that of a digital camera.
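As a quick check of this arithmetic in MATLAB (assuming the 15-inch, 4:3, 1280 × 1024 display from the example):

diag_in = 15;              % diagonal in inches
w_in = diag_in * 4 / 5;    % 4:3 aspect (3-4-5 triangle) -> width 12 inches
h_in = diag_in * 3 / 5;    % height 9 inches
dpi_h = 1280 / w_in        % about 106.7 DPI horizontally
dpi_v = 1024 / h_in        % about 113.8 DPI vertically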

 

Now let's get down to business. As mentioned above, a pattern is used to represent the gray level of one pixel, so let's do a small calculation. Suppose there is a 240 × 180 × 8-bit grayscale image. If a 300 DPI laser printer prints it onto a 12.8 × 9.6 inch sheet of paper, how large a pattern can each pixel use?

 

The question is simple. The paper can hold at most (300 × 12.8) × (300 × 9.6) = 3840 × 2880 dots, so each pixel can be represented by a pattern of (3840/240) × (2880/180) = 16 × 16 dots, that is, 256 dots per pixel. If there is no black dot in the 16 × 16 square the gray level is 256; with one black dot it is 255; and so on, down to gray level 0 when the square is entirely black. In this way the 16 × 16 square can represent 257 gray levels, one more than the 256 levels required by the 8-bit image, so the gray levels of the image above can be printed completely.
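The same calculation in MATLAB (all numbers taken from the example above):

page_dots = [300 * 12.8, 300 * 9.6]        % 3840 x 2880 printable dots on the page
dots_per_pixel = page_dots ./ [240, 180]   % 16 x 16 printer dots per image pixel
gray_levels = prod(dots_per_pixel) + 1     % 257 representable gray levels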

 

One question remains about how the pattern is constructed: where should the black dots be placed? For example, if there is only one black dot, it could be placed at the center of the 16 × 16 square or at its upper-left corner. The pattern can be regular or irregular. In general, regular patterns avoid clustering of dots better than random ones, but they sometimes produce obvious lines in the image.

 

As shown in the figure, a 2 × 2 pattern can represent 5 levels of gray.
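For example, one possible placement of the dots for the five levels is the following (X = black dot, . = white dot; this particular arrangement is the one produced by the matrix M1 introduced further below):

gray 0:  X X    gray 1:  . X    gray 2:  . X    gray 3:  . .    gray 4:  . .
         X X             X X             X .             X .             . .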

 

 

When the image contains a large area of gray level 1, obvious horizontal and vertical lines appear, as shown below.

 

 

 

If you wanted to store the patterns for 256 gray levels directly, you would need 256 binary dot matrices of 16 × 16 each, which takes considerable space. There is a better way: store only one integer matrix, called the standard pattern, whose values range from 0 to 255. The actual gray level of the image is compared with each value in the matrix: wherever the value is greater than or equal to the gray level, a black dot is placed. The following is an example using a 5 × 5 standard pattern with values from 0 to 24.


On the left is the standard pattern; on the right is the pattern for gray level 15, with 10 black dots and 15 white dots. The principle is very simple: at gray level 0 every dot is black, and each time the gray level increases by 1 one black dot is removed. Note that a 5 × 5 pattern represents 26 gray levels; the all-white pattern corresponds to level 25, not level 24. The following describes an algorithm for designing standard patterns, proposed by Limb in 1969.
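A small MATLAB sketch of this comparison (magic(5) - 1 is just a convenient permutation of 0..24 standing in for a 5 × 5 standard pattern; the exact matrix does not change the dot count):

P = magic(5) - 1;      % stand-in 5x5 standard pattern, values 0..24
g = 15;                % gray level to render
dots = (P >= g);       % 1 = black dot, 0 = white dot
nnz(dots)              % 10 black dots, 15 white dots, as in the text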

 

Start with a 2 × 2 matrix:

M1 = | 0  2 |
     | 3  1 |


The recursive relationship is:

Mn+1 = | 4*Mn           4*Mn + 2*Un |
       | 4*Mn + 3*Un    4*Mn + Un   |



Both Mn and Un are 2^n × 2^n square matrices, and every element of Un is 1. Applying this recursion once, we obtain


M2 = |  0   8   2  10 |
     | 12   4  14   6 |
     |  3  11   1   9 |
     | 15   7  13   5 |

which is a standard pattern with 16 levels of gray.


M3 (an 8 × 8 matrix) is special and is usually called the Bayer dither table. M4 is a 16 × 16 matrix.
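For reference, carrying the recursion one step further gives M3 explicitly (values 0 to 63):

M3 =  0 32  8 40  2 34 10 42
     48 16 56 24 50 18 58 26
     12 44  4 36 14 46  6 38
     60 28 52 20 62 30 54 22
      3 35 11 43  1 33  9 41
     51 19 59 27 49 17 57 25
     15 47  7 39 13 45  5 37
     63 31 55 23 61 29 53 21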


According to the above scheme, if each pixel is represented by an 8 × 8 pattern from M3, an n × n image becomes 8n × 8n. With M4 it is even worse: the image becomes 16n × 16n. Can we use the patterning technique while keeping the source image at its original size? A natural idea is to take one point from every 8 × 8 block of the source image, that is, to re-sample the source image and then apply patterning, so the result keeps the original size. In fact this method is not feasible. First, you do not know which of the 8 × 8 points to keep. Second, a gap of 8 pixels between samples is too large, so the generated image differs considerably from the source, as in the rightmost image of the figure.

 

Instead, we can adopt this approach: assume the source image has 256 gray levels and use the Bayer dither table to process each pixel as follows:

if (g[y][x] >> 2) > Bayer[y & 7][x & 7] then output a white point, else output a black point

Here x and y are the pixel coordinates in the source image and g[y][x] is its gray level. The gray level is first shifted right by two bits, reducing it from 256 levels to 64, and x and y are taken modulo 8 to locate the corresponding entry in the Bayer table; the two values are then compared according to the rule above.
We can see that the modulo-8 operation divides the source image into 8 × 8 blocks, each of which corresponds point by point to the 8 × 8 Bayer table. Every point is compared within its own block, which avoids the point-selection and enlargement problems mentioned above. In essence, the comparison against the table introduces a pseudo-random element; this is the dithering technique discussed below.


The figures show the result of this algorithm using the M3 (Bayer) table and using the M4 table. The difference between the two is small, so the Bayer table is sufficient.

 


 

Let's consider a worse situation: even with the patterning technique, the required number of gray levels may not be attainable. Example: suppose a 600 × 450 × 8-bit grayscale image is printed on 8 × 6 inch paper with a 300 DPI × 300 DPI laser printer. Each pixel can then be represented by a pattern of (2400/600) × (1800/450) = 4 × 4 dots, which gives at most 17 gray levels, far short of the required 256. There are two workarounds: (1) reduce the image size from 600 × 450 to 150 × 113; (2) reduce the gray levels of the image from 256 to 16. Neither is ideal. In this case dithering can solve the problem. In fact, the algorithm just presented is a dithering algorithm, known as regular (ordered) dithering. Its advantage is simplicity; its disadvantage is that, although the comparison against the pattern introduces some pseudo-randomness, the result is still regular and can show visible texture. In addition, the point-by-point comparison itself is not ideal: a pixel is set white whenever its value exceeds the standard-pattern value at that position, so where the pattern value is small, a pixel only slightly above it, whose gray level is actually much closer to black than to white, is still turned into a white point, producing a large error. A better method is to spread this error to the neighboring pixels.


The Floyd-Steinberg algorithm introduced below adopts this scheme.
Assume the gray levels range from B (black) to W (white), with the threshold T = (B + W)/2. For 256 gray levels, B = 0, W = 255, and T = 127.5. If the gray level of a pixel in the source image is g and the error is e, the value of the corresponding pixel in the new image is obtained as follows:
if g > T then
    output a white point
    e = g - W
else
    output a black point
    e = g - B

add 3/8 × e to the pixel on the right
add 3/8 × e to the pixel below
add 1/4 × e to the pixel on the lower right
The meaning of the algorithm is quite clear. Take 256 gray levels as an example: if a pixel has gray level 130, it should appear as a mid-gray point. The gray level of an image usually changes continuously, so the neighboring pixels are close to 130 as well, and this point and its surroundings should form a gray area. In the new image the value 130 is greater than 127.5, so a white point is produced, but 130 is far from the true white value 255, giving a relatively large error e = 130 − 255 = −125. Adding 3/8 × (−125) to the neighboring pixels pulls their values down toward 0, so they tend to become black points; their own errors then become positive, pushing their neighbors back toward white. Alternating white and black points in this way reads as gray overall, whereas without error diffusion every pixel around 130 would simply come out white. As another example, if a pixel has gray level 250, it should be a white point and its surroundings a white area. In the new image e = 250 − 255 = −5 is negative but very small, so it hardly affects the neighbors and the area still comes out white. This illustrates that the algorithm behaves correctly.
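A minimal MATLAB sketch of the grayscale version just described, using the 3/8, 3/8, 1/4 weights from the pseudocode (the file name is a placeholder and the input is assumed to be a single-channel 256-level image):

G = double(imread('gray.bmp'));      % hypothetical grayscale input
[h, w] = size(G);
T = 127.5;                           % threshold between black (0) and white (255)
BW = zeros(h, w);
for i = 1:h
    for j = 1:w
        g = G(i, j);
        if g > T
            BW(i, j) = 255;
            e = g - 255;             % error relative to the white point
        else
            BW(i, j) = 0;
            e = g;                   % error relative to the black point (g - 0)
        end
        if j + 1 <= w
            G(i, j+1) = G(i, j+1) + e * 3/8;      % 3/8 of the error to the right
        end
        if i + 1 <= h
            G(i+1, j) = G(i+1, j) + e * 3/8;      % 3/8 of the error below
        end
        if i + 1 <= h && j + 1 <= w
            G(i+1, j+1) = G(i+1, j+1) + e / 4;    % 1/4 of the error to the lower right
        end
    end
end
imshow(uint8(BW));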

 

Other cases can be worked through in the same way. The figure shows an image dithered with the Floyd-Steinberg algorithm.

Here I will pick the problem up myself. Although I work in C/C++, to save effort I tried it in MATLAB first......

Below is Bayer dithering in MATLAB, which dithers a 256-level grayscale image into a black-and-white image of the same size.


clear;
clc;

M1 = [0 2; 3 1];
U1 = ones(2, 2);
M2 = [4*M1, 4*M1 + 2*U1; 4*M1 + 3*U1, 4*M1 + U1];
U2 = ones(4, 4);
M3 = [4*M2, 4*M2 + 2*U2; 4*M2 + 3*U2, 4*M2 + U2];   % 8x8 Bayer dither table, values 0..63

I = imread('test.bmp');

% convert to gray with the usual luminance weights
GI = 0.2989 * double(I(:, :, 1)) ...
   + 0.5870 * double(I(:, :, 2)) ...
   + 0.1140 * double(I(:, :, 3));

[h, w] = size(GI);

BW = zeros(h, w);
for i = 1:h
    for j = 1:w
        % reduce the 256-level gray to 64 levels and compare it with the
        % Bayer table entry selected by the row and column modulo 8
        if GI(i, j) / 4 > M3(bitand(i, 7) + 1, bitand(j, 7) + 1)
            BW(i, j) = 255;
        else
            BW(i, j) = 0;
        end
    end
end
imshow(uint8(BW));

 

For more details on the algorithm, see the "Dithering algorithms" articles listed in the references.


Now let's talk about converting a 24-bit true-color image to a 15-bit or 16-bit image, again using the Floyd-Steinberg algorithm. First, the layout of a 15-bit color: 5 bits of red, 5 bits of green, 5 bits of blue, and 1 reserved bit. A 16-bit color uses 5, 6, and 5 bits for red, green, and blue respectively. This is because the human eye is more sensitive to luminance than to color, and green contributes the most to luminance, so giving green one extra bit of quantization enriches the gray-level representation.
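As an illustration of the 5-6-5 layout (a sketch of my own, not from the original article), here is one way to pack 8-bit R, G, B values into a 16-bit word in MATLAB:

r = uint16(200); g = uint16(100); b = uint16(50);    % example 8-bit channel values
r5 = bitshift(r, -3);                                % keep the top 5 bits of red
g6 = bitshift(g, -2);                                % keep the top 6 bits of green
b5 = bitshift(b, -3);                                % keep the top 5 bits of blue
rgb565 = bitor(bitor(bitshift(r5, 11), bitshift(g6, 5)), b5);

The low bits discarded here (3, 2, and 3 bits per channel) are exactly the quantization error that the code below diffuses to the neighboring pixels.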


clear;
clc;

I = imread('0001.jpg');
img = double(I);                        % work in double to avoid uint8 saturation
[h, w] = size(img(:, :, 1));            % get the image size

d = 1;       % 1: pass the error from left to right; -1: from right to left
re = 0;      % red quantization error
ge = 0;      % green quantization error
be = 0;      % blue quantization error
rs = 8;      % 2^3: reduce the red quantization to 2^5 = 32 levels
gs = 8;      % 2^3: reduce the green quantization to 2^5 = 32 levels
bs = 8;      % 2^3: reduce the blue quantization to 2^5 = 32 levels

for i = 1:h
    for j = 1:w
        if d == 1
            val = rs * fix(img(i, j, 1) / rs);   % quantize red and record the error
            re = img(i, j, 1) - val;
            img(i, j, 1) = val;

            val = gs * fix(img(i, j, 2) / gs);   % quantize green and record the error
            ge = img(i, j, 2) - val;
            img(i, j, 2) = val;

            val = bs * fix(img(i, j, 3) / bs);   % quantize blue and record the error
            be = img(i, j, 3) - val;
            img(i, j, 3) = val;

            if j + 1 <= w                        % pass 3/8 of the error to the pixel on the right
                img(i, j+1, 1) = img(i, j+1, 1) + re * 3/8;
                img(i, j+1, 2) = img(i, j+1, 2) + ge * 3/8;
                img(i, j+1, 3) = img(i, j+1, 3) + be * 3/8;
            end
            if i + 1 <= h                        % pass 3/8 of the error to the pixel below
                img(i+1, j, 1) = img(i+1, j, 1) + re * 3/8;
                img(i+1, j, 2) = img(i+1, j, 2) + ge * 3/8;
                img(i+1, j, 3) = img(i+1, j, 3) + be * 3/8;
            end
            if i + 1 <= h && j + 1 <= w          % pass 1/4 of the error to the lower-right pixel
                img(i+1, j+1, 1) = img(i+1, j+1, 1) + re / 4;
                img(i+1, j+1, 2) = img(i+1, j+1, 2) + ge / 4;
                img(i+1, j+1, 3) = img(i+1, j+1, 3) + be / 4;
            end
        else
            c = w - j + 1;                       % current column when scanning right to left
            val = rs * fix(img(i, c, 1) / rs);
            re = img(i, c, 1) - val;
            img(i, c, 1) = val;

            val = gs * fix(img(i, c, 2) / gs);
            ge = img(i, c, 2) - val;
            img(i, c, 2) = val;

            val = bs * fix(img(i, c, 3) / bs);
            be = img(i, c, 3) - val;
            img(i, c, 3) = val;

            if c - 1 >= 1                        % pass 3/8 of the error to the pixel on the left
                img(i, c-1, 1) = img(i, c-1, 1) + re * 3/8;
                img(i, c-1, 2) = img(i, c-1, 2) + ge * 3/8;
                img(i, c-1, 3) = img(i, c-1, 3) + be * 3/8;
            end
            if i + 1 <= h                        % pass 3/8 of the error to the pixel below
                img(i+1, c, 1) = img(i+1, c, 1) + re * 3/8;
                img(i+1, c, 2) = img(i+1, c, 2) + ge * 3/8;
                img(i+1, c, 3) = img(i+1, c, 3) + be * 3/8;
            end
            if i + 1 <= h && c - 1 >= 1          % pass 1/4 of the error to the lower-left pixel
                img(i+1, c-1, 1) = img(i+1, c-1, 1) + re / 4;
                img(i+1, c-1, 2) = img(i+1, c-1, 2) + ge / 4;
                img(i+1, c-1, 3) = img(i+1, c-1, 3) + be / 4;
            end
        end
    end
    d = -d;                                      % reverse the scan direction each row
end

out = uint8(img);
imshow(out);

 

This is the original image to be processed:

 

 

This is the processed image:


If we convert a 24-bit true-color image to one with far fewer quantization levels, we can still use the Floyd-Steinberg algorithm, but by itself it is not very effective. The reason is that when an image with rich colors is reduced to few colors, error diffusion alone cannot compensate for a poor choice of representative colors. In the conversion from 24-bit to 15-bit above, the color was chosen simply as r = R & 0xf8: the low three bits of each channel are cut off, and the loss is passed on to the surrounding pixels as an error. This is acceptable for a quantization level such as 15 bits. However, if a 24-bit image is reduced to 8 bits, this approach produces a much larger error. The problem is not the Floyd-Steinberg algorithm itself but the unsuitable color selection; even with error diffusion the image is still distorted.

 

The following describes three color-selection (palette) algorithms:

 


  • Popular color algorithm

The basic idea of the popular color algorithm is to count how often every color occurs in the image and to build a statistical histogram array recording each color and its frequency. After sorting the histogram in decreasing order of frequency, the first 256 entries are the 256 most frequently occurring colors in the image; these become the palette. Because the algorithm analyzes color frequency with a statistical histogram, it is also called the color histogram statistics algorithm. All other colors in the image are mapped to the nearest of the 256 palette colors by minimum distance in RGB space. The popular color algorithm is easy to implement and gives good results for images with few colors. Its main defect is that colors which occur in few pixels but are visually important to the human eye can be lost. For example, bright highlights in an image occupy only a few pixels,
so the algorithm may fail to select them and they disappear.
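A rough MATLAB sketch of the popular-color idea (the file name and the use of squared RGB distance for the mapping are my own choices, not from the article):

I = imread('0001.jpg');
px = reshape(I, [], 3);                          % all pixels as an N x 3 list
[colors, ~, idx] = unique(px, 'rows');           % distinct colors, plus per-pixel index
counts = accumarray(idx, 1);                     % occurrence count of each color
[~, order] = sort(counts, 'descend');
pal = double(colors(order(1:min(256, end)), :)); % the 256 most frequent colors

out = zeros(size(px));                           % map every pixel to the nearest palette entry
for k = 1:size(px, 1)
    d = sum((pal - double(px(k, :))).^2, 2);     % squared RGB distance to each palette color
    [~, m] = min(d);
    out(k, :) = pal(m, :);
end
out = uint8(reshape(out, size(I)));
imshow(out);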

 

  • Median splitting algorithm

 

The basic idea of the median splitting algorithm is: in RGB color space, the R, G, and B components correspond to the three coordinate axes, each quantized to 0-255, where 0 corresponds to the darkest (black) and 255 to the brightest. This forms a color cube with side length 256, and every possible color corresponds to a point inside it. The cube is split into 256 small boxes, each containing the same number of the color points that actually occur in the image. The center of each small box is then taken, and the colors of these 256 points are the ones that best represent the color features of the image.
The median splitting algorithm was proposed by Paul Heckbert in the early 1980s and is widely used in image processing. Its drawbacks are that it involves a fair amount of sorting and has a high memory overhead.
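A compact MATLAB sketch of this idea (my own simplification: the box with the widest channel range is split at the median of that channel, and each box's mean color becomes a palette entry):

function pal = median_cut(I, n)
    % I: H x W x 3 uint8 image, n: desired number of palette colors (e.g. 256)
    % usage: pal = median_cut(imread('0001.jpg'), 256);
    boxes = {double(reshape(I, [], 3))};
    while numel(boxes) < n
        % find the box with the largest single-channel range
        best = 1; bestRange = -1;
        for k = 1:numel(boxes)
            r = max(max(boxes{k}, [], 1) - min(boxes{k}, [], 1));
            if r > bestRange
                bestRange = r; best = k;
            end
        end
        b = boxes{best};
        if size(b, 1) < 2, break; end                    % nothing left to split
        [~, ch] = max(max(b, [], 1) - min(b, [], 1));    % widest channel
        b = sortrows(b, ch);                             % sort the box along that channel
        mid = floor(size(b, 1) / 2);                     % split at the median pixel
        boxes{best} = b(1:mid, :);
        boxes{end+1} = b(mid+1:end, :);
    end
    pal = cell2mat(cellfun(@(b) mean(b, 1), boxes, 'UniformOutput', false)');
end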

 

  • Octree algorithm

In 1988, M. Gervautz and W. Purgathofer published a paper entitled "A Simple Method for Color Quantization: Octree Quantization", proposing a new color quantization algorithm based on a tree data structure; it is generally called the octree color quantization algorithm. This algorithm is more efficient than the median splitting algorithm and has a lower memory overhead.


The basic idea of the octree color quantization algorithm is to distribute the RGB values used in the image over a layered tree. The tree can be up to nine levels deep: the root level plus one level for each of the 8 bits of the R, G, and B values. Lower levels correspond to the less significant bits (the bits on the right), so to improve efficiency and save memory the bottom 2-3 levels can be cut off without much effect on the result. A leaf node stores the number of pixels and the accumulated R, G, and B components; the intermediate nodes form the path from the root down to the leaves. This is an efficient storage scheme: it records only the colors that actually occur in the image, together with their counts, and wastes no memory on colors that never appear.
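As a small illustration of this indexing (just the path computation, not the full algorithm), the child slot at each tree level can be read off from the corresponding bits of R, G, and B:

% child index (1..8) at depth d (1 = level just below the root) for 8-bit channels:
bit   = @(c, d) double(bitget(uint8(c), 9 - d));   % d-th most significant bit of a channel
child = @(r, g, b, d) bit(r, d) * 4 + bit(g, d) * 2 + bit(b, d) + 1;

% example: the path of the color (200, 100, 50) through the first three levels
path = arrayfun(@(d) child(200, 100, 50, d), 1:3)  % gives [5 7 4]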


Scan all the pixels of the image, inserting each color into the octree and creating a leaf node for every new color. After the scan, if the number of leaf nodes is greater than the number of colors required by the palette, some leaf nodes must be merged into their parent, which then becomes a leaf storing the combined color values and pixel counts. This reduces the number of leaves step by step until it is no greater than the number of palette colors. Once the number of leaves is less than or equal to the required number of colors, the octree is traversed and the leaf colors are written into the color table.

 

 

I personally recommend the octree algorithm, but there is an even simpler approach: the fixed color table algorithm. The general idea is:

Choose a reasonably good color table as a fixed palette. Then, regardless of the image, map each color to the nearest color in the table. This inevitably produces an error, which can be diffused to the surrounding pixels with the Floyd-Steinberg algorithm. The effect is good and the speed is fast.
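A minimal sketch of this approach, assuming a uniform 6 × 6 × 6 "web-safe" table (my own choice of palette; with a uniform table the nearest entry can be computed in closed form instead of searched for):

I = double(imread('0001.jpg'));     % placeholder file name
step = 51;                          % 6 levels per channel: 0, 51, 102, ..., 255
Q = round(I / step) * step;         % nearest palette color, channel by channel
E = I - Q;                          % quantization error; this is what would be
                                    % diffused with the Floyd-Steinberg weights above
imshow(uint8(Q));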

 

The key to this algorithm is how to find the nearest color in the table quickly. It is getting late and my head is spinning, so I will stop here......

References:

1. OpenGL frame buffer

http://blog.csdn.net/skyman_2001/article/details/253954

2. Dithering algorithms 1, 2, and 3

http://blog.csdn.net/coolbacon/article/details/4041988

http://blog.csdn.net/coolbacon/article/details/4042054

http://blog.csdn.net/coolbacon/article/details/4042122


