Basic Algorithms for Image Processing

Source: Internet
Author: User
Tags: integer division, ranges

1) To convert an image with a resolution of 256*256 to a resolution of 128*128, divide the source image into 2*2 sub-blocks, then set all the pixels of each sub-block to the color value f(i, j), reducing the resolution.
For example:
f(i, j)    f(i, j+1)          f(i, j)  f(i, j)
f(i+1, j)  f(i+1, j+1)   ->   f(i, j)  f(i, j)
(Similarly, an image with a resolution of 256*256 can be converted to a 64*64 resolution by dividing it into 4*4 blocks, and so on.)
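As a sketch, the block subsampling above might look like this in Python (the function name `downsample` and the list-of-lists image representation are illustrative, not from the original):

```python
# Keep the top-left pixel f(i, j) of each factor*factor block, which is the
# "set the whole block to f(i, j)" rule followed by subsampling.

def downsample(image, factor=2):
    """Reduce resolution by keeping the top-left pixel of each block."""
    return [row[::factor] for row in image[::factor]]

# A 4*4 image becomes 2*2: each 2*2 block collapses to its f(i, j) corner.
src = [
    [10, 11, 12, 13],
    [14, 15, 16, 17],
    [18, 19, 20, 21],
    [22, 23, 24, 25],
]
small = downsample(src)   # [[10, 12], [18, 20]]
```

Calling `downsample` with `factor=4` gives the 256*256 to 64*64 case mentioned above.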

2) For an R-monochrome, G-monochrome, or B-monochrome image, extract the corresponding R, G, or B value from each pixel of the image, then redraw the image with pixels of the form (R, R, R), (G, G, G), or (B, B, B).
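A minimal sketch of the channel extraction just described (names illustrative; pixels are (R, G, B) tuples, channel indices 0=R, 1=G, 2=B):

```python
# Rebuild the image with each pixel's chosen channel value repeated three
# times, producing the (R, R, R) / (G, G, G) / (B, B, B) monochrome image.

def monochrome(image, channel):
    return [[(p[channel],) * 3 for p in row] for row in image]

img = [[(200, 100, 50), (10, 20, 30)]]
red_only = monochrome(img, 0)   # [[(200, 200, 200), (10, 10, 10)]]
```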

3) Relationship between an RGB color image and the luminance Y, chrominance I, and Q signal values:

| Y |   | 0.31  0.59  0.11 |   | R |
| I | = | 0.60 -0.28 -0.32 | * | G |
| Q |   | 0.21 -0.52 -0.31 |   | B |

That is:
Y = 0.31R + 0.59G + 0.11B
I = 0.60R - 0.28G - 0.32B
Q = 0.21R - 0.52G - 0.31B
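The matrix product above can be written out directly; a minimal sketch using the text's rounded coefficients (function name illustrative):

```python
# Straightforward transcription of the 3x3 matrix applied to (R, G, B).

def rgb_to_yiq(r, g, b):
    y = 0.31 * r + 0.59 * g + 0.11 * b   # luminance
    i = 0.60 * r - 0.28 * g - 0.32 * b   # chrominance I
    q = 0.21 * r - 0.52 * g - 0.31 * b   # chrominance Q
    return y, i, q
```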

4) Reversed (negative) color image processing: replace each pixel (R, G, B) with (255-R, 255-G, 255-B).
Smoothing of color images: replace the color of each pixel of the image with the average of the n*n pixels adjacent to it. For example, take a 3*3 block centered on f(i, j), with g(i, j) as the smoothed result:
f(i-1, j-1)  f(i-1, j)  f(i-1, j+1)
f(i, j-1)    f(i, j)    f(i, j+1)
f(i+1, j-1)  f(i+1, j)  f(i+1, j+1)

g(i, j) = (f(i-1, j-1) + f(i-1, j) + f(i-1, j+1) + f(i, j-1) + f(i, j) + f(i, j+1) + f(i+1, j-1) + f(i+1, j) + f(i+1, j+1)) / 9

Pay attention to edge pixels here to avoid out-of-bounds access.
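The smoothing formula above can be sketched as follows (illustrative names; a single gray channel is shown, and edge pixels are left untouched to respect the out-of-bounds caution):

```python
def smooth(image):
    """3*3 mean smoothing; border pixels are copied unchanged so the
    3*3 window never reads outside the image."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            total = sum(image[i + di][j + dj]
                        for di in (-1, 0, 1) for dj in (-1, 0, 1))
            out[i][j] = total // 9   # integer gray levels
    return out
```

For a color image the same averaging would be applied to each of the R, G, B components.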
Neon processing of color images: using the 3*3 block above, the target pixel g(i, j) is built from the gradient between f(i, j) and f(i, j+1) and the gradient between f(i, j) and f(i+1, j), computed separately for the R, G, and B components. If the RGB components of f(i, j) are (R1, G1, B1), those of f(i, j+1) are (R2, G2, B2), those of f(i+1, j) are (R3, G3, B3), and g(i, j) is (R, G, B), then:
R = 2 * sqrt((R1-R2)^2 + (R1-R3)^2)
G = 2 * sqrt((G1-G2)^2 + (G1-G3)^2)
B = 2 * sqrt((B1-B2)^2 + (B1-B3)^2)
Sharpening of color images: if the pixel f(i, j) is (R1, G1, B1), the pixel f(i-1, j-1) is (R2, G2, B2), and g(i, j) is (R, G, B), then:
R = R1 + 0.25 * |R1-R2|
G = G1 + 0.25 * |G1-G2|
B = B1 + 0.25 * |B1-B2|
Embossed color image processing: g(i, j) = f(i, j) - f(i-1, j) + constant; the constant is usually 128.
Mosaic processing of color images: similar to smoothing, except that every pixel of the 3*3 block is set to the same value g(i, j), rather than each pixel separately taking the average of its own neighborhood.
Grayscale (quantization) processing of color images: R = R1/64*64, G = G1/64*64, B = B1/64*64. Note that the division here is integer division, as in programming.
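The integer-division quantization above can be checked with a one-liner (illustrative sketch; `//` is Python's integer division):

```python
# v // 64 * 64 maps the 256 input levels onto the 4 coarse levels
# 0, 64, 128, 192, exactly as R1/64*64 does with integer division.

def quantize(v):
    return v // 64 * 64
```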

5) Geometric transformation of the image: translation, scaling, and rotation are consistent with the usual transformations of geometry.

6) Image Filtering
● Convolution filtering: the principle is y(n1, n2) = Σ Σ x(m1, m2) * h(n1-m1, n2-m2), where both summations run over m1: 0~N and m2: 0~N.
x(m1, m2) is the input image signal, and h(n1-m1, n2-m2) is the response of the filtering system to the unit sample sequence δ(n1, n2).
⊙ Low-pass filtering: in general, the noise spectrum of an image lies in the region of high spatial frequency, so spatial-domain low-pass filtering is used to smooth out noise. Frequently used 3*3 low-pass filter arrays h(n1, n2) are as follows:
             1/9  1/9  1/9
h(n1, n2) =  1/9  1/9  1/9
             1/9  1/9  1/9

             1/10 1/10 1/10
h(n1, n2) =  1/10 2/10 1/10
             1/10 1/10 1/10

             1/16 1/8  1/16
h(n1, n2) =  1/8  1/4  1/8
             1/16 1/8  1/16
A commonly used 5*5 low-pass filter array h(n1, n2) is:
             1/35 1/35 1/35 1/35 1/35
             1/35 2/35 2/35 2/35 1/35
h(n1, n2) =  1/35 2/35 3/35 2/35 1/35
             1/35 2/35 2/35 2/35 1/35
             1/35 1/35 1/35 1/35 1/35
⊙ High-pass filtering: spatial-domain high-pass filtering attenuates the low-frequency components of an image while passing its high-frequency components with zero or low loss. Commonly used spatial-domain high-pass filter arrays h(n1, n2) are:
              0 -1  0
h(n1, n2) =  -1  5 -1
              0 -1  0

             -1 -1 -1
h(n1, n2) =  -1  9 -1
             -1 -1 -1

              1 -2  1
h(n1, n2) =  -2  5 -2
              1 -2  1
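Any of these 3*3 kernels can be applied by direct convolution. A hedged sketch (names illustrative; border pixels are simply skipped, and the correlation form shown here is equivalent to convolution for symmetric kernels such as these):

```python
def filter3x3(image, kernel):
    """Apply a 3*3 kernel to every interior pixel of a gray image."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            acc = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    acc += kernel[di + 1][dj + 1] * image[i + di][j + dj]
            out[i][j] = acc   # not clamped to [0, 255] in this sketch
    return out

# The first high-pass kernel above:
hp = [[0, -1, 0], [-1, 5, -1], [0, -1, 0]]
```

On a flat region both the high-pass kernels (coefficient sum 1) and the low-pass kernels (coefficient sum 1) leave the pixel value unchanged, which is a quick sanity check.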
● Enhancement processing
⊙ Horizontal enhancement (enhancing the horizontal lines of an image) is also a form of high-pass filtering. An example h(n1, n2) for horizontal enhancement:
              0  0  0
h(n1, n2) =   0  0  0
             -1  2 -1
⊙ Vertical enhancement (enhancing the vertical lines of an image) is also a form of high-pass filtering. An example h(n1, n2) for vertical enhancement:
             -1  0  0
h(n1, n2) =   2  0  0
             -1  0  0
⊙ Horizontal-and-vertical enhancement is also a form of high-pass filtering. An example h(n1, n2):
             -1 -1 -1
h(n1, n2) =  -1  8 -1
             -1 -1 -1

● Structured filters
⊙ Parallel-structure filtering
Structure: the input is filtered by H1 and H2 in parallel and the two outputs are added, so h(n1, n2) = h1(n1, n2) + h2(n1, n2).
For example, when
              0  0  0
h1(n1, n2) =  0  0  0
             -1  2 -1
             -1  0  0
h2(n1, n2) =  2  0  0
             -1  0  0
then h(n1, n2) is
             -1  0  0
h(n1, n2) =   2  0  0
             -2  2 -1
⊙ Series-structure filtering
Structure: the input passes through H1 and then H2 in cascade, so h(n1, n2) is the convolution of h1(n1, n2) with h2(n1, n2).
For example, with the same h1(n1, n2) and h2(n1, n2) as above, h(n1, n2) is
              1 -2  1
h(n1, n2) =  -2  4 -2
              1 -2  1

7) Special effect processing for image transitions
● Top-and-bottom closing display
Simply keep drawing a symmetric pair of pixel rows, one from the top and one from the bottom, at the same time until they meet.
● Left-and-right closing display
Simply keep drawing a symmetric pair of pixel columns, one from the left and one from the right, at the same time until they meet.
● Four-sides-to-center display
Simply keep drawing inward from all four sides at the same time until reaching the center point.
● Center-to-four-sides display
Simply keep drawing outward from the center point toward all four sides until reaching the edges.
● Four-corner/center display
Starting from the upper-left and lower-right corners, draw rows and columns of pixels step by step along the main diagonal toward the center.
● Horizontal interlaced display
Choose an interval length L, then draw the rows of pixels at heights L, 2L, 3L, ...; the step between successively drawn rows is L.
● Vertical interlaced display
Choose an interval length L, then draw the columns of pixels at widths L, 2L, 3L, ...; the step between successively drawn columns is L.
● Left-to-right (right-to-left) display
Continuously draw columns of pixels from left to right (or right to left) until reaching the edge.
● Top-down (bottom-up) display
Continuously draw rows of pixels from top to bottom (or bottom to top) until reaching the edge.

8) Edge detection
In image measurement and pattern recognition, extracting lines from an image to detect edges or contours is one of the most common operations, and many mature algorithms exist, such as differential algorithms and mask (template) algorithms. Differential algorithms typically use n*n pixel blocks, for example 3*3 or 4*4. A 3*3 pixel block is as follows:
f(i-1, j-1)  f(i-1, j)  f(i-1, j+1)
f(i, j-1)    f(i, j)    f(i, j+1)
f(i+1, j-1)  f(i+1, j)  f(i+1, j+1)
Let f(i, j) be the pixel to be processed and g(i, j) the processed pixel.
● Roberts operator
g(i, j) = sqrt((f(i, j) - f(i+1, j+1))^2 + (f(i+1, j) - f(i, j+1))^2)
or
g(i, j) = |f(i, j) - f(i+1, j+1)| + |f(i+1, j) - f(i, j+1)|
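A hedged sketch of the operator (names illustrative; this uses the standard Roberts cross, which takes the two diagonal differences of the 2*2 block at (i, j)):

```python
import math

def roberts(f, i, j):
    """Roberts cross response at (i, j); f is a gray image as nested lists."""
    gx = f[i][j] - f[i + 1][j + 1]
    gy = f[i + 1][j] - f[i][j + 1]
    return math.sqrt(gx * gx + gy * gy)   # or abs(gx) + abs(gy)
```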
● Sobel operator
For each pixel f(i, j) of a digital image, the weighted gray values of its neighborhood above, below, left, and right are examined; the weighted sums of the gray values along the directions (0, 45, 90, and 135 degrees) are combined as the output to extract the image edge.
g(i, j) = fxr + fyr, where
fxr = f(i-1, j-1) + 2*f(i-1, j) + f(i-1, j+1) - f(i+1, j-1) - 2*f(i+1, j) - f(i+1, j+1)
fyr = f(i-1, j-1) + 2*f(i, j-1) + f(i+1, j-1) - f(i-1, j+1) - 2*f(i, j+1) - f(i+1, j+1)
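A direct transcription of the two sums (names illustrative; the magnitudes are combined with absolute values here, a common variant of simply adding fxr and fyr):

```python
def sobel(f, i, j):
    """Sobel response at interior pixel (i, j) of a gray image f."""
    fxr = (f[i-1][j-1] + 2 * f[i-1][j] + f[i-1][j+1]
           - f[i+1][j-1] - 2 * f[i+1][j] - f[i+1][j+1])
    fyr = (f[i-1][j-1] + 2 * f[i][j-1] + f[i+1][j-1]
           - f[i-1][j+1] - 2 * f[i][j+1] - f[i+1][j+1])
    return abs(fxr) + abs(fyr)
```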
● Laplace operator
The Laplace operator is a second-order differential operator. It has two forms: the 4-neighborhood differential operator and the 8-neighborhood differential operator.

⊙ 4-neighborhood differential
g(i, j) = |4*f(i, j) - f(i, j-1) - f(i-1, j) - f(i+1, j) - f(i, j+1)|
⊙ 8-neighborhood differential
g(i, j) = |8*f(i, j) - f(i, j-1) - f(i-1, j) - f(i+1, j) - f(i, j+1) - f(i-1, j-1) - f(i-1, j+1) - f(i+1, j-1) - f(i+1, j+1)|
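Both forms transcribe directly (names illustrative; f is a gray image as nested lists and (i, j) must be an interior pixel):

```python
def laplace4(f, i, j):
    """4-neighborhood Laplacian magnitude at (i, j)."""
    return abs(4 * f[i][j]
               - f[i][j-1] - f[i-1][j] - f[i+1][j] - f[i][j+1])

def laplace8(f, i, j):
    """8-neighborhood Laplacian magnitude at (i, j)."""
    s = sum(f[i+di][j+dj] for di in (-1, 0, 1) for dj in (-1, 0, 1)) - f[i][j]
    return abs(8 * f[i][j] - s)
```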

● Other common operators
⊙ Bottom-right edge extraction
With a 3*3 block, the expression is
g(i, j) = |-2*f(i, j-1) - 2*f(i-1, j) + 2*f(i+1, j) + 2*f(i, j+1)|
⊙ Prewitt edge-detection template operator
The Prewitt operator is an edge template operator consisting of eight template directions: 0, 45, 90, 135, 180, 225, 270, and 315 degrees.
The eight 3*3 edge templates and their directions are as follows:
90 degrees:        45 degrees:
 1  1  1           -1 -1 -1
 1 -2  1            1 -2  1
-1 -1 -1            1  1  1
0 degrees:         315 degrees:
-1  1  1            1  1 -1
-1 -2  1            1 -2 -1
-1  1  1            1  1 -1
270 degrees:       225 degrees:
 1  1  1           -1 -1  1
-1 -2  1           -1 -2  1
-1 -1  1            1  1  1
180 degrees:       135 degrees:
 1  1  1            1 -1 -1
 1 -2 -1            1 -2 -1
 1 -1 -1            1  1  1
The general 3*3 template expression is:
a1*f(i-1, j-1)   a8*f(i, j-1)   a7*f(i+1, j-1)
a2*f(i-1, j)    -2*f(i, j)      a6*f(i+1, j)
a3*f(i-1, j+1)   a4*f(i, j+1)   a5*f(i+1, j+1)
g(i, j) = |-2*f(i, j) + a8*f(i, j-1) + a1*f(i-1, j-1) + a2*f(i-1, j) + a3*f(i-1, j+1) + a4*f(i, j+1) + a5*f(i+1, j+1) + a6*f(i+1, j) + a7*f(i+1, j-1)|
In program design, the templates are applied to the image in sequence; the template most similar to the area under test produces the maximum response, and this maximum is taken as the output value of the operator.
⊙ Robinson operator
The Robinson operator is a template operator consisting of eight template directions: 0, 45, 90, 135, 180, 225, 270, and 315 degrees.
The eight 3*3 edge templates and their directions are as follows:
90 degrees:        45 degrees:
 1  2  1            0  1  2
 0  0  0           -1  0  1
-1 -2 -1           -2 -1  0
0 degrees:         315 degrees:
-1  0  1           -2 -1  0
-2  0  2           -1  0  1
-1  0  1            0  1  2
270 degrees:       225 degrees:
-1 -2 -1            0 -1 -2
 0  0  0            1  0 -1
 1  2  1            2  1  0
180 degrees:       135 degrees:
 1  0 -1            2  1  0
 2  0 -2            1  0 -1
 1  0 -1            0 -1 -2
The usage is the same as that of the Prewitt operator.
⊙ Kirsch operator
The Kirsch operator is a template operator consisting of eight edge templates: 0, 45, 90, 135, 180, 225, 270, and 315 degrees.
The eight 3*3 edge templates and their directions are as follows:
90 degrees:        45 degrees:
 5  5  5           -3  5  5
-3  0 -3           -3  0  5
-3 -3 -3           -3 -3 -3
0 degrees:         315 degrees:
-3 -3  5           -3 -3 -3
-3  0  5           -3  0  5
-3 -3  5           -3  5  5
270 degrees:       225 degrees:
 5  5 -3           -3 -3 -3
 5  0 -3            5  0 -3
-3 -3 -3            5  5 -3
180 degrees:       135 degrees:
 5 -3 -3            5  5 -3
 5  0 -3            5  0 -3
 5 -3 -3           -3 -3 -3
The usage is the same as that of the Prewitt operator.
⊙ Smoothed operator
The smoothed operator is a 3*3 operator:
     |-1  0  1|        | 1  1  1|
dx = |-1  0  1|   dy = | 0  0  0|
     |-1  0  1|        |-1 -1 -1|
Then d = sqrt(dx^2 + dy^2) or d = |dx| + |dy|.
Equivalently:
dx(i, j) = f(i-1, j+1) + f(i, j+1) + f(i+1, j+1) - f(i-1, j-1) - f(i, j-1) - f(i+1, j-1)
dy(i, j) = f(i-1, j-1) + f(i-1, j) + f(i-1, j+1) - f(i+1, j-1) - f(i+1, j) - f(i+1, j+1)

9) Grayscale image processing
So-called gray-level processing redefines the gray level of the output image based on the gray level of the monochrome source image, in order to improve the image's contrast. Monochrome images commonly have 256, 128, or 64 gray levels; the following uses a 256-level monochrome image as an example.
Let the gray level of the source image be f(i, j) and the processed gray level be g(i, j).
● Reverse processing
The same as reversed color image processing: g(i, j) = 255 - f(i, j)
● Gray-level switching
The switching relationship between input and output gray values is given by a transfer curve (figure omitted in the original).
● Increase contrast
Low input gray values are mapped lower and high input values higher, stretching the gray range and increasing the contrast (transfer curve omitted in the original).
● Reduce contrast
(transfer curve omitted in the original)
● Improve contrast
(transfer curve omitted in the original)
● Enhance contrast
(transfer curve omitted in the original)
● Local Filtering
Local filtering is an image processing technique that uses the color values of pixels in 3x3 blocks to set the current pixel.
⊙ Mean filtering
Similar to smoothing of color images:
g(i, j) = (f(i-1, j-1) + f(i-1, j) + f(i-1, j+1) + f(i, j-1) + f(i, j) + f(i, j+1) + f(i+1, j-1) + f(i+1, j) + f(i+1, j+1)) / 9
Pay attention to edge pixels here to avoid out-of-bounds access.
⊙ Minimum filtering
Minimum filtering cuts out an n*m (for example, 3*3) block of pixels centered on the current pixel f(i, j); g(i, j) takes the minimum gray value in the block.
⊙ Maximum filtering
Maximum filtering cuts out an n*m (for example, 3*3) block of pixels centered on the current pixel f(i, j); g(i, j) takes the maximum gray value in the block.
⊙ Median filtering
Median filtering cuts out an n*m (for example, 3*3) block of pixels centered on the current pixel f(i, j); g(i, j) takes the median of the block's gray values after sorting.
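The three rank filters differ only in which sorted value they pick, so they can share one sketch (names illustrative; a 3*3 window is used and edges are left unchanged):

```python
def rank_filter(image, pick):
    """3*3 rank filter over a gray image; pick chooses from the sorted block."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            block = sorted(image[i + di][j + dj]
                           for di in (-1, 0, 1) for dj in (-1, 0, 1))
            out[i][j] = pick(block)
    return out

minimum = lambda b: b[0]            # minimum filtering
maximum = lambda b: b[-1]           # maximum filtering
median  = lambda b: b[len(b) // 2]  # median: 5th of the 9 sorted values
```

Note how the median filter suppresses the outlier 100 in the test image below, which is why it is favored for impulse noise.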

10) Grayscale image binarization and binary image processing
● Binarization of grayscale images
⊙ Grayscale image histogram
For each gray value, count the number of pixels in the image that have that value. A grayscale histogram is a function of gray level describing the number of pixels at each gray level in an image: the abscissa is the gray level and the ordinate is its frequency of occurrence (the number of pixels). The purpose of a histogram is to give a simple, visible indicator of whether an image uses its permitted gray-level range reasonably. Generally, a digital image should use all, or almost all, of the possible gray levels; otherwise the quantization interval is effectively increased, and once gray levels are lost in a digital image the information cannot be restored. If an image contains brightness beyond the range the digitizer can handle, those gray levels are clipped to 0 or 255, producing spikes at one or both ends of the histogram. Useful statistical parameters of the histogram include the maximum gray level, the minimum gray level, the mean, and the standard deviation.
⊙ Threshold calculation and image binarization
The processing method of image binarization is:
g(i, j) = 1, if f(i, j) >= T
g(i, j) = 0, if f(i, j) < T
Generally, g(i, j) = 1 represents the object and g(i, j) = 0 represents the background. The method for determining T is called threshold selection.
● Binarization algorithm for grayscale images
To find the threshold value, perform the following steps:
(1) Compute the gray-level histogram of the input image, expressed as the gray-level probability function phs(i).
(2) Compute the gray mean: ave = Σ i * phs(i), i: 0 -> 255
(3) Compute the class mean aver(k) and class histogram w(k):
aver(k) = Σ i * phs(i), i: 0 -> k
w(k) = Σ phs(i), i: 0 -> k
(4) Compute the class separation index:
Q(k) = [ave * w(k) - aver(k)]^2 / [w(k) * (1 - w(k))]
(5) Find the k that maximizes Q; the threshold is T = k - 1.
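The five steps above follow the between-class-variance idea (essentially Otsu's method) and can be sketched as follows. The flat pixel list and all function names are illustrative; `phs`, `ave`, `aver`, and `w` correspond to the symbols in the text:

```python
def find_threshold(pixels):
    """Search for the gray level k that maximizes the class separation index Q."""
    n = len(pixels)
    phs = [0.0] * 256
    for p in pixels:
        phs[p] += 1.0 / n                       # step (1): probability histogram
    ave = sum(i * phs[i] for i in range(256))   # step (2): global gray mean
    best_q, best_k = -1.0, 1
    aver = w = 0.0
    for k in range(256):
        aver += k * phs[k]                      # step (3): class mean
        w += phs[k]                             #           class histogram
        if 0.0 < w < 1.0:
            q = (ave * w - aver) ** 2 / (w * (1.0 - w))   # step (4)
            if q > best_q:
                best_q, best_k = q, k           # step (5): best k
    return best_k

def binarize(pixels, t):
    """g = 1 where f >= T, g = 0 where f < T."""
    return [1 if p >= t else 0 for p in pixels]
```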
⊙ Gray-level slicing
Set all pixels of the input image within a given gray-level range to 0 (black) and all pixels at other gray levels to 255 (white), producing a black-and-white binary image.
⊙ Two-range binarization
Set all pixels of the input image within two equal-width gray-level ranges to 0 (black) and all pixels at other gray levels to 255 (white), producing a black-and-white binary image.
⊙ Linear binarization
Set all pixels of the input image within a given gray-level range to 0 (black) and all other pixels to 1/2 of their original value, separating the object from the background.

● Binary image processing
Binary image processing modifies a binary image to make it more suitable for image measurement. It includes the following operations:
Expansion (dilation) makes particles larger; expanding and then contracting an image can close up its grooves.
Contraction (erosion) makes particles smaller; contracting and then expanding an image can remove its small protrusions.
Clearing isolated points removes objects consisting of a single pixel; the dual operation corrects holes consisting of a single pixel.
Clearing small particles removes objects below a given area.
Clearing oversized particles removes objects above a given area.
Hole filling fills holes within a given range.
⊙ 4-neighborhood contraction
The principle of 4-neighborhood contraction is: in a 3*3 block, if the currently processed pixel f(i, j) is 0, then its neighbors f(i, j+1), f(i, j-1), f(i-1, j), f(i+1, j) are set to 0 in the output.
⊙ 8-neighborhood contraction
The principle of 8-neighborhood contraction is: in a 3*3 block, if the currently processed pixel f(i, j) is 0, then its neighbors f(i, j+1), f(i, j-1), f(i-1, j), f(i+1, j), f(i-1, j-1), f(i+1, j-1), f(i-1, j+1), f(i+1, j+1) are set to 0 in the output.
⊙ 4-neighborhood expansion
The principle of 4-neighborhood expansion is: in a 3*3 block, if the currently processed pixel f(i, j) is 1, then its neighbors f(i, j+1), f(i, j-1), f(i-1, j), f(i+1, j) are set to 1 in the output.
⊙ 8-neighborhood expansion
The principle of 8-neighborhood expansion is: in a 3*3 block, if the currently processed pixel f(i, j) is 1, then its neighbors f(i, j+1), f(i, j-1), f(i-1, j), f(i+1, j), f(i-1, j-1), f(i+1, j-1), f(i-1, j+1), f(i+1, j+1) are set to 1 in the output.
⊙ 8-neighborhood clearing of isolated points
The principle is: in a 3*3 block, if the currently processed pixel f(i, j) is 1 and its neighbors f(i, j+1), f(i, j-1), f(i-1, j), f(i+1, j), f(i-1, j-1), f(i+1, j-1), f(i-1, j+1), f(i+1, j+1) are all 0, the current pixel f(i, j) is set to 0.
⊙ 4-neighborhood clearing of isolated points
The principle is: in a 3*3 block, if the currently processed pixel f(i, j) is 1 and its neighbors f(i, j+1), f(i, j-1), f(i-1, j), f(i+1, j) are all 0, the current pixel f(i, j) is set to 0.
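A hedged sketch of two of the neighborhood operations above on a 0/1 image (names illustrative; one common formulation of contraction is used here, in which each background pixel clears its 4-neighbors in the output, mirroring the expansion rule):

```python
def erode4(img):
    """4-neighborhood contraction: background (0) pixels clear their
    four neighbors in the output image."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(h):
        for j in range(w):
            if img[i][j] == 0:
                for di, dj in ((0, 1), (0, -1), (-1, 0), (1, 0)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        out[ni][nj] = 0
    return out

def clear_isolated4(img):
    """4-neighborhood isolated point removal: a 1 whose in-range
    4-neighbors are all 0 becomes 0."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(h):
        for j in range(w):
            if img[i][j] == 1:
                nb = [img[i + di][j + dj]
                      for di, dj in ((0, 1), (0, -1), (-1, 0), (1, 0))
                      if 0 <= i + di < h and 0 <= j + dj < w]
                if all(v == 0 for v in nb):
                    out[i][j] = 0
    return out
```

The 8-neighborhood versions would use the full list of eight offsets instead of the four shown.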

This article is from the CSDN blog; when reproducing it, please indicate the source: http://blog.csdn.net/xuerene2/archive/2008/12/15/3519194.aspx
