Article from the personal Baidu space---2012.2.23
One. Point operations (pixel transforms)
1. Linear transformation
Principle: f' = a*f + b. Effect: for example, a = -1, b = 255 gives the classic color-inversion (negative) effect.
2. Thresholding
Principle: if f < T, set f = 0; if f ≥ T, set f = 255. Effect: a sketch-like look.
3. Window operation
Principle: like thresholding, but with three segments; gray levels inside the window are kept unchanged. Effect: can remove the background.
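As a minimal sketch of the two operations above (function names, the parameter names t/lo/hi, and the choice of 0/255 for the out-of-window segments are mine, not from the article), with the image as a list of rows of 0-255 ints:

```python
def threshold(img, t):
    """Binarize: below t -> 0, at or above t -> 255."""
    return [[0 if p < t else 255 for p in row] for row in img]

def window(img, lo, hi):
    """Three segments: keep gray levels inside [lo, hi], clamp the rest."""
    return [[p if lo <= p <= hi else (0 if p < lo else 255)
             for p in row] for row in img]

img = [[10, 100, 200],
       [30, 128, 250]]
print(threshold(img, 128))   # [[0, 0, 255], [0, 255, 255]]
print(window(img, 50, 200))  # [[0, 100, 200], [0, 128, 255]]
```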
4. Grayscale stretching
Principle (piecewise linear):
if x < x1: f = (y1/x1) * x
if x1 ≤ x ≤ x2: f = (y2-y1)/(x2-x1) * (x-x1) + y1
if x > x2: f = (255-y2)/(255-x2) * (x-x2) + y2
Effect: stretches the pixels in [x1, x2] onto [y1, y2]; for example, a dark image can be stretched toward the lighter end.
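A sketch of the piecewise-linear stretch described above, applied per pixel (integer arithmetic and the sample breakpoints are my choices for illustration):

```python
def stretch(p, x1, y1, x2, y2):
    """Piecewise-linear grayscale stretch of a single pixel value p."""
    if p < x1:
        return y1 * p // x1
    if p <= x2:
        return (y2 - y1) * (p - x1) // (x2 - x1) + y1
    return (255 - y2) * (p - x2) // (255 - x2) + y2

# Pixels in [x1, x2] = [50, 150] are stretched onto [y1, y2] = [20, 220]:
print([stretch(p, 50, 20, 150, 220) for p in (0, 50, 100, 150, 255)])
# [0, 20, 120, 220, 255]
```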
5. Grayscale equalization (histogram equalization)
Principle: count the pixels at each gray level, accumulate the counts, divide by the image area to normalize, then multiply by 255 to get the new gray level.
Effect: flattens the histogram, which greatly improves the image's contrast.
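A sketch of histogram equalization for an 8-bit image as a list of rows, following the steps above (the rounding convention is my choice):

```python
def equalize(img):
    """Histogram equalization: map gray levels through the scaled CDF."""
    area = len(img) * len(img[0])
    hist = [0] * 256
    for row in img:
        for p in row:
            hist[p] += 1
    # Cumulative distribution, normalized by area and scaled to 0..255.
    lut, cum = [0] * 256, 0
    for g in range(256):
        cum += hist[g]
        lut[g] = round(255 * cum / area)
    return [[lut[p] for p in row] for row in img]

print(equalize([[52, 55], [61, 52]]))  # [[128, 191], [255, 128]]
```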
Two. Geometric transformations (coordinate transformations)
1. Translation of the image
2. Mirroring of the image
3. Transposition of the image
4. Rotation of the image
Principle: three steps. First, move the origin to the rotation center (the image origin is at the top-left corner); second, rotate by angle θ: x = r*cos(A - θ), y = r*sin(A - θ), where (r, A) are the point's polar coordinates; third, move the center back to the new origin (the top-left corner of the new image).
The product of the three transformation matrices gives:
x0 = x1*cos(θ) + y1*sin(θ) - c*cos(θ) - d*sin(θ) + a
y0 = -x1*sin(θ) + y1*cos(θ) + c*sin(θ) - d*cos(θ) + b
where (a, b) is the center of the original image and (c, d) the center of the new image.
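A sketch of rotation by inverse mapping using the formulas above: for each destination pixel (x1, y1) we compute the source position (x0, y0) and read the nearest source pixel. Keeping the image size and center fixed is my simplification for illustration.

```python
import math

def rotate(img, theta):
    """Rotate a grayscale image (list of rows) about its center."""
    h, w = len(img), len(img[0])
    a, b = (w - 1) / 2, (h - 1) / 2   # center of the original image
    c, d = (w - 1) / 2, (h - 1) / 2   # center of the new image (same size here)
    ct, st = math.cos(theta), math.sin(theta)
    out = [[0] * w for _ in range(h)]
    for y1 in range(h):
        for x1 in range(w):
            # Expanded, these are exactly the two formulas above.
            x0 = (x1 - c) * ct + (y1 - d) * st + a
            y0 = -(x1 - c) * st + (y1 - d) * ct + b
            i, j = int(x0 + 0.5), int(y0 + 0.5)   # nearest-neighbor rounding
            if 0 <= i < w and 0 <= j < h:
                out[y1][x1] = img[j][i]
    return out

print(rotate([[1, 2], [3, 4]], math.pi / 2))  # [[3, 1], [4, 2]]
```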
5. Image zoom
x0 = x1/fx, y0 = y1/fy (inverse mapping; fx and fy are the horizontal and vertical zoom factors)
6. Interpolation methods
(a) Nearest-neighbor interpolation: add 0.5 to the floating-point coordinate and truncate (i.e., round to the nearest pixel).
(b) Bilinear interpolation:
f(x,0) = f(0,0) + x*(f(1,0) - f(0,0))
f(x,1) = f(0,1) + x*(f(1,1) - f(0,1))
f(x,y) = f(x,0) + y*(f(x,1) - f(x,0))
(A geometric figure would make this easy to see, but it is inconvenient to draw here.)
For a destination pixel, the inverse transform yields floating-point coordinates (i+u, j+v), where i, j are non-negative integers and u, v are floating-point numbers in [0, 1]. The pixel value f(i+u, j+v) is then determined by the four surrounding source pixels at (i, j), (i+1, j), (i, j+1), (i+1, j+1), namely:
f(i+u, j+v) = (1-u)(1-v) f(i,j) + (1-u)v f(i,j+1) + u(1-v) f(i+1,j) + uv f(i+1,j+1)
where f(i, j) is the pixel value at source coordinates (i, j), and so on.
This is bilinear interpolation. It is computationally heavier, but the image quality after zooming is high and pixel values stay continuous. However, because bilinear interpolation behaves like a low-pass filter, it damages the high-frequency components and may blur the image's contours to some extent.
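The four-neighbor formula above can be sketched directly (the sketch assumes the fractional coordinate is strictly inside the image so that i+1 and j+1 are valid indices):

```python
def bilinear(img, x, y):
    """Bilinear interpolation at fractional coordinates (x, y) = (i+u, j+v)."""
    i, j = int(x), int(y)
    u, v = x - i, y - j
    return ((1 - u) * (1 - v) * img[j][i] +        # f(i,   j)
            (1 - u) * v * img[j + 1][i] +          # f(i,   j+1)
            u * (1 - v) * img[j][i + 1] +          # f(i+1, j)
            u * v * img[j + 1][i + 1])             # f(i+1, j+1)

img = [[0, 100],
       [100, 200]]
print(bilinear(img, 0.5, 0.5))  # 100.0 (mean of the four pixels)
```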
(c) Cubic convolution
Interpolation algorithms such as nearest-neighbor sampling, bilinear interpolation, and cubic convolution apply to rotations, shears, general linear transformations, and nonlinear transformations.
Three. Image enhancement (grayscale correction)
1. Smoothing the image
(a) Template (mask) operations: the new value of the current pixel is the weighted sum of the pixels covered by the template (the output shrinks by the template width minus one horizontally and the template height minus one vertically, unless border pixels are kept unchanged). Common templates are the Gaussian template (1,2,1; 2,4,2; 1,2,1) * (1/16) and the cross template (0,1,0; 1,1,1; 0,1,0) * (1/5).
(b) Median filtering: use the median of the values under the template as the gray level of the current point. It leaves step and ramp functions unaffected, effectively removes single and double impulses, and smooths the peaks of triangle functions.
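A sketch of both smoothing operations on a grayscale image as a list of rows: a 3x3 Gaussian-template convolution and a 3x3 median filter. Leaving border pixels unchanged is one common convention, assumed here.

```python
GAUSS = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]   # weights sum to 16

def smooth(img):
    """3x3 Gaussian-template smoothing; borders kept unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = sum(GAUSS[dy][dx] * img[y - 1 + dy][x - 1 + dx]
                    for dy in range(3) for dx in range(3))
            out[y][x] = s // 16
    return out

def median3(img):
    """3x3 median filter; borders kept unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = sorted(img[y - 1 + dy][x - 1 + dx]
                         for dy in range(3) for dx in range(3))
            out[y][x] = win[4]          # middle of the 9 sorted values
    return out

img = [[10, 10, 10], [10, 255, 10], [10, 10, 10]]   # a single impulse
print(median3(img))  # impulse removed: the center becomes 10
```

Note how the median filter removes the impulse completely, while the Gaussian template only spreads it out (the center becomes 71 here).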
2. Sharpening of images
(a) Gradient sharpening
In calculus, the gradient is the vector of partial derivatives with respect to x and y.
For digital images, the magnitude of this vector is used, simplified here (the Roberts cross) to |f(x,y) - f(x+1,y+1)| + |f(x+1,y) - f(x,y+1)|.
This value is small where the image varies little and large where it changes sharply, which produces the sharpening effect; the same algorithm can also be used for edge detection.
Derivative: subtract neighboring values and divide by the change in x: ∇f(x,y) = (f(x+1,y) - f(x,y)) / 1.
Integral: accumulate with a for loop.
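A sketch of the simplified gradient magnitude above (the Roberts cross); the last row and column are left at 0 since the diagonal differences need a neighbor:

```python
def roberts(img):
    """g = |f(x,y) - f(x+1,y+1)| + |f(x+1,y) - f(x,y+1)|."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            out[y][x] = (abs(img[y][x] - img[y + 1][x + 1]) +
                         abs(img[y + 1][x] - img[y][x + 1]))
    return out

img = [[0, 0, 200],        # vertical edge between columns 1 and 2
       [0, 0, 200],
       [0, 0, 200]]
print(roberts(img))  # [[0, 400, 0], [0, 400, 0], [0, 0, 0]]
```

The response is zero in the flat regions and large along the edge, as the text describes.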
(b) Laplacian sharpening (essentially a template operation)
Sum of the second-order partial derivatives with respect to x and y.
Second derivative: apply the first-order difference twice.
(c) High-pass filtering (ideal, Butterworth, exponential, and trapezoidal high-pass filters)
Principle: all of them transform the image to the frequency domain and then apply the filter there.
3. Pseudo-color coding
Principle: colorize a grayscale image. The system provides several pseudo-color coding tables to choose from.
Process: 1. use a color coding table to modify the palette; 2. reset the image's palette.
Four. Image erosion, dilation, and thinning
1. Erosion of the image
Principle: {x | S[x] ⊆ X}, where ⊆ means inclusion (the translated structuring element S fits inside the image set X).
2. Dilation of the image
Principle: {x | S[x] ∩ X ≠ ∅} (the translated S intersects X).
3. Opening and closing
Opening (erosion then dilation): removes convex corners and small protrusions.
Closing (dilation then erosion): fills concave corners and small gaps.
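A sketch of binary erosion and dilation with a 3x3 square structuring element, plus opening and closing built from them. Pixels are 0/1 and the area outside the image is treated as background; both choices are mine for illustration.

```python
def _get(img, y, x):
    """Read a pixel, treating everything outside the image as 0."""
    h, w = len(img), len(img[0])
    return img[y][x] if 0 <= y < h and 0 <= x < w else 0

def erode(img):
    """A pixel survives only if its whole 3x3 neighborhood is foreground."""
    h, w = len(img), len(img[0])
    return [[min(_get(img, y + dy, x + dx)
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1))
             for x in range(w)] for y in range(h)]

def dilate(img):
    """A pixel becomes foreground if any 3x3 neighbor is foreground."""
    h, w = len(img), len(img[0])
    return [[max(_get(img, y + dy, x + dx)
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1))
             for x in range(w)] for y in range(h)]

def opening(img):       # erosion then dilation: removes small protrusions
    return dilate(erode(img))

def closing(img):       # dilation then erosion: fills small gaps
    return erode(dilate(img))

img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
print(erode(img))  # the 2x2 block is too small for the element and vanishes
```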
4. Hit-or-miss transform
Two templates P1 and P2: a point is kept when the translated P1 is contained in the image and the translated P2 does not intersect it.
5. Thinning (like a fire burning inward from both sides)
1. The resulting edge is continuous.
2. The boundary moves inward at the same speed on all sides.
Five. Image edge detection, extraction, and contour tracking
1. Edge detection
Concepts: primitives, edges, step edges, roof edges, ...
Principle: mostly convolution.
2. Hough transform
Transforms lines, circles, etc. into points in a parameter space using their mathematical equations.
3. Contour extraction
Binary contour extraction: take a point as the center; if all eight surrounding points are foreground, the point is an interior point and is set to 255 (erased), leaving only the contour.
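A sketch of this rule, assuming (as was common for such notes) that foreground is 0 (black) and background is 255 (white); borders are left untouched for simplicity:

```python
def contour(img):
    """Erase interior points: a black pixel whose 8 neighbors are all black."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if img[y][x] == 0 and all(
                    img[y + dy][x + dx] == 0
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y][x] = 255          # interior point: erase
    return out

B, W = 0, 255
img = [[W, W, W, W, W],
       [W, B, B, B, W],
       [W, B, B, B, W],
       [W, B, B, B, W],
       [W, W, W, W, W]]
print(contour(img))   # only the center pixel is erased; the ring remains
```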
4. Contour tracking
Follow the boundary along a fixed exploration order.
5. Seed filling
Boundary fill algorithm; scan-line seed fill algorithm.
Six. Image analysis
1. Image segmentation
Threshold method; region method (region merging).
2. Projection method
Horizontal projection, vertical projection. (Generally useful only for binary images: count the nonzero pixels in each row or column, ...)
3. Difference-image method
Image addition: superimpose images.
Image subtraction: detects the motion of objects in the same scene (frame differencing).
Image multiplication: multiply the image by a mask to blank out part of the image.
Image division: produces ratio images based on color and spectrum.
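The subtraction case can be sketched as a pixel-wise absolute difference of two frames of the same scene; the nonzero entries mark where the object moved (frame contents are illustrative):

```python
def diff(a, b):
    """Pixel-wise absolute difference of two same-sized images."""
    return [[abs(p - q) for p, q in zip(ra, rb)] for ra, rb in zip(a, b)]

frame1 = [[10, 10, 10],
          [10, 200, 10]]       # object at the center
frame2 = [[10, 10, 10],
          [10, 10, 200]]       # object moved one pixel right
print(diff(frame1, frame2))    # [[0, 0, 0], [0, 190, 190]]
```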
4. Image matching
Template matching: correlation between an image region and the template.
Amplitude-ordering matching: check whether the pixel differences exceed a threshold.
Hierarchical search matching: compare the image from coarse to fine (first locate the approximate position, then refine step by step).
Seven. Image restoration
Image restoration: determine the noise/degradation model, then invert it.
Image enhancement: no explicit noise model is assumed.
Image restoration divides into linear and nonlinear algebraic methods.
Restoration requires knowing the point-spread function (the H function) and the noise model.
Basic equation: G(u,v) = MN·H(u,v)·F(u,v) + N(u,v); G is the degraded image, F the original image, H the point-spread function, N the noise (MN is a scale factor from the discrete Fourier convention).
1. Inverse filtering
Principle: after the Fourier transform, G = F·H, so divide by H and apply the inverse Fourier transform.
Effect: good results at a high signal-to-noise ratio, but there is no constraint on the restored image.
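A 1-D sketch of inverse filtering under the noise-free case: blur a signal by circular convolution with a kernel h (done here by multiplying spectra), then recover it by dividing by H and transforming back. The signal and kernel are illustrative, and the kernel is chosen so H has no zeros; a real inverse filter must guard against H ≈ 0. A naive DFT keeps the sketch dependency-free.

```python
import cmath

def dft(x, sign=-1):
    """Naive discrete Fourier transform (O(n^2), fine for a demo)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def idft(X):
    n = len(X)
    return [v / n for v in dft(X, sign=1)]

f = [0, 0, 4, 8, 4, 0, 0, 0]               # original 1-D "image" row
h = [0.6, 0.2, 0, 0, 0, 0, 0, 0.2]         # circular blur kernel, H nonzero
F, H = dft(f), dft(h)
G = [a * b for a, b in zip(F, H)]          # degradation: G = F * H
F_hat = [g / hh for g, hh in zip(G, H)]    # inverse filter: divide by H
restored = [round(v.real, 6) + 0.0 for v in idft(F_hat)]
print(restored)  # [0.0, 0.0, 4.0, 8.0, 4.0, 0.0, 0.0, 0.0]
```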
2. Least-squares constrained restoration
Constrains the restored image, using ||Qf|| as the constraint term (minimizing it gives the best result).
(a) Wiener filtering
Principle: uses a transfer function similar to the inverse filter's.
Effect: good under a smooth, linear random-noise model.
(b) Constrained least squares
Principle: as above.
Effect: unknown.
3. Nonlinear restoration methods
All of the following build on the linear approach.
Maximum a posteriori restoration: estimates the source image.
Maximum entropy restoration: based on image entropy and noise entropy. Effect: makes the restored image smoother.
Projection restoration: decomposes into many equations and solves them iteratively.
Monte Carlo restoration: divide the image into many cells and treat gray levels as particles carrying energy; place the particles into the cells one by one (place the first at some position x to form a preliminary image, then decide each later particle's position by checking conditions against that image). Fast to compute, and very effective when the noise is small.
Geometric correction: establish a functional relationship between pixel positions in the original image and the current one. (There is no noise here, only changes in displacement.)
Blind restoration: restoration without prior knowledge of the degradation; either direct measurement (infer the degradation from features of the image) or indirect estimation (usable only when there is no noise).
4. Point-spread function (PSF) (spatial description of the degradation)
The PSF under uniform linear motion blur.
The PSF of a defocused system.
If the PSF is not known in advance, it must be measured (via test targets, the imaging system, spectral analysis of the degraded image, etc.).
5. Image noise
External noise (e.g. weather); internal noise (electrical, optical).
Stationary vs. non-stationary (whether the statistical characteristics change over time).
Additive noise vs. multiplicative noise.
Nine. Image compression coding (compression removes the redundancy in the information: it keeps the uncertain information and discards what can be inferred from what is already known)
Compression is generally divided into lossless and lossy.
There are many compression-coding methods, mainly in four categories: (1) pixel coding; (2) predictive coding; (3) transform coding; (4) other methods.
Pixel coding processes each pixel separately during encoding, ignoring the correlation between pixels. Common pixel-coding methods are: (1) pulse code modulation (PCM); (2) entropy coding; (3) run-length coding; (4) bit-plane coding. Here we focus on Huffman coding (a form of entropy coding) and run-length coding (taking reading the .PCX file format as the example).
Predictive coding removes the correlation and redundancy between neighboring pixels and encodes only the new information. A simple example: because gray levels vary continuously, the differences between neighboring pixels within a region may be small. If we record only the first pixel's gray level and store every other pixel as its difference from the previous one, we achieve compression. For example, the six gray levels 248, 250, 251, 251, 252, 255 are stored as 248, 2, 1, 0, 1, 3; 250 needs 8 bits, while 2 needs only 2 bits, so compression is achieved.
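The worked example above can be sketched as a pair of functions (a minimal previous-pixel predictor; the names are mine):

```python
def encode(pixels):
    """Store the first pixel, then each pixel's difference from the previous."""
    return [pixels[0]] + [b - a for a, b in zip(pixels, pixels[1:])]

def decode(codes):
    """Rebuild the pixels by accumulating the differences."""
    out = [codes[0]]
    for d in codes[1:]:
        out.append(out[-1] + d)
    return out

pixels = [248, 250, 251, 251, 252, 255]
codes = encode(pixels)
print(codes)                    # [248, 2, 1, 0, 1, 3]
assert decode(codes) == pixels  # lossless round trip
```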
Common predictive-coding schemes are delta modulation (DM) and differential pulse code modulation (DPCM); the details are omitted here.