3. Common Video Processing Algorithms
3.1 Image Scaling
Image scaling creates new pixel locations and assigns gray values to them. For example, suppose we want to enlarge an image by a factor of 1.5 in each dimension. One of the easiest ways to visualize scaling is to lay an imaginary grid, with the dimensions of the target image, over the original image. The spacing of the grid lines is obviously less than one pixel, because we are fitting the grid onto a smaller image. To assign a gray value to any point on the overlay, we find the nearest pixel in the original image and give its gray value to the new pixel on the grid. Once a value has been assigned to every point on the grid, the enlarged image is complete. This assignment scheme is called nearest-neighbor interpolation. Another method assigns the gray value by bilinear interpolation of the four nearest points. Many other interpolation methods exist, but the most common are the nearest-neighbor and bilinear methods.
The bilinear interpolation algorithm is given below:
(1) Calculate the width and height ratios of the source image to the target image.
w0: width of the source image.
h0: height of the source image.
w1: width of the target image.
h1: height of the target image.
float fw = (float)(w0 - 1) / (w1 - 1);
float fh = (float)(h0 - 1) / (h1 - 1);
(2) For a point (x, y) in the target image, calculate the corresponding coordinates in the source image; the result is a floating-point number.
float x0 = x * fw;
float y0 = y * fh;
int x1 = (int)x0;
int x2 = x1 + 1;
int y1 = (int)y0;
int y2 = y1 + 1;
The four corresponding points in the source image are (x1, y1), (x1, y2), (x2, y1), (x2, y2).
(3) Calculate the weights of the four surrounding points:
float fx1 = x0 - x1;
float fx2 = 1.0f - fx1;
float fy1 = y0 - y1;
float fy2 = 1.0f - fy1;
float s1 = fx1 * fy1;
float s2 = fx2 * fy1;
float s3 = fx2 * fy2;
float s4 = fx1 * fy2;
Writing value(x, y) for the pixel value at coordinate (x, y), we then have:
value(x0, y0) = value(x2, y2) * s1 + value(x1, y2) * s2 + value(x1, y1) * s3 + value(x2, y1) * s4;
If the above is not clear, it can be derived in two steps.
First interpolate horizontally to obtain the pixel values at (x0, y1) and (x0, y2):
float value(x0, y1) = value(x1, y1) * fx2 + value(x2, y1) * fx1;
float value(x0, y2) = value(x1, y2) * fx2 + value(x2, y2) * fx1;
Note: the closer a neighboring point is, the larger its weight should be, while the fractional offset measures distance; hence each weight is one minus the corresponding offset.
Then interpolate vertically:
float value(x0, y0) = value(x0, y1) * fy2 + value(x0, y2) * fy1;
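Putting steps (1)-(3) together, here is a minimal C sketch of bilinear scaling for an 8-bit grayscale image. The function name, the row-major buffer layout, and the clamping at the right/bottom border are assumptions for illustration, not part of the original text.

#include <stdlib.h>

/* Bilinear scaling, following steps (1)-(3) above.
 * Returns a newly allocated w1 x h1 buffer, or NULL on failure. */
unsigned char *bilinear_scale(const unsigned char *src, int w0, int h0,
                              int w1, int h1)
{
    unsigned char *dst = malloc((size_t)w1 * h1);
    if (!dst)
        return NULL;

    /* (1) ratio of source size to target size */
    float fw = (float)(w0 - 1) / (w1 - 1);
    float fh = (float)(h0 - 1) / (h1 - 1);

    for (int y = 0; y < h1; y++) {
        for (int x = 0; x < w1; x++) {
            /* (2) corresponding (floating-point) source coordinates */
            float x0 = x * fw, y0 = y * fh;
            int x1 = (int)x0, y1 = (int)y0;
            int x2 = (x1 + 1 < w0) ? x1 + 1 : x1;  /* clamp at the border */
            int y2 = (y1 + 1 < h0) ? y1 + 1 : y1;

            /* (3) weights of the four surrounding points */
            float fx1 = x0 - x1, fx2 = 1.0f - fx1;
            float fy1 = y0 - y1, fy2 = 1.0f - fy1;

            float v = src[y2 * w0 + x2] * (fx1 * fy1)   /* s1 */
                    + src[y2 * w0 + x1] * (fx2 * fy1)   /* s2 */
                    + src[y1 * w0 + x1] * (fx2 * fy2)   /* s3 */
                    + src[y1 * w0 + x2] * (fx1 * fy2);  /* s4 */

            dst[y * w1 + x] = (unsigned char)(v + 0.5f);
        }
    }
    return dst;
}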
The nearest-neighbor method needs to measure the distance between two pixels. The distance between pixels (x, y) and (s, t) can be expressed in several ways, for example the Euclidean distance De = sqrt((x - s)^2 + (y - t)^2), the city-block distance D4 = |x - s| + |y - t|, or the chessboard distance D8 = max(|x - s|, |y - t|).
3.2 Video Noise Reduction Algorithms
3.2.1 Spatial Filtering
In general, linear filtering of an M x N image f with an m x n filter mask w is given by the following formula:
g(x, y) = Σ(s = -a..a) Σ(t = -b..b) w(s, t) * f(x + s, y + t)
Here, a = (m - 1)/2 and b = (n - 1)/2, and the formula is evaluated for x = 0, 1, ..., M - 1 and y = 0, 1, ..., N - 1. g is the filtered image.
Common templates include, for example, the 3x3 box (mean) template, whose weights are all 1/9, and the 3x3 weighted-average template with weights
1 2 1
2 4 2
1 2 1
scaled by 1/16.
Spatial filtering may blur the image. An improved approach adapts the template weights to information such as the gray-level difference and the pixel distance, as in bilateral filtering.
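For concreteness, here is a minimal C sketch of template filtering with the 1/16 weighted-average mask; skipping the one-pixel border is a simplification, and the function name is made up for illustration.

void filter3x3(const unsigned char *f, unsigned char *g, int w, int h)
{
    /* 1/16 weighted-average smoothing template */
    static const int mask[3][3] = { {1, 2, 1}, {2, 4, 2}, {1, 2, 1} };

    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            int sum = 0;
            for (int s = -1; s <= 1; s++)
                for (int t = -1; t <= 1; t++)
                    sum += mask[s + 1][t + 1] * f[(y + s) * w + (x + t)];
            g[y * w + x] = (unsigned char)(sum / 16);
        }
    }
}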
Median filter: sort the values of all pixels in the region covered by the template and output the middle value as the filter result.
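A minimal C sketch of the 3x3 median filter as described: gather the nine covered pixels, sort them, output the middle value. The insertion sort and the skipped border are implementation choices.

void median3x3(const unsigned char *f, unsigned char *g, int w, int h)
{
    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            unsigned char win[9];
            int n = 0;
            for (int s = -1; s <= 1; s++)        /* gather the 3x3 window */
                for (int t = -1; t <= 1; t++)
                    win[n++] = f[(y + s) * w + (x + t)];
            for (int i = 1; i < 9; i++) {        /* insertion sort */
                unsigned char v = win[i];
                int j = i - 1;
                while (j >= 0 && win[j] > v) { win[j + 1] = win[j]; j--; }
                win[j + 1] = v;
            }
            g[y * w + x] = win[4];               /* output the middle value */
        }
    }
}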
3.2.2 Temporal Filtering
A common form is the recursive filter g(x, y, t) = k * s(x, y, t) + (1 - k) * g(x, y, t - Tp), where Tp is the frame period and k is driven by |s(x, y, t) - g(x, y, t - Tp)|. The larger the motion, the closer k is to 1 and the weaker the filtering; for a static image k approaches 0 and the filtering is strongest. Motion trails are eliminated through this motion detection.
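A minimal C sketch of such a motion-adaptive recursive filter. The threshold of 32 and the linear mapping from pixel difference to k are illustrative assumptions, not values from any particular device.

#include <stdlib.h>

/* Recursive temporal filter: g = k*s + (1-k)*g_prev, with k driven by
 * the per-pixel difference (a crude motion measure). */
void temporal_filter(const unsigned char *s, /* current input frame */
                     unsigned char *g,       /* previous output, updated in place */
                     int npixels)
{
    for (int i = 0; i < npixels; i++) {
        int diff = abs((int)s[i] - (int)g[i]);
        /* big difference -> motion -> k near 1 (weak filtering);
         * small difference -> static -> k near 0 (strong filtering) */
        float k = (diff > 32) ? 1.0f : diff / 32.0f;
        g[i] = (unsigned char)(k * s[i] + (1.0f - k) * g[i] + 0.5f);
    }
}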
3.2.3 Spatio-Temporal Filtering
Spatial filtering depends only on a single frame. Each frame may look clean on its own, but the sequence is not steady from frame to frame: the noise appears stronger and the image looks blurred, so the subjective impression is poor. For video, therefore, spatial filtering alone is not ideal, while temporal filtering works well; in motion scenes, however, temporal filtering can produce ghosting.
The human eye resolves moving objects poorly, so spatial filtering is well suited to moving regions. For static regions, temporal filtering preserves the sharpness of the original image and removes noise without ghosting. We can therefore use motion detection to adaptively select between the spatial and temporal filtering algorithms.
The following uses the spatio-temporal filter of the DM8168 as an example:
temporal_strength: the temporal filtering strength. If the motion measure m exceeds temporal_strength, temporal filtering is skipped and only spatial filtering is applied. a0 denotes the noise estimate, which reduces the spatial filtering strength when the noise is low and raises it when the noise is high; in this way the image blur caused by spatial filtering is avoided. The noise is estimated as the mean absolute pixel difference between the current input frame and the previous output frame: the larger the mean, the higher the noise, and vice versa.
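A minimal C sketch of that noise estimate. How the DM8168 maps the estimate to spatial filter strength is not detailed here, so the sketch only computes the measure itself.

#include <stdlib.h>

/* Noise estimate: mean absolute pixel difference between the current
 * input frame and the previous output frame. */
float estimate_noise(const unsigned char *in, const unsigned char *prev_out,
                     int npixels)
{
    long sum = 0;
    for (int i = 0; i < npixels; i++)
        sum += abs((int)in[i] - (int)prev_out[i]);
    return (float)sum / npixels;   /* larger mean -> more noise */
}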
3.3 De-Interlacing Algorithms
3.3.1 Overview
Earlier we discussed how interlaced video is displayed on a progressive-scan (row-by-row) display. This section discusses how to solve that problem: the de-interlacing algorithm. We actually encounter this algorithm almost every day, because today's TV sets all have to de-interlace. Most broadcast TV video is interlaced, while TV sets have evolved from interlaced CRTs to progressive-scan displays, so the de-interlacing problem arises.
The following figures illustrate how interlacing splits a frame into fields.
3.3.2 Spatial De-Interlacing Algorithms
(1) Line replication: each field line is simply repeated to fill in the missing lines (see the sketch after the notes below).
Note:
- (x, y) denotes the spatial position.
- The output is a frame while the input is a field, so the output frame rate equals the input field rate.
- Pixels in the even lines of an even field (and in the odd lines of an odd field) are carried over unchanged.
- The output frame has twice the number of lines of a field, with the same horizontal resolution.
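A minimal C sketch of line replication. The separate field/frame buffers and the parity convention (0 = top field occupying the even lines) are assumptions.

#include <string.h>

/* De-interlace by line replication: every field line is written twice. */
void deinterlace_replicate(const unsigned char *field, /* w x (h/2) */
                           unsigned char *frame,       /* w x h */
                           int w, int h, int parity)
{
    for (int y = 0; y < h / 2; y++) {
        const unsigned char *src = field + y * w;
        memcpy(frame + (2 * y + parity) * w, src, w);  /* original line */
        int miss = 2 * y + 1 - parity;                 /* missing neighbor */
        if (miss >= 0 && miss < h)
            memcpy(frame + miss * w, src, w);
    }
}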
(2) Linear filtering: each missing line is interpolated from the field lines directly above and below it, for example as their average (see the sketch below).
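A minimal C sketch of intra-field linear filtering under the same assumed layout, using the average of the two nearest field lines as the simplest choice of filter.

#include <string.h>

/* De-interlace by vertical averaging within one field. */
void deinterlace_linear(const unsigned char *field, unsigned char *frame,
                        int w, int h, int parity)
{
    for (int y = 0; y < h / 2; y++)                    /* copy field lines */
        memcpy(frame + (2 * y + parity) * w, field + y * w, w);

    for (int y = parity ? 0 : 1; y < h; y += 2) {      /* fill missing lines */
        const unsigned char *up   = frame + (y > 0 ? y - 1 : y + 1) * w;
        const unsigned char *down = frame + (y < h - 1 ? y + 1 : y - 1) * w;
        for (int x = 0; x < w; x++)
            frame[y * w + x] = (unsigned char)(((int)up[x] + down[x]) / 2);
    }
}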
3.3.3 Spatio-Temporal De-Interlacing Algorithms
(1) Inter-field line averaging: a missing line is taken as the average of the corresponding lines in the neighboring fields.
(2) Vertical-temporal median filter: each missing pixel is the median of its vertical neighbors in the current field and the co-located pixel in the neighboring field (see the sketch below).
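A minimal C sketch of the vertical-temporal median. Taking the temporal sample from the previous output frame is one common choice and is assumed here.

#include <string.h>

static unsigned char median3(unsigned char a, unsigned char b, unsigned char c)
{
    unsigned char hi = a > b ? (a > c ? a : c) : (b > c ? b : c);
    unsigned char lo = a < b ? (a < c ? a : c) : (b < c ? b : c);
    return (unsigned char)(a + b + c - hi - lo);  /* the middle value */
}

/* Missing pixel = median(above, below, co-located pixel of previous frame). */
void deinterlace_vt_median(const unsigned char *field,      /* w x (h/2) */
                           const unsigned char *prev_frame, /* w x h */
                           unsigned char *frame, int w, int h, int parity)
{
    for (int y = 0; y < h / 2; y++)
        memcpy(frame + (2 * y + parity) * w, field + y * w, w);

    for (int y = parity ? 0 : 1; y < h; y += 2)
        for (int x = 0; x < w; x++) {
            unsigned char up   = frame[(y > 0 ? y - 1 : y + 1) * w + x];
            unsigned char down = frame[(y < h - 1 ? y + 1 : y - 1) * w + x];
            frame[y * w + x] = median3(up, down, prev_frame[y * w + x]);
        }
}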
De-interlacing algorithms are diverse; each strikes a different balance among de-interlacing quality, image sharpness, algorithm complexity, and hardware cost.
3.4 Video Enhancement Algorithms
The video enhancement algorithms we use most often are spatial-domain enhancements, which operate directly on the pixels that make up the image.
In general:
g(x, y) = T[f(x, y)]
where g(x, y) is the processed image, f(x, y) is the input image, and T is an operator on f defined over a neighborhood of (x, y). The neighborhood of a point (x, y) is usually a rectangular or square sub-image centered on (x, y).
If T operates only on the single pixel at (x, y), the operation is called a grayscale transformation. If T operates on a neighborhood of (x, y), such as 3x3 or 5x5, it is called template filtering.
3.4.1 Basic Grayscale Transformations
(1) Grayscale inversion (white-hot/black-hot polarity inversion)
This is the switch between white-hot and black-hot images in a thermal imaging system. For example, the white-hot image on the left is inverted into the black-hot image on the right:
s = L - r, where r is the gray value of the original pixel, s is the transformed gray value, and L is the maximum gray value.
Clearly, the details of the black-hot image on the right are easier to make out.
(2) Gamma Correction
s = c * r^γ, where c and γ are positive constants.
From the transformation curves we can see that with γ < 1, an image with low brightness is stretched so that details in the dim regions become visible, while with γ > 1, an image with high brightness is stretched so that details in the bright regions become visible.
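A minimal C sketch of gamma correction for 8-bit pixels via a lookup table, normalizing r to [0, 1] so that c = 1 keeps the output in range (a common simplification).

#include <math.h>

void gamma_correct(unsigned char *img, int npixels, float gamma)
{
    unsigned char lut[256];
    for (int r = 0; r < 256; r++)    /* s = 255 * (r/255)^gamma */
        lut[r] = (unsigned char)(255.0f * powf(r / 255.0f, gamma) + 0.5f);

    for (int i = 0; i < npixels; i++)
        img[i] = lut[img[i]];        /* gamma < 1 brightens dark regions,
                                        gamma > 1 stretches bright regions */
}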
(3) Piecewise linear transformation functions
These can be used to stretch the contrast of a chosen gray range (a sketch follows).
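A minimal C sketch of a three-segment stretch mapping the range [r1, r2] to [s1, s2]; the breakpoints are caller-chosen and the 8-bit range is assumed.

unsigned char stretch(unsigned char r, int r1, int s1, int r2, int s2)
{
    /* segment 1: [0, r1) -> [0, s1)   (requires 0 < r1) */
    if (r < r1)
        return (unsigned char)(r * s1 / r1);
    /* segment 2: [r1, r2] -> [s1, s2], the stretched range */
    if (r <= r2)
        return (unsigned char)(s1 + (r - r1) * (s2 - s1) / (r2 - r1));
    /* segment 3: (r2, 255] -> (s2, 255]   (requires r2 < 255) */
    return (unsigned char)(s2 + (r - r2) * (255 - s2) / (255 - r2));
}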
3.4.2 Histogram Equalization
(1) The histogram concept
If an image has gray levels in the range [0, L - 1], the count of pixels at each gray level is called the histogram of the image. It is the discrete function h(rk) = nk, where rk is the k-th gray level and nk is the number of pixels in the image with that gray level. We often divide each count by the total number of pixels in the image (denoted n) to obtain the normalized histogram, given by p(rk) = nk / n for k = 0, 1, ..., L - 1. Put simply, p(rk) gives the probability of occurrence of gray level rk. Note that the sum of all entries of a normalized histogram equals 1.
From the histogram we can judge the quality of an image: clearly, when the histogram is distributed fairly uniformly, the image has good contrast and a rich, varied range of gray tones. Conversely, if an image with an uneven histogram is transformed so that its histogram becomes roughly uniform, the image quality may improve. This is the histogram equalization enhancement algorithm.
(2) Histogram equalization
Let p(rk) = nk / n be the probability that gray level rk appears in the image, where n is the total number of pixels, nk is the number of pixels at gray level rk, and L is the number of gray levels. It can be shown that the transformation
sk = T(rk) = (L - 1) * Σ(j = 0..k) p(rj)
yields an image whose histogram is approximately uniform, i.e. whose cumulative histogram is linear.
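A minimal C sketch of histogram equalization for an 8-bit image (L = 256), implementing sk = 255 * Σ p(rj) as a lookup table.

void equalize(unsigned char *img, int npixels)
{
    long hist[256] = {0};
    for (int i = 0; i < npixels; i++)   /* histogram: h(rk) = nk */
        hist[img[i]]++;

    unsigned char lut[256];
    long cum = 0;
    for (int k = 0; k < 256; k++) {
        cum += hist[k];                 /* cumulative sum of nk */
        lut[k] = (unsigned char)(255.0 * cum / npixels + 0.5);
    }

    for (int i = 0; i < npixels; i++)
        img[i] = lut[img[i]];           /* sk = T(rk) */
}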
3.4.3 Retinex Algorithm
Generally, an image can be regarded as the product of an incident (illumination) component L and a reflectance component R. L can be treated as the interference that degrades image quality (such as haze or dim lighting), and R as the clean, interference-free image; that is, the captured image can be regarded as the product of the clean image and the disturbance.
S(x, y) = R(x, y) * L(x, y)    (1)
S is the image affected by the disturbance, R is the clean image to be recovered, and L is the disturbance.
Taking the logarithm of both sides of (1):
s(x, y) = r(x, y) + l(x, y)
where s(x, y) = log(S(x, y)), r(x, y) = log(R(x, y)), l(x, y) = log(L(x, y)).
Since s(x, y) is known, the key is how to estimate l(x, y). Usually we apply a strong low-pass filter to s(x, y) to obtain an approximate l(x, y); the division by L in (1) then becomes the subtraction r(x, y) = s(x, y) - l(x, y), which recovers the clean image. A sketch follows.
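A minimal C sketch of single-scale Retinex along these lines. The box blur standing in for the low-pass filter (a Gaussian is more usual), the radius, and the rescaling of r to [0, 255] are all illustrative assumptions.

#include <math.h>
#include <stdlib.h>

void retinex(const unsigned char *in, unsigned char *out, int w, int h)
{
    const int rad = 8;                 /* blur radius (assumed) */
    float *s = malloc(sizeof(float) * w * h);
    float *r = malloc(sizeof(float) * w * h);
    if (!s || !r) { free(s); free(r); return; }

    for (int i = 0; i < w * h; i++)
        s[i] = logf(in[i] + 1.0f);     /* s = log S; +1 avoids log(0) */

    float rmin = 1e30f, rmax = -1e30f;
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            /* l(x, y): box average of s over a (2*rad+1)^2 window */
            float sum = 0; int n = 0;
            for (int dy = -rad; dy <= rad; dy++)
                for (int dx = -rad; dx <= rad; dx++) {
                    int xx = x + dx, yy = y + dy;
                    if (xx >= 0 && xx < w && yy >= 0 && yy < h) {
                        sum += s[yy * w + xx]; n++;
                    }
                }
            float v = s[y * w + x] - sum / n;   /* r = s - l */
            r[y * w + x] = v;
            if (v < rmin) rmin = v;
            if (v > rmax) rmax = v;
        }
    }
    for (int i = 0; i < w * h; i++)    /* rescale r to [0, 255] */
        out[i] = (unsigned char)(255.0f * (r[i] - rmin) / (rmax - rmin + 1e-6f));

    free(s); free(r);
}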