No-reference image sharpness (definition) evaluation methods
From: http://nkwavelet.blog.163.com/blog/static/227756038201461532247117
In no-reference image quality evaluation, sharpness is an important index of image quality: it agrees well with subjective human perception, and a low sharpness value indicates a blurred image. With no-reference quality evaluation applications in mind, this article discusses and analyzes several typical, representative sharpness algorithms and provides a basis for choosing a sharpness algorithm in practice.
(1) Brenner gradient function
The Brenner gradient function is the simplest gradient-based evaluation function; it simply computes the square of the gray-level difference between a pixel and a nearby pixel along the same row and accumulates it over the image. Here f(x, y) denotes the gray value of image f at pixel (x, y), and D(f) denotes the computed sharpness value (the same notation is used below).
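A minimal NumPy sketch of the Brenner measure in its commonly used form, D(f) = sum over x, y of |f(x+2, y) - f(x, y)|^2 (the function name and the assumption of a 2-D grayscale array are illustrative):

import numpy as np

def brenner(img):
    # img: 2-D grayscale image; differences are taken between pixels two apart along one axis
    f = img.astype(np.float64)
    diff = f[2:, :] - f[:-2, :]
    return float(np.sum(diff ** 2))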
(2) Tenengrad gradient function
The Tenengrad gradient function uses the Sobel operator to extract the gradient in the horizontal and vertical directions. The Tenengrad sharpness of an image is the sum of the squared gradient magnitudes G(x, y) over all pixels where G(x, y) exceeds a threshold. Here T is a given edge-detection threshold, and Gx and Gy are the convolutions of the image with the horizontal and vertical Sobel edge-detection operators at pixel (x, y); the usual 3x3 Sobel templates are recommended for the edge detection.
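A sketch using OpenCV's Sobel filter (ksize=3 corresponds to the standard templates Gx = [[-1,0,1],[-2,0,2],[-1,0,1]] and its transpose for Gy); the threshold handling is one common convention and is an assumption here:

import cv2
import numpy as np

def tenengrad(img, threshold=0.0):
    f = img.astype(np.float64)
    gx = cv2.Sobel(f, cv2.CV_64F, 1, 0, ksize=3)   # horizontal Sobel response Gx
    gy = cv2.Sobel(f, cv2.CV_64F, 0, 1, ksize=3)   # vertical Sobel response Gy
    g = np.sqrt(gx ** 2 + gy ** 2)                 # gradient magnitude G(x, y)
    return float(np.sum(g[g > threshold] ** 2))    # accumulate |G|^2 only where G exceeds T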
(3) Laplacian gradient function
The Laplacian gradient function is essentially the same as the Tenengrad gradient function, with the Laplacian operator substituted for the Sobel operator. The sharpness of an image based on the Laplacian gradient function is then the accumulated squared response, where g(x, y) is the convolution of the Laplacian operator with the image at pixel (x, y).
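A sketch with OpenCV's Laplacian (which by default applies the 3x3 kernel [[0,1,0],[1,-4,1],[0,1,0]]); other Laplacian templates are also used in practice:

import cv2
import numpy as np

def laplacian_sharpness(img):
    f = img.astype(np.float64)
    g = cv2.Laplacian(f, cv2.CV_64F)     # g(x, y): Laplacian response at each pixel
    return float(np.sum(g ** 2))         # D(f) accumulates the squared responses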
(4) SMD (grayscale variance) function
When an image is perfectly focused it is at its clearest and contains the most high-frequency content, so the gray-level variation between neighbouring pixels can be used as the basis of the focus evaluation. The SMD method accumulates the absolute gray-level differences between each pixel and its neighbours in the x and y directions.
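A sketch of the SMD measure as the sum of absolute neighbour differences in both directions (the exact neighbour convention varies slightly between references):

import numpy as np

def smd(img):
    f = img.astype(np.float64)
    d_row = np.abs(np.diff(f, axis=0))   # |f(x+1, y) - f(x, y)|
    d_col = np.abs(np.diff(f, axis=1))   # |f(x, y+1) - f(x, y)|
    return float(d_row.sum() + d_col.sum())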
(5) SMD2 (gray variance product) function
The gray-level difference evaluation function is computationally cheap, but its drawback is also obvious: its sensitivity is low near the focal point, i.e. the function is too flat near its extremum, which makes it hard to improve focusing precision. In the article "A fast and high-sensitivity focus evaluation function", a new evaluation function called the gray variance product method is proposed: for each pixel, the two gray-level differences in its neighbourhood are multiplied together and then accumulated over all pixels.
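A sketch of the gray variance product: the two neighbour differences at each pixel are multiplied before being accumulated (the array slicing simply keeps the two difference maps aligned):

import numpy as np

def smd2(img):
    f = img.astype(np.float64)
    d1 = np.abs(f[1:, :-1] - f[:-1, :-1])   # difference to the next pixel in one direction
    d2 = np.abs(f[:-1, 1:] - f[:-1, :-1])   # difference to the next pixel in the other direction
    return float(np.sum(d1 * d2))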
(6) Variance function
Because a well-focused image has larger gray-level differences than a blurred one, the gray-level variance about the average gray value of the whole image can be used as the evaluation function. This function is sensitive to noise: the more uniform the image content, the smaller the function value.
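A minimal sketch (the sum of squared deviations from the global mean; some references divide by the number of pixels, which only rescales the score):

import numpy as np

def gray_variance(img):
    f = img.astype(np.float64)
    mu = f.mean()                        # average gray value of the whole image
    return float(np.sum((f - mu) ** 2))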
(7) Energy gradient function
The energy gradient function is better suited to real-time evaluation of image sharpness. It accumulates, over all pixels, the squared gray-level differences to the next pixel in the x direction and to the next pixel in the y direction.
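A sketch of the energy gradient measure, accumulating the squared forward differences in both directions:

import numpy as np

def energy_gradient(img):
    f = img.astype(np.float64)
    dx = f[1:, :-1] - f[:-1, :-1]    # f(x+1, y) - f(x, y)
    dy = f[:-1, 1:] - f[:-1, :-1]    # f(x, y+1) - f(x, y)
    return float(np.sum(dx ** 2 + dy ** 2))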
(8) Vollath function
The Vollath (autocorrelation) function is built from the products of the gray values of adjacent pixels and the mean gray value of the image, where the mean is the average gray value of the whole image and M and N are the image width and height respectively.
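A sketch of the Vollath measure in its commonly quoted form, D(f) = sum f(x, y) * f(x+1, y) - M * N * mu^2 (treat the exact normalisation as an assumption):

import numpy as np

def vollath(img):
    f = img.astype(np.float64)
    M, N = f.shape
    mu = f.mean()                    # average gray value
    return float(np.sum(f[:-1, :] * f[1:, :]) - M * N * mu * mu)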
(9) Entropy function
The entropy function, based on statistical features, is an important index of how much information an image contains. From information theory, the information content of an image f can be measured by its entropy D(f), where pi is the probability of a pixel having gray level i and L is the total number of gray levels (usually 256). According to Shannon's information theory, entropy is maximal when the information content is maximal. Applying this principle to focusing, the larger D(f) is, the sharper the image. The sensitivity of the entropy function is not high, and depending on the image content it can easily produce results that contradict the true situation.
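A sketch assuming an 8-bit grayscale image (L = 256) and base-2 logarithms:

import numpy as np

def entropy(img, levels=256):
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()            # p_i: probability of gray level i
    p = p[p > 0]                     # drop empty bins so the logarithm is defined
    return float(-np.sum(p * np.log2(p)))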
(10) EAV point-sharpness algorithm
Guo, Zhang Xia et al. proposed an algorithm based on edge sharpness to evaluate image sharpness; it evaluates sharpness from the statistics of the gray-level change along the normal direction of image edges. Here df/dx is the gray-level change rate along the edge normal, and f(b) - f(a) is the overall gray-level change in that direction. Because the algorithm only gathers statistics on specific edges, it is questionable whether it can represent the sharpness of the whole image; in addition, the edge region has to be selected manually before the calculation, which makes automated processing inconvenient. For this reason, Wang Hongnan et al., in their study of image sharpness evaluation methods, improved the algorithm as follows (a sketch of the resulting measure is given after this list):
a) The gradient calculation aimed at edges is changed to a gradient calculation over each pixel's neighbourhood, so that the algorithm evaluates the whole image and can run automatically.
b) The gray-level changes over a pixel's 8-neighbourhood are distance-weighted: a weight of 1 in the horizontal and vertical directions, and a weight of 1/sqrt(2) in the 45° and 135° (diagonal) directions.
c) The result is normalized by the image size, so that images of different sizes can be compared.
After these three improvements, the point-sharpness measure is normalized by M x N, where M and N are the numbers of rows and columns of the image.
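A sketch of the improved point-sharpness measure as described in a)-c): absolute gray differences over each pixel's 8-neighbourhood, distance weights of 1 (axial) and 1/sqrt(2) (diagonal), and normalisation by the image size; the exact formula in the cited paper may differ in detail.

import numpy as np

def point_sharpness(img):
    f = img.astype(np.float64)
    M, N = f.shape
    # 8-neighbourhood offsets with their distance weights
    offsets = [(-1, 0, 1.0), (1, 0, 1.0), (0, -1, 1.0), (0, 1, 1.0),
               (-1, -1, 2 ** -0.5), (-1, 1, 2 ** -0.5),
               (1, -1, 2 ** -0.5), (1, 1, 2 ** -0.5)]
    total = 0.0
    for dr, dc, w in offsets:
        shifted = np.roll(np.roll(f, dr, axis=0), dc, axis=1)
        # ignore the one-pixel border, where np.roll wraps around
        total += w * np.abs(shifted - f)[1:-1, 1:-1].sum()
    return total / (M * N)           # normalise by the image size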
(11) Reblur (double-blur) method
If an image is already blurred, its high-frequency content changes little when it is blurred again; if the original is sharp, the high-frequency content changes a great deal. The image can therefore be degraded with a Gaussian blur, and the change in neighbouring pixel values between the original and the degraded image compared; the sharpness value is determined from the size of this change: the smaller the result, the sharper the image, and conversely the larger the result, the more blurred. This idea can be called a sharpness algorithm based on double blurring; a sketch of the processing flow is given below.
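A rough sketch of this two-pass idea (the 7x7 Gaussian kernel, the use of neighbour differences and the final ratio are choices of mine, not the exact flow chart from the original post): a sharp original loses much of its neighbour-difference energy when blurred, so the ratio below is small for sharp images and close to 1 for already-blurred ones.

import cv2
import numpy as np

def reblur_blurriness(img):
    f = img.astype(np.float64)
    degraded = cv2.GaussianBlur(f, (7, 7), 0)        # blur the image once more
    d_orig = np.abs(np.diff(f, axis=0)).sum() + np.abs(np.diff(f, axis=1)).sum()
    d_blur = np.abs(np.diff(degraded, axis=0)).sum() + np.abs(np.diff(degraded, axis=1)).sum()
    return float(d_blur / max(d_orig, 1e-12))        # smaller value = sharper image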
(12) Gradient structure similarity (NRSS)
Wang et al., making use of the fact that the human visual system (HVS) is highly adapted to extracting structural information from a scene, proposed the concept of image structural similarity (SSIM): as long as the change in the structural information of the target can be computed, a perceived image-distortion value can be obtained. Yang Chunling et al. built on this idea and introduced the method into full-reference image sharpness evaluation: the sharpness of an image is expressed by the similarity between the target image and a reference image, and the structural similarity between the two images comprises three comparisons: luminance, contrast and structure.
C1, C2 and C3 are small constants introduced to avoid a zero denominator in these comparison terms. The overall structural similarity of the two images is obtained by combining the three comparisons.
For simplicity, one can take C3 = C2/2, which reduces SSIM to its usual two-term form.
Shi et al. further improved the method of Yang Chunling et al.: following the idea of structural similarity and the relevant characteristics of the human visual system, they designed a no-reference image sharpness evaluation index (NRSS), calculated as follows:
(a) Construct a reference image for the image to be evaluated. Denote the image to be evaluated by I; the reference image Ir is obtained by low-pass filtering I. Experiments show that both a mean filter based on a disk model and a Gaussian smoothing filter work well; to better match the imaging system, a 7x7 Gaussian smoothing filter is recommended. In engineering applications that require real-time processing, a 7x7 mean filter does not noticeably degrade the evaluation.
(b) Extract gradient information from images I and Ir. The Sobel operator is used to extract the horizontal and vertical edge information, since the human eye is most sensitive to horizontal and vertical edges; the gradient images of I and Ir are denoted G and Gr.
(c) Find the N blocks with the richest gradient information in the gradient image G. G is divided into 8x8 blocks with a step of 4 between blocks, so adjacent blocks overlap by 50%; this avoids losing important edges. The variance of each block is computed (the larger the variance, the richer the gradient information), the N blocks with the largest variance are selected and denoted xi, and the corresponding blocks in Gr are denoted yi. The value of N directly affects the evaluation result as well as the running time; in the experiments below, N = 64.
(d) Compute the structural sharpness NRSS. First the structural similarity SSIM(xi, yi) of each block pair is computed, using the SSIM formula described earlier; the no-reference structural sharpness of the image is then NRSS = 1 - (1/N) * sum of SSIM(xi, yi).
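A compact sketch of steps (a)-(d); the per-block SSIM uses the simplified two-term formula with C3 = C2/2 and constants for 8-bit images, and the block bookkeeping details are assumptions:

import cv2
import numpy as np

def _ssim_block(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    # simplified SSIM of two blocks (constants assume 8-bit gray levels)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx * mx + my * my + c1) * (vx + vy + c2))

def nrss(img, n_blocks=64):
    f = img.astype(np.float64)
    ref = cv2.GaussianBlur(f, (7, 7), 0)            # (a) low-pass reference image Ir
    def grad(im):                                   # (b) Sobel gradient magnitude
        gx = cv2.Sobel(im, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(im, cv2.CV_64F, 0, 1, ksize=3)
        return np.sqrt(gx ** 2 + gy ** 2)
    G, Gr = grad(f), grad(ref)
    blocks = []                                     # (c) 8x8 blocks with a step of 4
    for r in range(0, G.shape[0] - 7, 4):
        for c in range(0, G.shape[1] - 7, 4):
            blocks.append((G[r:r + 8, c:c + 8].var(), r, c))
    blocks.sort(reverse=True)                       # keep the N blocks with the largest variance
    sims = [_ssim_block(G[r:r + 8, c:c + 8], Gr[r:r + 8, c:c + 8])
            for _, r, c in blocks[:n_blocks]]
    return 1.0 - float(np.mean(sims))               # (d) NRSS = 1 - (1/N) * sum SSIM(xi, yi)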
(13) FFT image transform domain
To be written.
(14) No-reference perceptual quality assessment of JPEG compressed images
In this article, Zhou Wang et al. proposed a new no-reference quality evaluation method for JPEG-compressed images.
JPEG is a coding technique based on the 8x8 block DCT transform; it is lossy because quantizing the DCT coefficients introduces quantization error. Quantization leads to both blurring and blocking artifacts: blurring mainly comes from the loss of high-frequency DCT coefficients, while blocking arises because each block is quantized independently, causing discontinuities at block boundaries. Let f(x, y) denote the image, of size M x N, and compute the difference signal along each horizontal line. The blockiness is first defined as the average of the absolute difference signal across block boundaries (between the left and right sides of each boundary).
Next, the average of the absolute difference signal within blocks (the activity) is calculated. Then the zero-crossing (ZC) rate is calculated: a zero crossing occurs where the difference signal changes sign, i.e. where the product of two adjacent values is negative. For y in the range [1, N - 2], a corresponding zero-crossing indicator variable is defined.
The horizontal ZC rate is then defined as the average of this indicator.
Similarly, the values of these indicators are computed in the vertical direction, and the horizontal and vertical values of each indicator are then averaged.
There are many ways to combine these indicators into a quality evaluation model; here the quality score is defined as a parametric combination of the blockiness, activity and zero-crossing measures.
The model parameters are extracted from a large number of subjective experiments; the specific parameter values are given in the original paper.
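A sketch of the horizontal indicators only, following one reading of Wang et al.'s description (blockiness B across 8x8 boundaries, in-block activity A, zero-crossing rate Z); the exact normalisations, the horizontal/vertical averaging and the fitted parameters alpha, beta, gamma1-3 must be taken from the paper, so the parameters are left as arguments:

import numpy as np

def jpeg_quality_horizontal(img, alpha, beta, gamma1, gamma2, gamma3):
    # Horizontal-direction indicators only; in the full method the same quantities are
    # computed vertically (e.g. on img.T) and averaged before applying the model.
    f = img.astype(np.float64)
    d = f[:, 1:] - f[:, :-1]                    # difference signal along each horizontal line
    B = np.abs(d[:, 7::8]).mean()               # blockiness: mean |d| across 8x8 block boundaries
    A = (8.0 * np.abs(d).mean() - B) / 7.0      # in-block activity (one reading of the paper's formula)
    Z = (d[:, :-1] * d[:, 1:] < 0).mean()       # zero-crossing rate of the difference signal
    # quality model: S = alpha + beta * B**gamma1 * A**gamma2 * Z**gamma3 (parameters from the paper)
    return alpha + beta * (B ** gamma1) * (A ** gamma2) * (Z ** gamma3)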
(15) No-reference image quality assessment for JPEG/JPEG2000 coding
Building on the previous article, the author redefines a new quality indicator.
Here, S is the quality evaluation value already obtained in (14).