Python OpenCV learning notes: histogram equalization
This article introduces histogram equalization in Python OpenCV. The details are as follows:
Documentation: https://docs.opencv.org/3.4.0/d5/daf/tutorial_py_histogram_equalization.html
Consider an image whose pixel values are confined to a specific range. For example, a brighter image will have all its pixels confined to high values. However, a good image will have pixels from all regions of the intensity range. So you need to stretch this histogram to both ends, and that (in simple words) is what histogram equalization does. This normally improves the contrast of the image.
Read the Histogram Equalization page on Wikipedia to learn more about it. It gives a good explanation with worked examples, so you will understand almost everything after reading it. Here, we will first see its Numpy implementation, and then the OpenCV function.
import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt

img = cv.imread('wiki.jpg', 0)  # read as grayscale
hist, bins = np.histogram(img.flatten(), 256, [0, 256])
cdf = hist.cumsum()  # cumulative distribution function
cdf_normalized = cdf * float(hist.max()) / cdf.max()

plt.plot(cdf_normalized, color='b')
plt.hist(img.flatten(), 256, [0, 256], color='r')
plt.xlim([0, 256])
plt.legend(('cdf', 'histogram'), loc='upper left')
plt.show()
As you can see, the histogram lies in the brighter region. We need the full spectrum. Therefore, we need a transformation function that maps the input pixels in the brighter region to output pixels spread over the full region. This is what histogram equalization does.
Now we find the minimum histogram value (excluding 0) and apply the histogram equalization equation given on the wiki page. Here I have used the masked array concept from Numpy. For masked arrays, all operations are performed only on non-masked elements.
cdf_m = np.ma.masked_equal(cdf, 0)
cdf_m = (cdf_m - cdf_m.min()) * 255 / (cdf_m.max() - cdf_m.min())
cdf = np.ma.filled(cdf_m, 0).astype('uint8')
Now we have a lookup table that gives the output pixel value for each input pixel value. So we just apply the transformation.
img2 = cdf[img]
Now we calculate the histogram and CDF of this result, just as before, to verify the change.
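A minimal sketch, continuing from the Numpy code above (img2, np and plt are already defined there):

# Recompute the histogram and CDF of the equalized image
hist2, bins2 = np.histogram(img2.flatten(), 256, [0, 256])
cdf2 = hist2.cumsum()
cdf2_normalized = cdf2 * float(hist2.max()) / cdf2.max()

plt.plot(cdf2_normalized, color='b')
plt.hist(img2.flatten(), 256, [0, 256], color='r')
plt.xlim([0, 256])
plt.legend(('cdf', 'histogram'), loc='upper left')
plt.show()

After equalization the histogram should spread over the full 0-255 range and the CDF should be close to a straight line.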
Another important feature is that even if the image was a darker one (rather than the brighter one we used), after equalization we get almost the same result. Therefore, it can be used as a "reference tool" to bring all images to the same lighting conditions. This is useful in many cases. For example, in face recognition, before training on the face data, the face images are histogram equalized so that they all share the same lighting conditions.
Histogram Equalization in OpenCV
OpenCV has a function to do this, cv.equalizeHist(). Its input is just a grayscale image and the output is our histogram equalized image.
img = cv.imread('wiki.jpg', 0)
equ = cv.equalizeHist(img)
res = np.hstack((img, equ))  # stacking images side by side
cv.imwrite('res.png', res)
So now you can take images under different lighting conditions, equalize them, and check the results.
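For example, a minimal sketch, assuming a few hypothetical file names for photos taken under different lighting (replace them with your own):

import cv2 as cv
import numpy as np

# Hypothetical file names; replace with your own photos taken under different lighting
for name in ['dark.jpg', 'normal.jpg', 'bright.jpg']:
    img = cv.imread(name, 0)       # read as grayscale
    if img is None:                # skip files that cannot be loaded
        continue
    equ = cv.equalizeHist(img)     # global histogram equalization
    res = np.hstack((img, equ))    # original and result side by side
    cv.imwrite('equ_' + name, res)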
Histogram equalization works well when the histogram of the image is confined to a particular region. It does not work well where there are large intensity variations, that is, where the histogram already covers a large region with both bright and dark pixels present.
CLAHE (Contrast Limited Adaptive Histogram Equalization)
The histogram equalization we just saw considers the global contrast of the image. In many cases, this is not a good idea. For example, consider an input image and its result after global histogram equalization.
After histogram equalization, the background contrast has improved. But compare the face of the statue in the two images: we lose most of the information there due to over-brightness. This is because its histogram is not confined to a particular region, unlike in the previous case.
To solve this problem, adaptive histogram equalization is used. Here the image is divided into small blocks called "tiles" (the default in OpenCV is 8x8). Then each of these blocks is histogram equalized as usual, so within each tile the histogram is confined to a small region (unless there is noise). If noise is present, it will be amplified. To avoid this, contrast limiting is applied: if any histogram bin is above the specified contrast limit (40 by default), those pixels are clipped and distributed uniformly to the other bins before applying histogram equalization. After equalization, bilinear interpolation is applied to remove artifacts at the tile borders.
cv.createCLAHE([, clipLimit[, tileGridSize]])
import numpy as np
import cv2 as cv

img = cv.imread('tsukuba_1.png', 0)
# create a CLAHE object (arguments are optional)
clahe = cv.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
cl1 = clahe.apply(img)
cv.imwrite('clahe_2.jpg', cl1)
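As a small sketch reusing the same tsukuba_1.png input, you can write the original, globally equalized, and CLAHE results side by side to reproduce the comparison discussed above:

import cv2 as cv
import numpy as np

img = cv.imread('tsukuba_1.png', 0)               # grayscale input
equ = cv.equalizeHist(img)                        # global histogram equalization
clahe = cv.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
cl1 = clahe.apply(img)                            # contrast-limited adaptive equalization

# Original | global equalization | CLAHE, side by side for comparison
cv.imwrite('comparison.png', np.hstack((img, equ, cl1)))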
The above is all the content of this article. I hope it will be helpful for your learning.