[OpenCV-Python] Image processing in OpenCV, Part IV (ii)

Source: Internet
Author: User

Part IV
Image processing in OpenCV

16 Image Smoothing


Goal
• Learn to blur images using different low-pass filters
• Convolve an image (2D convolution) with a custom filter


2D Convolution
As with one-dimensional signals, we can apply low-pass filters (LPF), high-pass filters (HPF), and so on to 2D images. An LPF helps us remove noise and blur the image; an HPF helps us find edges in the image.
OpenCV provides the function cv2.filter2D() to convolve a kernel with an image. As an example, we will apply an averaging filter to an image. A 5x5 averaging filter kernel looks like this:

K = (1/25) * [[1, 1, 1, 1, 1],
              [1, 1, 1, 1, 1],
              [1, 1, 1, 1, 1],
              [1, 1, 1, 1, 1],
              [1, 1, 1, 1, 1]]
The operation works like this: the kernel is centered on a pixel of the image, the (5x5) pixels of the image covered by the kernel are summed and averaged, and this average replaces the value of the center pixel. This is repeated until every pixel of the image has been updated. Try running the code below.

import cv2
import numpy as np
from matplotlib import pyplot as plt

img = cv2.imread('opencv_logo.png')

kernel = np.ones((5,5), np.float32) / 25
dst = cv2.filter2D(img, -1, kernel)

plt.subplot(121), plt.imshow(img), plt.title('Original')
plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(dst), plt.title('Averaging')
plt.xticks([]), plt.yticks([])
plt.show()

Results:

Image blur (Image smoothing)
Applying a low-pass filter blurs the image, which is very helpful for removing noise. It works by removing high-frequency components from the image (e.g. noise and edges), so edges become slightly blurred. (Of course, some blurring techniques do not blur the edges.) OpenCV provides four kinds of blurring techniques.


16.1 Averaging
This is done by convolving the image with a normalized box filter: it simply replaces the central element with the average of all the pixels covered by the box. This is done with the functions cv2.blur() and cv2.boxFilter(). Check the documentation for more details about the kernel. We need to specify the width and height of the box. A 3x3 normalized box filter looks like this:

K = (1/9) * [[1, 1, 1],
             [1, 1, 1],
             [1, 1, 1]]
Note: If you do not want a normalized box filter, use cv2.boxFilter() and pass the argument normalize=False.
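For instance, a minimal sketch of an un-normalized box filter (not in the original text; it assumes img has been loaded as in the example below) could be:

# Un-normalized box filter: each output pixel is the SUM (not the mean) of the
# pixels under the 5x5 window; ddepth=-1 keeps the depth of the input image,
# so with an 8-bit image the sums saturate at 255.
box_sum = cv2.boxFilter(img, -1, (5,5), normalize=False)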
Here is an example along the same lines as the first section:

import cv2
import numpy as np
from matplotlib import pyplot as plt

img = cv2.imread('opencv_logo.png')

blur = cv2.blur(img, (5,5))

plt.subplot(121), plt.imshow(img), plt.title('Original')
plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(blur), plt.title('Blurred')
plt.xticks([]), plt.yticks([])
plt.show()

Results:

16.2 Gaussian Blur
Now the convolution kernel is replaced with a Gaussian kernel. In short, the box keeps its size, but whereas before every cell held the same value, the values now follow a Gaussian distribution: the center cell has the largest value and the other cells decrease with distance from the center, forming a small Gaussian hill. The simple average becomes a weighted average, with the weights being the values in the box. The function for this is cv2.GaussianBlur(). We need to specify the width and height of the Gaussian kernel (both must be odd), and the standard deviations of the Gaussian function along the x and y directions. If only the standard deviation in the x direction is given, the y direction takes the same value. If both standard deviations are 0, the function computes them from the kernel size. Gaussian filtering can effectively remove Gaussian noise from an image.
If you want, you can also build a Gaussian kernel yourself with the function cv2.getGaussianKernel().
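As a rough sketch (not part of the original text, assuming cv2 is imported and img is loaded as in the earlier examples), a hand-built Gaussian kernel can be applied with cv2.sepFilter2D(), since the 2D Gaussian kernel is separable:

# A 5x1 Gaussian kernel; sigma=0 lets OpenCV compute it from the kernel size
gk = cv2.getGaussianKernel(5, 0)
# Filter separably along x and y; equivalent to convolving with the 5x5 kernel gk @ gk.T
gauss = cv2.sepFilter2D(img, -1, gk, gk)

The result should match cv2.GaussianBlur(img, (5,5), 0) used below.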
If you want to use Gaussian blur, the above code should be written as:

# 0 means the standard deviation of the Gaussian is computed from the window size (5,5)
blur = cv2.GaussianBlur(img, (5,5), 0)

Results:

16.3 Median Blur
As the name implies, the value of the center pixel is replaced by the median of all the pixels covered by the convolution box. This filter is often used to remove salt-and-pepper noise. The previous filters replace the center pixel with a newly computed value, which may not exist anywhere in the original image, whereas the median filter always replaces it with a value actually present in the neighborhood of the center pixel. It removes this kind of noise effectively. The size of the convolution kernel must again be odd.
In this example, we add 50% noise to the original image and then use the median value blur.
Code:

median = cv2.medianBlur(img, 5)
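The noise-adding step is not shown in the original; a minimal sketch (the salt_pepper helper below is made up for illustration, and img is assumed to be a grayscale image) might look like this:

import numpy as np

def salt_pepper(gray, amount=0.5):
    # Hypothetical helper: set a random fraction of pixels to pure black or white
    noisy = gray.copy()
    mask = np.random.rand(*gray.shape) < amount
    noisy[mask] = np.random.choice([0, 255], size=int(mask.sum()))
    return noisy

noisy = salt_pepper(img, 0.5)       # add 50% salt-and-pepper noise
median = cv2.medianBlur(noisy, 5)   # 5x5 median filter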

Results:


16.4 Bilateral filtering
The function cv2.bilateralFilter() can effectively remove noise while keeping edges sharp. However, this operation is slower than the other filters. We already know that the Gaussian filter computes a Gaussian-weighted average of the pixels around the center point. That filter considers only the spatial relationship between pixels, not the relationship between pixel values (pixel similarity), so it does not care whether a pixel lies on an edge. As a result, edges get blurred as well, which is not what we want. Bilateral filtering uses both a spatial Gaussian weight and a gray-value similarity Gaussian weight. The spatial Gaussian function ensures that only pixels in the neighborhood influence the center point, while the gray-value similarity Gaussian function ensures that only pixels whose gray values are close to that of the center pixel are used in the blurring. This preserves edges, because gray values change sharply at edges.
The code for bilateral filtering is as follows:

# cv2.bilateralFilter(src, d, sigmaColor, sigmaSpace)
# d - diameter of each pixel neighborhood used during filtering.
#     If it is non-positive, it is computed from sigmaSpace.
# Here 9 is the neighborhood diameter; the two 75s are the standard deviations of
# the spatial Gaussian function and of the gray-value similarity Gaussian function.
blur = cv2.bilateralFilter(img, 9, 75, 75)

Results:
As you can see, the texture is blurred, but the edges are still there.

17 Morphological transformations


Goal
• Learn about different morphological operations such as erosion, dilation, opening, closing, etc.
• The functions we will learn are: cv2.erode(), cv2.dilate(), cv2.morphologyEx(), etc.


Principle
Morphological operations are simple operations based on the shape of the image, and are normally performed on binary images. They take two inputs: the original image, and a structuring element (or kernel) that determines the nature of the operation. The two basic morphological operations are erosion and dilation. Their variants include opening, closing, gradient, and so on. We will look at each of them with examples.


17.1 Erosion
Just like soil erosion, this operation erodes away the boundaries of the foreground object (the foreground should be kept white). How does it work? The kernel slides over the image; a pixel keeps its original value only if all the pixels of the image under the kernel are 1, otherwise it is set to zero.

What is the effect? Depending on the size of the kernel, all the pixels near the boundary of the foreground are eroded (set to 0), so the foreground object becomes smaller and the white region of the image shrinks. This is useful for removing small white noise, and can also be used to detach two connected objects.
Here is an example using a 5x5 kernel filled with ones. Let's see how it works:

import cv2
import numpy as np

img = cv2.imread('j.png', 0)
kernel = np.ones((5,5), np.uint8)
erosion = cv2.erode(img, kernel, iterations=1)

Results:


17.2 Dilation
In contrast to erosion, a pixel is set to 1 if at least one pixel of the image under the kernel is 1. So this operation increases the white region (the foreground) of the image. For noise removal, erosion is normally followed by dilation, because erosion removes the white noise but also shrinks the foreground object, so we dilate it afterwards. Since the noise is gone, it will not come back, while the foreground grows back. Dilation can also be used to join two separated parts of an object.

dilation = cv2.dilate(img, kernel, iterations=1)

Results:

17.3 Opening
Erosion followed by dilation is called opening. As described above, it is useful for removing noise. The function used here is cv2.morphologyEx().
opening = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
Results:


17.4 Closing
Closing is dilation followed by erosion. It is often used to fill small holes inside the foreground object, or small black points on the foreground object.

closing = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)

Results:


17.5 Morphological Gradient
This is simply the difference between the dilation and the erosion of an image.
The result looks like the outline of the foreground object.

gradient = cv2.morphologyEx(img, cv2.MORPH_GRADIENT, kernel)

Results:

17.6 Top Hat
The difference between the original image and its opening. The following example is the result of a top-hat operation with a 9x9 kernel.

# kernel here is a 9x9 kernel, e.g. np.ones((9,9), np.uint8)
tophat = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, kernel)

Results:


17.7 Black Hat
The difference between the closing of the image and the original image.

blackhat = cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, kernel)

Results:


17.8 Relationships Between Morphological Operations
For reference, the morphological operations above are related as follows:

opening(src)  = dilate(erode(src))
closing(src)  = erode(dilate(src))
gradient(src) = dilate(src) - erode(src)
tophat(src)   = src - opening(src)
blackhat(src) = closing(src) - src
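As a small sanity check (a sketch, not part of the original text; img and np are assumed from the earlier examples), you can verify the first identity directly:

kernel = np.ones((5,5), np.uint8)
opening = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
manual = cv2.dilate(cv2.erode(img, kernel), kernel)
# Should print True: opening is exactly erosion followed by dilation
print((opening == manual).all())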

Structuring Elements
In the previous examples we used NumPy to build structuring elements, which are square. Sometimes, however, we need an elliptical or circular kernel. For that, OpenCV provides the function cv2.getStructuringElement(). You just pass it the shape and size of the kernel you need.

# Rectangular Kernel
>>> cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
array([[1, 1, 1, 1, 1],
       [1, 1, 1, 1, 1],
       [1, 1, 1, 1, 1],
       [1, 1, 1, 1, 1],
       [1, 1, 1, 1, 1]], dtype=uint8)

# Elliptical Kernel
>>> cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5,5))
array([[0, 0, 1, 0, 0],
       [1, 1, 1, 1, 1],
       [1, 1, 1, 1, 1],
       [1, 1, 1, 1, 1],
       [0, 0, 1, 0, 0]], dtype=uint8)

# Cross-shaped Kernel
>>> cv2.getStructuringElement(cv2.MORPH_CROSS, (5,5))
array([[0, 0, 1, 0, 0],
       [0, 0, 1, 0, 0],
       [1, 1, 1, 1, 1],
       [0, 0, 1, 0, 0],
       [0, 0, 1, 0, 0]], dtype=uint8)
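A kernel built this way is passed to the morphological functions exactly like the NumPy kernels above; for example (a sketch assuming img is the binary image used earlier):

# Opening with a 5x5 elliptical structuring element instead of a square one
ellipse = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5,5))
opening = cv2.morphologyEx(img, cv2.MORPH_OPEN, ellipse)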

18 Image gradients


Goal
• Image gradients, image edges, etc.
• The functions we will use: cv2.Sobel(), cv2.Scharr(), cv2.Laplacian(), etc.

Principle
A gradient is simply a derivative.
OpenCV offers three kinds of gradient filters, or high-pass filters: Sobel, Scharr and Laplacian. We will introduce each of them.
Sobel and Scharr compute first- or second-order derivatives; Scharr is an optimization of Sobel for the case where a small (3x3) kernel is used to compute the gradient. Laplacian computes the second derivative.

18.1 Sobel and Scharr Operators
The Sobel operator combines Gaussian smoothing and differentiation, so it is fairly resistant to noise. You can specify the direction of differentiation (xorder or yorder) and the size of the kernel (ksize). If ksize = -1, a 3x3 Scharr filter is used, which gives better results than a 3x3 Sobel filter at the same speed, so the Scharr filter should be preferred when a 3x3 filter is needed. The 3x3 Scharr kernel for the x direction is (the y-direction kernel is its transpose):

    [[ -3, 0,  3],
     [-10, 0, 10],
     [ -3, 0,  3]]
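A minimal sketch of calling cv2.Scharr() directly (not in the original text; img is assumed to be a grayscale image):

# First derivatives with the 3x3 Scharr kernels; cv2.Scharr(img, cv2.CV_64F, 1, 0)
# is equivalent to cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=-1)
scharrx = cv2.Scharr(img, cv2.CV_64F, 1, 0)
scharry = cv2.Scharr(img, cv2.CV_64F, 0, 1)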

18.2 Laplacian Operator
The Laplacian operator is defined in terms of second derivatives, and its discrete implementation is similar to a second-order Sobel derivative; in fact, OpenCV calls the Sobel operator directly when computing the Laplacian. The formula is:

    Laplacian(src) = ∂²src/∂x² + ∂²src/∂y²

The convolution kernel used by the Laplacian filter (for ksize = 1) is:

    [[0,  1, 0],
     [1, -4, 1],
     [0,  1, 0]]
Code
The following code applies each of the three filters to the same image. The kernels used are all 5x5.

import cv2
import numpy as np
from matplotlib import pyplot as plt

img = cv2.imread('dave.jpg', 0)

laplacian = cv2.Laplacian(img, cv2.CV_64F)
sobelx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=5)
sobely = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=5)

plt.subplot(2,2,1), plt.imshow(img, cmap='gray')
plt.title('Original'), plt.xticks([]), plt.yticks([])
plt.subplot(2,2,2), plt.imshow(laplacian, cmap='gray')
plt.title('Laplacian'), plt.xticks([]), plt.yticks([])
plt.subplot(2,2,3), plt.imshow(sobelx, cmap='gray')
plt.title('Sobel X'), plt.xticks([]), plt.yticks([])
plt.subplot(2,2,4), plt.imshow(sobely, cmap='gray')
plt.title('Sobel Y'), plt.xticks([]), plt.yticks([])
plt.show()

Results:

An important thing!
When looking at the examples above, did you notice that in earlier code we could keep the depth (data type) of the output image equal to that of the input by passing -1, while here we use cv2.CV_64F? Why? Imagine that the derivative at a black-to-white transition is positive, whereas at a white-to-black transition it is negative. If the output depth is np.uint8 (cv2.CV_8U), all negative values are truncated to 0; in other words, that edge is lost. If you want to detect both kinds of edges, the better approach is to use a higher output data type, such as cv2.CV_16S or cv2.CV_64F, take the absolute value, and then convert back to cv2.CV_8U. The following example shows the different results produced by different output depths.

import cv2
import numpy as np
from matplotlib import pyplot as plt

img = cv2.imread('box.png', 0)

# Output dtype = cv2.CV_8U
sobelx8u = cv2.Sobel(img, cv2.CV_8U, 1, 0, ksize=5)

# Output dtype = cv2.CV_64F, then take the absolute value and convert to cv2.CV_8U
sobelx64f = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=5)
abs_sobel64f = np.absolute(sobelx64f)
sobel_8u = np.uint8(abs_sobel64f)

plt.subplot(1,3,1), plt.imshow(img, cmap='gray')
plt.title('Original'), plt.xticks([]), plt.yticks([])
plt.subplot(1,3,2), plt.imshow(sobelx8u, cmap='gray')
plt.title('Sobel CV_8U'), plt.xticks([]), plt.yticks([])
plt.subplot(1,3,3), plt.imshow(sobel_8u, cmap='gray')
plt.title('Sobel abs(CV_64F)'), plt.xticks([]), plt.yticks([])
plt.show()

Results:
