Edge detection is one of the most important concepts in computer vision. It's an intuitive concept: running an edge detector over an image should output only the edges, much like a sketch.
This article not only explains how edge detection works, but also presents a simple way to significantly improve edge detection with minimal effort.
Once we have these edges, many computer vision algorithms become possible, because the edges contain most (or at least a lot) of the information in a scene.
For example, recall the famous Windows XP background: a green hill under a blue sky.
When our brain tries to understand the scene, we recognize the grass, which looks fairly uniform. Then we see the sky, with a few white clouds floating in it. Each of these objects is distinct, and there is an edge between them. This is where most of the information in the scene lives: at the edges.
This is why edge detection is such an important concept in computer vision: by reducing an image to only its edges, we make it easier for many algorithms to identify, learn from, or manipulate the scene.
Edge Detection: Filtering
Most edge detection is filter-based. In general, filtering means removing something. For example, filtering water removes parasites. Similarly, when we try to find the edges of an image, we are trying to remove everything except those edges.
The hard part is removing the portions of the image that are not useful edges while keeping the right ones. How do we know which edges are useful? For example, if I run a Canny edge detector [1] on the Windows XP background, the result looks like this:
You can see the edges of individual blades of grass, which are noisy and don't really provide useful information; even the clouds are not very distinct. Most Canny edge detectors do let you set a threshold (along with non-maximum suppression), and every edge must meet that threshold to be classified as an "important" edge. Rather than fiddling with the Canny detector's thresholds, though, let's talk more broadly and build a few filters of our own.
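For reference, a run like the one above takes only a few lines in Matlab, since Canny [1] is built into the Image Processing Toolbox's edge function (the file name and threshold here are placeholder values):

% Load the image and convert to grayscale, since edge() expects a 2-D image
img = rgb2gray(imread('bliss.jpg'));  % placeholder file name
% Run the Canny detector; edges weaker than the threshold are discarded
bw = edge(img, 'canny', 0.2);         % placeholder threshold
imshow(bw);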
Edge Detection: Gaussian filter
The Gaussian filter is one of the most fundamental filters for edge detection. There are others, but the Gaussian filter runs through everything in this article. As its name suggests, it is a filter whose weights follow a Gaussian distribution.
It looks something like a parabola (except in two dimensions). Using convolution, the Gaussian filter can be applied at every pixel. This produces a blurring effect: each pixel's value becomes a weighted blend of its neighbors, with the weight falling off away from the center. For example, if we run an evenly weighted Gaussian filter over a picture of my cat, we get the following image:
A Gaussian blur applied to the picture of my cat
You can see that the image becomes blurred: the Gaussian filter mixes every pixel with its neighbors, so each pixel's value is correlated with the pixels around it.
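As a quick sketch, blurring an image this way takes only a couple of lines in Matlab (the 5x5 kernel size and sigma of 1 are arbitrary choices):

% Build a 5x5 Gaussian kernel with standard deviation 1
h = fspecial('gaussian', [5 5], 1.0);
% Convolve it with the image; 'replicate' pads the borders by copying edge pixels
blurred = imfilter(img, h, 'replicate');
imshow(blurred);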
To make the Gaussian filter useful for edge detection, we take its derivative with respect to x and y. This may sound counterintuitive or unfamiliar, but the idea becomes clear once we look at an image of the derivative-of-Gaussian filter.
When you take the derivative of the Gaussian in its x and y components, a large crest and trough appear. If you think about what a derivative measures, this should make sense quickly: the value of the Gaussian changes most rapidly on the slopes around its peak, and that rapid change is exactly what produces the crest and trough of the derivative.
Writing this in code is pretty straightforward (at least in Matlab or Python):
% Take the derivative of a 5x5 Gaussian with standard deviation sigma
[hx, hy] = gradient(fspecial('gaussian', [5 5], sigma));
That's it: one line of code gives us the Gaussian we want, with gradient splitting it into its x and y components.
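If you want to see the crest and trough for yourself, you can plot the two components (a quick sketch; the sigma value is arbitrary):

% Derivative-of-Gaussian components
[hx, hy] = gradient(fspecial('gaussian', [5 5], 1.0));
% Surface plots show the crest and trough in each direction
subplot(1, 2, 1); surf(hx); title('x component');
subplot(1, 2, 2); surf(hy); title('y component');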
Edge Detection: Applying filters
Now that we have our two derivative-of-Gaussian filters, we can apply them to the image. We also apply non-maxima suppression: if a pixel's value is not a maximum, it is set to 0. Put another way, we eliminate noise.
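To illustrate the "zero out non-maxima" idea, here is a simplified sketch that keeps only pixels that are maxima of their 3x3 neighborhood; note that the code below instead takes a maximum across the image's color channels:

% Keep a pixel only if it equals the maximum of its 3x3 neighborhood
localMax = (mag == imdilate(mag, ones(3)));
mag(~localMax) = 0;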
The code to apply the filter is as follows:
% Convert the image to double for increased precision
img = double(img);
% Find the two derivative-of-Gaussian filters with respect to x and y
[hx, hy] = gradient(fspecial('gaussian', [5 5], sigma));
% Run the filters over the image, generating a filtered image
% Leaves x edges
gx = double(imfilter(img, hx, 'replicate', 'conv'));
% Leaves y edges
gy = double(imfilter(img, hy, 'replicate', 'conv'));
% Take the absolute value, and combine the x and y edges
mag = sqrt((gx .* gx) + (gy .* gy));
% Use non-maxima suppression
[mag, ~] = max(mag, [], 3);
If we apply this to the picture of my cat, we get the following image:
Interestingly, we can also apply this method to RGB images directly and get colored edges:
Normal Edge filter applied to the cat's RGB image
Both images represent the difference between each pixel and the pixels adjacent to it; the only distinction is that the color image has three channels while the grayscale image has just one.
Edge Detection: Directional filters
Why limit ourselves to filters aligned with the x and y axes alone? Let's build some directional filters too. This method derives (more or less) from Freeman and Adelson's paper "The design and use of steerable filters" [2], which lets us orient our Gaussian filters in different directions.
Essentially, orienting our Gaussian filters at different angles produces different response values depending on the angle of an edge relative to the filter. For example, if we orient the Gaussian at 45 degrees and apply it to an edge running at 45 degrees, we should get a larger magnitude than a Gaussian oriented at 0 degrees would give.
For this post, I generated several directional filters:
Top row: x components; bottom row: y components
These Gaussians yield filters rotated 90, 45, and 22.5 degrees relative to the x and y components. Each detects edges of somewhat different magnitude, although all of these filters should find similar edges.
The code is almost identical to the single-filter version; the difference is in how the filter responses are combined. It looks a little messy, but writing out each filter application explicitly keeps it clearer.
% Create four filters
[hx, hy] = gradient(fspecial('gaussian', [5 5], sigma));
[hx1, hy1] = AltOrientFilter1(hx, hy);
[hx2, hy2] = AltOrientFilter2(hx, hy);
[hx3, hy3] = AltOrientFilter3(hx, hy);
% Run first Gaussian filter on image
gx = double(imfilter(img, hx, 'replicate', 'conv'));
gy = double(imfilter(img, hy, 'replicate', 'conv'));
% Run second Gaussian filter on image
gx1 = double(imfilter(img, hx1, 'replicate', 'conv'));
gy1 = double(imfilter(img, hy1, 'replicate', 'conv'));
% Run third Gaussian filter on image
gx2 = double(imfilter(img, hx2, 'replicate', 'conv'));
gy2 = double(imfilter(img, hy2, 'replicate', 'conv'));
% Run fourth Gaussian filter on image
gx3 = double(imfilter(img, hx3, 'replicate', 'conv'));
gy3 = double(imfilter(img, hy3, 'replicate', 'conv'));
% Merge all filters
squaredGd = (gx .* gx) + (gy .* gy);
squaredGd = squaredGd + (gx1 .* gx1) + (gy1 .* gy1);
squaredGd = squaredGd + (gx2 .* gx2) + (gy2 .* gy2);
squaredGd = squaredGd + (gx3 .* gx3) + (gy3 .* gy3);
% Run non-maxima suppression
[mag, ~] = max(sqrt(squaredGd), [], 3);
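The AltOrientFilterN helpers aren't defined in the snippet above. As a minimal sketch of what one might look like, here is a version that steers the base derivative-of-Gaussian pair to a new angle using Freeman and Adelson's interpolation formula for first derivatives [2]; the helper name steerFilterPair and the example angle are my own illustration:

% Hypothetical helper: steer a derivative-of-Gaussian pair to angle theta.
% Freeman & Adelson's steering formula for first derivatives:
%   G_theta = cos(theta) * Gx + sin(theta) * Gy
function [hxT, hyT] = steerFilterPair(hx, hy, thetaDeg)
    theta = deg2rad(thetaDeg);
    % Filter responding most strongly to gradients along theta
    hxT = cos(theta) * hx + sin(theta) * hy;
    % The orthogonal (theta + 90 degrees) counterpart
    hyT = -sin(theta) * hx + cos(theta) * hy;
end
% For example, AltOrientFilter1 might simply be:
%   [hx1, hy1] = steerFilterPair(hx, hy, 22.5);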
If you look closely, you can see the differing magnitudes, especially in the fine details. If we merge all the filter responses, we get slightly better edge detection.
There isn't a dramatic difference between the directional and non-directional filters, but we do see a slight improvement from covering more directions.
Edge Detection: Improving via color space
Over the past two years, I have run many tests and experiments with different color spaces. In particular, the Lab color space is another way to describe images. We all know RGB and grayscale images, and you may even know the YUV space; the Lab color space is similar in spirit.
The reason I'm interested in Lab color is that it is exceptionally good at bringing out the edges of a scene.
Each letter of the Lab color space represents:
- L: luminance (brightness)
- a: alpha, red to green
- b: beta, yellow to blue
These color channels turn out to be ideal for finding color gradients. In nature, one yellow rarely appears right next to another yellow, and the same goes for red and green (though I haven't proven this thoroughly). The Lab color space also correlates strongly with how we perceive brightness in color. Unlike RGB, Lab gives luminance its own separate channel, which makes it better at capturing color differences independently of brightness.
The extra code required is minimal: all we have to do is convert the input image into the Lab color space. There are further optimizations you could make, but this one extra step alone significantly improves edge detection.
% Convert the image to the Lab color space
colorTransform = makecform('srgb2lab');
img = applycform(rgbImg, colorTransform);
% Make it double for increased precision
img = double(img);
% Find the x and y derivatives of a 9x9 Gaussian
[hx, hy] = gradient(fspecial('gaussian', [9 9], sigma));
% Apply the filters
gx = double(imfilter(img, hx, 'replicate'));
gy = double(imfilter(img, hy, 'replicate'));
% Find the gradient magnitude
gSquared = sqrt((gx .* gx) + (gy .* gy));
% Apply non-maxima suppression (find the best points for edges)
[mag, ~] = max(gSquared, [], 3);
If we convert the Windows XP hill background into Lab space, we get the following image:
The Windows XP background in Lab space
Then, if we apply our filters (without non-maxima suppression), we get the image below, which clearly contains the boundaries between the grass, the clouds, and the sky.
Edge detection in Lab space
Finally, if we run non-maxima suppression, we get far better edges than the Canny edge detector shown at the beginning of this article.
Lab color space edge detection
On average, this method improves edge detection accuracy by around 10% over the standard approach. That figure comes from running F-measure tests against the Berkeley Segmentation Dataset and Benchmark.
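For context, the F-measure is the harmonic mean of precision and recall. A minimal sketch of how it's computed, assuming binary edge maps pred and truth of the same size (the actual benchmark also matches edge pixels with a small spatial tolerance):

% Precision: fraction of predicted edge pixels that are real edges
p = nnz(pred & truth) / nnz(pred);
% Recall: fraction of real edge pixels that were predicted
r = nnz(pred & truth) / nnz(truth);
% F-measure: harmonic mean of precision and recall
f = 2 * p * r / (p + r);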
Edge Detection: Conclusion
There are countless ways to do edge detection, and the approach described here is by no means the best, the easiest to implement, or the most easily explained.
I chose these methods because I find them interesting, and because they came out of an assignment for UIUC's CS543 course, Computer Vision. If you're taking that course, please don't copy my code!
I've put all of my implementations on GitHub, including an OpenCV implementation in C++.
However, you're welcome to use the pictures of my cat.
Suggested Reading
- PCA: Principal Component Analysis
- Everyday Algorithms: Pancake Sort
- Using Computer Vision to Improve EEG Signals
- Introduction to Markov Processes
- The Cache and Multithreading
References
[1] Canny, John. "A computational approach to edge detection." IEEE Transactions on Pattern Analysis and Machine Intelligence 8.6 (1986): 679-698.
[2] Freeman, William T., and Edward H. Adelson. "The design and use of steerable filters." IEEE Transactions on Pattern Analysis and Machine Intelligence 13.9 (1991): 891-906.
Original article: Edge Detection in Computer Vision