Pattern Recognition---Detecting a rectangular canvas or paper and extracting its content

Source: Internet
Author: User

This post is based on an answer. The problem is as follows: a photograph is known to contain a rectangular object, but after perspective projection it is no longer rectangular in the image, so how do we extract the content of that region? This is a very common scenario: you see a painting you really like in a museum, photograph it with your phone, and back home it looks skewed; mentally straightening it, you feel something is off, so an algorithm is needed to extract the original content from the image. The scenario can be divided into the following cases.

Case 1: the coordinates of the four paper corners (the red dots in the picture) are known

If the four red dots in the left image can be obtained accurately, for example by manual labeling, the problem is simple: a perspective transform with OpenCV is enough. The steps are as follows:

1) Put the four labeled corner coordinates into a variable called corners. In the example above, the resolution of the original image is 300x400 and the x and y axes are defined as shown; the coordinates of the four paper corners are then:

Upper left: 157.6, 71.5

Upper right: 295.6, 118.4

Lower right: 172.4, 311.3

Lower left: 2.4, 202.4

Place these four coordinates, in the order above, in a variable called corners.

2) If we want to restore the pattern to a 300x400 image, place the following four coordinates, in the corresponding order, in a variable called canvas:

Upper left: 0, 0

Upper right: 300, 0

Lower right: 300, 400

Lower left: 0, 400

3) Assuming the original image has already been read with OpenCV into a variable called image, the code for extracting the paper pattern is as follows:

M = cv2.getPerspectiveTransform(corners, canvas)
result = cv2.warpPerspective(image, M, (300, 400))
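Spelled out in full, this step looks roughly like the sketch below. It is a minimal, self-contained version under a few assumptions: the input file name is a placeholder, and the corner arrays must be float32 for getPerspectiveTransform to accept them.

import cv2
import numpy as np

image = cv2.imread("photo.jpg")  # placeholder file name

# the four labeled paper corners: upper left, upper right, lower right, lower left
corners = np.float32([[157.6,  71.5],
                      [295.6, 118.4],
                      [172.4, 311.3],
                      [  2.4, 202.4]])

# where those corners should land in the 300x400 output, in the same order
canvas = np.float32([[  0,   0],
                     [300,   0],
                     [300, 400],
                     [  0, 400]])

M = cv2.getPerspectiveTransform(corners, canvas)    # 3x3 homography
result = cv2.warpPerspective(image, M, (300, 400))  # dsize is (width, height)
cv2.imwrite("extracted.jpg", result)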
Cropping out the left image and roughly removing the red dots gives the result shown above. Of course, this step could just as well be done in Photoshop.

Case 2: the corner coordinates are unknown or hard to label accurately

This scenario might arise in a small-screen application, or when the original image is very small, like the 300x400 example used here, so that the corner coordinates are difficult to label accurately. In that case one idea is to use edge detection to extract the four sides of the paper, find the coordinates of the four corners from them, and then do the perspective transform.

1) Image preprocessing

In general, even ordinary edge detection requires denoising the image first to avoid false detections. The most common approach is to apply a Gaussian filter, but that also blurs the image: when the edge to be detected is weak, or the resolution is low (as in the example used here), the strength of the edge we actually want is sacrificed along with the noise. In this example the paper is white and the background is a light yellow textured desktop, so Gaussian filtering is clearly not a good fit. An alternative is Mean Shift filtering; its advantage is that for small color textures like the desktop background, its segmentation-like smoothing filters out those small fluctuations while preserving the comparatively strong edges of the paper, as shown above (original vs. processed). The Mean Shift code:

image = cv2.pyrMeanShiftFiltering(image, 25, 10)

Because the main purpose here is pre-denoising, the window size and color distance do not need to be large, which avoids wasting computation time and over-smoothing. After denoising you can see that the texture on the desk is erased and the paper edge is much cleaner. This is still not enough, however: the pattern on the paper itself, and other objects in the image, still have many distinct, straight edges.
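For comparison, here is a small sketch of the two preprocessing options discussed above. The file name and the Gaussian kernel size are illustrative assumptions; the Mean Shift parameters are the ones used in this article.

import cv2

image = cv2.imread("photo.jpg")  # placeholder file name

# Gaussian filtering: removes noise but also softens the paper edge
blurred = cv2.GaussianBlur(image, (5, 5), 0)

# Mean Shift filtering: flattens small color textures (the desktop grain)
# while keeping stronger edges such as the paper boundary
denoised = cv2.pyrMeanShiftFiltering(image, 25, 10)

cv2.imwrite("gaussian.jpg", blurred)
cv2.imwrite("meanshift.jpg", denoised)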

2) Paper Edge detection

Although the noise has been reduced, there are still many strong edges in the image. To keep, as far as possible, only the paper edge, we can turn to a segmentation algorithm: split the image into the paper region and everything else, so that the boundary of the segmentation mask roughly coincides with the paper edge. GrabCut is a reasonable choice here. In simple cases, where the paper or canvas contrasts strongly with the background, simply using the image border as the bounding box is enough for automatic segmentation; when automatic segmentation is not accurate, manual hints can be introduced. In the example used here the background is close to the picture in color, so manual assistance is needed. The result is shown above: the segmentation roughly separates the paper shape, but the boundary is not accurate, and the keyboard and part of the desktop are not distinguished.

From here one option is to keep running GrabCut with additional manual labels until only the paper is segmented. The other option, friendlier to the user because it introduces as little manual work as possible, is to move on to edge detection and post-process afterwards. Suppose we take the latter route; we then have the rough mask shown above. This mask is not accurate, so it is not used for edge detection directly, but it does roughly mark where the most obvious edges in the picture are. So consider the following idea: keep the denoised image intact near the mask boundary, where the real edge detection will take place, and blur everything else. In other words, from the mask above, build the blurring mask shown below. The image obtained with this mask, and the edges detected on it with the Canny operator, are shown above.

3) Line detection

Run the Hough transform on the detected edges to find straight lines. My example uses cv2.HoughLinesP with a resolution of 1 pixel and 1°; the detection threshold and minLineLength can be set according to the image size to remove most false detections. Note that the result structure returned through the Python bindings differs between OpenCV 2 and OpenCV 3, so porting the code needs the corresponding changes. The result is shown above. As you can see, some of the lines almost coincide, which is unavoidable: 9 lines are detected in total, with two coincident pairs (on the bottom and right edges). Coincident lines can be identified by their mutual distance and relative angle and merged, leaving 7 distinct lines. A rough sketch of this pipeline follows.
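The sketch below assumes fully automatic GrabCut with the image border as the bounding box (the article's example needed manual hints instead). The file name, kernel sizes, Canny thresholds and Hough parameters other than the 1 px / 1° resolution are illustrative assumptions, not the article's actual values.

import cv2
import numpy as np

image = cv2.pyrMeanShiftFiltering(cv2.imread("photo.jpg"), 25, 10)
h, w = image.shape[:2]

# GrabCut initialized with a bounding box just inside the image border
mask = np.zeros((h, w), np.uint8)
bgd = np.zeros((1, 65), np.float64)
fgd = np.zeros((1, 65), np.float64)
cv2.grabCut(image, mask, (1, 1, w - 2, h - 2), bgd, fgd, 5,
            cv2.GC_INIT_WITH_RECT)
paper = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                 255, 0).astype(np.uint8)

# Keep the image sharp only in a band around the rough mask boundary and
# blur everything else, so that Canny mostly fires near the paper edge.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
band = cv2.morphologyEx(paper, cv2.MORPH_GRADIENT, kernel)
blurred = cv2.GaussianBlur(image, (21, 21), 0)
focused = np.where(band[..., None] > 0, image, blurred)

# Canny edges, then the probabilistic Hough transform at 1 px / 1 degree
edges = cv2.Canny(cv2.cvtColor(focused, cv2.COLOR_BGR2GRAY), 30, 90)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                        minLineLength=w // 4, maxLineGap=10)
segments = [tuple(l[0]) for l in lines] if lines is not None else []

# Nearly coincident detections (similar angle and small mutual distance)
# would then be merged before selecting the paper edges in the next step.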
4) Determine the edges of the paper

How do we pick the four lines that are actually the paper edges? Even if the segmentation step separated the paper from everything else very well, spurious lines nearly parallel to the true edges are still unavoidable in some cases. One approach is to sample the pixel grayscale along both sides of each extracted segment: the values just to the left and just to the right of the segment are sampled at evenly spaced points between its two endpoints. The rationale is that for a piece of paper or a canvas, the colors just inside and just outside the edge should be similar across all four sides.

Doing this introduces another problem: we need a consistent definition of which side of a segment is its "left" and which is its "right", which in turn requires a consistent "front" and "back" ordering of the segment's endpoints. So the endpoints of every segment in the picture have to be ordered, and the reference for this ordering is a fixed point in the image; in this article the image center is used, and the side of each segment facing the center is defined as its "left" side.

The "front" and "back" endpoints can be decided as follows: first assume an order for the two endpoints, subtract the center point (the red point) from each of them to get two vectors a and b, and take their cross product. If the cross product is greater than 0 the assumed order is correct; if it is less than 0, swap the endpoints. With the endpoints ordered, the two sides can be sampled; for simplicity only the grayscale values on the left and right sides are sampled here, though comparing the RGB channels would be more accurate. The plot above shows the medians of the pixel grayscale distributions on both sides of the 7 segments: 4 of the points (in red) lie close together, meaning their grayscale distributions are also very close. Selecting those 4 segments gives the result shown above.

This is exactly the result we want. A sketch of the endpoint ordering and side sampling follows.
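Below is a rough sketch of this step, under some assumptions that are not the article's actual code: gray is the denoised grayscale image, segments is the list of merged (x1, y1, x2, y2) lines from the Hough step, and the sampling offset and count are arbitrary illustrative values.

import numpy as np

def order_endpoints(p0, p1, center):
    # Order the endpoints so that every segment winds the same way around
    # the image center: subtract the center from both endpoints and test
    # the sign of the cross product, as described in the text.
    a = np.asarray(p0, float) - center
    b = np.asarray(p1, float) - center
    if a[0] * b[1] - a[1] * b[0] < 0:   # assumed order was wrong: swap
        p0, p1 = p1, p0
    return np.asarray(p0, float), np.asarray(p1, float)

def side_medians(gray, p0, p1, offset=3, n=20):
    # Median grayscale just to the "left" and "right" of the segment,
    # sampled at n evenly spaced points between the ordered endpoints.
    h, w = gray.shape[:2]
    d = (p1 - p0) / np.linalg.norm(p1 - p0)
    normal = np.array([-d[1], d[0]])        # unit normal to the segment
    left, right = [], []
    for t in np.linspace(0.05, 0.95, n):
        pt = p0 + t * (p1 - p0)
        for sign, acc in ((+1, left), (-1, right)):
            x, y = np.round(pt + sign * offset * normal).astype(int)
            if 0 <= x < w and 0 <= y < h:
                acc.append(int(gray[y, x]))
    return np.median(left), np.median(right)

center = np.array([gray.shape[1] / 2.0, gray.shape[0] / 2.0])
medians = []
for x1, y1, x2, y2 in segments:
    p0, p1 = order_endpoints((x1, y1), (x2, y2), center)
    medians.append(side_medians(gray, p0, p1))
# The 4 segments whose (left, right) medians cluster together are taken
# to be the paper edges.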

5) Calculate the coordinates of the Four Corners

Next, compute the pairwise intersections of the four lines. With 4 lines there are 6 intersection points, but since in this scenario the rectangular object cannot have a concave corner under a perspective transform, discarding the two intersections farthest from the paper center leaves the coordinates of the four corners. The result is shown above, and a rough sketch of this step is given below.
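A minimal sketch of this corner computation, using homogeneous coordinates for the line intersections. Here edge_segments is an assumed variable holding the 4 segments chosen in step 4, each as a pair of endpoints; this is not the article's actual code.

import numpy as np
from itertools import combinations

def intersection(seg_a, seg_b):
    # Intersection of the two infinite lines through the segments,
    # computed with homogeneous coordinates (cross products).
    (x1, y1), (x2, y2) = seg_a
    (x3, y3), (x4, y4) = seg_b
    l1 = np.cross([x1, y1, 1.0], [x2, y2, 1.0])
    l2 = np.cross([x3, y3, 1.0], [x4, y4, 1.0])
    p = np.cross(l1, l2)
    if abs(p[2]) < 1e-9:            # parallel lines: no finite intersection
        return None
    return np.array([p[0] / p[2], p[1] / p[2]])

# edge_segments: the 4 chosen segments, each as ((x1, y1), (x2, y2))
points = [intersection(a, b) for a, b in combinations(edge_segments, 2)]
points = [p for p in points if p is not None]

# Up to 6 intersections; drop the two farthest from the paper center
# (approximated here by the centroid of the segment endpoints).
center = np.mean([p for seg in edge_segments for p in seg], axis=0)
points.sort(key=lambda p: np.linalg.norm(p - center))
corners = points[:4]
# corners still need to be put into upper-left, upper-right, lower-right,
# lower-left order before calling cv2.getPerspectiveTransform.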

This brings us back to the beginning: the four corner coordinates have been obtained, so we can simply apply the perspective transform directly, as in the first case.

Camera calibration?

Having written this much, there is actually a crucial assumption, one of the most critical steps even, that I never mentioned: camera calibration. Unless the camera and its metadata are known, camera calibration is needed to recover the original size (aspect ratio) of the paper or canvas. The example I tried here naturally does not have that information, and even when it is available, OpenCV does provide ready-made calibration algorithms, but it is still quite troublesome. So my whole pipeline simply assumes the original size is already known. Besides, even without calibration, once the image has been rectified back to a rectangle, the user will probably find a simple scaling of the axes far more convenient than camera calibration.

An Impressionist example

The example I used above is slightly extreme: the background and the picture are close in color, and on top of that the resolution is very low. Searching the Internet, I found an Impressionist work by a young painter to try:

Original

Manual labeling

GrabCut

Detected edges

Results

It looks pretty good ~

Source links:

http://www.cnblogs.com/frombeijingwithlove/p/4226489.html
http://www.tuicool.com/articles/6zq2aq
