Target Detection Based on the Hough Transform and the Generalized Hough Transform
The previous section discussed target detection based on thresholding. Today we discuss target detection based on Hough voting. The Hough-voting material is divided into two sections: in this first section, we briefly describe the Hough transform and the generalized Hough transform, in which every vote carries the same weight. The next section will discuss Hough voting in a probability space, where, as the name suggests, the voting weights are uncertain.
First, let's look at the Hough transform, which is generally applicable to detecting geometric shapes that have analytical expressions, such as straight lines, circles, and ellipses. A unified analytical expression covers them all: f(x, α) = 0, where x is a point of the shape and α is the parameter vector of the expression. For example, in the Cartesian coordinate system the parameters of a line are the slope m and the intercept c (or θ and ρ in the polar representation), and the parameters of a circle are its center and radius. The core of the Hough transform is to map points of the image space into a parameter space (also called the Hough space). For example, for the line y = mx + c, take a point (x', y') on it and substitute it into the line equation to get c = -x'·m + y'. This is itself a straight line in the parameter space, with slope -x' and intercept y'. Figure 1 shows the relationship between the image space and the parameter space:
(Figure 1)
(Figure 1) The left image is the image space, and the right image is the parameter space (Hough space). Substituting any point on the segment PQ in image space into the line equation yields a straight line in Hough space; the two spaces are dual. For example, as shown in Figure 1, substituting the two endpoints yields two dual lines in the right image, both with negative slope. Now the question is: how does the machine know that PQ is a straight line (assuming there are also noise points)? We know neither the slope nor the intercept. What we do is assume that there is a line in the image with equation y = mx + c and look for its parameters. Each point in image space is substituted into the line equation, generating a dual line in Hough space. Because the points on the image-space line share one slope m and one intercept c, their dual lines must all intersect at a single point, i.e. the pair (m, c) is unique; this can also be seen in the right image.
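As a small numeric illustration (my own sketch, not from the original post; the example line y = 2x + 1 is an assumption), the dual lines c = y' - m·x' of points taken from the same image-space line all pass through the shared (m, c):

    import numpy as np

    # Assumed toy line y = 2*x + 1, i.e. m = 2, c = 1; three points sampled from it.
    pts = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]

    # Each image point (x', y') has the dual line c = y' - m*x' in (m, c) space.
    # Intersect the dual lines of the first two points:
    #   y0 - m*x0 = y1 - m*x1  =>  m = (y0 - y1) / (x0 - x1), then c = y0 - m*x0.
    (x0, y0), (x1, y1) = pts[0], pts[1]
    m = (y0 - y1) / (x0 - x1)
    c = y0 - m * x0
    print(m, c)                           # prints 2.0 1.0, the shared parameters

    # The third point's dual line passes through the same (m, c).
    x2, y2 = pts[2]
    assert np.isclose(y2 - m * x2, c)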
This is the core idea of line detection with the Hough transform. However, the slope-intercept representation cannot express all lines: for a vertical line x = constant the slope is infinite and cannot be represented as a point in Hough space. Therefore, we switch to the polar (normal) representation of a line, ρ = x·cos θ + y·sin θ. The idea is the same; the difference is that the parameters m and c become θ and ρ, and the dual of an image-space point is no longer a straight line but a sinusoidal curve, as shown in Figure 2:
(Figure 2)
(Figure 2) The bright spot in the right image marks the required parameters. But how do we find that intersection point? Checking every pair of curves would be far too much work, and this is where Hough voting comes in. We quantize the Hough space into an accumulator, which you can think of as a dictionary: the index is a discretized Hough-space coordinate (θ, ρ), and the value is the number of curves passing through that cell. The accumulator value at the bright spot in Figure 2 will be very large; in fact, we are simply looking for the maximum. A peak means that many points in image space share one set of parameters. The algorithm is implemented as shown in the following pseudocode:
For each edge pixel (x, y) in the image
    For θ = -90° to 90°
        ρ = x cos θ + y sin θ
        H(i_θ, j_ρ) += 1
    End
End
Intuitively: give me one point in image space, and I compute many accumulator cells in Hough space and add 1 to each matching position. In short, the Hough space is discretized; the finer the discretization, the better the precision, but also the larger the amount of computation, since there are many image points. This trade-off is left for the reader to balance according to the situation. The accumulator is shown in Figure 3:
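To make the discretization and voting concrete, here is a minimal NumPy sketch (my own illustration, not the original post's code); the angular resolution n_theta and the one-pixel ρ bin size are the assumed quantization knobs discussed above:

    import numpy as np

    def hough_line_accumulator(edges, n_theta=180):
        """Vote the edge pixels of a binary image into a (theta, rho) accumulator."""
        h, w = edges.shape
        diag = int(np.ceil(np.hypot(h, w)))          # largest possible |rho|
        thetas = np.deg2rad(np.arange(-90, 90, 180.0 / n_theta))
        rhos = np.arange(-diag, diag + 1)            # one bin per pixel of rho
        acc = np.zeros((len(rhos), len(thetas)), dtype=np.int64)

        ys, xs = np.nonzero(edges)                   # edge pixel coordinates
        for x, y in zip(xs, ys):
            rho = x * np.cos(thetas) + y * np.sin(thetas)   # one rho per theta
            rho_idx = np.round(rho).astype(int) + diag      # shift to array index
            acc[rho_idx, np.arange(len(thetas))] += 1       # cast the votes
        return acc, thetas, rhos

    # Toy usage: a diagonal line of edge pixels; the accumulator peak gives (rho, theta).
    img = np.zeros((50, 50), dtype=np.uint8)
    for i in range(50):
        img[i, i] = 1
    acc, thetas, rhos = hough_line_accumulator(img)
    r_i, t_i = np.unravel_index(np.argmax(acc), acc.shape)
    print("peak at rho =", rhos[r_i], ", theta =", np.rad2deg(thetas[t_i]), "deg")

Here each edge pixel votes once per θ bin, so making the bins finer improves parameter precision but multiplies both the accumulator size and the number of votes, which is exactly the trade-off mentioned above.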
(Figure 3)
Circles and ellipses follow similar steps, except that a circle has three parameters and an ellipse has five, so the amount of computation is greater. This algorithm is also implemented in OpenCV, where you can call a function directly. The results of detecting straight lines with OpenCV are shown in Figures 4 and 5:
(Figure 4)
(Figure 5)
(Figure 5) The peak points in the Hough space may not fall exactly at the same coordinate, i.e. the points on an image-space line are not strictly collinear, so we have to allow for some error; judging the peak point is itself a bit of an art.
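As a usage sketch of the OpenCV functions mentioned above (the file name and parameter values here are illustrative assumptions, not from the original post):

    import cv2
    import numpy as np

    img = cv2.imread("input.jpg")                       # assumed input image path
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                    # edge map feeds the voting

    # Line detection: rho resolution 1 pixel, theta resolution 1 degree,
    # accumulator threshold of 100 votes.
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 100)
    if lines is not None:
        for rho, theta in lines[:, 0]:
            print("line: rho =", rho, "theta =", theta)

    # Circle detection votes in (center_x, center_y, radius) space.
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                               param1=100, param2=30, minRadius=5, maxRadius=100)
    if circles is not None:
        for x, y, r in circles[0]:
            print("circle: center =", (x, y), "radius =", r)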
Next, we will go to Generalized HOUGH transform ).
As mentioned above, the Hough transform only applies to shapes with an analytical expression; it is powerless for general shapes. A general shape has no analytical expression, and without one, how do we get into Hough space? Without a Hough space, how do we vote? Without voting, how do we detect the object? If the conditions are not there, we create them. Although a general object has no analytical expression, it does have edges. Professor Marr argued early on that edges are one of the important cues the human eye uses to detect and judge objects, which is why so many edge-detection algorithms appeared in early computer vision. With edges we can compute tangent (gradient) directions. Can these alone serve as parameters? Considering that they carry too little information, Ballard proposed the generalized Hough transform
and uses the parameters shown in Figure 6:
(Figure 6)
As shown in Figure 6, α and the other quantities are the parameters. There are many of them, so the parameter space is high-dimensional, but no matter how many dimensions there are, the method is the same as above: vote in the parameter space. This method selects a reference point and uses the distance and relative orientation from each edge point to that reference point as parameters (stored in a template, often called the R-table) to detect objects. Careful readers may notice that this is somewhat similar to Hu moments (Hu Moment): both extract shape parameters from a template. However, it uses more information than Hu moments, and the voting in Hough space resists the influence of noise and occlusion, so its robustness is better. The voting process is shown in Figure 7, where A[a] is the voting dictionary (accumulator):
(Figure 7)
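As an illustrative sketch of this idea (my own simplified version, assuming translation-only detection and an R-table indexed by gradient orientation, as in Ballard's formulation; function and variable names are mine):

    import numpy as np
    from collections import defaultdict

    def build_r_table(template_edges, template_grad, ref_point, n_bins=36):
        """R-table: gradient-orientation bin -> offsets from edge point to reference point.
        template_grad holds gradient orientations in radians, in [0, 2*pi)."""
        r_table = defaultdict(list)
        ys, xs = np.nonzero(template_edges)
        for x, y in zip(xs, ys):
            b = int(template_grad[y, x] / (2 * np.pi) * n_bins) % n_bins
            r_table[b].append((ref_point[0] - x, ref_point[1] - y))
        return r_table

    def ght_vote(image_edges, image_grad, r_table, n_bins=36):
        """Each edge pixel votes for candidate reference-point locations in A[y, x]."""
        h, w = image_edges.shape
        acc = np.zeros((h, w), dtype=np.int64)
        ys, xs = np.nonzero(image_edges)
        for x, y in zip(xs, ys):
            b = int(image_grad[y, x] / (2 * np.pi) * n_bins) % n_bins
            for dx, dy in r_table.get(b, []):
                cx, cy = x + dx, y + dy
                if 0 <= cx < w and 0 <= cy < h:
                    acc[cy, cx] += 1        # one vote for this candidate reference point
        return acc                          # the peak of acc locates the shape

Handling rotation and scale would add two more dimensions to the accumulator, which is why the parameter space quickly becomes high-dimensional, as noted above.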
Its detection effect is shown in Figure 8:
(Figure 8)
This section lays the foundation for the next section's application of Hough voting in computer vision, because today's voting is "fair": every vote has the same weight. For more complex problems, for example when voting with component parameters, the weights need to be treated differently; that is left for the next section. Code for the generalized Hough transform can also be found online; if you cannot find it, leave a comment and ask.
References:
D. H. Ballard, "Generalizing the Hough Transform to Detect Arbitrary Shapes," Pattern Recognition, 1981.
Please indicate the source when reprinting: http://blog.csdn.net/cuoqu/article/details/9071405