Image Processing - Hough Transform (Line Detection Algorithm)
The Hough transform is one of the classic methods in image processing. It is mainly used to extract geometric shapes that share a common feature (such as straight lines and circles) from an image. Compared with other methods, the Hough transform is better at suppressing noise interference when searching for straight lines and circles. The classic Hough transform is most often used to detect straight lines, circles, and ellipses.
Hough transform algorithm idea:
Taking straight-line detection as an example, each pixel coordinate is transformed into a unified measure of how much it contributes to a line feature. Here is a simple example: a straight line in an image is a set of discrete points, and the geometric equation of such a line can be written as

x * cos(theta) + y * sin(theta) = r

where theta is the angle between r and the X axis, and r is the perpendicular distance from the origin to the line. Every point (x, y) on the line satisfies this equation with the same constant values of r and theta.
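As a quick illustration (this snippet is mine, not part of the original article's code), the following small Java program checks that every point on the line x + y = 5 produces the same r when theta is 45 degrees:

// Illustrative sketch: all points on the line x + y = 5 map to the same (r, theta).
public class HoughLineCheck {
    public static void main(String[] args) {
        double theta = Math.PI / 4.0; // 45 degrees
        int[][] points = { {0, 5}, {1, 4}, {2, 3}, {3, 2}, {4, 1}, {5, 0} };
        for (int[] p : points) {
            double r = p[0] * Math.cos(theta) + p[1] * Math.sin(theta);
            System.out.printf("point (%d, %d) -> r = %.3f%n", p[0], p[1], r);
        }
        // every point prints r = 3.536, i.e. 5 / sqrt(2)
    }
}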
However, in image processing the pixel coordinates P(x, y) are known, while r and theta are what we are looking for. If we plot the (r, theta) values corresponding to each pixel coordinate P(x, y), we move from the Cartesian coordinate system of the image into the polar-coordinate Hough space. This point-to-curve transformation is the Hough transform for straight lines. The Hough parameter space is quantized into a finite number of intervals, or accumulator cells. As the algorithm runs, each pixel coordinate P(x, y) is converted into a curve of (r, theta) points, and the corresponding accumulator cells are incremented. When a peak appears in the accumulator, a straight line exists. The same principle can be used to detect circles; only the parameter equation changes, becoming the equation of a circle:

(x - a)^2 + (y - b)^2 = r^2

where (a, b) is the center of the circle and r is its radius. The Hough parameter space then becomes three-dimensional. If the circle radius is given in advance, the problem reduces to a two-dimensional Hough parameter space, which is simpler and more commonly used.
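The code in this article only covers line detection. As a rough sketch of the fixed-radius circle case described above (the class and parameter names here are my own, not from the original source), the accumulation step could look like this:

import java.awt.image.BufferedImage;

public class HoughCircleSketch {
    // Minimal sketch: detect circles of a known radius by letting every
    // foreground pixel (x, y) vote for all possible centers
    // a = x - r*cos(t), b = y - r*sin(t). A peak in the accumulator marks
    // a likely circle center.
    public static int[][] accumulate(BufferedImage binary, int radius, int steps) {
        int width = binary.getWidth();
        int height = binary.getHeight();
        int[][] acc = new int[height][width]; // accumulator over center coordinates (b, a)
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                if ((binary.getRGB(x, y) & 0xff) == 0) continue; // skip black background pixels
                for (int i = 0; i < steps; i++) {
                    double t = 2 * Math.PI * i / steps;
                    int a = (int) Math.round(x - radius * Math.cos(t));
                    int b = (int) Math.round(y - radius * Math.sin(t));
                    if (a >= 0 && a < width && b >= 0 && b < height) {
                        acc[b][a]++;
                    }
                }
            }
        }
        return acc;
    }
}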
Analysis of the programming approach:
1. Read a pre-processed binary image; the background should preferably be black.
2. Obtain the source pixel data.
3. Perform the Hough transform according to the line equation and preview the result in Hough space.
4. Find the maximum Hough value, set a threshold, and convert back to the RGB pixel space of the image (one of the tricky parts of the program).
5. Handle out-of-bounds cases and display the image produced by the Hough transform (see the usage sketch just after this list).
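Assuming the HoughLineFilter class listed at the end of this article, and using hypothetical input and output file names, a minimal driver that follows these five steps might look like this:

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import com.gloomyfish.image.transform.HoughLineFilter;

public class HoughLineDemo {
    public static void main(String[] args) throws Exception {
        // steps 1-2: read a pre-processed binary image (black background)
        BufferedImage src = ImageIO.read(new File("binary-edges.png")); // hypothetical input path
        // steps 3-5: run the Hough line filter; it builds the Hough space,
        // thresholds the peaks, and draws the detected lines back into the image
        HoughLineFilter filter = new HoughLineFilter();
        filter.setThreshold(0.5f); // keep peaks above 50% of the maximum Hough value
        BufferedImage dest = filter.filter(src, null);
        ImageIO.write(dest, "png", new File("hough-lines.png"));
    }
}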
Key code analysis:
The line angle ranges over [0, PI]. Dividing this range into 500 equal steps gives an interval of PI/500. According to the line parameter equation, the value of r ranges over [-R, R], which leads to the following Hough parameter definitions:
// prepare for hough transform
int centerX = width / 2;
int centerY = height / 2;
double hough_interval = PI_VALUE / (double) hough_space;
int max = Math.max(width, height);
int max_length = (int) (Math.sqrt(2.0D) * max);
hough_1d = new int[2 * hough_space * max_length];
The code for converting from pixel RGB space to Hough space is:
// start hough transform now....
int[][] image_2d = convert1Dto2D(inPixels);
for (int row = 0; row < height; row++) {
    for (int col = 0; col < width; col++) {
        int p = image_2d[row][col] & 0xff;
        if (p == 0) continue; // which means background color
        // since we do not know the theta angle and r value,
        // we have to calculate all of hough space for each pixel point,
        // then we get the most likely theta and r pair.
        // r = x * cos(theta) + y * sin(theta)
        for (int cell = 0; cell < hough_space; cell++) {
            max = (int) ((col - centerX) * Math.cos(cell * hough_interval)
                    + (row - centerY) * Math.sin(cell * hough_interval));
            max += max_length; // start from zero, not (-max_length)
            if (max < 0 || (max >= 2 * max_length)) { // make sure r does not fall out of scope [0, 2*max_length]
                continue;
            }
            hough_2d[cell][max] += 1;
        }
    }
}
The code for finding the maximum Hough value, applying the threshold, and converting back to pixel space is as follows:
// find the max hough value
int max_hough = 0;
for (int i = 0; i < hough_space; i++) {
    for (int j = 0; j < 2 * max_length; j++) {
        hough_1d[i + j * hough_space] = hough_2d[i][j];
        if (hough_2d[i][j] > max_hough) {
            max_hough = hough_2d[i][j];
        }
    }
}

// transfer back to image pixels space from hough parameter space
int hough_threshold = (int) (threshold * max_hough);
for (int row = 0; row < hough_space; row++) {
    for (int col = 0; col < 2 * max_length; col++) {
        if (hough_2d[row][col] < hough_threshold) // discard it
            continue;
        int hough_value = hough_2d[row][col];
        boolean isLine = true;
        // non-maximum suppression: keep the cell only if it is a local peak
        // in its 3x3 neighborhood
        for (int i = -1; i < 2; i++) {
            for (int j = -1; j < 2; j++) {
                if (i != 0 || j != 0) {
                    int yf = row + i;
                    int xf = col + j;
                    if (xf < 0) continue;
                    if (xf < 2 * max_length) {
                        if (yf < 0) {
                            yf += hough_space;
                        }
                        if (yf >= hough_space) {
                            yf -= hough_space;
                        }
                        if (hough_2d[yf][xf] <= hough_value) {
                            continue;
                        }
                        isLine = false;
                        break;
                    }
                }
            }
        }
        if (!isLine) continue;
        // transform back to pixel data now...
        double dy = Math.sin(row * hough_interval);
        double dx = Math.cos(row * hough_interval);
        if ((row <= hough_space / 4) || (row >= 3 * hough_space / 4)) {
            for (int subrow = 0; subrow < height; ++subrow) {
                int subcol = (int) ((col - max_length - ((subrow - centerY) * dy)) / dx) + centerX;
                if ((subcol < width) && (subcol >= 0)) {
                    image_2d[subrow][subcol] = -16776961; // mark line pixel in blue (ARGB 0xFF0000FF)
                }
            }
        } else {
            for (int subcol = 0; subcol < width; ++subcol) {
                int subrow = (int) ((col - max_length - ((subcol - centerX) * dx)) / dy) + centerY;
                if ((subrow < height) && (subrow >= 0)) {
                    image_2d[subrow][subcol] = -16776961; // mark line pixel in blue (ARGB 0xFF0000FF)
                }
            }
        }
    }
}
The source image for the Hough transform is shown below:
After the Hough transform, the Hough space looks as follows (white indicates a detected straight-line signal):
The final result of converting back to pixel space is shown below:
A better line-detection result (the input is a binary image):
The source code of the Hough line transform filter is as follows:
package com.gloomyfish.image.transform;

import java.awt.image.BufferedImage;
import com.process.blur.study.AbstractBufferedImageOp;

public class HoughLineFilter extends AbstractBufferedImageOp {

    public final static double PI_VALUE = Math.PI;
    private int hough_space = 500;
    private int[] hough_1d;
    private int[][] hough_2d;
    private int width;
    private int height;
    private float threshold;
    private float scale;
    private float offset;

    public HoughLineFilter() {
        // default hough transform parameters
        threshold = 0.5f;
        scale = 1.0f;
        offset = 0.0f;
    }

    public void setHoughSpace(int space) {
        this.hough_space = space;
    }

    public float getThreshold() {
        return threshold;
    }

    public void setThreshold(float threshold) {
        this.threshold = threshold;
    }

    public float getScale() {
        return scale;
    }

    public void setScale(float scale) {
        this.scale = scale;
    }

    public float getOffset() {
        return offset;
    }

    public void setOffset(float offset) {
        this.offset = offset;
    }

    @Override
    public BufferedImage filter(BufferedImage src, BufferedImage dest) {
        width = src.getWidth();
        height = src.getHeight();
        if (dest == null)
            dest = createCompatibleDestImage(src, null);
        int[] inPixels = new int[width * height];
        int[] outPixels = new int[width * height];
        getRGB(src, 0, 0, width, height, inPixels);
        houghTransform(inPixels, outPixels);
        setRGB(dest, 0, 0, width, height, outPixels);
        return dest;
    }

    private void houghTransform(int[] inPixels, int[] outPixels) {
        // prepare for hough transform
        int centerX = width / 2;
        int centerY = height / 2;
        double hough_interval = PI_VALUE / (double) hough_space;
        int max = Math.max(width, height);
        int max_length = (int) (Math.sqrt(2.0D) * max);
        hough_1d = new int[2 * hough_space * max_length];
        // define temp hough 2D array and initialize the hough 2D
        hough_2d = new int[hough_space][2 * max_length];
        for (int i = 0; i < hough_space; i++) {
            for (int j = 0; j < 2 * max_length; j++) {
                hough_2d[i][j] = 0;
            }
        }
        // ... the remainder of houghTransform() is the key code shown above:
        // the pixel-to-Hough accumulation loop, the maximum/threshold search
        // with non-maximum suppression, and the drawing of the detected lines
        // back into pixel space ...
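The convert1Dto2D helper used in the key code is not included in the listing above. A minimal sketch of what such a helper might look like, assuming inPixels stores the image row by row, is:

    // Hypothetical helper (not from the original listing): unpack a row-major
    // 1D pixel array into a [row][col] 2D array using the class fields width/height.
    private int[][] convert1Dto2D(int[] pixels) {
        int[][] image_2d = new int[height][width];
        int index = 0;
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                image_2d[row][col] = pixels[index++];
            }
        }
        return image_2d;
    }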