[Repost] An understanding of image processing convolution

Source: Internet
Author: User


One: What is convolution

The mathematical formula for discrete convolution can be expressed (in the correlation form commonly used in image processing) as:

f(x) = Σ_k c(k) · g(x + k)

where c(k) represents the convolution operand, g(i) represents the sample data, and f(x) represents the output result.

Examples are as follows:

Suppose g(i) is a one-dimensional function whose samples are g = [1, 2, 3, 4, 5, 6, 7, 8, 9]

Suppose c(k) is a one-dimensional convolution operand, c = [-1, 0, 1]

The output f(x) can then be represented as f = [1, 2, 2, 2, 2, 2, 2, 2, 1] // boundary data not processed
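As a minimal check of the example above, the 1-D case can be sketched in Java (class and variable names are my own; unlike the output listed above, the unprocessed boundary slots are simply left at 0 here):

```java
public class Conv1D {
    // Correlate samples g with a 3-element operand c; the boundary
    // entries are left unprocessed (they stay 0).
    static int[] convolve(int[] g, int[] c) {
        int[] f = new int[g.length];
        for (int i = 1; i < g.length - 1; i++) {
            f[i] = c[0] * g[i - 1] + c[1] * g[i] + c[2] * g[i + 1];
        }
        return f;
    }

    public static void main(String[] args) {
        int[] g = {1, 2, 3, 4, 5, 6, 7, 8, 9};
        int[] c = {-1, 0, 1};
        // every interior sample yields g[i+1] - g[i-1] = 2
        System.out.println(java.util.Arrays.toString(convolve(g, c)));
    }
}
```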

The above covers only the one-dimensional case. For convolution of a two-dimensional digital image, the mathematical meaning can be explained as follows:

The source image serves as the input data, the processed image is the convolution output, and the convolution operand acts as a filter.

A convolution operation is performed on each pixel of the source image in both the x and y directions:

Each time the pink square advances one pixel along x or y, a new output pixel is generated; the dark blue square represents the output pixel currently being produced. Once every pixel square has been visited, all output pixels have been obtained.

In the figure, the pink matrix represents the convolution operand matrix, and black represents the source image – each square represents one pixel.

Two: Convolution in digital image processing applications

A digital image can be regarded as a discrete function over two-dimensional space, expressed as f(x, y). Given a two-dimensional convolution operand c(u, v), the output image is g(x, y) = f(x, y) * c(u, v). Convolution can be used to blur an image, detect edges, or produce an emboss effect.
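A generic 2-D version of g(x, y) = f(x, y) * c(u, v) can be sketched as follows. The three example operands (box blur, Laplacian edge detect, emboss) are standard textbook values for the effects mentioned, not taken from this article:

```java
public class Conv2D {
    // Apply a 3x3 operand c to a grayscale image f, skipping the 1-pixel border.
    static double[][] convolve(double[][] f, double[][] c) {
        int h = f.length, w = f[0].length;
        double[][] g = new double[h][w];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                double sum = 0;
                for (int v = -1; v <= 1; v++) {
                    for (int u = -1; u <= 1; u++) {
                        sum += c[v + 1][u + 1] * f[y + v][x + u];
                    }
                }
                g[y][x] = sum;
            }
        }
        return g;
    }

    // Standard 3x3 operands for the three effects mentioned above.
    static final double[][] BOX_BLUR = {
        {1 / 9.0, 1 / 9.0, 1 / 9.0},
        {1 / 9.0, 1 / 9.0, 1 / 9.0},
        {1 / 9.0, 1 / 9.0, 1 / 9.0}};
    static final double[][] EDGE = {{0, -1, 0}, {-1, 4, -1}, {0, -1, 0}};
    static final double[][] EMBOSS = {{-2, -1, 0}, {-1, 1, 1}, {0, 1, 2}};
}
```

On a uniform image the edge operand produces 0 everywhere (no edges), while the box blur reproduces the uniform value, which is a quick way to sanity-check an implementation.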

A simple digital image convolution process can be as follows:

1. Reading the source image pixels

2. Applying the convolution operand matrix to produce the target image

3. Normalization of the target image

4. Processing boundary pixels
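Step 1 can be done with java.awt.image.BufferedImage. A sketch (the helper name is my own) using a [row][col][channel] layout with indices 1–3 for R, G, B, mirroring the in3ddata array used in the code below:

```java
import java.awt.image.BufferedImage;

public class PixelReader {
    // Step 1: read source pixels into [row][col][channel], channels 1-3 = R, G, B (0 = alpha).
    static int[][][] readPixels(BufferedImage img) {
        int h = img.getHeight(), w = img.getWidth();
        int[][][] in3ddata = new int[h][w][4];
        for (int row = 0; row < h; row++) {
            for (int col = 0; col < w; col++) {
                int argb = img.getRGB(col, row);              // note: getRGB takes (x, y)
                in3ddata[row][col][0] = (argb >>> 24) & 0xFF; // alpha
                in3ddata[row][col][1] = (argb >>> 16) & 0xFF; // red
                in3ddata[row][col][2] = (argb >>> 8) & 0xFF;  // green
                in3ddata[row][col][3] = argb & 0xFF;          // blue
            }
        }
        return in3ddata;
    }
}
```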

Three: A pure Java convolution image blur effect

Four: Key code interpretation

The code that performs the convolution calculation for each pixel's RGB channels is as follows:

Red channel:

out3ddata[row][col][1] = in3ddata[row][col][1] +
        in3ddata[row - 1][col][1] +
        in3ddata[row + 1][col][1] +
        in3ddata[row][col - 1][1] +
        in3ddata[row - 1][col - 1][1] +
        in3ddata[row + 1][col - 1][1] +
        in3ddata[row][col + 1][1] +
        in3ddata[row - 1][col + 1][1] +
        in3ddata[row + 1][col + 1][1];

Green channel:

out3ddata[row][col][2] = in3ddata[row][col][2] +
        in3ddata[row - 1][col][2] +
        in3ddata[row + 1][col][2] +
        in3ddata[row][col - 1][2] +
        in3ddata[row - 1][col - 1][2] +
        in3ddata[row + 1][col - 1][2] +
        in3ddata[row][col + 1][2] +
        in3ddata[row - 1][col + 1][2] +
        in3ddata[row + 1][col + 1][2];

Blue channel:

out3ddata[row][col][3] = in3ddata[row][col][3] +
        in3ddata[row - 1][col][3] +
        in3ddata[row + 1][col][3] +
        in3ddata[row][col - 1][3] +
        in3ddata[row - 1][col - 1][3] +
        in3ddata[row + 1][col - 1][3] +
        in3ddata[row][col + 1][3] +
        in3ddata[row - 1][col + 1][3] +
        in3ddata[row + 1][col + 1][3];
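The nine unrolled terms per channel are simply the sum of the 3x3 neighborhood, i.e. an all-ones operand (an unnormalized box blur). The same arithmetic can be written as a loop (a compact sketch; the helper name is my own):

```java
public class NeighborhoodSum {
    // Sum the 3x3 neighborhood of (row, col) for one channel; equivalent to the
    // nine-term expressions above.
    static int sum3x3(int[][][] in3ddata, int row, int col, int ch) {
        int sum = 0;
        for (int dr = -1; dr <= 1; dr++) {
            for (int dc = -1; dc <= 1; dc++) {
                sum += in3ddata[row + dr][col + dc][ch];
            }
        }
        return sum;
    }
}
```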

The code for calculating the normalization factor and normalizing the convolution results is as follows:

// Find the peak values in the input and output pixel data.
int inpeak = 0;
int outpeak = 0;
for (int row = 0; row < srch; row++) {
    for (int col = 0; col < srcw; col++) {
        if (inpeak < in3ddata[row][col][1]) {
            inpeak = in3ddata[row][col][1];
        }
        if (inpeak < in3ddata[row][col][2]) {
            inpeak = in3ddata[row][col][2];
        }
        if (inpeak < in3ddata[row][col][3]) {
            inpeak = in3ddata[row][col][3];
        }
        if (outpeak < out3ddata[row][col][1]) {
            outpeak = out3ddata[row][col][1];
        }
        if (outpeak < out3ddata[row][col][2]) {
            outpeak = out3ddata[row][col][2];
        }
        if (outpeak < out3ddata[row][col][3]) {
            outpeak = out3ddata[row][col][3];
        }
    }
}

// Normalization
double outputscale = ((double) inpeak) / ((double) outpeak);
for (int row = 0; row < srch; row++) {
    for (int col = 0; col < srcw; col++) {
        out3ddata[row][col][1] = (int) (outputscale * out3ddata[row][col][1]);
        out3ddata[row][col][2] = (int) (outputscale * out3ddata[row][col][2]);
        out3ddata[row][col][3] = (int) (outputscale * out3ddata[row][col][3]);
    }
}
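As a sanity check on the scale factor: with an all-ones 3x3 operand, each output is the sum of nine samples, so on a uniform image the output peak is nine times the input peak and the scale reduces to 1/9 – exactly the box-blur mean. A worked check (the numbers are my own illustration):

```java
public class ScaleCheck {
    // Same formula as the normalization code above.
    static double outputScale(int inpeak, int outpeak) {
        return ((double) inpeak) / ((double) outpeak);
    }

    public static void main(String[] args) {
        // uniform 255 input: every 3x3 sum is 9 * 255 = 2295, so the scale is 255/2295 = 1/9
        System.out.println(outputScale(255, 2295));
    }
}
```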

Five: Not covered in this article – boundary pixel processing

Besides simply leaving the edge pixels unprocessed, there are two common ways to handle them.

One is the direct fill method – the portion of the neighborhood that extends beyond the boundary is padded with the boundary pixel itself.

The second is linear interpolation – the out-of-boundary portion is padded with values interpolated from the nearby rows/columns of pixels.
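The direct fill method can be implemented by clamping indices, so any read past the edge replicates the nearest boundary pixel (a common technique; the sketch and names are my own):

```java
public class EdgeClamp {
    // Clamp an index into [0, size - 1]; out-of-boundary reads then replicate the edge pixel.
    static int clamp(int i, int size) {
        return Math.max(0, Math.min(size - 1, i));
    }

    // Safe neighborhood read using clamped indices.
    static int sample(int[][][] in3ddata, int row, int col, int ch) {
        int h = in3ddata.length, w = in3ddata[0].length;
        return in3ddata[clamp(row, h)][clamp(col, w)][ch];
    }
}
```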

