OpenCV operations on pixels

First, access the pixel value

You can access an element with cv::Mat's at() method. Because a cv::Mat can hold elements of any type, at() is implemented as a template function, so you must specify the element type of the image when calling it:

image.at<uchar>(j,i) = 0;
// Or, for a color image
image.at<cv::Vec3b>(j,i)[channel] = 0;

The channel index selects one of the three channels. Because a color image has 3 channels, accessing a pixel of a color image returns a vector. OpenCV defines this short vector as cv::Vec3b, which holds three 8-bit values. Vectors exist for other element types too, for example cv::Vec3f for floating point. For int the last letter becomes i, for short it becomes s, and for double it becomes d.

To avoid the verbosity of the at() method, you can use cv::Mat's template subclass cv::Mat_, which accesses elements directly through operator():

cv::Mat_<uchar> im(image);
im(j,i) = 0;
// Or, for color images
cv::Mat_<cv::Vec3b> im(image);
im(j,i)[channel] = 0;

It is worth mentioning that the at() method is suited to random access to individual pixels. When it comes to traversing all pixels, performance matters, and you should use a more efficient approach.

Second, use a pointer to traverse the pixels

int nl = image.rows;
int nc = image.cols * image.channels();
for (int j = 0; j < nl; j++) {
    uchar* pt = image.ptr<uchar>(j);
    for (int i = 0; i < nc; i++) {
        // some operation on the pixel
        pt[i] = 0;
    }
}

If you understand how pixel data is ordered in a color image, the above code is easy to follow. The first 3 bytes of the image data buffer represent the three color channels of the upper-left pixel, the next 3 bytes are the second pixel of the first row, and so on.

For a continuous image, the above code can be rewritten as a single for loop, because the pixel data of the entire image is stored at contiguous addresses:

uchar* pt = image.ptr<uchar>(0);
for (int i = 0; i < image.rows * image.cols * image.channels(); i++) {
    // some operations on pixels
    pt[i] = 0;
}

A concept needs clarifying here: what is a continuous image? A 3-channel image of width W and height H requires W x H x 3 uchars of memory. For performance reasons, however, rows are sometimes padded with a few extra bytes, because some chips process an image faster when the row length is a multiple of 4 or 8. That is why OpenCV's Mat data structure has a step attribute: literally the stride, better thought of as the effective row width in bytes. When the image is continuous, i.e. there are no padding bytes filling out the rows, the effective width equals the actual image width.

To check the continuity of the image, you can do this:

// check that the row length (in bytes) equals the number of columns x the bytes per pixel
if (image.step == image.cols * image.elemSize()) { ... }

(cv::Mat also provides the isContinuous() method, which performs this check for you.)
Third, iterate through the pixels with an iterator

cv::Mat_<uchar>::iterator it = image.begin<uchar>();
cv::Mat_<uchar>::iterator itend = image.end<uchar>();
for (; it != itend; it++) {
    // some operations on pixels
    (*it) = 0;
}

The iterator can also be declared as: cv::MatIterator_<uchar> it;

The main purpose of using iterators is to simplify the code and reduce the likelihood of errors; tests show that their time efficiency is worse than pointer access.

Fourth, pixel traversal with a neighborhood or block operation

Assume here that you need to access the current pixel together with its adjacent pixels above and below.

int nchannels = image.channels();
// handle all rows except the first and the last
for (int j = 1; j < image.rows-1; j++) {
    uchar* previous = image.ptr<uchar>(j-1);  // previous line
    uchar* current  = image.ptr<uchar>(j);    // current line
    uchar* next     = image.ptr<uchar>(j+1);  // next line
    // skip the first and the last column
    for (int i = nchannels; i < (image.cols-1) * nchannels; i++) {
        current[i] = 0;              // current pixel
        current[i-nchannels] = 0;    // adjacent left pixel
        current[i+nchannels] = 0;    // adjacent right pixel
        previous[i] = 0;             // adjacent pixel above
        next[i] = 0;                 // adjacent pixel below
    }
}

A computation over a pixel neighborhood is usually represented by a kernel matrix. The kernel size is the neighborhood size (such as 3x3), each cell of the kernel holds the multiplication factor for the corresponding pixel, and the result for a pixel is the sum of these products. The operation a kernel defines is a kernel operation (akin to convolution), and applying it is called filtering. OpenCV provides a function for this, cv::filter2D.

Fifth, operating on pixel values (filtering, blending, etc.)

Here two images are blended with weights; you can use the cv::addWeighted function:

cv::addWeighted(image1, 0.4, image2, 0.7, 0., result);


In OpenCV 2, most arithmetic functions have corresponding overloaded operators, so the same weighted blend can also be written as: result = 0.4*image1 + 0.7*image2;

Note that the addition does not cause the output pixel value to exceed 255, because these functions call cv::saturate_cast. This function is used in pixel operations to ensure the result stays within the valid pixel range. It is used like a C++ type conversion, for example:

result.at<uchar>(j,i) = cv::saturate_cast<uchar>(0.9*image1.at<uchar>(j,i) + 0.9*image2.at<uchar>(j,i));

Sixth, operating on pixel positions (remapping, warping, etc.)

This type of operation does not modify pixel values; instead it maps each pixel to a new position. You need OpenCV's cv::remap function.

// mapping parameters
cv::Mat srcx(image.rows, image.cols, CV_32F);
cv::Mat srcy(image.rows, image.cols, CV_32F);
// create the mapping parameters
for (int j = 0; j < image.rows; j++) {
    for (int i = 0; i < image.cols; i++) {
        srcx.at<float>(j, i) = i;                  // stay in the same column
        srcy.at<float>(j, i) = j + 7*sin(i/10.0);  // shift rows sinusoidally
    }
}
cv::Mat result;
cv::remap(image, result, srcx, srcy, cv::INTER_LINEAR);

To build the new image, you need to know, for each pixel of the new image, its original position in the source image. So a mapping function is needed that yields a pixel's original position from its new position; this is called reverse mapping. It is described by two mapping parameters, here srcx and srcy, one for the x coordinate and one for the y coordinate. Note that the parameters hold floating-point numbers, which means the reverse-mapped coordinates may not be integers; this requires pixel interpolation, and the last parameter of remap specifies the interpolation method.

