OpenMP Parallel Programming Application: Accelerating the OpenCV Image Stitching Algorithm


OpenMP is a parallel programming approach for multiprocessor and multicore machines that provides a high-level abstraction of parallelism. Simply by adding compiler directives to a program, you can write efficient parallel code without worrying about the detailed parallel implementation, which reduces the difficulty and complexity of parallel programming. The flip side of this ease of use is that OpenMP is not well suited to situations that require complex thread synchronization and mutual exclusion.
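
For illustration only (this snippet is not part of the original article), here is a minimal, self-contained sketch of what "adding simple directives" looks like; the array names and loop body are invented for the example:

#include <omp.h>
#include <cstdio>

int main() {
    const int n = 1000000;
    static double a[1000000], b[1000000];

    // One directive is enough: OpenMP splits the loop iterations across threads.
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        b[i] = 2.0 * a[i];
    }

    printf("ran with up to %d threads\n", omp_get_max_threads());
    return 0;
}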


Image stitching with SIFT or SURF features in OpenCV requires feature extraction and feature description for two or more images, followed by feature-point matching, image transformation, and other operations. Extracting and describing the features of the different images is the most time-consuming part of the whole pipeline, and since each image is processed independently, this step can be accelerated with OpenMP.


Here is the original program for SIFT image stitching that does not use OpenMP acceleration:

#include "highgui/highgui.hpp" #include "opencv2/nonfree/nonfree.hpp" #include "opencv2/legacy/legacy.hpp" #includ E "Omp.h" using namespace cv;//calculates the original image point position point2f gettransformpoint (const point2f originalpoint, const, after matrix transformation) on the target image Mat &transformmaxtri); int main (int argc, char *argv[]) {Float startTime = omp_get_wtime (); Mat image01 = Imread ("test01.jpg"); Mat image02 = Imread ("test02.jpg"), imshow ("stitching image 1", image01), imshow ("stitching image 2", image02);//Grayscale conversion Mat image1, image2; Cvtcolor (IMAGE01, Image1, Cv_rgb2gray); Cvtcolor (Image02, Image2, Cv_rgb2gray);//extract feature points Siftfeaturedetector  Siftdetector (800); Sea slug matrix threshold vector<keypoint> keyPoint1, Keypoint2;siftdetector.detect (Image1, keyPoint1); Siftdetector.detect ( Image2, KeyPoint2);//feature point description narrative, for the below feature point matching to prepare Siftdescriptorextractor siftdescriptor; Mat ImageDesc1, Imagedesc2;siftdescriptor.compute (Image1, KeyPoint1, IMAGEDESC1); Siftdescriptor.compute (Image2, KeyPoint2, IMAGEDESC2); float endTime = Omp_get_wtime (); Std::cout << "Do not use OPenmp accelerated consumption time: << endtime-starttime << std::endl;//get matching feature points. And extract the optimal pairing Flannbasedmatcher matcher;vector<dmatch> matchepoints;matcher.match (IMAGEDESC1, IMAGEDESC2, Matchepoints, Mat ()); Sort (Matchepoints.begin (), Matchepoints.end ()); Feature point ordering//getting the best matching feature points ranked in the top n vector<point2f> imagePoints1, imagepoints2;for (int i = 0; i < x i++) {imagepoints 1.push_back (keypoint1[matchepoints[i].queryidx].pt); Imagepoints2.push_back (Keypoint2[matchepoints[i].trainidx] . PT);} Get the projection mapping matrix for image 1 to Image 2, size 3*3 Mat Homo = findhomography (ImagePoints1, imagePoints2, CV_RANSAC); Mat Adjustmat = (mat_<double> (3, 3) << 1.0, 0, Image01.cols, 0, 1.0, 0, 0, 0, 1.0); Mat Adjusthomo = adjustmat*homo;//Get the strongest pairing point in the original image and the matrix transform the corresponding position on the image, used for image stitching point positioning point2f originallinkpoint, Targetlinkpoint, Basedimagepoint;originallinkpoint = Keypoint1[matchepoints[0].queryidx].pt;targetlinkpoint = GetTransformPoint ( Originallinkpoint, Adjusthomo); basedimagepoint = keypoint2[matchepoints[0].trainidx].pt;//Image Registration Mat imagetransform1;warpperspective (image01, ImageTransform1, Adjustmat*homo, Size (image02.cols + image01.cols + 110, image02.rows), or//in the overlapping area on the left side of the strongest match point to accumulate. is a stable transition of cohesion. 
Eliminate mutant Mat Image1overlap, Image2overlap; The overlapping portions of Figure 1 and figure 2 Image1overlap = ImageTransform1 (Rect (targetlinkpoint.x-basedimagepoint.x, 0), point (Targetlinkpo Int.x, Image02.rows)); Image2overlap = Image02 (Rect (0, 0, Image1overlap.cols, image1overlap.rows));  Mat image1roicopy = Image1overlap.clone (); Copy the overlapping portion of Figure 1 for (int i = 0; i < image1overlap.rows; i++) {for (int j = 0; J < Image1overlap.cols; J + +) {Double Weig  Ht;weight = (double) j/image1overlap.cols; Image1overlap.at<vec3b> (i, J) [0] = (1-weight) *image1roicopy.at<vec3b> (i, J) [0] + weight*, with change in distance Image2overlap.at<vec3b> (i, J) [0];image1overlap.at<vec3b> (I, J) [1] = (1-weight) *image1roicopy.at< Vec3b> (i, J) [1] + weight*image2overlap.at<vec3b> (i, J) [1];image1overlap.at<vec3b> (I, J) [2] = (1-weight ) *image1roicopy.At<vec3b> (i, J) [2] + weight*image2overlap.at<vec3b> (i, J) [2];}}  Mat Roimat = Image02 (Rect (Point (image1overlap.cols, 0), point (Image02.cols, image02.rows))); The non-coincident part of Figure 2 Roimat.copyto (Mat (ImageTransform1, Rect (targetlinkpoint.x, 0, Roimat.cols, image02.rows))); The non-coincident parts are directly connected to the Namedwindow ("stitching results", 0); Imshow ("Stitching results", ImageTransform1) imwrite ("d:\\ stitching results. jpg", imageTransform1); Waitkey (); return 0;} Calculates the original image point position point2f gettransformpoint (const point2f originalpoint, const Mat &transformmaxtri) on the target image after the matrix transformation Mat originelp, Targetp;originelp = (mat_<double> (3, 1) << originalpoint.x, ORIGINALPOINT.Y, 1.0); TARGETP = tr Ansformmaxtri*originelp;float x = targetp.at<double> (0, 0)/targetp.at<double> (2, 0); float y = targetP.at& Lt;double> (1, 0)/targetp.at<double> (2, 0); return point2f (x, y);}


Image one:



Image two:



Stitching results:



Without OpenMP acceleration, the average running time on my machine is 4.7 s.


Using OpenMP is also very easy, because Visual Studio has built-in support for it. Right-click the project, open Properties -> Configuration Properties -> C/C++ -> Language -> OpenMP Support, and set it to Yes:



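If you want to confirm that the compiler really built the program with OpenMP enabled, a small check such as the sketch below can help; the _OPENMP macro and omp_get_max_threads() are standard OpenMP, while the printed messages are just for illustration:

#include <iostream>
#ifdef _OPENMP
#include <omp.h>
#endif

int main() {
#ifdef _OPENMP
    // _OPENMP is defined by the compiler when OpenMP support is enabled
    // (e.g. /openmp in Visual Studio, -fopenmp with GCC/Clang).
    std::cout << "OpenMP enabled, version macro: " << _OPENMP
              << ", max threads: " << omp_get_max_threads() << std::endl;
#else
    std::cout << "OpenMP is NOT enabled for this build." << std::endl;
#endif
    return 0;
}
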
After that, include OpenMP's header file "omp.h" in the program:

#include "highgui/highgui.hpp" #include "opencv2/nonfree/nonfree.hpp" #include "opencv2/legacy/legacy.hpp" #includ E "Omp.h" using namespace cv;//calculates the original image point position point2f gettransformpoint (const point2f originalpoint, const, after matrix transformation) on the target image Mat &transformmaxtri); int main (int argc, char *argv[]) {Float startTime = omp_get_wtime (); Mat image01, IMAGE02; Mat Image1, image2;vector<keypoint> keyPoint1, KeyPoint2; Mat ImageDesc1, IMAGEDESC2;  Siftfeaturedetector Siftdetector (800); Sea slug matrix threshold siftdescriptorextractor siftdescriptor;//using OpenMP sections guidance commands to turn on multithreading #pragma omp parallel sections {#pragma Omp section {image01 = Imread ("test01.jpg") imshow ("Stitching image 1", image01);//Grayscale Conversion cvtcolor (IMAGE01, Image1, Cv_rgb2gray);// Extracting feature points siftdetector.detect (Image1, keyPoint1);//feature point descriptive narrative. Prepare Siftdescriptor.compute (Image1, KeyPoint1, IMAGEDESC1) for the feature points matching below.} #pragma omp section {image02 = Imread ("test02.jpg"), imshow ("stitching image 2", IMAGE02), Cvtcolor (Image02, Image2, Cv_rgb2gray); Siftdetector.detect (Image2, KeypoiNT2); Siftdescriptor.compute (Image2, KeyPoint2, IMAGEDESC2);}} float endTime = omp_get_wtime () std::cout << "use OpenMP for accelerated consumption time:" << endtime-starttime << std::endl;//get Match feature points. And extract the optimal pairing Flannbasedmatcher matcher;vector<dmatch> matchepoints;matcher.match (IMAGEDESC1, IMAGEDESC2, Matchepoints, Mat ()); Sort (Matchepoints.begin (), Matchepoints.end ()); Feature point ordering//getting the best matching feature points ranked in the top n vector<point2f> imagePoints1, imagepoints2;for (int i = 0; i < x i++) {imagepoints 1.push_back (keypoint1[matchepoints[i].queryidx].pt); Imagepoints2.push_back (Keypoint2[matchepoints[i].trainidx] . PT);} Gets the projection mapping matrix for image 1 to Image 2. Dimensions are 3*3 Mat homo = findhomography (ImagePoints1, imagePoints2, CV_RANSAC); Mat Adjustmat = (mat_<double> (3, 3) << 1.0, 0, Image01.cols, 0, 1.0, 0, 0, 0, 1.0); Mat Adjusthomo = adjustmat*homo;//Gets the position of the strongest pairing point on the image after the original image and Matrix transform. For image Stitching point positioning point2f originallinkpoint, targetlinkpoint, basedimagepoint;originallinkpoint = keypoint1[matchepoints[0] . 
queryidx].pt;targetlinkpoint = Gettransformpoint (Originallinkpoint, adjusthomo); basedimagepoint = keypoint2[matchepoints[0].trainidx].pt;//Image Registration Mat imagetransform1;warpperspective (image01, ImageTransform1, Adjustmat*homo, Size (image02.cols + image01.cols + 110, image02.rows);//in the overlap area on the left side of the strongest match point is accumulated, is a stable transition of cohesion, elimination of mutant Mat Image1overlap, Image2overlap; The overlapping portions of Figure 1 and figure 2 Image1overlap = ImageTransform1 (Rect (targetlinkpoint.x-basedimagepoint.x, 0), point (Targetlinkpo Int.x, Image02.rows)); Image2overlap = Image02 (Rect (0, 0, Image1overlap.cols, image1overlap.rows));  Mat image1roicopy = Image1overlap.clone (); Copy the overlapping portion of Figure 1 for (int i = 0; i < image1overlap.rows; i++) {for (int j = 0; J < Image1overlap.cols; J + +) {double weigh  T;weight = (double) j/image1overlap.cols; Image1overlap.at<vec3b> (i, J) [0] = (1-weight) *image1roicopy.at<vec3b> (i, J) [0] + weight*, with change in distance Image2overlap.at<vec3b> (i, J) [0];image1overlap.at<vec3b> (I, J) [1] = (1-weight) *image1roicopy.at< Vec3b> (i, J) [1] + Weight*image2overlap.at<vec3b> (i, J) [1];image1overlap.at<vec3b> (I, J) [2] = (1-weight) *image1ROICopy.at <Vec3b> (i, J) [2] + weight*image2overlap.at<vec3b> (i, J) [2];}}  Mat Roimat = Image02 (Rect (Point (image1overlap.cols, 0), point (Image02.cols, image02.rows))); The non-coincident part of Figure 2 Roimat.copyto (Mat (ImageTransform1, Rect (targetlinkpoint.x, 0, Roimat.cols, image02.rows))); The non-coincident parts are directly connected to the Namedwindow ("stitching results", 0); Imshow ("Stitching results", ImageTransform1) imwrite ("d:\\ stitching results. jpg", imageTransform1); Waitkey (); return 0;} Calculates the original image point position point2f gettransformpoint (const point2f originalpoint, const Mat &transformmaxtri) on the target image after the matrix transformation Mat originelp, Targetp;originelp = (mat_<double> (3, 1) << originalpoint.x, ORIGINALPOINT.Y, 1.0); TARGETP = tr Ansformmaxtri*originelp;float x = targetp.at<double> (0, 0)/targetp.at<double> (2, 0); float y = targetP.at& Lt;double> (1, 0)/targetp.at<double> (2, 0); return point2f (x, y);}


In OpenMP, the for directive distributes the iterations of a loop across threads, while the sections directive distributes independent, non-iterative tasks: each #pragma omp section block is executed by its own thread.
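
The following self-contained sketch (not from the original article) contrasts the two directives; the task functions and loop are invented for illustration:

#include <omp.h>
#include <cstdio>

void taskA() { printf("task A on thread %d\n", omp_get_thread_num()); }
void taskB() { printf("task B on thread %d\n", omp_get_thread_num()); }

int main() {
    int sum = 0;

    // Iterative work: the for directive splits the iterations among threads.
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < 100; i++) {
        sum += i;
    }

    // Non-iterative work: each section is executed by one thread.
    #pragma omp parallel sections
    {
        #pragma omp section
        taskA();

        #pragma omp section
        taskB();
    }

    printf("sum = %d\n", sum);
    return 0;
}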

In the program above, this is equivalent to two threads performing the feature extraction and description of the two images in parallel. The average time with OpenMP is 2.5 s, nearly doubling the speed.


