Learning OpenCV -- OpenMP


From: http://www.cnblogs.com/yangyangcv/archive/2012/03/23/2413335.html

 

Some OpenMP experience

Recently I have been reading about multi-core programming. To put it simply: computer CPUs now usually have at least two cores, and 4-core and 8-core CPUs have gradually entered ordinary homes, yet the traditional single-threaded programming style has difficulty exploiting the power of a multi-core CPU, so multi-core programming came into being. As I understand it, multi-core programming can be seen as a certain degree of abstraction over multi-threaded programming: it provides some simple APIs so that users do not have to spend much effort on the underlying details of threads, which improves programming efficiency. The multi-core programming tools I have focused on these two days are OpenMP and TBB. According to current discussion on the internet, TBB is set to supersede OpenMP; for example, OpenCV used OpenMP in the past, but since version 2.3 it has abandoned OpenMP and switched to TBB. However, TBB is comparatively complicated, while OpenMP is very easy to use. With limited time and energy I cannot spend too much of either on TBB, so here I share some of the OpenMP knowledge I have picked up over the past two days and discuss it with you.

OpenMP supports the C, C++, and Fortran programming languages. Compilers with OpenMP support include Sun Studio, the Intel compiler, Microsoft Visual Studio, and GCC. I am using Microsoft Visual Studio 2008, and the CPU is an Intel i5 quad-core. First, let's go over the OpenMP configuration in Microsoft Visual Studio 2008. It is very simple; there are two steps in total:

(1) Create a project. I will not say more about this.

(2) After creating the project, click "Project" > "Properties" in the menu bar, then choose "Configuration Properties" > "C/C++" > "Language" > "OpenMP Support" and select "Yes" from the drop-down menu.

Now the configuration is complete. Below is a small example that illustrates how easy OpenMP is to use. In this example there is a simple test() function, and main() uses a for loop to run test() eight times.

#include <iostream>
#include <time.h>

void test()
{
    int a = 0;
    for (int i = 0; i < 100000000; i++)
        a++;
}

int main()
{
    clock_t t1 = clock();
    for (int i = 0; i < 8; i++)
        test();
    clock_t t2 = clock();
    std::cout << "time: " << t2 - t1 << std::endl;
}

After compiling and running, the printed time is 1.971 seconds. Next, a single line converts the code above to multi-core execution.

#include <iostream>
#include <time.h>

void test()
{
    int a = 0;
    for (int i = 0; i < 100000000; i++)
        a++;
}

int main()
{
    clock_t t1 = clock();
#pragma omp parallel for
    for (int i = 0; i < 8; i++)
        test();
    clock_t t2 = clock();
    std::cout << "time: " << t2 - t1 << std::endl;
}

After compilation and running, the printed time is 0.546 seconds, which is almost 1/4 of the above time.

We can see that OpenMP is easy to use. In the code above we did not include any additional header file or link any additional library; we just added the single line #pragma omp parallel for before the for loop. Moreover, this code also compiles on a single-core machine, or on a machine where the compiler's OpenMP support is not set to "Yes": the compiler simply ignores the #pragma line, and the program compiles and runs in the traditional single-core serial mode. The only extra step we need is to copy vcomp90.dll from C:\Program Files\Microsoft Visual Studio 9.0\VC\redist\x86\Microsoft.VC90.OPENMP and vcomp90d.dll from C:\Program Files\Microsoft Visual Studio 9.0\VC\redist\Debug_NonRedist\x86\Microsoft.VC90.DebugOpenMP into the project's working directory.
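
You can also detect this fallback in code: every OpenMP-enabled compiler defines the standard _OPENMP macro, so a program can tell at compile time whether the pragmas are active. A minimal sketch of my own (not from the original article):

#include <iostream>
#ifdef _OPENMP
#include <omp.h>   // only available when the compiler's OpenMP support is on
#endif

int main()
{
#ifdef _OPENMP
    // _OPENMP expands to the release date (yyyymm) of the supported OpenMP spec
    std::cout << "OpenMP enabled, spec: " << _OPENMP << std::endl;
#else
    std::cout << "OpenMP disabled; #pragma omp lines are ignored, code runs serially" << std::endl;
#endif
    return 0;
}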

Here is a simple analysis of the code above, according to my understanding.

When the compiler encounters #pragma omp parallel for, it automatically divides the following for loop into N parts (where N is the number of CPU cores) and assigns each part to one core; the parts then execute in parallel across the cores. The following code verifies this analysis.


#include <iostream>

int main()
{
#pragma omp parallel for
    for (int i = 0; i < 10; i++)
        std::cout << i << std::endl;
    return 0;
}

The console prints 0 3 4 5 8 9 6 7 1 2. Note that because the cores execute in parallel, the output order may differ on each run.

Next let's talk about race conditions, the trickiest issue in all multi-threaded programming. The problem can be stated as follows: when multiple threads execute in parallel, several of them may read and write the same variable at the same time, producing unpredictable results. For example, in the following code, for an array a of 10 integer elements we use a for loop to compute the sum of its elements and store the result in the variable sum.


#include <iostream>

int main()
{
    int sum = 0;
    int a[10] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
#pragma omp parallel for
    for (int i = 0; i < 10; i++)
        sum = sum + a[i];
    std::cout << "sum: " << sum << std::endl;
    return 0;
}

If we comment out #pragma omp parallel for and let the program run in the traditional serial mode, then obviously sum = 55. After executing in parallel mode, however, sum becomes some other value; in one run, for example, sum = 49. The reason is that while thread A executes sum = sum + a[i], another thread B may be updating sum at the same time, so A accumulates onto a stale value of sum and the result is wrong.

So how do we sum an array in parallel with OpenMP? Here is a basic solution first. The idea is to create an array sumArray whose length is the number of threads executing in parallel (by default this equals the number of CPU cores); inside the for loop, each thread updates its own thread's element of sumArray, and afterwards the elements of sumArray are accumulated into sum. The code is as follows:


#include <iostream>
#include <omp.h>

int main()
{
    int sum = 0;
    int a[10] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    int coreNum = omp_get_num_procs();    // obtain the number of processors
    int* sumArray = new int[coreNum];     // one partial sum per processor
    for (int i = 0; i < coreNum; i++)     // initialize each element of the array to 0
        sumArray[i] = 0;
#pragma omp parallel for
    for (int i = 0; i < 10; i++)
    {
        int k = omp_get_thread_num();     // obtain the ID of the current thread
        sumArray[k] = sumArray[k] + a[i];
    }
    for (int i = 0; i < coreNum; i++)
        sum = sum + sumArray[i];
    std::cout << "sum: " << sum << std::endl;
    delete[] sumArray;                    // release the per-thread partial sums
    return 0;
}

Note that in the code above we use the omp_get_num_procs() function to obtain the number of processors and the omp_get_thread_num() function to obtain the ID of each thread; to use these two functions we need to include <omp.h>.
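
As an aside not covered in the original article, <omp.h> also provides control over how many threads OpenMP uses. A small sketch using the standard omp_set_num_threads() / omp_get_max_threads() functions and the num_threads clause:

#include <iostream>
#include <omp.h>

int main()
{
    omp_set_num_threads(2);              // request 2 threads for later parallel regions
    std::cout << "max threads: " << omp_get_max_threads() << std::endl;
#pragma omp parallel num_threads(4)      // num_threads overrides the setting for this region only
    {
#pragma omp critical                     // serialize output so lines do not interleave
        std::cout << "thread " << omp_get_thread_num()
                  << " of " << omp_get_num_threads() << std::endl;
    }
    return 0;
}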

Although the sumArray approach achieves the goal, it introduces quite a few extra operations, such as creating the array sumArray and then accumulating all of its elements with another for loop. Is there a simpler way? The answer is yes: OpenMP provides another tool for this, reduction. See the following code:

#include <iostream>

int main()
{
    int sum = 0;
    int a[10] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
#pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < 10; i++)
        sum = sum + a[i];
    std::cout << "sum: " << sum << std::endl;
    return 0;
}

In the code above we added reduction(+:sum) after #pragma omp parallel for, which tells the compiler: run the following for loop with multiple threads, but give each thread its own private copy of the sum variable; after the loop ends, add all the threads' copies together to produce the final sum.

reduction is convenient, but it only supports a set of basic operations, such as +, -, *, &, |, && and ||. In some cases we need to avoid a race condition, but the operation involved is beyond what reduction supports. What then? That calls for another OpenMP tool, critical. Consider the following example, in which we compute the maximum value of array a and store the result in max.

#include <iostream>

int main()
{
    int max = 0;
    int a[10] = {11, 2, 33, 49, 113, 20, 321, 250, 689, 16};
#pragma omp parallel for
    for (int i = 0; i < 10; i++)
    {
        int temp = a[i];
#pragma omp critical
        {
            if (temp > max)
                max = temp;
        }
    }
    std::cout << "max: " << max << std::endl;
    return 0;
}

In the example above, the for loop is still automatically divided into N parts that execute in parallel, but we wrap if (temp > max) max = temp with #pragma omp critical. This means: the threads execute the statements in the for loop in parallel, but before a thread enters the critical block it checks whether another thread is currently executing it; if so, it waits until that thread finishes. This avoids the race condition, but obviously execution slows down because threads may have to wait.
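
As an aside: for this particular pattern, OpenMP 3.1 and later also accept max and min as reduction operators, which avoids the waiting inside critical. Visual Studio 2008 only implements OpenMP 2.0, so the following sketch of mine assumes a compiler with OpenMP 3.1 support (for example a recent GCC with -fopenmp); the variable is renamed maxVal for clarity:

#include <iostream>

int main()
{
    int maxVal = 0;
    int a[10] = {11, 2, 33, 49, 113, 20, 321, 250, 689, 16};
    // each thread keeps a private running maximum; the per-thread maxima
    // are combined into maxVal when the loop ends (requires OpenMP 3.1+)
#pragma omp parallel for reduction(max:maxVal)
    for (int i = 0; i < 10; i++)
    {
        if (a[i] > maxVal)
            maxVal = a[i];
    }
    std::cout << "max: " << maxVal << std::endl;
    return 0;
}
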
With the basic knowledge above, I can already do many things. Now let's look at a concrete application example: we read two images from disk, extract feature points from each, match the feature points, and finally draw the images together with the matched feature points. Understanding this example requires some basic knowledge of image processing, which I will not detail here. In addition, compiling this example requires OpenCV; the version I used is 2.3.1, and I will not describe OpenCV's installation and configuration here either. First, the traditional serial approach:


# Include "opencv2/highgui. HPP "<br/> # include" opencv2/features2d/features2d. HPP "<br/> # include <iostream> <br/> # include <OMP. h> <br/> int main () {<br/> CV: surffeaturedetector detector (400); <br/> CV: surfdescriptorextractor extractor; <br/> CV:: bruteforcematcher <CV: L2 <float> matcher; <br/> STD: vector <CV: dmatch> matches; <br/> CV: mat im0, im1; <br/> STD: vector <CV: keypoint> keypoints0, keypoints1; <br/> CV: mat descriptors0, descriptors1; <br/> double T1 = omp_get_wtime (); <br/> // process the first image first <br/> im0 = CV: imread ("rgb0.jpg", cv_load_image_grayscale ); <br/> detector. detect (im0, keypoints0); <br/> extractor. compute (im0, keypoints0, descriptors0); <br/> STD: cout <"find" <keypoints0.size () <"keypoints in im0" <STD :: endl; <br/> // re-process the second image <br/> im1 = CV: imread ("rgb1.jpg", cv_load_image_grayscale); <br/> detector. detect (im1, keypoints1); <br/> extractor. compute (im1, keypoints1, descriptors1); <br/> STD: cout <"find" <keypoints1.size () <"keypoints in im1" <STD :: endl; <br/> double t2 = omp_get_wtime (); <br/> STD: cout <"time:" <t2-t1 <STD: Endl; <br/> matcher. match (descriptors0, descriptors1, matches); <br/> CV: mat img_matches; <br/> CV: drawmatches (im0, keypoints0, im1, keypoints1, matches, img_matches); <br/> CV: namedwindow ("matches", cv_window_autosize); <br/> CV: imshow ("matches", img_matches ); <br/> CV: waitkey (0); <br/> return 1; <br/>}Press Ctrl + C to copy the code

Obviously, the two images can be read, and their feature points and feature descriptors extracted, in parallel. The modification is as follows:


# Include "opencv2/highgui. HPP "<br/> # include" opencv2/features2d/features2d. HPP "<br/> # include <iostream> <br/> # include <vector> <br/> # include <OMP. h> <br/> int main () {<br/> int imnum = 2; <br/> STD: vector <CV: mat> imvec (imnum ); <br/> STD: vector <CV: keypoint> keypointvec (imnum); <br/> STD: vector <CV :: mat> descriptorsvec (imnum); <br/> CV: surffeaturedetector detector (400); CV: surfdescriptorextractor extractor; <br/> CV: bruteforcematcher <CV :: l2 <float> matcher; <br/> STD: vector <CV: dmatch> matches; <br/> char filename [100]; <br/> double T1 = omp_get_wtime (); <br/> # pragma OMP parallel for <br/> for (INT I = 0; I <imnum; I ++) {<br/> sprintf (filename, "rgb1_d.jpg", I); <br/> imvec [I] = CV: imread (filename, cv_load_image_grayscale ); <br/> detector. detect (imvec [I], keypointvec [I]); <br/> extractor. compute (imvec [I], keypointvec [I], descriptorsvec [I]); <br/> STD: cout <"find" <keypointvec [I]. size () <"keypoints in im" <I <STD: Endl; <br/>}< br/> double t2 = omp_get_wtime (); <br/> STD: cout <"time:" <t2-t1 <STD: Endl; <br/> matcher. match (descriptorsvec [0], descriptorsvec [1], matches); <br/> CV: mat img_matches; <br/> CV: drawmatches (imvec [0], keypointvec [0], imvec [1], keypointvec [1], matches, img_matches); <br/> CV: namedwindow ("matches", cv_window_autosize ); <br/> CV: imshow ("matches", img_matches); <br/> CV: waitkey (0); <br/> return 1; <br/>}Press Ctrl + C to copy the code

Comparing the two execution modes, the times are 2.343 seconds vs. 1.2441 seconds.

In the code above we used STL vectors to hold the two images, feature points, and feature descriptors so that the work would fit the #pragma omp parallel for pattern. But in some cases it is awkward to organize the variables into vectors. What then? That calls for yet another OpenMP tool, sections. The code is as follows:


# Include "opencv2/highgui. HPP "<br/> # include" opencv2/features2d/features2d. HPP "<br/> # include <iostream> <br/> # include <OMP. h> <br/> int main () {<br/> CV: surffeaturedetector detector (400); CV: surfdescriptorextractor extractor; <br/> CV :: bruteforcematcher <CV: L2 <float> matcher; <br/> STD: vector <CV: dmatch> matches; <br/> CV: mat im0, im1; <br/> STD: vector <CV: keypoint> keypoints0, keypoints1; <br/> CV: mat descriptors0, descriptors1; <br/> double T1 = omp_get_wtime (); <br/> # pragma OMP parallel sections <br/> {<br/> # pragma OMP section <br/>{< br/> STD :: cout <"processing im0" <STD: Endl; <br/> im0 = CV: imread ("rgb0.jpg", cv_load_image_grayscale); <br/> detector. detect (im0, keypoints0); <br/> extractor. compute (im0, keypoints0, descriptors0); <br/> STD: cout <"find" <keypoints0.size () <"keypoints in im0" <STD :: endl; <br/>}< br/> # pragma OMP section <br/>{< br/> STD: cout <"processing im1" <STD :: endl; <br/> im1 = CV: imread ("rgb1.jpg", cv_load_image_grayscale); <br/> detector. detect (im1, keypoints1); <br/> extractor. compute (im1, keypoints1, descriptors1); <br/> STD: cout <"find" <keypoints1.size () <"keypoints in im1" <STD :: endl; <br/>}< br/> double t2 = omp_get_wtime (); <br/> STD: cout <"Time: "<t2-t1 <STD: Endl; <br/> matcher. match (descriptors0, descriptors1, matches); <br/> CV: mat img_matches; <br/> CV: drawmatches (im0, keypoints0, im1, keypoints1, matches, img_matches); <br/> CV: namedwindow ("matches", cv_window_autosize); <br/> CV: imshow ("matches", img_matches ); <br/> CV: waitkey (0); <br/> return 1; <br/>}Press Ctrl + C to copy the code

In the code above, we first wrap the content to be executed in parallel inside #pragma omp parallel sections; within it, two #pragma omp section blocks each contain the reading of one image and the extraction of its feature points and feature descriptors. Simplified to pseudo-code:

#pragma omp parallel sections
{
    #pragma omp section
    {
        function1();
    }
    #pragma omp section
    {
        function2();
    }
}

It means the sections inside parallel sections are executed in parallel, with the work divided so that each thread executes one section; if there are more sections than threads, a thread that finishes its section goes on to execute one of the remaining sections. Time-wise this method performs much like the earlier approach of manually building a for loop over vectors, but it is undoubtedly more convenient, and on a single-core machine, or when the compiler's OpenMP support is not enabled, it still compiles without any changes and executes in single-core serial mode.
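
To see the division of labor concretely, here is a small sketch of my own (not from the original article) with three sections; each reports which thread executed it, so on a dual-core machine one thread will pick up two of the sections:

#include <iostream>
#include <omp.h>

int main()
{
#pragma omp parallel sections
    {
#pragma omp section
        {
#pragma omp critical   // serialize output so lines do not interleave
            std::cout << "section 1 on thread " << omp_get_thread_num() << std::endl;
        }
#pragma omp section
        {
#pragma omp critical
            std::cout << "section 2 on thread " << omp_get_thread_num() << std::endl;
        }
#pragma omp section
        {
#pragma omp critical
            std::cout << "section 3 on thread " << omp_get_thread_num() << std::endl;
        }
    }
    return 0;
}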

That is my OpenMP experience from the past two days; errors are inevitable, so please correct me. One remaining question: OpenMP tutorials often qualify variables with clauses such as private and shared. I understand the meaning and purpose of these clauses, but in all my examples above, leaving them out did not seem to affect the results, and I am not sure why.
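
My own guess at the answer (not from the original article): in the examples above the clauses simply had nothing to do. The iteration variable of a parallel for is made private automatically, and variables declared inside the loop body, like temp in the critical example, live on each thread's own stack, so they are private too. The clauses start to matter when a scratch variable is declared outside the loop, as in this sketch:

#include <iostream>

int main()
{
    int a[10] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    int b[10];
    int temp;    // declared outside the loop: shared between threads by default
    // private(temp) gives every thread its own (uninitialized) copy;
    // without the clause the threads would race on the single shared temp
#pragma omp parallel for private(temp)
    for (int i = 0; i < 10; i++)
    {
        temp = a[i] * a[i];
        b[i] = temp;
    }
    for (int i = 0; i < 10; i++)
        std::cout << b[i] << " ";
    std::cout << std::endl;
    return 0;
}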

While writing the above I referred to resources in several places, including the two URLs below, which I will not list exhaustively. My thanks to their authors.

http://blog.csdn.net/drzhouweiming/article/details/4093624

http://software.intel.com/zh-cn/articles/more-work-sharing-with-openmp

 

I have a dual-core machine, and the SURF detection programs later in the experiment showed little improvement. I suppose this kind of program may need more threads for the parallel speedup to become obvious.
