C++ example: a multi-thread synchronization pattern for collaborative work


Multi-threaded concurrency and coroutines are actually different concepts. Multi-threaded concurrency means that multiple execution sequences run at the same time, while with coroutines multiple execution sequences cooperate with one another and only one of them is running at any given moment. Today I thought about combining the two. Take a real-life example: a class has 100 students, and the teacher wants to grade the homework of all 100. When the teacher is too busy or pressed for time, he can ask a few students to help with the grading; once those students have finished, the results are handed back to the teacher, who then returns the homework to the class, and so on. This idea can also be borrowed in concurrent programming. Concurrency and collaboration come up particularly often in network server development, so today I wrote a simple program to simulate this situation. Of course, the program itself is not meant to do anything useful; it only records the idea. I have always felt that in development the idea is what matters most, and the language used for the implementation is merely a different form of expression. I am recording it here so that it can be extended as needed in future projects, where appropriate.
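Before the full listing, here is a minimal sketch of the same delegate-then-join pattern using std::async and std::future from the standard library. This is only my own shorthand for the idea, not the mechanism the article builds; the function name SumRange and the choice of four helpers are illustrative assumptions.

// A minimal sketch of "split the work, hand it to helpers, wait for all of
// them, then summarize". SumRange and the helper count are assumptions made
// for illustration only.
#include <future>
#include <iostream>
#include <vector>

double SumRange(double dStart, double dEnd)      // same job as CSumTask below
{
    double dResult = 0.0;
    for (double d = dStart; d <= dEnd; ++d)
        dResult += d;
    return dResult;
}

int main()
{
    std::vector<std::future<double>> vecParts;
    for (int i = 0; i < 4; ++i)                   // delegate four pieces to helper threads
        vecParts.push_back(std::async(std::launch::async, SumRange, 1.0, 1000.0));

    double dTotal = 0.0;
    for (auto& fut : vecParts)                    // the "teacher" waits for every helper
        dTotal += fut.get();

    std::cout << "Total: " << dTotal << std::endl; // 4 * 500500 = 2002000
    return 0;
}

With std::async the standard library owns the threads; the program below instead keeps a fixed pool of workers and a shared task queue guarded by a mutex and condition variables, which is closer to what a network server would do. The full program follows.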

// ----------------------------------------------------------------
// Development tool: Visual Studio 2012
// Language: C++

#include <iostream>
#include <memory>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <queue>
#include <vector>

using namespace std;

// Windows
#include <windows.h>


/* ************************************************
[Example] Implement a multi-threaded collaborative working program.

When a thread (the main thread, say) is working on a task, it can sometimes
improve efficiency by taking full advantage of a multi-core CPU: it divides
the task at hand into multiple parts and distributes them to idle auxiliary
threads for processing. The main thread must then wait for all auxiliary
threads to finish before it summarizes the results and moves on to the next
step. For this, a synchronized thread-collaboration class is required.
************************************************ */


// Defines a task: sum all numbers in [dStart, dEnd]
class CSumTask
{
public:
    CSumTask(double dStart, double dEnd);
    ~CSumTask();
    double DoTask();
    double GetResult();
private:
    double m_dMin;
    double m_dMax;
    double m_dResult;
};

CSumTask::CSumTask(double dStart, double dEnd) : m_dMin(dStart), m_dMax(dEnd), m_dResult(0.0)
{
}

CSumTask::~CSumTask()
{
}

double CSumTask::DoTask()
{
    for (double dNum = m_dMin; dNum <= m_dMax; ++dNum)
    {
        m_dResult += dNum;
    }
    return m_dResult;
}

double CSumTask::GetResult()
{
    return m_dResult;
}


// Defines a task manager
class CTaskManager
{
public:
    CTaskManager();
    ~CTaskManager();
    size_t Size();
    void AddTask(const std::shared_ptr<CSumTask> TaskPtr);
    std::shared_ptr<CSumTask> PopTask();
protected:
    std::queue<std::shared_ptr<CSumTask>> m_queTask;
};

CTaskManager::CTaskManager()
{
}

CTaskManager::~CTaskManager()
{
}

size_t CTaskManager::Size()
{
    return m_queTask.size();
}

void CTaskManager::AddTask(const std::shared_ptr<CSumTask> TaskPtr)
{
    m_queTask.push(TaskPtr);
}

std::shared_ptr<CSumTask> CTaskManager::PopTask()
{
    std::shared_ptr<CSumTask> tmPtr = m_queTask.front();
    m_queTask.pop();
    return tmPtr;
}


// Worker-thread manager: creates the collaborating worker threads and
// accepts tasks delegated by the main thread for processing
class CWorkThreadManager
{
public:
    CWorkThreadManager(unsigned int uiThreadSum);
    ~CWorkThreadManager();
    bool AcceptTask(std::shared_ptr<CSumTask> TaskPtr);
    bool StopAll(bool bStop);
    unsigned int ThreadNum();
protected:
    std::queue<std::shared_ptr<CSumTask>> m_queTask;
    std::mutex m_muTask;
    int m_iWorkingThread;
    int m_iWorkThreadSum;
    std::vector<std::shared_ptr<std::thread>> m_vecWorkers;

    void WorkThread(int iWorkerID);
    bool m_bStop;
    std::condition_variable_any m_condPop;
    std::condition_variable_any m_stopVar;
};

CWorkThreadManager::CWorkThreadManager(unsigned int uiThreadSum) : m_bStop(false), m_iWorkingThread(0), m_iWorkThreadSum(uiThreadSum)
{
    // Create the worker threads
    for (int i = 0; i < m_iWorkThreadSum; ++i)
    {
        std::shared_ptr<std::thread> WorkPtr(new std::thread(&CWorkThreadManager::WorkThread, this, i + 1));
        m_vecWorkers.push_back(WorkPtr);
    }
}

CWorkThreadManager::~CWorkThreadManager()
{
}

unsigned int CWorkThreadManager::ThreadNum()
{
    return m_iWorkThreadSum;
}

bool CWorkThreadManager::AcceptTask(std::shared_ptr<CSumTask> TaskPtr)
{
    std::unique_lock<std::mutex> muLock(m_muTask);
    if (m_iWorkingThread >= m_iWorkThreadSum)
    {
        return false;   // No idle thread is available to process the task
    }
    m_queTask.push(TaskPtr);
    m_condPop.notify_all();
    return true;
}

void CWorkThreadManager::WorkThread(int iWorkerID)
{
    while (!m_bStop)
    {
        std::shared_ptr<CSumTask> TaskPtr;
        bool bDoTask = false;
        {
            std::unique_lock<std::mutex> muLock(m_muTask);
            while (m_queTask.empty() && !m_bStop)
            {
                m_condPop.wait(m_muTask);
            }
            if (!m_queTask.empty())
            {
                TaskPtr = m_queTask.front();
                m_queTask.pop();
                m_iWorkingThread++;
                bDoTask = true;
            }
        }
        // Process the task outside the lock
        if (bDoTask)
        {
            TaskPtr->DoTask();
            {
                std::unique_lock<std::mutex> muLock(m_muTask);
                m_iWorkingThread--;
                cout << ">>> DoTask in thread [" << iWorkerID << "]\n";
            }
        }
        m_stopVar.notify_all();
    }
}

bool CWorkThreadManager::StopAll(bool bStop)
{
    {
        std::unique_lock<std::mutex> muLock(m_muTask);
        while (m_queTask.size() > 0 || m_iWorkingThread > 0)
        {
            m_stopVar.wait(m_muTask);
            cout << ">>> Waiting finish....\n";
        }
        cout << ">>> All tasks finished!\n";
    }

    m_bStop = true;
    m_condPop.notify_all();
    // Wait until all worker threads close
    for (std::vector<std::shared_ptr<std::thread>>::iterator itTask = m_vecWorkers.begin(); itTask != m_vecWorkers.end(); ++itTask)
    {
        (*itTask)->join();
    }
    return true;
}


/* ************************************************
[Sample program description]

Each task object represents the accumulated sum 1 + 2 + .... + 1000.
There are 2000 such tasks to compute; afterwards all the individual
results are summarized and summed. The worker-thread objects compute
the individual tasks, and the main thread sums all the results once
every task is complete.
************************************************ */

int main(int argc, char* argv[])
{
    std::cout.sync_with_stdio(true);
    CTaskManager TaskMgr;
    CWorkThreadManager WorkerMgr(5);
    std::vector<std::shared_ptr<CSumTask>> vecResultTask;

    for (int i = 0; i < 2000; ++i)
    {
        std::shared_ptr<CSumTask> TaskPtr(new CSumTask(1.0, 1000.0));
        TaskMgr.AddTask(TaskPtr);
        vecResultTask.push_back(TaskPtr);
    }

    DWORD dStartTime = ::GetTickCount();
    while (TaskMgr.Size() > 0)
    {
        std::shared_ptr<CSumTask> WorkPtr = TaskMgr.PopTask();
        if (!WorkerMgr.AcceptTask(WorkPtr))
        {
            // All auxiliary threads are busy (no idle helper): process the task yourself
            WorkPtr->DoTask();
            cout << ">>> DoTask in thread [0]\n";
        }
    }
    WorkerMgr.StopAll(true);   // Wait for all tasks to complete

    // Sum all results
    double dSumResult = 0.0;
    for (std::vector<std::shared_ptr<CSumTask>>::iterator itTask = vecResultTask.begin(); itTask != vecResultTask.end(); ++itTask)
    {
        dSumResult += (*itTask)->GetResult();
    }

    DWORD dEndTime = ::GetTickCount();
    cout << "\n[Status]" << endl;
    cout << "\tEvery task result: " << vecResultTask[0]->GetResult() << endl;
    cout << "\tTask num: " << vecResultTask.size() << endl;
    cout << "\tAll result sum: " << dSumResult;
    cout << "\tCast to int, result: " << static_cast<long>(dSumResult) << endl;
    cout << "\tWorkthread num: " << WorkerMgr.ThreadNum() << endl;
    cout << "\tTime used: " << dEndTime - dStartTime << " ms" << endl;
    getchar();
    return 0;
}
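For reference, each task computes 1 + 2 + .... + 1000 = 1000 x 1001 / 2 = 500,500, so with 2000 tasks the final sum should be 2000 x 500,500 = 1,001,000,000, and the "Cast to int, result" line should print 1001000000 regardless of how many tasks ended up on the helper threads and how many the main thread handled itself when no helper was idle (the ">>> DoTask in thread [0]" branch).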

 
