GCD usage summary for iOS development

Source: Internet
Author: User

GCD is the low-level multithreading mechanism of iOS. This post summarizes its common APIs and concepts.

Dispatch queues

In multithreaded development with GCD, you only define the work to be done and append it to a dispatch queue. There are two kinds of dispatch queue: serial (SerialDispatchQueue) and concurrent (ConcurrentDispatchQueue). A task is a block; for example, adding a task to a queue looks like this:

    dispatch_async(queue, block);

When several tasks are added to a serial queue, they execute one by one in order, and only one task runs at a time. On a concurrent queue, a later task may start before the previous one finishes, so several tasks can run simultaneously. How many actually run in parallel is decided by the XNU kernel and is not under your control: executing 10 tasks at once does not spawn 10 threads. The system reuses threads based on the workload.

Getting the built-in queues

The system provides two kinds of ready-made queues: the main dispatch queue and the global dispatch queues. The former inserts tasks into the RunLoop of the main thread, so it is clearly a serial queue, and it is the one to use for UI updates. The latter are global concurrent queues with four priorities: high, default, low, and background. They are obtained as follows:

    dispatch_queue_t mainQueue   = dispatch_get_main_queue();
    dispatch_queue_t globalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

Executing an asynchronous task

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_async(queue, ^{
        // ...
    });

This snippet executes the block directly on a background thread. GCD starts a task immediately; unlike an operation queue, a task cannot be held back and started manually later. The drawback is that this gives you less control.
Running a task only once

    + (instancetype)sharedInstance {
        static id instance;
        static dispatch_once_t onceToken;
        dispatch_once(&onceToken, ^{
            instance = [[self alloc] init];
        });
        return instance;
    }

dispatch_once executes its block exactly once, in a thread-safe way, which is why it so often appears in singleton constructors.

Groups

Sometimes we want several tasks to run at the same time (on multiple threads), and only after all of them have finished should some follow-up work run. For that, the tasks are put into a group. The following code runs a final task after the five tasks in the group complete:

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();

    dispatch_group_async(group, queue, ^{ NSLog(@"1"); });
    dispatch_group_async(group, queue, ^{ NSLog(@"2"); });
    dispatch_group_async(group, queue, ^{ NSLog(@"3"); });
    dispatch_group_async(group, queue, ^{ NSLog(@"4"); });
    dispatch_group_async(group, queue, ^{ NSLog(@"5"); });

    dispatch_group_notify(group, dispatch_get_main_queue(), ^{ NSLog(@"done"); });

Delayed execution

    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(10 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
        // ...
    });

This inserts the task into the main RunLoop after 10 seconds.

dispatch_async and dispatch_sync

We already saw dispatch_async used to execute an asynchronous task. Consider this code:

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    dispatch_async(queue, ^{
        NSLog(@"1");
    });

    NSLog(@"2");

First the global queue is obtained, then the block in dispatch_async is handed to another thread to execute. "Async" here means that after the current thread hands the block to the worker thread, it continues immediately without blocking.
The output is therefore either "1 2" or "2 1", because we cannot control how the two threads are scheduled. There is also a synchronous counterpart, dispatch_sync:

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    dispatch_sync(queue, ^{
        NSLog(@"1");
    });

    NSLog(@"2");

Here, after the main thread hands the task to the worker thread, it waits for the worker to finish before continuing with its own work, so the output is always "1 2". Note that a global queue is used. What if the dispatch_sync target is changed to the main queue?

    dispatch_queue_t queue = dispatch_get_main_queue();
    dispatch_sync(queue, ^{
        NSLog(@"1");
    });

This code deadlocks, because:

1. After the main thread submits the block to the main queue via dispatch_sync, it waits for the block to finish before continuing.
2. A queue is first-in, first-out, so the block cannot run until the main queue has finished everything ahead of it, which includes the very code that is now waiting.

This circular wait is a deadlock. Using dispatch_sync on the main thread to add tasks to the main queue is therefore never advisable.

Creating queues

Besides the system-provided main serial queue and global concurrent queues, you can create serial and concurrent queues yourself:

    dispatch_queue_t mySerialDispatchQueue     = dispatch_queue_create("com.steak.GCD", DISPATCH_QUEUE_SERIAL);
    dispatch_queue_t myConcurrentDispatchQueue = dispatch_queue_create("com.steak.GCD", DISPATCH_QUEUE_CONCURRENT);

Under MRC, a manually created queue must be released when you are done with it:

    dispatch_release(myConcurrentDispatchQueue);

A manually created queue runs at the same priority as the default-priority global queue. To change its priority, retarget it:

    dispatch_queue_t myConcurrentDispatchQueue = dispatch_queue_create("com.steak.GCD", DISPATCH_QUEUE_CONCURRENT);
    dispatch_queue_t targetQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0);
    dispatch_set_target_queue(myConcurrentDispatchQueue, targetQueue);

This code lowers the queue's priority to the background level, making it equivalent to a global queue with background priority.

Serial queues and data contention

Once multiple blocks have been added to a serial queue, only one of them executes at a time. But if you create n serial queues and add tasks to each, the system can start n threads and run those tasks simultaneously.

The right time to use a serial queue is when you need to resolve contention over data or files. If several tasks access the same data at the same time, conflicts can occur; putting every such operation on one serial queue avoids them, because the queue runs only one task at a time. Since serial queues cost performance through context switching, however, we would still prefer a concurrent queue where possible. Consider this sample:

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    dispatch_async(queue, ^{ /* read 1 */ });
    dispatch_async(queue, ^{ /* read 2 */ });
    dispatch_async(queue, ^{ /* write  */ });
    dispatch_async(queue, ^{ /* read 3 */ });
    dispatch_async(queue, ^{ /* read 4 */ });

The execution order of these five operations is unpredictable, but what we want is: read 1 and read 2 first, then the write, and only after the write, read 3 and read 4.
Barriers

To achieve this, use another GCD API, dispatch_barrier_async, on a concurrent queue you create yourself:

    dispatch_queue_t queue = dispatch_queue_create("com.steak.GCD", DISPATCH_QUEUE_CONCURRENT);
    dispatch_barrier_async(queue, ^{ /* write */ });

A barrier block starts only after every block submitted before it has finished, runs alone, and only then lets blocks submitted after it proceed. This makes the write safe while the reads still run concurrently. (The barrier only has this effect on a queue created with DISPATCH_QUEUE_CONCURRENT; on a global queue, dispatch_barrier_async behaves like a plain dispatch_async.) For concurrent operations with no data contention, an ordinary concurrent queue is all you need.

Join behavior

GCD implements join-like behavior for multiple operations with dispatch_group_wait:

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();

    dispatch_group_async(group, queue, ^{
        [NSThread sleepForTimeInterval:0.5];  // sleep() takes whole seconds, so use NSThread for fractions
        NSLog(@"1");
    });
    dispatch_group_async(group, queue, ^{
        [NSThread sleepForTimeInterval:1.5];
        NSLog(@"2");
    });
    dispatch_group_async(group, queue, ^{
        [NSThread sleepForTimeInterval:2.5];
        NSLog(@"3");
    });

    NSLog(@"aaaaa");

    dispatch_time_t time = dispatch_time(DISPATCH_TIME_NOW, 2ull * NSEC_PER_SEC);
    if (dispatch_group_wait(group, time) == 0) {
        NSLog(@"all executed");
    } else {
        NSLog(@"not all executed");
    }

    NSLog(@"bbbbb");

Three asynchronous tasks are put into one group, and a 2-second timeout is built with dispatch_time. When the program runs, aaaaa is printed immediately by the main thread. At dispatch_group_wait the main thread blocks for up to 2 seconds; during the wait, the worker threads print 1 (at 0.5 s) and 2 (at 1.5 s). When the timeout expires, the main thread finds that the group's tasks have not all completed, prints "not all executed", and then prints bbbbb. If the timeout is made longer (say 5 seconds), bbbbb is printed as soon as the third task finishes at 2.5 seconds: once all tasks in a group are done, the main thread is not blocked any further. To wait indefinitely, pass DISPATCH_TIME_FOREVER.

Parallel loops

Similar to C#'s PLINQ, Objective-C can also execute loops in parallel.
In GCD this is done with the dispatch_apply function:

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_apply(20, queue, ^(size_t i) {
        NSLog(@"%lu", i);
    });

This code runs the loop body 20 times in parallel. If the block indexes into an array, this gives you a parallel loop over the array. Internally, dispatch_apply synchronizes like dispatch_sync: the current thread is blocked until every iteration of the loop has finished.
