Understanding and Practicing GCD

Source: Internet
Author: User

GCD

GCD, short for Grand Central Dispatch, is Apple's solution for multi-core parallel processing. It is implemented in C, but it is very convenient to use because callbacks are handled with blocks. In addition, GCD manages the thread lifecycle automatically, so you do not need to manage threads yourself.

Tasks and queues

GCD has two important concepts: task and queue.

1. Tasks are the work we want to perform. Tasks can be executed synchronously or asynchronously:

Synchronous (sync): use dispatch_sync(dispatch_queue_t queue, dispatch_block_t block) to submit a synchronous task. A synchronous task blocks the current thread until the block has finished executing, and only then does the current thread continue.

Asynchronous (async): use dispatch_async(dispatch_queue_t queue, dispatch_block_t block) to submit an asynchronous task. An asynchronous task does not block the current thread: dispatch_async returns immediately, and the task is executed later on a thread chosen by GCD.

2. Queues store tasks and dispatch them in first-in, first-out order. Queues can be serial or concurrent:

Serial queue: tasks in the queue are executed one at a time in FIFO order; a task starts only after the previous one has finished, so there is a strict execution order.

Concurrent queue: the queue dispatches tasks in order but allows them to run in parallel, so tasks execute almost simultaneously. GCD limits the number of concurrent tasks based on system resources, so when there are many tasks not all of them run at the same time.

The combinations of tasks and queues are summarized in this table:

                     Serial queue                          Concurrent queue
  Synchronous task   Current thread, one task at a time    Current thread, one task at a time
  Asynchronous task  One new thread, one task at a time    Multiple threads, tasks in parallel

The following demonstrates the combinations of tasks and queues:


/** Tasks in a serial queue wait for the running task to finish and are executed one after another */
dispatch_queue_t serial_queue = dispatch_queue_create("serial.queue", DISPATCH_QUEUE_SERIAL);

// The main queue
dispatch_queue_t mainQueue = dispatch_get_main_queue();

/** Tasks in a concurrent queue do not wait for each other's results; several tasks can run at once */
dispatch_queue_t concurrent_queue = dispatch_queue_create("concurrent.queue", DISPATCH_QUEUE_CONCURRENT);

// The global queue
dispatch_queue_t globalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

// Create several synchronous tasks in the serial queue
dispatch_sync(serial_queue, ^{
    NSLog(@"task a, thread: %@", [NSThread currentThread]);
});
dispatch_sync(serial_queue, ^{
    NSLog(@"task b, thread: %@", [NSThread currentThread]);
});
dispatch_sync(serial_queue, ^{
    NSLog(@"task c, thread: %@", [NSThread currentThread]);
});

// Create several synchronous tasks in the concurrent queue
dispatch_sync(concurrent_queue, ^{
    NSLog(@"task aa, thread: %@", [NSThread currentThread]);
});
dispatch_sync(concurrent_queue, ^{
    NSLog(@"task bb, thread: %@", [NSThread currentThread]);
});
dispatch_sync(concurrent_queue, ^{
    NSLog(@"task cc, thread: %@", [NSThread currentThread]);
});

// Create several asynchronous tasks in the serial queue
dispatch_async(serial_queue, ^{
    NSLog(@"task 1, thread: %@", [NSThread currentThread]);
});
dispatch_async(serial_queue, ^{
    NSLog(@"task 2, thread: %@", [NSThread currentThread]);
});
dispatch_async(serial_queue, ^{
    NSLog(@"task 3, thread: %@", [NSThread currentThread]);
});

// Create several asynchronous tasks in the concurrent queue
dispatch_async(concurrent_queue, ^{
    NSLog(@"task 11, thread: %@", [NSThread currentThread]);
});
dispatch_async(concurrent_queue, ^{
    NSLog(@"task 22, thread: %@", [NSThread currentThread]);
});
dispatch_async(concurrent_queue, ^{
    NSLog(@"task 33, thread: %@", [NSThread currentThread]);
});
Task groups and barriers

Sometimes we want to add dependencies between multiple tasks; for this we can use task groups (dispatch_group) and barriers (dispatch_barrier).

A task group puts several tasks into one group. These tasks can be in the same queue or in different queues. You can then use dispatch_group_notify() and dispatch_group_wait() to act once all the tasks in the group have completed.

1. The task passed to dispatch_group_notify() is executed after all the tasks in the group have completed.

dispatch_group_t group = dispatch_group_create();

/** Execute a task after all tasks in the group have finished */
dispatch_group_async(group, serial_queue, ^{
    sleep(2);
    NSLog(@"serial_queue1");
});
dispatch_group_async(group, serial_queue, ^{
    sleep(2);
    NSLog(@"serial_queue2");
});
dispatch_group_async(group, concurrent_queue, ^{
    sleep(2);
    NSLog(@"concurrent_queue1");
});
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    NSLog(@"return main queue");
});


2. dispatch_group_wait() takes a timeout. If all the tasks in the group finish before the timeout expires, it returns 0; otherwise it returns a non-zero value. The call blocks the current thread.

/** dispatch_group_wait takes a timeout: it returns 0 if all tasks in the group finish before the timeout and non-zero otherwise; it blocks the current thread */
dispatch_group_async(group, serial_queue, ^{
    sleep(3);
    NSLog(@"serial_queue1");
});
dispatch_group_async(group, serial_queue, ^{
    sleep(2);
    NSLog(@"serial_queue2");
});
dispatch_group_async(group, concurrent_queue, ^{
    sleep(3);
    NSLog(@"concurrent_queue1");
});
long i = dispatch_group_wait(group, dispatch_time(DISPATCH_TIME_NOW, NSEC_PER_SEC * 6));
NSLog(@"-- %ld --", i);
dispatch_group_async(group, concurrent_queue, ^{
    NSLog(@"finish all");
});


3. You can also use dispatch_group_enter() and dispatch_group_leave() to add tasks to a group. The two calls must appear in pairs; the code between them is the task that is added to the group.

/** Use dispatch_group_enter and dispatch_group_leave to add tasks to the group; the two calls must appear in pairs */
dispatch_group_enter(group);
sleep(2);
NSLog(@"1");
dispatch_group_leave(group);

dispatch_group_enter(group);
dispatch_async(concurrent_queue, ^{
    sleep(3);
    NSLog(@"2");
    dispatch_group_leave(group);
});

dispatch_group_enter(group);
sleep(2);
NSLog(@"3");
dispatch_group_leave(group);

long i = dispatch_group_wait(group, dispatch_time(DISPATCH_TIME_NOW, NSEC_PER_SEC * 6));
NSLog(@"-- %ld --", i);
dispatch_group_async(group, concurrent_queue, ^{
    NSLog(@"finish all");
});


A barrier divides the tasks in a queue into two parts. All tasks submitted before the barrier run first; once they have all finished, the barrier task runs by itself; only after the barrier task finishes do the tasks submitted after it continue. Because a barrier must run alone and never concurrently with other tasks, it is only meaningful on a concurrent queue (on a serial queue every task already runs alone). Note that it should be a concurrent queue you created yourself: on a global queue a barrier behaves like an ordinary task.

dispatch_queue_t concurrent_queue = dispatch_queue_create("concurrent.queue", DISPATCH_QUEUE_CONCURRENT);

dispatch_async(concurrent_queue, ^{
    sleep(2);
    NSLog(@"1");
});
dispatch_async(concurrent_queue, ^{
    sleep(2);
    NSLog(@"2");
});
dispatch_async(concurrent_queue, ^{
    sleep(2);
    NSLog(@"3");
});

dispatch_barrier_async(concurrent_queue, ^{
    sleep(2);
    NSLog(@"barrier");
});

dispatch_async(concurrent_queue, ^{
    sleep(2);
    NSLog(@"finish1");
});
dispatch_async(concurrent_queue, ^{
    NSLog(@"finish2");
});
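A common application of a barrier is a thread-safe read/write wrapper: reads are submitted with dispatch_sync and may overlap, while writes are submitted with dispatch_barrier_async so that each write runs exclusively. A minimal sketch, with the class name SafeStore and its instance variables made up for this example:

```objc
// Hypothetical reader/writer sketch: concurrent reads, exclusive (barrier) writes.
@interface SafeStore : NSObject   // made-up class name
- (id)objectForKey:(id)key;
- (void)setObject:(id)object forKey:(id<NSCopying>)key;
@end

@implementation SafeStore {
    NSMutableDictionary *_dict;           // shared state guarded by the queue
    dispatch_queue_t _isolationQueue;     // private concurrent queue
}
- (instancetype)init {
    if ((self = [super init])) {
        _dict = [NSMutableDictionary dictionary];
        _isolationQueue = dispatch_queue_create("isolation.queue", DISPATCH_QUEUE_CONCURRENT);
    }
    return self;
}
- (id)objectForKey:(id)key {
    __block id value;
    dispatch_sync(_isolationQueue, ^{ value = _dict[key]; }); // reads may overlap
    return value;
}
- (void)setObject:(id)object forKey:(id<NSCopying>)key {
    dispatch_barrier_async(_isolationQueue, ^{ _dict[key] = object; }); // write runs alone
}
@end
```

Because a later dispatch_sync read is queued behind any pending barrier write, a read always sees the most recently submitted write.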


Repeated execution with dispatch_apply, one-time execution with dispatch_once

dispatch_apply() submits a block to a queue and executes it the given number of times; whether the iterations run serially or in parallel is determined by the queue. dispatch_apply blocks the current thread and returns only after all iterations have completed.

dispatch_queue_t concurrent_queue = dispatch_queue_create("concurrent.queue", DISPATCH_QUEUE_CONCURRENT);

dispatch_apply(5, concurrent_queue, ^(size_t index) {
    sleep(1);
    NSLog(@"index:%zu", index);
});
NSLog(@"finish");

dispatch_once() guarantees that the code in the block is executed only once for the lifetime of the application.

// Make sure the code in the block runs only once during the lifetime of the application
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
    NSLog(@"just run once in application");
});
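A typical use of dispatch_once is creating a shared (singleton) instance: even if several threads call the accessor at the same time, the instance is created exactly once. A minimal sketch, with the class name MyManager made up for this example:

```objc
// Hypothetical singleton sketch: dispatch_once guarantees the shared
// instance is created exactly once, even under concurrent access.
@interface MyManager : NSObject   // made-up class name
+ (instancetype)sharedManager;
@end

@implementation MyManager
+ (instancetype)sharedManager {
    static MyManager *shared = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        shared = [[MyManager alloc] init];
    });
    return shared;
}
@end
```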
Thread synchronization, semaphores, and thread deadlock

Multi-threaded code often has to deal with contention for shared resources, as in the classic ticket-selling problem or concurrent database writes. There are many ways to solve this; the two simplest are described below: a synchronization lock and a semaphore.

Synchronization lock: while one thread is executing the code inside the lock, that section is locked and other threads are denied access; only when the owning thread releases the lock can another thread enter.

// Synchronization lock: locks the code in the block so that only one thread can enter at a time
@synchronized (self) {
    NSLog(@"lock");
}

Semaphore (dispatch_semaphore): set the initial count when the semaphore is created, then use dispatch_semaphore_wait() and dispatch_semaphore_signal() to manage the count and control which threads may proceed.

// Semaphore: dispatch_semaphore
dispatch_group_t group = dispatch_group_create();
// Specify the initial count, e.g. allow at most 10 tasks at a time
dispatch_semaphore_t semaphore = dispatch_semaphore_create(10);
for (int i = 0; i < 100; i++) {
    // Wait on the semaphore. If the count is greater than 0, decrement it and continue;
    // if it is 0, block until a signal arrives or the timeout expires.
    // Returns 0 on success and non-zero on timeout.
    dispatch_semaphore_wait(semaphore, dispatch_time(DISPATCH_TIME_NOW, NSEC_PER_SEC * 50));
    dispatch_group_async(group, concurrent_queue, ^{
        sleep(1);
        NSLog(@"%d", i);
        // Send a signal, incrementing the count
        dispatch_semaphore_signal(semaphore);
    });
}
dispatch_group_notify(group, concurrent_queue, ^{
    NSLog(@"finish");
});
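As an illustration of the ticket-selling problem mentioned above, a semaphore created with a count of 1 can serve as a mutual-exclusion lock, so only one "window" at a time may decrement the shared ticket count. A minimal sketch; the names ticketCount and ticket.queue are made up for this example:

```objc
// Hypothetical ticket-selling sketch: a semaphore with an initial count of 1
// acts as a mutex protecting the shared ticket count.
__block int ticketCount = 50;                          // shared resource (made-up name)
dispatch_semaphore_t lock = dispatch_semaphore_create(1);
dispatch_group_t group = dispatch_group_create();
dispatch_queue_t queue = dispatch_queue_create("ticket.queue", DISPATCH_QUEUE_CONCURRENT);

for (int window = 0; window < 2; window++) {           // two concurrent "ticket windows"
    dispatch_group_async(group, queue, ^{
        while (YES) {
            dispatch_semaphore_wait(lock, DISPATCH_TIME_FOREVER); // enter critical section
            if (ticketCount <= 0) {                    // sold out: release the lock and stop
                dispatch_semaphore_signal(lock);
                break;
            }
            ticketCount--;
            NSLog(@"window %d sold one, %d left", window, ticketCount);
            dispatch_semaphore_signal(lock);           // leave critical section
        }
    });
}
dispatch_group_wait(group, DISPATCH_TIME_FOREVER);     // wait until both windows finish
NSLog(@"all tickets sold");
```

Without the semaphore, both windows could read and decrement ticketCount at the same time and oversell or double-sell tickets.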


Although GCD is very convenient, improper use can cause trouble. The following are some cases that can deadlock threads:

// In a concurrent queue, calling dispatch_sync from the current queue and submitting to the
// same queue does NOT deadlock. dispatch_sync blocks the current thread, but because the
// queue is concurrent, the nested block can run immediately and return.
- (void)syncAndConcurrentQueue {
    dispatch_queue_t queue = dispatch_queue_create("concurrent.queue", DISPATCH_QUEUE_CONCURRENT);
    dispatch_sync(queue, ^{
        NSLog(@"Jump to concurrent.queue! Thread: %@", [NSThread currentThread]);
        dispatch_sync(queue, ^{
            sleep(3);
            NSLog(@"success6, thread: %@", [NSThread currentThread]);
        });
        NSLog(@"return");
    });
}

// In a serial queue, calling dispatch_sync from a task on that queue and submitting to the
// same queue DOES deadlock. dispatch_sync blocks the current thread until the nested block
// finishes, but the serial queue cannot start the nested block until the current task
// finishes, so it never gets a chance to run.
- (void)syncAndSerialQueue {
    dispatch_queue_t queue = dispatch_queue_create("serial.queue", DISPATCH_QUEUE_SERIAL);
    dispatch_async(queue, ^{
        NSLog(@"Jump to serial.queue!");
        dispatch_sync(queue, ^{
            NSLog(@"success");
        });
        NSLog(@"return");
    });
}

// Called from the main thread: task 1 blocks the main thread until its block finishes.
// Task 2 submits a synchronous task to the main queue and waits for it, but the main
// thread is blocked by task 1, so task 2 never runs. The two threads deadlock.
- (void)recycle {
    dispatch_queue_t concurrent_queue = dispatch_queue_create("concurrent.queue", DISPATCH_QUEUE_CONCURRENT);
    // Task 1
    dispatch_sync(concurrent_queue, ^{
        NSLog(@"jump to concurrent queue");
        // Task 2
        dispatch_sync(dispatch_get_main_queue(), ^{
            NSLog(@"return main queue");
        });
    });
}

Summary

The above are some experiences from using GCD. Although GCD is very convenient, some scenarios are hard to handle with it, such as cancelling tasks. GCD is therefore recommended for simple functionality; for more complex functionality, NSOperation and NSOperationQueue are a better fit. NSOperationQueue is implemented on top of GCD and is fully object-oriented, which makes it easier to work with. Next time we will talk about NSOperation and NSOperationQueue. The demo code is available at: https://github.com/GarenChen/GCDDemo

 
