GCD: an introduction to the dispatch functions


There are three methods for iOS multithreading:

  • NSThread
  • NSOperation
  • GCD (Grand Central Dispatch)

Among them, GCD is Apple's proposed solution for multi-core parallel computation. GCD manages a thread pool for you and is available throughout the application's lifecycle. In general, GCD keeps a number of threads appropriate to the machine's architecture, and when work arrives it automatically spreads that work across the available processor cores to make full use of the hardware. In the past, iOS devices had single-core processors, so the thread pool was of limited value; but mobile devices, iOS devices included, are increasingly multi-core, and on such hardware GCD's thread pool lets the system's performance be used far more effectively.

GCD is undoubtedly the most convenient of the three, and it is designed as a C-level API. The convenience of working with GCD is that you do not write any low-level thread code and do not manage thread lifecycles by hand: you simply create the tasks you need and add them to a queue. GCD takes care of creating threads and scheduling the tasks, and thread management is provided directly by the system.

In real projects we often run into this situation: fetching or processing data takes a long time, and as a result the UI stalls. With multithreading we can run the slow work in the background and switch back to the main thread to update the UI. This keeps the user experience smooth and pleasant, and it also keeps the program's design orderly. This article briefly introduces the general usage of GCD and the role and scope of the dispatch_-prefixed functions.

 

For example, you can create four button actions in the UI, one per function below, and use them to observe how the program blocks submitted by each function are executed:

[Development environment: Xcode 7.2, iOS Simulator: iPhone 6. By: ah Left]

 

I. Use of GCD

The simplest way for developers to use GCD is to call one of the dispatch functions to add a series of tasks to a queue for asynchronous execution.

The code is called as follows:

dispatch_async(dispatch_queue_t queue, dispatch_block_t block);
  • async means the operation is asynchronous;
  • queue is the queue we created (or obtained) beforehand;
  • block is the block of code, i.e. the unit of work we want executed;

Of course, we can also submit a synchronous task to the corresponding queue with the dispatch_sync function. This function blocks the calling thread until the submitted task has finished.

However, because of this synchronous behavior, submitting a task with dispatch_sync to a serial queue from a task that is already running on that same queue causes a deadlock in a real project. In addition, a synchronous task can block the main thread and keep events from being responded to.
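As a quick illustration (a deliberately broken sketch, not from the original article, assuming the call happens on the main thread), the following deadlocks immediately: dispatch_sync blocks the main queue while waiting for a block that can only run on that very queue.

// WARNING: do not do this. Called on the main thread, dispatch_sync blocks
// the main queue while waiting for a block that is itself queued on the main
// queue, so neither side can ever make progress.
dispatch_sync(dispatch_get_main_queue(), ^{
    NSLog(@"this line is never reached");
});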

 

It should be noted that calling dispatch_async does not run the block immediately; it only appends the block to the end of the queue. A queue is not a thread; it is a way of organizing blocks. (If you have studied data structures you already know the basic property of a queue: like the line at a canteen, whoever arrives first gets served first, i.e. the principle of "first come, first served".)

In GCD, there are two common public queues available to every developer: the main queue, obtained with dispatch_get_main_queue, and the globally shared concurrent queue, obtained with dispatch_get_global_queue.

There is another, dispatch_get_current_queue, which returns the queue the current task is running on and was mainly used for debugging. However, starting with iOS 6.0 Apple deprecated it because it easily leads to deadlocks; see the official notes for details.

Using these two public queues together solves the common problem of running a task in the background while updating the UI on the main thread.

The structure is as follows:

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // execute the long-running logic/computation on the globally shared concurrent queue
    dispatch_async(dispatch_get_main_queue(), ^{
        // return to the main thread to update the UI
    });
});

Some projects involve asynchronous image downloads. In that case the same structure can be used to divide up the work:

// Asynchronously download an image
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // first run the download task on the globally shared concurrent queue
    NSURL *url = [NSURL URLWithString:@"image URL"];
    NSData *data = [[NSData alloc] initWithContentsOfURL:url];
    UIImage *image = [UIImage imageWithData:data];
    if (data != nil) {
        // return to the main thread to display the image
        dispatch_async(dispatch_get_main_queue(), ^{
            self.imageView.image = image;
        });
    }
});

 

II. Serial queues and parallel queues

1. Serial execution: only one task runs at a time. The thread pool provides only one thread for the queue's work, so each task has to wait until the previous task has finished.

Multiple tasks can be added to the queue, and they are executed in FIFO order. However, when a program needs to run a large number of tasks, even if the system permits it, for the sake of the program's resource allocation the work is better handed over to the global concurrent queue for better system performance.

You can create a serial queue as follows:

dispatch_queue_t serialQueue = dispatch_queue_create("zuoA", NULL);
// The first parameter is the queue name, usually a reverse-DNS string based on the company's domain.
// The second parameter describes the queue's attributes; NULL is generally used (a serial queue).

The following code shows what FIFO order means.

- (IBAction)SerialQueue:(UIButton *)sender {
    dispatch_queue_t serialQueue = dispatch_queue_create("zuoA", NULL);
    dispatch_async(serialQueue, ^{
        sleep(3);
        NSLog(@"A task");
    });
    dispatch_async(serialQueue, ^{
        sleep(2);
        NSLog(@"B task");
    });
    dispatch_async(serialQueue, ^{
        sleep(1);
        NSLog(@"C task");
    });
}

The console displays the following information:

15:04:11.909 dispatch_queue multitasking GCD uses[92316:2538875] A task
15:04:13.910 dispatch_queue multitasking GCD uses[92316:2538875] B task
15:04:14.910 dispatch_queue multitasking GCD uses[92316:2538875] C task

As you can see, even though the later tasks sleep for fewer seconds, each task added later still has to wait until the earlier ones have finished. The queue is strictly "first in, first out": the execution order is simply the order in which the developer added the tasks to the queue.

 

2. Parallel Execution: multiple tasks can be executed at the same time.

  • Load: how many tasks actually run at the same time is decided dynamically by the system, based on the application and the system's state at that moment.
  • Order: because a parallel queue is still a queue (which is stating the obvious T ^ T), each task starts in FIFO order, i.e. the order in which it joined the queue; the order of completion, however, depends on how long each task takes.
  • Difference from serial: although the start order is the same, the tasks run concurrently, so a task does not have to wait for the previous one to finish before it can begin.

Let's look at the code and spot the difference...

- (IBAction)concurrentQueue:(UIButton *)sender {
    dispatch_queue_t concurrentQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_async(concurrentQueue, ^{
        sleep(3);
        NSLog(@"A task");
    });
    dispatch_async(concurrentQueue, ^{
        sleep(2);
        NSLog(@"B task");
    });
    dispatch_async(concurrentQueue, ^{
        sleep(1);
        NSLog(@"C task");
    });
}

The console displays the following information:

15:02:06.911 dispatch_queue multitasking GCD uses[92294:2537296] C task
15:02:07.907 dispatch_queue multitasking GCD uses[92294:2537147] B task
15:02:08.908 dispatch_queue multitasking GCD uses[92294:2537177] A task

 

Compare the timestamps on the left with the serial case: on the parallel queue the three tasks are dispatched together, so while A is still sleeping, B and C are already running. This clearly shortens the total execution time and highlights the parallel character of asynchronous work on a parallel queue;

Looking at the code, the only difference is that the serial case creates a new queue, whereas the parallel case simply calls the globally shared queue that iOS provides via dispatch_get_global_queue:

dispatch_queue_t concurrentQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0); 

For the first parameter, iOS provides four globally shared queues to dispatch onto; the main difference between them is their priority:

  • DISPATCH_QUEUE_PRIORITY_HIGH
  • DISPATCH_QUEUE_PRIORITY_DEFAULT
  • DISPATCH_QUEUE_PRIORITY_LOW
  • DISPATCH_QUEUE_PRIORITY_BACKGROUND

Here we use the default, DISPATCH_QUEUE_PRIORITY_DEFAULT. The second parameter is reserved by Apple and currently has no meaning, so the default value 0 is passed.

The advantage of a parallel queue is that tasks do not have to run one after another as on a serial queue; running them concurrently can significantly increase speed.

 

III. Use of dispatch_group_async

Sometimes we run into the following situation: part of the UI can only be shown after several tasks have all finished, for example three images must be downloaded before the UI can be notified that the work is complete.

For this we can use a dispatch group to organize the concurrent blocks. All of the blocks are submitted asynchronously with dispatch_group_async, so they may be handed to one or more threads for execution; once the group has been observed to finish all of its tasks, dispatch_group_notify() runs a final block, for example to update the UI.

Code:

- (IBAction)groupQueue:(UIButton *)sender {
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();
    dispatch_group_async(group, queue, ^{
        sleep(3);
        NSLog(@"A task");
    });
    dispatch_group_async(group, queue, ^{
        sleep(2);
        NSLog(@"B task");
    });
    // After all tasks in the group have completed, the notify block is called
    dispatch_group_notify(group, queue, ^{
        NSLog(@"");
    });
}

The console displays the following information:

11:18:41.306 dispatch_queue multitasking GCD uses[94865:2718342] B task
11:18:42.302 dispatch_queue multitasking GCD uses[94865:2718341] A task
11:18:42.303 dispatch_queue multitasking GCD uses[94865:2718341]

This confirms that the block added with notify is executed only after all of the tasks in the dispatch group have completed.

Readers may also notice that the whole group finishes in less time than the two tasks' durations added together! That is because the two computations ran at the same time ~

Of course, in real development how much the running time shrinks depends on the amount of work required, the resources available, and how many CPU cores can be used. As multi-core technology matures, this kind of multithreading will be put to ever more effective use.
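To tie this back to the three-image scenario mentioned above, a rough sketch might look like the following (the URLs, the images array, and the final log message are placeholders, not part of the original example):

// Hypothetical sketch: download three images concurrently and refresh the UI
// once, and only once, after all of them have finished.
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_t group = dispatch_group_create();
NSArray *urls = @[@"image URL 1", @"image URL 2", @"image URL 3"];   // placeholder URLs
NSMutableArray *images = [NSMutableArray array];
for (NSString *urlString in urls) {
    dispatch_group_async(group, queue, ^{
        NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:urlString]];
        UIImage *image = (data != nil) ? [UIImage imageWithData:data] : nil;
        if (image != nil) {
            @synchronized (images) {       // protect the shared array across threads
                [images addObject:image];
            }
        }
    });
}
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    // All three downloads have finished; it is now safe to update the UI.
    NSLog(@"downloads finished, %lu images loaded", (unsigned long)images.count);
});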

 

IV. Use of dispatch_barrier_async

dispatch_barrier (a dispatch barrier) submits a barrier block: it runs only after the tasks submitted to the queue before it have finished, and tasks submitted after it can run only once the barrier block itself has finished.

How should the code be written to highlight this "barrier" behavior?

Following the concurrent pattern above, we first type the following into the barrierQueue method:

- (IBAction)barrierQueue:(UIButton *)sender {
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_async(queue, ^{
        sleep(2);
        NSLog(@"A task");
    });
    dispatch_async(queue, ^{
        sleep(1);
        NSLog(@"B task");
    });
    dispatch_barrier_async(queue, ^{
        NSLog(@"barrier task");
    });
    dispatch_async(queue, ^{
        sleep(1);
        NSLog(@"C task");
    });
}

The console displays the following information:

13:18:47.525 dispatch_queue multitasking GCD uses[95191:2752854] barrier task
13:18:48.529 dispatch_queue multitasking GCD uses[95191:2752839] B task
13:18:48.529 dispatch_queue multitasking GCD uses[95191:2752844] C task
13:18:49.528 dispatch_queue multitasking GCD uses[95191:2752840] A task

The tasks execute exactly as they would on an ordinary parallel queue; the barrier plays no "barrier" role at all. This is because the barrier depends on the kind of queue it is submitted to: on a globally shared queue the barrier cannot take effect. Instead, you need to create a queue of your own,

dispatch_queue_t queue = dispatch_queue_create("zuoA", DISPATCH_QUEUE_SERIAL);

 

The complete procedure is as follows:

- (IBAction)barrierQueue:(UIButton *)sender {
    dispatch_queue_t queue = dispatch_queue_create("zuoA", DISPATCH_QUEUE_SERIAL);
    dispatch_async(queue, ^{
        sleep(2);
        NSLog(@"A task");
    });
    dispatch_async(queue, ^{
        sleep(1);
        NSLog(@"B task");
    });
    dispatch_barrier_async(queue, ^{
        NSLog(@"barrier task");
    });
    dispatch_async(queue, ^{
        sleep(1);
        NSLog(@"C task");
    });
}

The console displays the following information:

13:30:14.251 dispatch_queue multitasking GCD uses[95263:2759658] A task
13:30:15.255 dispatch_queue multitasking GCD uses[95263:2759658] B task
13:30:15.255 dispatch_queue multitasking GCD uses[95263:2759658] barrier task
13:30:16.256 dispatch_queue multitasking GCD uses[95263:2759658] C task

This is what we wanted to achieve: the barrier task executes only after the earlier tasks A and B have completed, and only then does the C task run.

 

So why does dispatch_queue_create use DISPATCH_QUEUE_SERIAL, and can something else be used? The answer is yes: replace the parameter with DISPATCH_QUEUE_CONCURRENT.

The following output is displayed:

13:34:23.855 dispatch_queue multitasking GCD uses[95294:2762604] B task
13:34:24.853 dispatch_queue multitasking GCD uses[95294:2762603] A task
13:34:24.853 dispatch_queue multitasking GCD uses[95294:2762603] barrier task
13:34:25.856 dispatch_queue multitasking GCD uses[95294:2762603] C task

In other words, on the concurrent queue tasks A and B still start in queue order but now finish according to how long each takes; it is only the barrier block's "barrier" role that forces the later C task to run after the barrier block has executed;
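A common way to put this barrier behavior to work, sketched here as an extra illustration rather than part of the original example, is a reader-writer pattern on a private concurrent queue: reads run concurrently via dispatch_sync, while writes go through dispatch_barrier_async so they never overlap with reads or with each other.

// Hypothetical sketch of a thread-safe store built on a private concurrent queue.
dispatch_queue_t storeQueue = dispatch_queue_create("zuoA.store", DISPATCH_QUEUE_CONCURRENT);
NSMutableDictionary *store = [NSMutableDictionary dictionary];

// Reads may run concurrently with each other.
__block id value;
dispatch_sync(storeQueue, ^{
    value = store[@"key"];
});

// Writes run as barriers: they wait for in-flight reads and hold back later ones.
dispatch_barrier_async(storeQueue, ^{
    store[@"key"] = @"new value";
});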

 

V. dispatch_suspend (suspend) and dispatch_resume (resume)

  • Suspend: when you need to pause a queue, call dispatch_suspend(queue); the queue stops executing block objects and its suspension count is incremented;
  • Resume: call dispatch_resume(queue); the queue starts executing blocks again and its suspension count is decremented;

Note that suspension takes effect asynchronously and only between blocks: it does not interrupt a block that is already running. Calls to suspend and resume must come in balanced pairs.
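A minimal sketch of the pairing (the queue label and the work block are placeholders):

// Hypothetical sketch: suspend a queue, enqueue work, then resume it later.
dispatch_queue_t workQueue = dispatch_queue_create("zuoA.work", DISPATCH_QUEUE_SERIAL);
dispatch_suspend(workQueue);          // raises the suspension count; new blocks will not start
dispatch_async(workQueue, ^{
    NSLog(@"runs only after the queue is resumed");
});
// ... later, once it is safe to proceed ...
dispatch_resume(workQueue);           // lowers the suspension count back to zero; the block runs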

 

 

There are some other dispatch functions, such as:

dispatch_once: runs a given block only once for the entire lifetime of the application ~

dispatch_apply: executes a code block n times (n is set by the developer).

dispatch_after: useful when an operation should run only after a delay of a few seconds;
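Minimal sketches of the three calls (the iteration count, delay, and log messages are placeholders):

// dispatch_once: the block runs exactly once for the lifetime of the app,
// which is the usual way to build a shared singleton accessor.
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
    NSLog(@"runs only once");
});

// dispatch_apply: run a block n times (here 5) on the given queue;
// the call returns only after all iterations have finished.
dispatch_apply(5, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(size_t i) {
    NSLog(@"iteration %zu", i);
});

// dispatch_after: run a block after a delay (here 2 seconds) on the main queue.
dispatch_time_t when = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2 * NSEC_PER_SEC));
dispatch_after(when, dispatch_get_main_queue(), ^{
    NSLog(@"runs about two seconds later");
});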

 

Note:

1. In the examples above we never manage the GCD objects' memory by hand, because the system manages it for us. If your deployment target is lower than iOS 6.0 or Mac OS X 10.8, you need to manage GCD objects yourself with dispatch_retain and dispatch_release; ARC will not manage them. When the minimum SDK version is iOS 6.0 or later, GCD objects are covered by ARC, so we no longer need to call dispatch_release and dispatch_retain to adjust reference counts manually.

2. The reason this article keeps saying dispatch "function" rather than "method" is that methods belong to classes and objects, whereas the dispatch_* calls are C functions; a function belongs to the file, not to an object. This article only aims to introduce the use of the dispatch_-prefixed functions. (Sorry the text ran so long.)

 
