The dispatch group mechanism is used to execute tasks based on system resources.


A dispatch group is a feature of GCD that lets you group tasks together. The caller can either wait for the group of tasks to finish executing, or provide a callback function and continue on; the caller is then notified when the group of tasks has completed.

This feature has several uses. The most important and obvious one is to merge multiple tasks that are to be executed concurrently into a single group, so that the caller can know when all of those tasks have finished executing.

For example, a set of file-compression tasks could be represented as a dispatch group.

The following function creates a dispatch group:

dispatch_group_t dispatch_group_create();

A dispatch group is a simple data structure, and one group looks no different from another; unlike a dispatch queue, a group carries no identifier to distinguish it. There are two ways to associate tasks with a group. The first is to use the following function:

void dispatch_group_async(dispatch_group_t group,
                          dispatch_queue_t queue,
                          dispatch_block_t block);

This is a variant of the normal dispatch_async function, with an extra parameter indicating the group to which the block belongs. The other way to specify the dispatch group a task belongs to is to use the following pair of functions:

void dispatch_group_enter(dispatch_group_t group);
void dispatch_group_leave(dispatch_group_t group);

The former increments the number of tasks the group expects to execute; the latter decrements it. Therefore, every call to dispatch_group_enter must be balanced by a corresponding call to dispatch_group_leave. This is similar to reference counting, where every retain must be matched by a release to avoid a memory leak. With a dispatch group, if enter is called without a matching leave afterward, the group will never finish.
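As a minimal sketch of the enter/leave pairing (the variable names here are illustrative, not from the original), a task submitted with plain dispatch_async can be tracked by a group like this:

```objc
dispatch_group_t group = dispatch_group_create();
dispatch_queue_t queue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

dispatch_group_enter(group); // Tell the group one more task is pending
dispatch_async(queue, ^{
    [object performTask];
    dispatch_group_leave(group); // Balance the earlier enter
});
```

This is how tasks that cannot go through dispatch_group_async, such as completion handlers of asynchronous APIs, can still participate in a group.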

The following function can be used to wait for a dispatch group to finish executing:

long dispatch_group_wait(dispatch_group_t group, dispatch_time_t timeout);

This function takes two parameters: the group to wait for, and a timeout value indicating how long the function should block while waiting for the dispatch group to finish. If the group finishes before the timeout elapses, zero is returned; otherwise, a nonzero value is returned. The timeout can also be the constant DISPATCH_TIME_FOREVER, which means the function will wait indefinitely for the dispatch group to finish and never time out.
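For instance, a caller could wait up to one second and check the return value; this is a minimal sketch (the one-second timeout and the variable names are illustrative):

```objc
dispatch_time_t timeout =
    dispatch_time(DISPATCH_TIME_NOW, (int64_t)(1 * NSEC_PER_SEC));
long result = dispatch_group_wait(group, timeout);
if (result == 0) {
    // All tasks in the group finished within one second
} else {
    // Timed out: some tasks in the group are still running
}
```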

Instead of waiting with the function above, you can also use the following function:

void dispatch_group_notify(dispatch_group_t group,
                           dispatch_queue_t queue,
                           dispatch_block_t block);

Unlike the wait function, this function lets the developer pass in a block that will be executed on the given queue once the dispatch group has finished. This approach is useful when the current thread should not be blocked but the developer still needs to know when all the tasks have completed.

To execute a task for each object in a collection and wait for all the tasks to finish, you can use this GCD feature. The code is as follows:

dispatch_queue_t queue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_t dispatchGroup = dispatch_group_create();
for (id object in collection) {
    dispatch_group_async(dispatchGroup, queue, ^{
        [object performTask];
    });
}
dispatch_group_wait(dispatchGroup, DISPATCH_TIME_FOREVER);

If the current thread should not be blocked, use the notify function to replace wait:

dispatch_queue_t notifyQueue = dispatch_get_main_queue();
dispatch_group_notify(dispatchGroup, notifyQueue, ^{
    // Continue processing after completing tasks
});

The queue used for the notify callback should be chosen based on the situation; the sample code uses the main queue, which is a common scenario. A custom serial queue or a global concurrent queue would also work.

In this example, all the tasks are dispatched to the same queue, but this is not required. You might want to run some of the tasks at a higher priority while still placing them all in the same dispatch group and receiving a notification when every task has finished:

dispatch_queue_t lowPriorityQueue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);
dispatch_queue_t highPriorityQueue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
dispatch_group_t dispatchGroup = dispatch_group_create();
for (id object in lowPriorityObjects) {
    dispatch_group_async(dispatchGroup, lowPriorityQueue, ^{
        [object performTask];
    });
}
for (id object in highPriorityObjects) {
    dispatch_group_async(dispatchGroup, highPriorityQueue, ^{
        [object performTask];
    });
}
dispatch_queue_t notifyQueue = dispatch_get_main_queue();
dispatch_group_notify(dispatchGroup, notifyQueue, ^{
    // Continue processing after completing tasks
});

Instead of submitting tasks to a concurrent queue as above, you can also submit them to individual serial queues and use a dispatch group to track their execution. However, if all the tasks were on the same serial queue, the dispatch group would be redundant: the tasks would execute one after another anyway, so you could simply submit one more block after all the task blocks have been submitted. That final block is equivalent to the notify callback that runs after the dispatch group finishes:

dispatch_queue_t queue =
    dispatch_queue_create("com.effectiveobjectivec.queue", NULL);
for (id object in collection) {
    dispatch_async(queue, ^{
        [object performTask];
    });
}
dispatch_async(queue, ^{
    // Continue processing after completing tasks
});

The code above shows that you do not always need a dispatch group; sometimes a single queue with standard asynchronous dispatch achieves the same effect.

Why does the title mention "executing tasks based on system resources"? Look back at the example that dispatches tasks to a concurrent queue. To execute the blocks on that queue, GCD automatically creates new threads or reuses existing ones as it deems appropriate. With a concurrent queue, there may be multiple such threads, which means multiple blocks can execute concurrently. The number of threads GCD uses for a concurrent queue depends on various factors, which GCD weighs mainly according to system resource conditions. If, for example, the CPU has multiple cores and the queue has a large number of tasks waiting to execute, GCD may assign several threads to that queue.

Dispatch groups therefore provide a simple way to execute a set of specified tasks concurrently and be notified when all of them have finished. Because of GCD's concurrent-queue mechanism, the tasks are executed concurrently according to the system resources that are available. Developers can focus on their business logic without writing a complicated scheduler to handle concurrent tasks.

In the earlier sample code, we iterated over a collection and executed a task for each of its elements. The same thing can be achieved with another GCD function:

void dispatch_apply(size_t iterations, dispatch_queue_t queue, void (^block)(size_t));

This function executes the block the given number of times, with the parameter passed to the block incrementing from 0 up to iterations - 1. It is used as follows:

dispatch_queue_t queue =
    dispatch_queue_create("com.effectiveobjectivec.queue", NULL);
dispatch_apply(10, queue, ^(size_t i) {
    // Perform task
});

The same effect can be achieved with a simple for loop that increases from 0 to 9:

for (int i = 0; i < 10; i++) {
    // Perform task
}

One thing to note: the queue passed to dispatch_apply can be a concurrent queue. If it is, the system can execute the blocks in parallel according to resource availability, just as in the sample code that used a dispatch group. If the collection being processed in a for loop is an array, the loop can be rewritten with dispatch_apply like this:

dispatch_queue_t queue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_apply(array.count, queue, ^(size_t i) {
    id object = array[i];
    [object performTask];
});

This example again shows that a dispatch group is not always necessary. However, dispatch_apply blocks until all the iterations have finished. Consequently, if the blocks are dispatched to the current queue (or to a serial queue above the current queue in the queue hierarchy), a deadlock results. If the tasks should execute in the background, a dispatch group should be used instead.
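As an illustration of that pitfall (the queue label here is made up for this sketch), calling dispatch_apply from a block already running on the same serial queue can never complete:

```objc
// Hypothetical sketch of the deadlock: do NOT do this.
dispatch_queue_t serialQueue =
    dispatch_queue_create("com.example.serial", NULL); // illustrative label
dispatch_async(serialQueue, ^{
    // dispatch_apply blocks here until its iterations finish, but the
    // iterations are queued on serialQueue, which is busy running this
    // very block -- so they can never start. Deadlock.
    dispatch_apply(10, serialQueue, ^(size_t i) {
        // Perform task
    });
});
```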

Key points:

  • A series of tasks can be grouped into a dispatch group, and the developer can be notified when that group of tasks has finished executing.
  • With a dispatch group, tasks can be executed concurrently on a concurrent dispatch queue. GCD then schedules these concurrent tasks according to system resource availability; implementing this by hand would require a great deal of code.

This article is excerpted from Item 44 of "Effective Objective-C 2.0: 52 Specific Ways to Improve Your iOS and OS X Programs".

