iOS GCD User Guide


Grand Central Dispatch (GCD) is one of the technologies used to execute tasks asynchronously. The thread-management code that would normally be written in the application is instead implemented at the system level. Developers only need to define the tasks they want to execute and append them to an appropriate Dispatch Queue; GCD then creates the necessary threads and schedules the tasks. Because thread management is part of the system, it can be handled in a unified way, which is more efficient than managing threads yourself.


Dispatch Queue

A Dispatch Queue is a queue used to execute tasks and is one of the most basic building blocks of GCD.

There are two types of Dispatch Queue:

Serial Dispatch Queue: executes tasks one at a time, in the order they were added to the queue (first in, first out). Concurrent Dispatch Queue: executes tasks concurrently, using multiple threads; the system determines how many threads are used. In short, a Serial Dispatch Queue uses only one thread, while a Concurrent Dispatch Queue uses several. You can obtain a Dispatch Queue in two ways. The first is to create one yourself:

let myQueue: dispatch_queue_t = dispatch_queue_create("com.xxx", nil)

The first parameter is the queue name; by convention it is a reverse-DNS string. A queue can be created without a name, but a named queue is easier to debug when problems occur. When the second parameter is nil, as in the example above, a Serial Dispatch Queue is returned; passing DISPATCH_QUEUE_CONCURRENT returns a Concurrent Dispatch Queue.

Note that on OS X 10.8, iOS 6, and later, Dispatch Queues are automatically managed by ARC. On earlier versions you need to release them manually, as follows:

let myQueue: dispatch_queue_t = dispatch_queue_create("com.xxx", nil)
dispatch_async(myQueue, { () -> Void in
    println("in Block")
})
dispatch_release(myQueue)

That covers creating a Dispatch Queue manually. The second way is to obtain one of the queues provided by the system.

The system provides two queues you can obtain directly: the Main Dispatch Queue and the Global Dispatch Queue (a Concurrent Dispatch Queue). Generally, the Main Dispatch Queue is only needed when the UI must be updated; in all other cases the Global Dispatch Queue meets the requirements:

// Obtain the Main Dispatch Queue
let mainQueue = dispatch_get_main_queue()

// Obtain the Global Dispatch Queue
let globalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)

The Global Dispatch Queue is actually a Concurrent Dispatch Queue, and the Main Dispatch Queue is actually a Serial Dispatch Queue (and there is only one of it). You can specify a priority when obtaining the Global Dispatch Queue (DISPATCH_QUEUE_PRIORITY_HIGH, DISPATCH_QUEUE_PRIORITY_DEFAULT, DISPATCH_QUEUE_PRIORITY_LOW, or DISPATCH_QUEUE_PRIORITY_BACKGROUND), depending on your actual needs. Generally, this second way of obtaining a Dispatch Queue is sufficient.
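For reference, Swift 3 replaced these C functions with the DispatchQueue class, and the old priority constants became quality-of-service classes. A minimal sketch of the modern equivalents (the queue labels here are placeholders):

```swift
import Dispatch

// Creating queues yourself: serial is the default; pass .concurrent for a concurrent queue.
let serialQueue = DispatchQueue(label: "com.example.serial")
let concurrentQueue = DispatchQueue(label: "com.example.concurrent", attributes: .concurrent)

// Obtaining the system-provided queues.
let mainQueue = DispatchQueue.main
let globalQueue = DispatchQueue.global(qos: .default) // qos replaces DISPATCH_QUEUE_PRIORITY_*
```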
dispatch_after

dispatch_after lets us add a task to a queue for delayed execution. For example, suppose we want a Block to be executed after 10 seconds:

let time = dispatch_time(DISPATCH_TIME_NOW, Int64(10 * NSEC_PER_SEC))
dispatch_after(time, globalQueue) { () -> Void in
    println("executed in 10 seconds")
}

NSEC_PER_SEC is the number of nanoseconds in one second; NSEC_PER_MSEC (nanoseconds per millisecond) is also provided for millisecond-based delays.

Strictly speaking, the dispatch_after call above adds the task to the queue after 10 seconds; it does not guarantee that the task starts executing exactly 10 seconds later. In most cases this meets our expectations, and the difference only matters when very precise timing is required.

You can obtain a dispatch_time_t value in two ways: the dispatch_time function and the dispatch_walltime function, where dispatch_walltime takes a timespec struct. dispatch_time is usually used for relative time, while dispatch_walltime is used for absolute time. Here is a Swift method that converts an NSDate into a dispatch_time_t:

func getDispatchTimeByDate(date: NSDate) -> dispatch_time_t {
    let interval = date.timeIntervalSince1970
    var second = 0.0
    let subsecond = modf(interval, &second)
    var time = timespec(tv_sec: __darwin_time_t(second), tv_nsec: Int(subsecond * Double(NSEC_PER_SEC)))
    return dispatch_walltime(&time, 0)
}

This method receives an NSDate object, converts it into the timespec struct required by dispatch_walltime, and returns a dispatch_time_t. To run the same 10-second example using absolute time, the earlier code becomes:

let time = getDispatchTimeByDate(NSDate(timeIntervalSinceNow: 10))
dispatch_after(time, globalQueue) { () -> Void in
    println("executed in 10 seconds")
}

This is an example of using dispatch_after with absolute time.


dispatch_group

We often face the following situation: three Blocks to execute, whose order we do not care about, and one operation that must run only after all three have finished. This is where dispatch_group comes in:

let globalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
let group = dispatch_group_create()

dispatch_group_async(group, globalQueue) { () -> Void in
    println("1")
}
dispatch_group_async(group, globalQueue) { () -> Void in
    println("2")
}
dispatch_group_async(group, globalQueue) { () -> Void in
    println("3")
}

dispatch_group_notify(group, globalQueue) { () -> Void in
    println("completed")
}

Because the queue is a Concurrent Dispatch Queue, the order in which 1, 2, and 3 are printed is unrelated to the order in which they were added, but "completed" is always printed last; one possible output is 3, 1, 2, completed.

In addition to the dispatch_group_notify function, you can use the dispatch_group_wait function:

let globalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
let group = dispatch_group_create()

dispatch_group_async(group, globalQueue) { () -> Void in
    println("1")
}
dispatch_group_async(group, globalQueue) { () -> Void in
    println("2")
}
dispatch_group_async(group, globalQueue) { () -> Void in
    println("3")
}

// Use the dispatch_group_wait function
dispatch_group_wait(group, DISPATCH_TIME_FOREVER)
println("completed")

Note that dispatch_group_wait actually puts the current thread into a waiting state. That is, if the main thread calls dispatch_group_wait, the main thread is blocked until the preceding Blocks have finished. The second parameter of dispatch_group_wait specifies a timeout: DISPATCH_TIME_FOREVER (as in the example above) waits indefinitely until all the Blocks have executed, but you can also specify a concrete wait time and use the return value of dispatch_group_wait to determine whether the Blocks finished or the wait timed out. Finally, as with Dispatch Queue creation, on OS X 10.8, iOS 6, and later a Dispatch Group is automatically managed by ARC; on earlier versions you need to release it manually.
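In the modern (Swift 3+) API the same timeout check is expressed through DispatchGroup.wait(timeout:), which returns an enum instead of an integer. A minimal sketch, with a deliberately short simulated task:

```swift
import Foundation
import Dispatch

let queue = DispatchQueue.global(qos: .default)
let group = DispatchGroup()

queue.async(group: group) {
    Thread.sleep(forTimeInterval: 0.1) // simulate a short unit of work
}

// Wait at most 5 seconds: .success means every Block finished in time,
// .timedOut means the deadline passed first.
let outcome = group.wait(timeout: .now() + 5)
print(outcome == .success) // prints true
```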
dispatch_barrier_async

dispatch_barrier_async does what its name suggests: it adds a "fence" (barrier) to the tasks in a queue. Blocks that were already running when the barrier was added continue to run; when the barrier Block starts executing, every other Block waits; once the barrier Block finishes, the Blocks after it are executed. Let's write an example with a reading part and a writing part:

func writeFile() {
    NSUserDefaults.standardUserDefaults().setInteger(7, forKey: "Integer_Key")
}

func readFile() {
    print(NSUserDefaults.standardUserDefaults().integerForKey("Integer_Key"))
}

writeFile stores the number 7 in NSUserDefaults; readFile prints that number. To prevent a read from happening while the write is in progress, use the dispatch_barrier_async function:

NSUserDefaults.standardUserDefaults().setInteger(9, forKey: "Integer_Key")
let globalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
dispatch_async(globalQueue) { self.readFile() }
dispatch_async(globalQueue) { self.readFile() }
dispatch_async(globalQueue) { self.readFile() }
dispatch_async(globalQueue) { self.readFile() }
dispatch_barrier_async(globalQueue) { self.writeFile(); self.readFile() }
dispatch_async(globalQueue) { self.readFile() }
dispatch_async(globalQueue) { self.readFile() }
dispatch_async(globalQueue) { self.readFile() }

We first store 9 under Integer_Key in NSUserDefaults, then execute dispatch_barrier_async in the middle. Because the queue is a Concurrent Dispatch Queue (with the number of concurrent threads determined by the system), any Blocks not yet started when the barrier is added wait until the barrier Block has finished, and the barrier Block itself waits for any Blocks that were already running. Note, however, that per Apple's documentation a barrier only takes effect on a concurrent queue you created yourself with DISPATCH_QUEUE_CONCURRENT; on a global queue, dispatch_barrier_async behaves like a plain dispatch_async, so in a real program the example above should use a custom concurrent queue.
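Because barriers require a concurrent queue of your own, here is a hedged sketch of the same read/write idea in the modern (Swift 3+) API, using a custom concurrent queue (the label and the counter variable are placeholders):

```swift
import Dispatch

// Barriers only take effect on a concurrent queue you created yourself.
let rwQueue = DispatchQueue(label: "com.example.readwrite", attributes: .concurrent)
var value = 0

// Reads may run concurrently with each other.
for _ in 0..<4 {
    rwQueue.async { _ = value }
}

// The barrier write waits for in-flight reads, runs alone,
// and only then lets the Blocks submitted after it proceed.
rwQueue.async(flags: .barrier) { value = 7 }

// A synchronous read submitted after the barrier sees the new value.
let result = rwQueue.sync { value }
print(result) // prints 7
```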


dispatch_apply

dispatch_apply executes a specified Block a specified number of times. It is very useful when you want to run the same Block for every element of an array. Usage is simple: specify the number of iterations and a Dispatch Queue; the Block receives an index, from which you can determine which element is being processed:

let globalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
dispatch_apply(10, globalQueue) { (index) -> Void in
    print(index)
}
print("completed")

Because the queue is a Concurrent Dispatch Queue, there is no guarantee which index is processed first, but "completed" is always printed last: dispatch_apply is synchronous, so the calling thread waits until all iterations have finished. For that reason, dispatch_apply should generally be called from an asynchronous thread:

let globalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
dispatch_async(globalQueue, { () -> Void in
    dispatch_apply(10, globalQueue) { (index) -> Void in
        print(index)
    }
    print("completed")
})
print("before dispatch_apply")


dispatch_suspend/dispatch_resume

In some cases we may want to pause a Dispatch Queue temporarily and resume processing at a later point. The dispatch_suspend and dispatch_resume functions do exactly that:

// Pause
dispatch_suspend(myQueue)

// Resume
dispatch_resume(myQueue)

If a Block is already executing when the queue is paused, that Block is not affected; dispatch_suspend only affects Blocks that have not yet started. Note also that suspension should be applied to queues you created yourself; the main queue and the global concurrent queues should not be suspended.
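A hedged sketch of suspend/resume with the modern (Swift 3+) API, applied to a custom queue (the label is a placeholder); a semaphore is used only to observe that the Block ran:

```swift
import Dispatch

let workQueue = DispatchQueue(label: "com.example.work")
let done = DispatchSemaphore(value: 0)
var executed = false

workQueue.suspend() // Blocks submitted from now on will not start

workQueue.async {
    executed = true
    done.signal()
}

// The Block above stays queued until the queue is resumed.
workQueue.resume()
done.wait()
print(executed) // prints true
```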
Dispatch Semaphore

Semaphores are widely used in multithreaded development. Before a thread enters a critical section, it must acquire a semaphore; once it leaves the critical section, it must release the semaphore, and other threads that want to enter the critical section must wait until it does. Concretely: while the semaphore count is greater than 0, each arriving thread decrements the count by 1; once the count reaches 0, further threads wait instead of entering; when a thread finishes its task, it releases the semaphore, incrementing the count by 1, and the cycle continues. The following example uses 10 threads, but only one executes at a time while the others wait:

let globalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
let semaphore = dispatch_semaphore_create(1)
for i in 0...9 {
    dispatch_async(globalQueue, { () -> Void in
        dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER)
        let time = dispatch_time(DISPATCH_TIME_NOW, Int64(2 * NSEC_PER_SEC))
        dispatch_after(time, globalQueue) { () -> Void in
            print("executed in 2 seconds")
            dispatch_semaphore_signal(semaphore)
        }
    })
}

Each thread that obtains the semaphore releases it after 2 seconds, so effectively one Block executes every 2 seconds. As the example shows, in GCD you initialize a semaphore with dispatch_semaphore_create, specifying its initial count. dispatch_semaphore_wait acquires the semaphore and decrements the count by 1, waiting if the count is 0; dispatch_semaphore_signal releases the semaphore and increments the count by 1. dispatch_semaphore_wait also supports a timeout: pass it as the second parameter and, much like dispatch_group_wait, check the return value to see whether the wait succeeded or timed out. Finally, on OS X 10.8, iOS 6, and later a Dispatch Semaphore is automatically managed by ARC; on earlier versions you need to release it manually.
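In the modern (Swift 3+) API, DispatchSemaphore.wait(timeout:) returns .success or .timedOut rather than an integer. A minimal sketch of the timeout behavior:

```swift
import Dispatch

// Initial count 0: wait() blocks until someone signals.
let semaphore = DispatchSemaphore(value: 0)

// Nothing has signaled, so a 0.2-second wait must time out.
let first = semaphore.wait(timeout: .now() + 0.2)
print(first == .timedOut) // prints true

// After a signal the count is 1, so the next wait succeeds immediately.
semaphore.signal()
let second = semaphore.wait(timeout: .now() + 0.2)
print(second == .success) // prints true
```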


dispatch_once

The dispatch_once function is usually used for the singleton pattern. It guarantees that a piece of code is executed only once during the lifetime of the program. To build a singleton class with dispatch_once in Swift:

class SingletonObject {
    class var sharedInstance: SingletonObject {
        struct Static {
            static var onceToken: dispatch_once_t = 0
            static var instance: SingletonObject? = nil
        }
        dispatch_once(&Static.onceToken) {
            Static.instance = SingletonObject()
        }
        return Static.instance!
    }
}

GCD's thread-safety guarantees ensure that this initialization code runs only once.
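For reference, Swift 3 removed dispatch_once; the documented replacement is a `static let` stored property, which the runtime initializes lazily and exactly once in a thread-safe way. A minimal sketch:

```swift
final class SingletonObject {
    // `static let` is initialized lazily, atomically, and only once,
    // giving the same guarantee dispatch_once used to provide.
    static let sharedInstance = SingletonObject()
    private init() {}
}

// Every access returns the same instance.
print(SingletonObject.sharedInstance === SingletonObject.sharedInstance) // prints true
```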
