Common API Usage Guidelines for iOS GCD

Source: Internet
Author: User
Tags: gcd, readfile, semaphore

iOS GCD User Guide

Grand Central Dispatch (GCD) is one of the techniques available for performing tasks asynchronously. The thread-management code that would otherwise live in the application is implemented at the system level. Developers only need to define the tasks they want to perform and append them to an appropriate dispatch queue; GCD generates the necessary threads and schedules the tasks. Because thread management is part of the system, it can be handled uniformly and executes tasks more efficiently than manually managed threads.

Dispatch Queue

The dispatch queue is one of the most basic elements in GCD; it is what tasks are performed on.

Dispatch queues are divided into two types:

    • Serial dispatch queue: tasks execute one after another, in the order they were added (FIFO)
    • Concurrent dispatch queue: tasks in the queue execute concurrently

In short, a serial dispatch queue uses only one thread, while a concurrent dispatch queue uses multiple threads (how many is determined by the system). There are two ways to get a dispatch queue. The first is to create one yourself:

let myQueue: dispatch_queue_t = dispatch_queue_create("com.xxx", nil)

The first parameter is the name of the queue, which by convention is a reverse-DNS name. Although the queue can be left unnamed, naming it makes debugging much easier when problems occur. When the second parameter is nil (or DISPATCH_QUEUE_SERIAL), a serial dispatch queue is returned, as in the example above; when it is DISPATCH_QUEUE_CONCURRENT, a concurrent dispatch queue is returned.
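To make the contrast concrete, here is a minimal sketch of creating a concurrent queue and dispatching two blocks onto it (the label com.xxx.concurrent and the task names are just illustrative):

```swift
import Foundation

// Create a concurrent queue; blocks dispatched to it may run in parallel.
let concurrentQueue = dispatch_queue_create("com.xxx.concurrent", DISPATCH_QUEUE_CONCURRENT)

dispatch_async(concurrentQueue) { () -> Void in
    println("task A")   // A and B may print in either order
}
dispatch_async(concurrentQueue) { () -> Void in
    println("task B")
}
```

On a serial queue (second parameter nil) the same two blocks would always run in submission order.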

It is important to note that on OS X 10.8 / iOS 6 and later, dispatch queues are managed automatically by ARC; on earlier versions you need to release them manually, as follows:

let myQueue: dispatch_queue_t = dispatch_queue_create("com.xxx", nil)

dispatch_async(myQueue, { () -> Void in
    println("in Block")
})

dispatch_release(myQueue)

The above obtains a dispatch queue by creating it manually. The second way is to get one of the system-provided dispatch queues directly.

There are only two kinds of system-provided dispatch queues:

    • Main dispatch queue
    • Global dispatch queue (a concurrent dispatch queue)

In general, we only need the main dispatch queue when updating the UI; in other cases the global dispatch queue satisfies the requirements:

Get the main dispatch queue:

let mainQueue = dispatch_get_main_queue()

Get the global dispatch queue:

let globalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)

The global dispatch queue is in fact a concurrent dispatch queue, and the main dispatch queue is a serial dispatch queue (of which there is only one). When acquiring the global dispatch queue you can specify a priority, choosing whichever fits your actual situation. In general, this second way of getting a dispatch queue is sufficient.

dispatch_after

dispatch_after lets us add a task to a queue with deferred execution, for example to let a block execute in 10 seconds:

var time = dispatch_time(DISPATCH_TIME_NOW, Int64(10 * NSEC_PER_SEC))

dispatch_after(time, globalQueue) { () -> Void in
    println("Execute in 10 seconds")
}

NSEC_PER_SEC is the number of nanoseconds in one second; NSEC_PER_MSEC, the number of nanoseconds in one millisecond, is also provided.

The true meaning of the code above is that the task is added to the queue after 10 seconds, not that it finishes executing at exactly the 10-second mark. Most of the time this meets our expectations; it only becomes a problem when very precise timing is required.
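For sub-second delays the same pattern works with NSEC_PER_MSEC. A sketch of a 500 ms delay, assuming the globalQueue obtained earlier (the message text is illustrative):

```swift
import Foundation

let globalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)

// 500 milliseconds expressed in nanoseconds.
let halfSecond = dispatch_time(DISPATCH_TIME_NOW, Int64(500 * NSEC_PER_MSEC))

dispatch_after(halfSecond, globalQueue) { () -> Void in
    println("Execute after 500 ms")
}
```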

A dispatch_time_t value can be obtained in two ways: the first through the dispatch_time function, the other through the dispatch_walltime function, which requires a timespec structure to produce a dispatch_time_t. Usually dispatch_time is used to compute a relative time, while dispatch_walltime is used to compute an absolute time. Here is a Swift method I wrote that turns an NSDate into a dispatch_time_t:

func getDispatchTimeByDate(date: NSDate) -> dispatch_time_t {
    let interval = date.timeIntervalSince1970
    var second = 0.0
    let subsecond = modf(interval, &second)
    var time = timespec(tv_sec: __darwin_time_t(second), tv_nsec: Int(subsecond * Double(NSEC_PER_SEC)))
    return dispatch_walltime(&time, 0)
}

This method receives an NSDate object, converts it into the timespec structure required by dispatch_walltime, and returns a dispatch_time_t. Taking the 10-second example again, the calling code only needs to change to:

var time = getDispatchTimeByDate(NSDate(timeIntervalSinceNow: 10))

dispatch_after(time, globalQueue) { () -> Void in
    println("Execute in 10 seconds")
}

This is an example of using dispatch_after with an absolute time.

dispatch_group

We may often face a situation where we have three blocks to execute, we don't care about the order in which they run, and we just want to perform an operation after all three have finished. This is when dispatch_group comes in:

let globalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)

let group = dispatch_group_create()

dispatch_group_async(group, globalQueue) { () -> Void in
    println("1")
}

dispatch_group_async(group, globalQueue) { () -> Void in
    println("2")
}

dispatch_group_async(group, globalQueue) { () -> Void in
    println("3")
}

dispatch_group_notify(group, globalQueue) { () -> Void in
    println("Completed")
}

The order of the numeric output is independent of the order in which the blocks were added, because the queue is a concurrent dispatch queue; but "Completed" is always printed last, for example:

3 1 2 Completed
In addition to using the dispatch_group_notify function to get the final notification, you can also wait synchronously. First build the group as before:

let globalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)

let group = dispatch_group_create()

dispatch_group_async(group, globalQueue) { () -> Void in
    println("1")
}

dispatch_group_async(group, globalQueue) { () -> Void in
    println("2")
}

dispatch_group_async(group, globalQueue) { () -> Void in
    println("3")
}

Then use the dispatch_group_wait function:

dispatch_group_wait(group, DISPATCH_TIME_FOREVER)

println("Completed")

It is important to note that dispatch_group_wait actually puts the current thread into a waiting state: if dispatch_group_wait is executed on the main thread, the main thread is blocked until the blocks above have finished. The second parameter of dispatch_group_wait is a timeout; if it is DISPATCH_TIME_FOREVER (as in the example above), it waits until all the blocks in the group have executed. You can also specify a concrete wait time and use the return value of dispatch_group_wait to determine whether the blocks finished or the wait timed out. Finally, as with dispatch queues, on OS X 10.8 / iOS 6 and later the dispatch group is managed automatically by ARC; on earlier versions it needs to be released manually.
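A minimal sketch of the timeout variant, assuming the group built above; dispatch_group_wait returns 0 when every block finished within the timeout and a non-zero value otherwise:

```swift
import Foundation

// Wait at most 1 second for the group to drain.
let timeout = dispatch_time(DISPATCH_TIME_NOW, Int64(1 * NSEC_PER_SEC))

if dispatch_group_wait(group, timeout) == 0 {
    println("All blocks finished within 1 second")
} else {
    println("Timed out; some blocks are still running")
}
```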

dispatch_barrier_async

dispatch_barrier_async, as its name suggests, adds a "fence" to the tasks in a queue: blocks that were already executing before the fence was added continue to run; once the barrier block starts executing, all other blocks wait; and only after the barrier block has finished do the later blocks execute. Let's write a simple example with a part that reads a file and a part that writes a file:

func writeFile() {
    NSUserDefaults.standardUserDefaults().setInteger(7, forKey: "integer_key")
}

func readFile() {
    print(NSUserDefaults.standardUserDefaults().integerForKey("integer_key"))
}

"Writing the file" just stores the number 7 in NSUserDefaults, and "reading" just prints that number. We want to ensure that no thread is reading while the write is in progress, using the dispatch_barrier_async function. Note that barriers only take effect on a concurrent queue you created yourself with DISPATCH_QUEUE_CONCURRENT; on a global queue, dispatch_barrier_async behaves like a plain dispatch_async:

NSUserDefaults.standardUserDefaults().setInteger(9, forKey: "integer_key")

let concurrentQueue = dispatch_queue_create("com.xxx", DISPATCH_QUEUE_CONCURRENT)

dispatch_async(concurrentQueue) { self.readFile() }
dispatch_async(concurrentQueue) { self.readFile() }
dispatch_async(concurrentQueue) { self.readFile() }
dispatch_async(concurrentQueue) { self.readFile() }

dispatch_barrier_async(concurrentQueue) { self.writeFile(); self.readFile() }

dispatch_async(concurrentQueue) { self.readFile() }
dispatch_async(concurrentQueue) { self.readFile() }
dispatch_async(concurrentQueue) { self.readFile() }

We first store a 9 under integer_key in NSUserDefaults, then execute the dispatch_barrier_async function in the middle. Because this is a concurrent dispatch queue, how many blocks run concurrently is determined by the system. If, when dispatch_barrier_async is added, the other blocks (including the four above) have not yet started executing, the barrier task runs first and all other blocks wait. If some blocks are already executing when dispatch_barrier_async is added, the barrier waits until those blocks have finished.

dispatch_apply

dispatch_apply executes a given block a specified number of times. This function is useful when you want to perform the same block for every element of an array. Usage is simple: specify the number of executions and the dispatch queue, and the block is called back with an index, from which you can determine which element is currently being processed:

let globalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)

dispatch_apply(10, globalQueue) { (index) -> Void in
    print(index)
}

print("Completed")

Because this is a concurrent dispatch queue, there is no guarantee which index is processed first, but "Completed" is always printed last, because dispatch_apply is synchronous: the calling thread waits while it executes. For that reason we should generally call dispatch_apply from an asynchronous block:

let globalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)

dispatch_async(globalQueue, { () -> Void in
    dispatch_apply(10, globalQueue) { (index) -> Void in
        print(index)
    }
    print("Completed")
})

print("Before dispatch_apply")
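Since the typical use case is iterating an array, here is a minimal sketch, assuming an illustrative names array; the index passed to the block selects the element:

```swift
import Foundation

let names = ["Alice", "Bob", "Carol"]   // illustrative data
let globalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)

// One callback per element; iterations may run concurrently,
// so the print order is not guaranteed.
dispatch_apply(UInt(names.count), globalQueue) { (index) -> Void in
    print(names[Int(index)])
}
```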

dispatch_suspend / dispatch_resume

In some cases we may want to pause a dispatch queue for a while and resume it at some later point. For this we can use the dispatch_suspend and dispatch_resume functions:

Suspend (note that only queues you created yourself can be suspended; the main queue and the global queues cannot):

dispatch_suspend(myQueue)

Resume:

dispatch_resume(myQueue)

When a queue is paused, a block that is already executing is unaffected; dispatch_suspend only affects blocks that have not yet begun to execute.

dispatch_semaphore

Semaphores are widely used in multithreaded development: a thread must acquire the semaphore before entering a critical section of code, and must release it once the critical section is done. Other threads that want to enter the critical section must wait for the previous thread to release the semaphore. Concretely: while the semaphore count is greater than 0, each arriving thread decrements the count by 1; once the count reaches 0, further threads wait; a thread that finishes its task signals the semaphore, incrementing the count by 1; and the cycle continues. In the following example, 10 blocks are dispatched, but only one executes at a time while the others wait:

let globalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)

let semaphore = dispatch_semaphore_create(1)

for i in 0...9 {
    dispatch_async(globalQueue, { () -> Void in
        dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER)
        let time = dispatch_time(DISPATCH_TIME_NOW, Int64(2 * NSEC_PER_SEC))
        dispatch_after(time, globalQueue) { () -> Void in
            print("Execute after 2 seconds")
            dispatch_semaphore_signal(semaphore)
        }
    })
}

The block holding the semaphore releases it after 2 seconds, which is equivalent to one execution every 2 seconds. As the example shows, in GCD a semaphore is created with the dispatch_semaphore_create function, whose initial count must be specified. The dispatch_semaphore_wait function acquires the semaphore, decrementing the count by 1 (and waiting while the count is 0), and the dispatch_semaphore_signal function releases it, incrementing the count by 1. In addition, dispatch_semaphore_wait supports timeouts: just pass a timeout as its second parameter and, as with dispatch_group_wait, judge the outcome from the return value. Note again that on OS X 10.8 / iOS 6 and later the dispatch semaphore is managed automatically by ARC; on earlier versions it needs to be released manually.

dispatch_once

The dispatch_once function is typically used for the singleton pattern: it guarantees that a piece of code executes only once during the program's run. If we want to create a singleton class through dispatch_once, it can be done like this in Swift:

class SingletonObject {
    class var sharedInstance: SingletonObject {
        struct Static {
            static var onceToken: dispatch_once_t = 0
            static var instance: SingletonObject? = nil
        }
        dispatch_once(&Static.onceToken) {
            Static.instance = SingletonObject()
        }
        return Static.instance!
    }
}

GCD's thread-safety guarantee ensures that the initialization code executes only once.
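A quick usage sketch: every access to sharedInstance yields the same object, which an identity comparison confirms:

```swift
let a = SingletonObject.sharedInstance
let b = SingletonObject.sharedInstance

// Both variables refer to the one instance created by dispatch_once.
println(a === b)   // true
```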

Portal: CSDN iOS Forum. If you have any questions, I will try to answer them; I also hope we can learn together.


If you want to reprint, please indicate the source, thank you!
