GCD User's Manual for iOS developers

Source: Internet
Author: User
Tags: gcd

Grand Central Dispatch, or GCD, is an extremely powerful tool. It gives you low-level components, like queues and semaphores, that you can combine in interesting ways to get useful multithreaded behavior. Unfortunately, the C-based API is a bit cryptic, and it isn't obvious how to combine these low-level components into higher-level patterns. In this article, I want to describe some of the uses you can build out of the low-level components GCD gives you.

Background work

Perhaps the simplest use: GCD lets you do some work on a background thread, then hop back to the main thread to continue, since components like those in UIKit can (mostly) be used only from the main thread.

Throughout this guide, I'll use doSomeExpensiveWork() to represent some long-running task that returns a value.

This pattern can be set up like this:

let defaultPriority = DISPATCH_QUEUE_PRIORITY_DEFAULT
let backgroundQueue = dispatch_get_global_queue(defaultPriority, 0)
dispatch_async(backgroundQueue, {
    let result = doSomeExpensiveWork()
    dispatch_async(dispatch_get_main_queue(), {
        // use `result` somehow
    })
})

In practice, I never use any queue priority other than DISPATCH_QUEUE_PRIORITY_DEFAULT. This returns a global concurrent queue backed by a pool that can run hundreds of threads. You can also create your own queue with dispatch_queue_create if your expensive work should always happen on one specific background queue. dispatch_queue_create takes any name and lets you make the queue either serial or concurrent.

Note that both calls here use dispatch_async, not dispatch_sync. dispatch_async returns before the block is executed, while dispatch_sync waits until the block has finished. The inner call could use dispatch_sync (it doesn't really matter when it returns), but the outer call must be dispatch_async (otherwise the main thread would be blocked).
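To make the difference concrete, here is a minimal sketch using the modern Swift spellings of the same calls (DispatchQueue.sync/async wrap dispatch_sync/dispatch_async; the names and values here are purely illustrative):

```swift
import Dispatch

let queue = DispatchQueue(label: "demo")

// dispatch_sync: the block finishes before the call returns.
var value = 0
queue.sync { value = 1 }
let afterSync = value          // guaranteed to be 1 here

// dispatch_async: the call returns immediately; the block runs later.
var asyncValue = 0
let done = DispatchSemaphore(value: 0)
queue.async {
    asyncValue = 2
    done.signal()
}
done.wait()                    // block the caller until the block has run
let afterAsync = asyncValue    // 2, but only because we waited
```

Without the semaphore wait, there would be no guarantee the async block had run yet, which is exactly why the outer dispatch in the pattern above must be async.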

Creating a singleton

dispatch_once is an API that can be used to create singletons. It's no longer necessary in Swift, because Swift has a simpler way to create singletons, but I'll record it here for posterity (in Objective-C, of course).

+ (instancetype)sharedInstance {
    static dispatch_once_t onceToken;
    static id sharedInstance;
    dispatch_once(&onceToken, ^{
        sharedInstance = [[self alloc] init];
    });
    return sharedInstance;
}
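For comparison, the simpler Swift replacement is a `static let` stored property: Swift guarantees that static stored properties are initialized lazily and at most once, even when first accessed from several threads at the same time (the compiler uses dispatch_once-style machinery under the hood). A minimal sketch, with a hypothetical SessionManager class:

```swift
final class SessionManager {
    // Initialized lazily and at most once, even under concurrent access.
    static let sharedInstance = SessionManager()

    // A private init prevents callers from creating extra instances.
    private init() {}
}
```

Every access to `SessionManager.sharedInstance` returns the same object, with no explicit once-token needed.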

Flattening a completion block

Now GCD starts to get interesting. Using a semaphore, we can make one thread pause indefinitely until another thread signals it. Semaphores, like the rest of GCD, are thread-safe, and they can be triggered from anywhere.

When you need to make an asynchronous API that you can't modify behave synchronously, semaphores solve the problem.

// on a background queue
let semaphore = dispatch_semaphore_create(0)
doSomeExpensiveWorkAsynchronously(completionBlock: {
    dispatch_semaphore_signal(semaphore)
})
dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER)
// the expensive asynchronous work is now done

dispatch_semaphore_wait blocks the thread until dispatch_semaphore_signal is called. This means the signal must come from a different thread, since the current thread is completely blocked. Also, you should never call wait from the main thread, only from background threads.

You can pass any timeout to dispatch_semaphore_wait, but I tend to always use DISPATCH_TIME_FOREVER.
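If you do pass a finite timeout, the wait call reports whether it was signaled in time or timed out. A small sketch using the modern DispatchSemaphore wrapper over the same C API (the 50 ms value is arbitrary):

```swift
import Dispatch

let semaphore = DispatchSemaphore(value: 0)

// Nothing ever signals this semaphore, so a 50 ms wait must time out.
let result = semaphore.wait(timeout: .now() + 0.05)
let timedOut = (result == .timedOut)   // true: no signal arrived in time
```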

It may not be entirely obvious why you'd want to flatten existing completion-block code, but it's really handy. One place I used it recently was to perform a series of asynchronous tasks that must happen one after another. A simple abstraction for that, used in this way, could be called AsyncSerialWorker:

typealias DoneBlock = () -> ()
typealias WorkBlock = (DoneBlock) -> ()

class AsyncSerialWorker {
    private let serialQueue = dispatch_queue_create("com.khanlou.serial.queue", DISPATCH_QUEUE_SERIAL)

    func enqueueWork(work: WorkBlock) {
        dispatch_async(serialQueue) {
            let semaphore = dispatch_semaphore_create(0)
            work({
                dispatch_semaphore_signal(semaphore)
            })
            dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER)
        }
    }
}

This small class creates a serial queue and lets you enqueue work onto it. When your work is finished, the WorkBlock calls its DoneBlock, which trips the semaphore and allows the serial queue to continue.
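Restated with the modern DispatchQueue wrappers, and with a usage sketch showing that asynchronously-finishing tasks still complete one at a time and in order (the class name is the article's; everything else here is illustrative):

```swift
import Dispatch

final class AsyncSerialWorker {
    private let serialQueue = DispatchQueue(label: "com.khanlou.serial.queue")

    func enqueueWork(_ work: @escaping (@escaping () -> Void) -> Void) {
        serialQueue.async {
            let semaphore = DispatchSemaphore(value: 0)
            // Hand the work a "done" callback, then park this serial
            // queue until the asynchronous work calls it.
            work { semaphore.signal() }
            semaphore.wait()
        }
    }
}

// Usage: each task finishes on another queue, yet runs strictly in order.
let worker = AsyncSerialWorker()
var order: [Int] = []
let allDone = DispatchSemaphore(value: 0)
for i in 1...3 {
    worker.enqueueWork { done in
        DispatchQueue.global().asyncAfter(deadline: .now() + 0.01) {
            order.append(i)   // safe: the worker runs one task at a time
            done()
            if i == 3 { allDone.signal() }
        }
    }
}
allDone.wait()
// order == [1, 2, 3]
```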

Limiting the number of concurrent blocks

In the previous example, the semaphore acted as a simple flag, but it can also act as a counter for a finite resource. If you want to open only a specific number of connections to a particular resource, you can use something like this:

class LimitedWorker {
    private let concurrentQueue = dispatch_queue_create("com.khanlou.concurrent.queue", DISPATCH_QUEUE_CONCURRENT)
    private let semaphore: dispatch_semaphore_t

    init(limit: Int) {
        semaphore = dispatch_semaphore_create(limit)
    }

    func enqueueWork(work: () -> ()) {
        dispatch_async(concurrentQueue) {
            dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER)
            work()
            dispatch_semaphore_signal(semaphore)
        }
    }
}

This example comes from Apple's Concurrency Programming Guide, which explains what's going on better than I can:

When you create the semaphore, you specify the number of available resources. This value becomes the initial count variable for the semaphore. Each time you wait on the semaphore, the dispatch_semaphore_wait function decrements that count variable by 1. If the resulting value is negative, the function tells the kernel to block your thread. On the other end, the dispatch_semaphore_signal function increments the count variable by 1 to indicate that a resource has been freed up. If there are tasks blocked and waiting for a resource, one of them is subsequently unblocked and allowed to do its work.

The effect is similar to maxConcurrentOperationCount on NSOperationQueue. If you're using raw GCD queues instead of NSOperationQueue, you can use a semaphore to limit the number of blocks that execute simultaneously.

One notable caveat: each call to enqueueWork that hits the semaphore limit occupies a thread while it waits. If you have a low limit and a lot of enqueued work, you can end up tying up hundreds of threads. As always, profile first, then change the code.
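In the modern spelling, the same worker looks like this; the names mirror the article's version, and the semaphore again starts at `limit` rather than 0:

```swift
import Dispatch

final class LimitedWorker {
    private let concurrentQueue = DispatchQueue(
        label: "com.khanlou.concurrent.queue", attributes: .concurrent)
    private let semaphore: DispatchSemaphore

    init(limit: Int) {
        semaphore = DispatchSemaphore(value: limit)
    }

    func enqueueWork(_ work: @escaping () -> Void) {
        concurrentQueue.async {
            self.semaphore.wait()   // blocks while `limit` blocks are running
            work()
            self.semaphore.signal()
        }
    }
}
```

Each blocked wait still occupies a pool thread, as noted above, so the same profiling advice applies.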

Waiting for many concurrent tasks to finish

If you have multiple blocks of work to perform and need to be notified when they have all completed, you can use a dispatch group. dispatch_group_async lets you add work to a queue (the work inside the block should be synchronous) while the group keeps count of how many items have been added. Note that work can be added to different queues within the same dispatch group, and the group tracks it all. When all of the tracked work is complete, the block passed to dispatch_group_notify fires, like a completion block.

let group = dispatch_group_create()
for item in someArray {
    dispatch_group_async(group, backgroundQueue, {
        performExpensiveWork(item: item)
    })
}
dispatch_group_notify(group, dispatch_get_main_queue(), {
    // all the work is complete
})

This is a great case for flattening a function that has a completion block. The dispatch group considers the block complete as soon as it returns, so you need the block to wait until the underlying work is done.

There's a more manual way to use dispatch groups, especially if your expensive work is already asynchronous:

// must be on a background thread
let group = dispatch_group_create()
for item in someArray {
    dispatch_group_enter(group)
    performExpensiveAsyncWork(item: item, completionBlock: {
        dispatch_group_leave(group)
    })
}
dispatch_group_wait(group, DISPATCH_TIME_FOREVER)
// all the work is complete

This code is more complex, but reading it line by line can help you understand it. Like semaphores, groups are thread-safe and maintain an internal counter that you can manipulate. You can use that counter to make sure multiple long-running tasks are all complete before executing a completion block. "Entering" increments the counter, and "leaving" decrements it. dispatch_group_async handles all of these details for you, so I prefer to use it whenever possible.

The last piece of this code is the wait call: it blocks the thread and waits until the counter reaches 0 before continuing. Note that you can queue up a dispatch_group_notify block even if you use the enter/leave APIs. The reverse is also true: you can use dispatch_group_wait even if you use the dispatch_group_async API.

dispatch_group_wait, just like dispatch_semaphore_wait, accepts a timeout. Once again, DISPATCH_TIME_FOREVER has always been good enough for me, and I've never felt the need for any other timeout. And, just like dispatch_semaphore_wait, never call dispatch_group_wait on the main thread.

The biggest difference between the two is that notify can be called entirely from the main thread, whereas wait must happen on a background queue (at least the wait part, since it completely blocks the current thread).
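The wait flavor can be restated with the modern wrappers as follows (a serial queue guards the shared array, since the group's blocks run concurrently; the values are purely illustrative):

```swift
import Dispatch

let group = DispatchGroup()
let resultsQueue = DispatchQueue(label: "results")   // serializes appends
var results: [Int] = []

for item in [1, 2, 3] {
    DispatchQueue.global().async(group: group) {
        let value = item * 10                        // stand-in for real work
        resultsQueue.sync { results.append(value) }
    }
}

// Block the current thread until the group's counter reaches zero.
group.wait()
// results now holds 10, 20, and 30, in whatever order the blocks finished.
```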

Isolation queues

Swift's dictionaries (and arrays) are value types. When they are modified, the reference is completely replaced with a new copy of the structure. However, because updating instance variables on Swift objects is not atomic, those objects are not thread-safe. Two threads can update a dictionary at the same time (for example, both adding a value), and both attempt to write to the same block of memory, which can cause memory corruption. We can use an isolation queue to achieve thread safety. Let's build an identity map:

class IdentityMap<T: Identifiable> {
    var dictionary = Dictionary<String, T>()

    func object(withID id: String) -> T? {
        return dictionary[id] as T?
    }

    func addObject(object: T) {
        dictionary[object.id] = object
    }
}

This object is basically a wrapper around a dictionary. If our addObject method is called from multiple threads at the same time, it can corrupt memory, since the threads are working with the same reference. This is known as the readers-writers problem. In short, we can have multiple readers reading at the same time, but only one thread may write at any given time. Fortunately, GCD gives us great tools for exactly this scenario. We have four APIs to work with:

    • dispatch_sync

    • dispatch_async

    • dispatch_barrier_sync

    • dispatch_barrier_async

Our ideal case: reads are synchronous and can happen concurrently, while writes can be asynchronous but must be the only access to the object while they happen. GCD's barrier set of APIs does something special: it waits until the queue is completely empty before executing its block. By using a barrier API for our dictionary writes, access will be limited so that no write ever happens at the same time as any other access, whether read or write.

class IdentityMap<T: Identifiable> {
    var dictionary = Dictionary<String, T>()
    let accessQueue = dispatch_queue_create("com.khanlou.isolation.queue", DISPATCH_QUEUE_CONCURRENT)

    func object(withID id: String) -> T? {
        var result: T? = nil
        dispatch_sync(accessQueue, {
            result = dictionary[id] as T?
        })
        return result
    }

    func addObject(object: T) {
        dispatch_barrier_async(accessQueue, {
            dictionary[object.id] = object
        })
    }
}

dispatch_sync adds our block to the isolation queue and waits for it to be executed before returning. This way, we get the result of our read synchronously. (If we didn't make it synchronous, our getter would need a completion block.) Because accessQueue is concurrent, these synchronous reads can happen simultaneously. dispatch_barrier_async adds our block to the isolation queue. The async part means it will return before actually executing the block (which performs the write). This is good for performance, but has the drawback that a "read" performed immediately after a "write" may return stale data from before the change. The barrier part of dispatch_barrier_async means it will wait until every block currently running in the queue is finished before executing. Other blocks will queue up behind it and execute once the barrier dispatch is done.
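The same isolation-queue pattern in the modern spelling; to keep the sketch self-contained I use a plain String-to-Int store instead of the generic IdentityMap (the queue label is the article's, the rest is illustrative):

```swift
import Dispatch

final class IsolatedStore {
    private var dictionary = [String: Int]()
    private let accessQueue = DispatchQueue(
        label: "com.khanlou.isolation.queue", attributes: .concurrent)

    func value(forID id: String) -> Int? {
        // Synchronous read; concurrent reads may overlap safely.
        return accessQueue.sync { dictionary[id] }
    }

    func setValue(_ value: Int, forID id: String) {
        // Barrier write: waits for in-flight blocks, excludes all others.
        accessQueue.async(flags: .barrier) {
            self.dictionary[id] = value
        }
    }
}

let store = IsolatedStore()
store.setValue(42, forID: "a")
// The read is queued behind the barrier write, so it sees the new value.
let stored = store.value(forID: "a")
```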

Summary

Grand Central Dispatch is a framework with a lot of low-level primitives. Using them, these are some of the higher-level behaviors I've been able to build. If there are other high-level uses of GCD that you use that I haven't listed here, I'd love to hear about them and add them to the list.
