iOS Multithreaded Programming: A Comprehensive, Systematic Understanding of GCD

Source: Internet
Author: User
Tags: gcd

Over the past couple of days I have been reading "Objective-C Advanced Programming: Multithreading and Memory Management", written by Japanese authors. The book gives an in-depth treatment of ARC, blocks, and GCD, and it is very good. Here I summarize what I have learned about GCD. For ARC and blocks, see the separate ARC and block notes.

Many blogs on the Internet have explained GCD, but most describe only the global queue, the main queue, creating queues, and so on in isolation, without a systematic overview. Below we study GCD systematically. This article is divided into the following points; the first few are easy to understand, and the last one may be a bit harder.

What GCD is, and why iOS uses multithreading

Creating dispatch queues: serial and concurrent

The five queues the system provides by default

Other GCD interfaces

The implementation of GCD, and dispatch sources

Let's start with the first point

1. What is GCD

GCD is a technology for executing tasks asynchronously. The thread-management code that would normally be written in the application is implemented at the system level. Developers only need to define the tasks they want to perform and append them to an appropriate dispatch queue; GCD generates the necessary threads and schedules the tasks. Because thread management is handled at the system level, it can be managed uniformly and tasks are executed more efficiently than with threads managed by hand. (Paraphrased from Apple's documentation.)

Before GCD appeared there were performSelector and NSThread. performSelector is simpler than NSThread, and GCD is simpler still; you can see what the code does at a glance.

The book defines a thread as a single, non-forking path of CPU instructions executed by one CPU.


Multithreading means that one program contains several such non-forking execution paths.


Multithreading, however, is a technology that easily leads to problems: data races, deadlocks, excessive memory use from too many threads, and so on. Even though these problems are easy to run into, you should still use multithreading, because it keeps your application responsive.

When an iOS app launches, the first thread to execute is the main thread, which draws the UI and handles touch events. If long-running processing is done on the main thread, it blocks the main thread's other work and the UI freezes.
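A common way to keep the main thread free (a minimal sketch, not from the book; doLongRunningWork and updateUIWithData: are hypothetical methods) is to run the heavy work on a global queue and hop back to the main queue only for the UI update:

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Heavy processing runs on a background worker thread
        NSData *data = [self doLongRunningWork];      // hypothetical long-running task
        dispatch_async(dispatch_get_main_queue(), ^{
            // UI work must happen on the main thread
            [self updateUIWithData:data];             // hypothetical UI update
        });
    });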


2. Creating dispatch queues

In general you do not need to create dispatch queues by hand, because the system already prepares two kinds of queue for us (see the next section).

First, what is a dispatch queue? It is a queue of work waiting to be executed. There are two types: the serial dispatch queue and the concurrent dispatch queue. Both are easy to understand: in a serial queue one task finishes before the next one starts, while a concurrent queue executes tasks concurrently.


Take a look at the following code:

    // Serial queue: pass NULL (or DISPATCH_QUEUE_SERIAL) as the attribute
    // dispatch_queue_t gcd = dispatch_queue_create("this is a serial queue", NULL);
    // Concurrent queue:
    dispatch_queue_t gcd = dispatch_queue_create("this is a concurrent queue", DISPATCH_QUEUE_CONCURRENT);
    dispatch_async(gcd, ^{ NSLog(@"B0"); });
    dispatch_async(gcd, ^{ NSLog(@"B1"); });
    dispatch_async(gcd, ^{ NSLog(@"B2"); });
    dispatch_async(gcd, ^{ NSLog(@"B3"); });
    dispatch_async(gcd, ^{ NSLog(@"B4"); });
    dispatch_async(gcd, ^{ NSLog(@"B5"); });
    dispatch_async(gcd, ^{ NSLog(@"B6"); });
    dispatch_async(gcd, ^{ NSLog(@"B7"); });
    dispatch_async(gcd, ^{ NSLog(@"B8"); });
    dispatch_async(gcd, ^{ NSLog(@"B9"); });
    dispatch_async(gcd, ^{ NSLog(@"B10"); });
    dispatch_release(gcd);

The output differs depending on which queue is used. With the serial queue the output is always in order; with the concurrent queue it is different every run. Here is the log of one run:

B1
B0
B4
B3
B2
B5
B6
B7
B8
B9

The concurrent dispatch queue can execute tasks concurrently because it uses multiple threads; for the output above, GCD distributed the blocks across several worker threads.
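To see for yourself which threads a concurrent queue uses, here is a minimal sketch (the queue label is illustrative) that logs the current thread from each block:

    dispatch_queue_t queue = dispatch_queue_create("thread demo", DISPATCH_QUEUE_CONCURRENT);
    for (int i = 0; i < 10; i++) {
        dispatch_async(queue, ^{
            // Different blocks may land on different worker threads
            NSLog(@"task %d on thread %@", i, [NSThread currentThread]);
        });
    }
    dispatch_release(queue);   // only needed when ARC does not manage dispatch objects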


The code above used the dispatch_queue_create function. Here is its prototype:

dispatch_queue_t dispatch_queue_create(const char *label, dispatch_queue_attr_t attr);
This is a plain C function. If the second parameter is NULL, the queue is serial; if it is DISPATCH_QUEUE_CONCURRENT, the queue is concurrent. A serial queue is typically used when multiple threads updating the same resource would cause data races; a concurrent queue is used when you want to execute work in parallel and such contention is not an issue.
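For example, here is a minimal sketch (the queue label and array are illustrative) of using a serial queue to avoid a data race on a shared mutable array; because the queue is serial, the appended updates never run at the same time.

    dispatch_queue_t serialQueue = dispatch_queue_create("com.example.serial", NULL);
    NSMutableArray *sharedArray = [NSMutableArray array];
    for (int i = 0; i < 100; i++) {
        dispatch_async(serialQueue, ^{
            // All mutations are funneled through one serial queue, so they never overlap
            [sharedArray addObject:@(i)];
        });
    }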

Note: a dispatch queue created this way must be released by the programmer, because in the book's environment ARC does not manage dispatch objects. You can call dispatch_release() right after appending blocks, because each appended block retains the queue; once the last block finishes, the queue is released automatically. (On iOS 6 and later, ARC manages dispatch objects and dispatch_release is no longer used.)

3. The five queues the system provides by default

In fact, the system already creates several queues for us: the main dispatch queue and the global dispatch queues. The main dispatch queue is a serial queue whose blocks run on the main thread; the global dispatch queues are concurrent queues that come in four priorities: high, default, low, and background.


Here is the code that gets the main queue and the global concurrent queues:

    // Get the main queue
    dispatch_queue_t mainQ = dispatch_get_main_queue();
    // Get the high-, default-, low-, and background-priority global concurrent queues
    dispatch_queue_t globalH = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
    dispatch_queue_t globalD = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_queue_t globalL = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);
    dispatch_queue_t globalB = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0);

4. Other GCD interfaces. Some of them are used often, others less so.

dispatch_set_target_queue -- changes the priority of a queue created with dispatch_queue_create (by assigning it a target queue)

dispatch_after -- appends a block to a queue after a specified delay

dispatch_group -- runs a completion block once every task appended to the group (typically on a concurrent queue) has finished

dispatch_barrier_async -- a barrier ("fence"). It splits the tasks in a concurrent queue into two parts: everything submitted before the barrier finishes first, then the barrier block runs by itself, and only then do the tasks submitted after it start.

dispatch_sync -- appends a block synchronously and waits until it has finished executing

dispatch_apply -- appends the specified block to the specified dispatch queue the specified number of times and waits for all iterations to finish

dispatch_suspend/dispatch_resume -- suspends and resumes the specified dispatch queue

dispatch_semaphore -- a semaphore, as the name suggests; it allows finer-grained exclusive control than dispatch_barrier_async

dispatch_once -- executes a block exactly once; typically used for singletons (see the sketch after this list)

Dispatch I/O -- if you want to speed up reading large files, try dispatch I/O
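For dispatch_once above, the typical singleton idiom looks like this (a minimal sketch; MyManager is a hypothetical class):

    + (instancetype)sharedInstance {
        static MyManager *sharedInstance = nil;   // MyManager is a hypothetical class name
        static dispatch_once_t onceToken;
        dispatch_once(&onceToken, ^{
            // This block runs exactly once, even if sharedInstance is called from many threads
            sharedInstance = [[MyManager alloc] init];
        });
        return sharedInstance;
    }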

Refer to the following code for specific use.

- (void)testGCD {
    [self testDispatch_target];
    [self testDispatch_after];
    [self testDispatch_group];
    [self testDispatch_barrier];
    [self testDispatch_sync];   // Comment this line out when running, or it will deadlock
    [self testDispatch_apply];
    [self testDispatch_once];
}

/* dispatch_set_target_queue can change a queue's priority */
- (void)testDispatch_target {
    dispatch_queue_t serial = dispatch_queue_create("xxxx", NULL);
    dispatch_queue_t queueG = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_set_target_queue(serial, queueG);
}

/* dispatch_after: append to a queue after a delay */
- (void)testDispatch_after {
    dispatch_time_t time = dispatch_time(DISPATCH_TIME_NOW, 3 * NSEC_PER_SEC);
    dispatch_after(time, dispatch_get_main_queue(), ^{
        NSLog(@"added to queue after 3 seconds");
    });
}

/* dispatch_barrier_async: the barrier (fence) function */
- (void)testDispatch_barrier {
    // dispatch_queue_t gcd = dispatch_queue_create("this is a serial queue", NULL);
    dispatch_queue_t gcd = dispatch_queue_create("this is a concurrent queue", DISPATCH_QUEUE_CONCURRENT);
    dispatch_async(gcd, ^{ NSLog(@"B0"); });
    dispatch_async(gcd, ^{ NSLog(@"B1"); });
    dispatch_async(gcd, ^{ NSLog(@"B2"); });
    dispatch_async(gcd, ^{ NSLog(@"B3"); });
    dispatch_async(gcd, ^{ NSLog(@"B4"); });
    dispatch_barrier_async(gcd, ^{ NSLog(@"barrier"); });
    dispatch_async(gcd, ^{ NSLog(@"B5"); });
    dispatch_async(gcd, ^{ NSLog(@"B6"); });
    dispatch_async(gcd, ^{ NSLog(@"B7"); });
    dispatch_async(gcd, ^{ NSLog(@"B8"); });
    dispatch_async(gcd, ^{ NSLog(@"B9"); });
    dispatch_async(gcd, ^{ NSLog(@"B10"); });
    dispatch_release(gcd);
}

/* dispatch_sync: three cases */
- (void)testDispatch_sync {
    // 1. Synchronous wait
    dispatch_queue_t queueG = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_sync(queueG, ^{ NSLog(@"dispatch_sync synchronous wait"); });

    // 2. Deadlock: dispatch_sync to the main queue while running on the main thread
    dispatch_queue_t mainQ = dispatch_get_main_queue();
    dispatch_sync(mainQ, ^{ NSLog(@"dispatch_sync written this way is a deadlock"); });

    // 3. Likewise a deadlock
    dispatch_sync(mainQ, ^{
        dispatch_sync(mainQ, ^{ NSLog(@"dispatch_sync nested this way is also a deadlock"); });
    });
}

/* Demo of dispatch group */
- (void)testDispatch_group {
    dispatch_queue_t mainQ = dispatch_get_main_queue();
    dispatch_queue_t queueG = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();
    dispatch_group_async(group, queueG, ^{ NSLog(@"dispatch group blk1"); });
    dispatch_group_async(group, queueG, ^{ NSLog(@"dispatch group blk2"); });
    dispatch_group_notify(group, mainQ, ^{ NSLog(@"dispatch group"); });
    dispatch_release(group);
}

/* dispatch_apply: appends the specified block to the specified queue the specified
   number of times and waits for all iterations to finish */
- (void)testDispatch_apply {
    dispatch_queue_t queueG = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_apply(10, queueG, ^(size_t i) {   // 10 iterations (count chosen for illustration)
        NSLog(@"%zu", i);
    });
    NSLog(@"done");

    // The classic use is iterating over an array
    NSArray *array = [NSArray arrayWithObjects:@1, @2, @3, nil];
    dispatch_apply([array count], queueG, ^(size_t i) {
        NSLog(@"%ld", (long)[array[i] integerValue]);
    });
}

/* dispatch_once: execute only once */
- (void)testDispatch_once {
    static dispatch_once_t p;
    dispatch_once(&p, ^{
        NSLog(@"testDispatch_once");
    });
}

dispatch_suspend/dispatch_resume, dispatch I/O, and dispatch_semaphore are not used very often, so they are not explained in much detail here.
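Still, as a small supplement, here is a minimal sketch of dispatch_semaphore (the queue and array names are illustrative): a semaphore created with a count of 1 acts as a lock around a shared array.

    dispatch_semaphore_t semaphore = dispatch_semaphore_create(1);
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    NSMutableArray *array = [NSMutableArray array];
    for (int i = 0; i < 100; i++) {
        dispatch_async(queue, ^{
            // Wait until the semaphore count is at least 1, then decrement it and continue
            dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
            [array addObject:@(i)];
            // Increment the count again so the next waiting block can proceed
            dispatch_semaphore_signal(semaphore);
        });
    }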

5. Implementation of GCD and dispatch source

The book's account of how GCD is implemented is fairly general and vague. Here is a brief introduction. The implementation of GCD relies on the following pieces:

A FIFO queue, implemented at the C level, that manages the appended blocks

Lightweight semaphores for exclusive control, implemented with atomic operations

Containers, implemented in C, for managing threads

Of course, beyond the pieces above, GCD also needs kernel-level support. The system-level software components involved are: libdispatch, which implements the dispatch queue; libc (pthreads), which implements pthread_workqueue; and the XNU kernel, which implements the workqueue.

All of the GCD APIs that programmers use are C functions in the libdispatch library. A dispatch queue is a FIFO queue, implemented with structs and linked lists, that manages the blocks appended to it.

A block is not appended to the FIFO queue directly. It is first wrapped in a dispatch continuation, a struct of type dispatch_continuation_t, and that continuation is appended to the queue. The dispatch continuation records information about the block, such as the group it belongs to, acting like an execution context.

The book goes on to describe the global dispatch queues, libc's pthread_workqueue, and the XNU kernel's workqueue; its point is that they are invoked in sequence: libdispatch calls into pthread_workqueue, which in turn uses the kernel workqueue, whose threads ultimately execute the blocks.

Let's talk about dispatch source.

Besides the dispatch queue, GCD also provides the less conspicuous dispatch source. It is a wrapper around kqueue, a facility of BSD systems. kqueue is a mechanism that lets application programmers handle the various events that occur in the XNU kernel. Its CPU load is small and it tries not to waste resources; kqueue is considered the ideal way for an application to process events from the XNU kernel.

A dispatch source can be cancelled, whereas a block appended to a dispatch queue cannot be.
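As an illustration (a minimal sketch, not from the book), here is a repeating timer dispatch source; unlike a block sitting in a dispatch queue, the source can later be cancelled:

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);
    // Fire once per second, allowing up to one second of leeway
    dispatch_source_set_timer(timer, dispatch_time(DISPATCH_TIME_NOW, 0),
                              1ull * NSEC_PER_SEC, 1ull * NSEC_PER_SEC);
    dispatch_source_set_event_handler(timer, ^{
        NSLog(@"timer fired");
    });
    dispatch_source_set_cancel_handler(timer, ^{
        NSLog(@"timer cancelled");
    });
    dispatch_resume(timer);            // sources are created suspended; resume to start them
    // ... later:
    // dispatch_source_cancel(timer);  // this is what a plain queued block cannot do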
