A summary of GCD usage in iOS development

Source: Internet
Author: User
Tags: gcd

GCD is one of the low-level multithreading mechanisms of iOS. This article summarizes GCD's common APIs and concepts, in the hope that it helps you learn.

The concept of GCD queues

In multithreaded development with GCD, the programmer simply defines the tasks to perform and appends them to a dispatch queue.

Dispatch queues come in two kinds: serial queues (serial dispatch queues) and concurrent queues (concurrent dispatch queues).

In GCD, a task is a block. For example, the code that appends a task to a queue is:

    dispatch_async(queue, block);

When multiple tasks are added to a serial queue, they are executed in order, with only one task being processed at a time.

When the queue is concurrent, each task is started immediately, regardless of whether the previous task has finished; that is, multiple tasks can execute concurrently.

However, the number of tasks that actually run in parallel is decided by the XNU kernel and is not under our control. For example, submitting 10 tasks at once does not open 10 threads; threads are reused and managed by the system based on task execution.

Getting a queue

The system provides two kinds of queues: the main dispatch queue and the global dispatch queues.

The former executes its tasks on the main thread's run loop; it is therefore a serial queue, and it is the one we use to update the UI.

The latter is a global concurrent queue, available at four priority levels: high, default, low, and background.

They are obtained in the following ways:

    dispatch_queue_t mainQueue = dispatch_get_main_queue();
    dispatch_queue_t globalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

Performing asynchronous tasks

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_async(queue, ^{
        // ...
    });

This fragment executes the block directly on a background thread. With GCD, a task begins executing as soon as it is submitted; unlike an operation queue, it does not need to be started manually. The trade-off is reduced controllability.

Make a task execute only once

    + (instancetype)sharedInstance {
        static dispatch_once_t onceToken;
        static id sharedInstance = nil;
        dispatch_once(&onceToken, ^{
            sharedInstance = [[self alloc] init];
        });
        return sharedInstance;
    }

This one-shot, thread-safe pattern commonly appears in singleton constructors.

Task Group

Sometimes we want several tasks to execute at the same time (on multiple threads), and then, once they have all finished, perform some further task.

For that we can create a group, which collects multiple tasks together. The following code runs a follow-up task after all tasks in the group have completed:

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();

    dispatch_group_async(group, queue, ^{ NSLog(@"1"); });
    dispatch_group_async(group, queue, ^{ NSLog(@"2"); });
    dispatch_group_async(group, queue, ^{ NSLog(@"3"); });
    dispatch_group_async(group, queue, ^{ NSLog(@"4"); });
    dispatch_group_async(group, queue, ^{ NSLog(@"5"); });
    dispatch_group_notify(group, dispatch_get_main_queue(), ^{ NSLog(@"done"); });

Delaying the execution of a task

    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(10 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
        // ...
    });

This code appends the task to the main queue's run loop after 10 seconds. Note that dispatch_after controls when the task is enqueued, not when it starts executing, so the actual delay can be slightly longer.

dispatch_async and dispatch_sync

We have already seen dispatch_async used to perform an asynchronous task; here is another snippet:

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    dispatch_async(queue, ^{
        NSLog(@"1");
    });

    NSLog(@"2");

This code first obtains the global queue; the task passed to dispatch_async is handed off to another thread for execution. "Async" here means that the calling thread continues immediately after submitting the block, without blocking — it runs asynchronously. The output is therefore either "1 2" or "2 1", because we cannot control how the two threads are scheduled.

Similarly, there is a synchronous counterpart, dispatch_sync:

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    dispatch_sync(queue, ^{
        NSLog(@"1");
    });

    NSLog(@"2");

With dispatch_sync, the main thread hands the task to a background thread and then waits for it to complete before continuing with its own work, so the output is always "1 2".

Note that the global queue was used here. What happens if we switch the queue passed to dispatch_sync to the main queue?

    dispatch_queue_t queue = dispatch_get_main_queue();
    dispatch_sync(queue, ^{
        NSLog(@"1");
    });

This code deadlocks, because:

1. Through dispatch_sync, the main thread appends the block to the main queue and then waits for the block to finish before moving on to its own next task.

2. The main queue is serial and first-in, first-out, so the block cannot run until the main thread finishes the work it is currently doing.

Each side waits on the other, which is a deadlock. Never use dispatch_sync to add a task to the serial queue you are currently running on.

Create a queue

The system-provided functions give us the main serial queue and the global concurrent queues, but you can also create serial and concurrent queues by hand:

    dispatch_queue_t mySerialDispatchQueue = dispatch_queue_create("com.steak.gcd", DISPATCH_QUEUE_SERIAL);
    dispatch_queue_t myConcurrentDispatchQueue = dispatch_queue_create("com.steak.gcd", DISPATCH_QUEUE_CONCURRENT);

Under MRC, manually created queues must be released by hand (under ARC, dispatch objects have been managed automatically since iOS 6):

    dispatch_release(myConcurrentDispatchQueue);

A manually created queue has the same priority as the default-priority global queue. If you need to change a queue's priority, use dispatch_set_target_queue:

    dispatch_queue_t myConcurrentDispatchQueue = dispatch_queue_create("com.steak.gcd", DISPATCH_QUEUE_CONCURRENT);
    dispatch_queue_t targetQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0);
    dispatch_set_target_queue(myConcurrentDispatchQueue, targetQueue);

This code lowers the queue's priority to the background level, equivalent to that of the background-priority global queue.

Serial queues, concurrent queues, and read/write safety

When multiple blocks are added to one serial queue, only one block executes at a time. But if you create n serial queues and add tasks to each of them, the system can start n threads and run those tasks concurrently.

Serial queues are the right tool when you need to resolve data or file contention. For example, multiple tasks accessing the same piece of data at the same time can conflict; if every such operation is added to one serial queue, only one operation runs at a time, so there is no conflict.

However, serial queues cost performance through context switching, so we would still prefer a concurrent queue where possible. Look at the following sample code:

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_async(queue, ^{
        // data read 1
    });
    dispatch_async(queue, ^{
        // data read 2
    });
    dispatch_async(queue, ^{
        // data write
    });
    dispatch_async(queue, ^{
        // data read 3
    });
    dispatch_async(queue, ^{
        // data read 4
    });

The execution order of these five operations is not what we want: the write should run only after reads 1 and 2 have finished, and reads 3 and 4 should run only after the write has completed.

To achieve this, we can use another GCD API, dispatch_barrier_async:

    dispatch_barrier_async(queue, ^{
        // data write
    });

A block submitted this way waits for all previously enqueued blocks to finish, executes alone, and only then are subsequent blocks allowed to run, which makes the write safe under concurrency. Note that barriers only take effect on a concurrent queue you create yourself with dispatch_queue_create; on a global queue, dispatch_barrier_async behaves like an ordinary dispatch_async.

Concurrent operations with no data contention can simply use a concurrent queue directly.
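Putting the pieces together, here is a minimal sketch of the reader/writer pattern (the queue label and the shared variable are illustrative, not from the original): reads go through dispatch_async, the write through dispatch_barrier_async, all on a privately created concurrent queue.

```objectivec
// Barriers only work on a queue created with DISPATCH_QUEUE_CONCURRENT,
// not on a global queue.
dispatch_queue_t queue = dispatch_queue_create("com.example.rw", DISPATCH_QUEUE_CONCURRENT);
__block NSInteger value = 0;

dispatch_async(queue, ^{ NSLog(@"read 1: %ld", (long)value); });
dispatch_async(queue, ^{ NSLog(@"read 2: %ld", (long)value); });

// The barrier waits for reads 1 and 2 to finish, runs alone,
// and only then lets reads 3 and 4 start.
dispatch_barrier_async(queue, ^{ value = 42; });

dispatch_async(queue, ^{ NSLog(@"read 3: %ld", (long)value); });
dispatch_async(queue, ^{ NSLog(@"read 4: %ld", (long)value); });
```

Because of the barrier's ordering guarantee, reads 3 and 4 always observe the written value, while reads 1 and 2 always observe the old one.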

Join behavior

GCD implements join behavior for multiple operations with dispatch_group_wait:

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();

    dispatch_group_async(group, queue, ^{
        [NSThread sleepForTimeInterval:0.5];
        NSLog(@"1");
    });
    dispatch_group_async(group, queue, ^{
        [NSThread sleepForTimeInterval:1.5];
        NSLog(@"2");
    });
    dispatch_group_async(group, queue, ^{
        [NSThread sleepForTimeInterval:2.5];
        NSLog(@"3");
    });

    NSLog(@"aaaaa");

    dispatch_time_t time = dispatch_time(DISPATCH_TIME_NOW, 2ull * NSEC_PER_SEC);
    if (dispatch_group_wait(group, time) == 0) {
        NSLog(@"all tasks finished");
    } else {
        NSLog(@"tasks not finished");
    }
    NSLog(@"bbbbb");

Here, three asynchronous tasks are placed in a group, and a 2-second timeout is created with dispatch_time. When the program runs, "aaaaa" is printed immediately by the main thread. When execution reaches dispatch_group_wait, the main thread blocks; during the 2-second wait, the background tasks print "1" and "2". When the timeout elapses, the tasks in the group have not all completed, so dispatch_group_wait returns non-zero and "bbbbb" is printed.

If the timeout is set longer (say 5 seconds), the third task finishes at 2.5 seconds and dispatch_group_wait returns at that moment, so "bbbbb" is printed immediately: once every task in the group has completed, the main thread is no longer blocked.

If you want to wait indefinitely, pass DISPATCH_TIME_FOREVER as the timeout.

Parallel loops

Similar to PLINQ in C#, Objective-C can also run loops in parallel, using GCD's dispatch_apply function:

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_apply(20, queue, ^(size_t i) {
        NSLog(@"%zu", i);
    });

This code runs the loop body 20 times in parallel, with i ranging from 0 to 19. If the body indexes into an array, this gives a parallel loop over the array. Internally, dispatch_apply is a synchronous operation like dispatch_sync, so the current thread is blocked until all iterations have finished.
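As a sketch of the array case just mentioned (the array contents are made up for illustration), each element can be processed by indexing with the loop counter:

```objectivec
NSArray *items = @[@"a", @"b", @"c", @"d"];
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

// dispatch_apply blocks until every iteration has run; iterations may
// run concurrently, so the block must not mutate shared state.
dispatch_apply(items.count, queue, ^(size_t i) {
    NSLog(@"%zu: %@", i, items[i]);
});
```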

Pause and resume

Use dispatch_suspend(queue) to pause the execution of tasks in a queue, and dispatch_resume(queue) to resume a suspended queue. Suspension only affects blocks that have not yet started; a block that is already running is not interrupted.
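A minimal sketch of suspend and resume (the queue label is illustrative): a block submitted to a suspended queue does not start until the queue is resumed.

```objectivec
dispatch_queue_t queue = dispatch_queue_create("com.example.suspendable", DISPATCH_QUEUE_SERIAL);

dispatch_suspend(queue);            // queued blocks will not start from here on
dispatch_async(queue, ^{
    NSLog(@"runs only after resume");
});

// ... later, allow the queued work to proceed
dispatch_resume(queue);
```

One caveat: under ARC, a queue must not be released while still suspended; balance every dispatch_suspend with a dispatch_resume before the last reference goes away.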


