Exploring multi-threaded programming in iOS development: Grand Central Dispatch in detail



I have previously published a blog about multithreaded development in iOS that introduced NSThread, operation queues, and GCD. Today we give a comprehensive summary of how GCD is used; its history and benefits are not covered here. This post summarizes GCD through a series of examples. GCD is very important in iOS development and has many application scenarios: it is mostly used to handle time-consuming tasks, and when using it we also need to pay attention to thread safety and deadlocks.

This blog gives a comprehensive summary of GCD in iOS. The simulator screenshot below shows what we will introduce today, all about GCD: each button in the view controller uses a different GCD technique to execute different content. This post explains each technique in detail, and for ease of understanding also provides schematic diagrams created specifically for these examples, which you will not find elsewhere.

  

Each of the buttons above corresponds to a batch of code, which we will now introduce in detail. Through these introductions you should gain a fuller and more comprehensive understanding of GCD. It is recommended that you follow the introductions below and implement the code yourself; this works very well. All the code in this blog will be shared on GitHub, and the address will be given at the end of this post. This blog can in fact serve as a GCD reference manual for you: although it does not cover everything in GCD, it covers the parts that are used most often. Without further ado, let's get to today's topic and go through the buttons one by one.

 

1. Encapsulation of common GCD methods

To make the examples easier to implement, we first encapsulate and extract some common GCD methods. This part prepares the helpers that are shared by the specific examples below; we will introduce each extracted function in turn. Before encapsulating the methods, note how tasks work in GCD: our tasks are placed in blocks in a queue, and each block then performs its task on whatever thread the queue dispatches it to.

As shown in the figure, three blocks are stored in the queue below, each corresponding to a task; these tasks are executed on the appropriate threads according to the queue's characteristics. Queues can be divided into parallel queues (Concurrent Queue) and serial queues (Serial Queue), and tasks can be submitted to a queue synchronously or asynchronously; a detailed analysis and introduction follows later. This is the basic model of GCD queues that we need to understand.

  

 

1. Get the current thread, and sleep the current thread

First we encapsulate a method for getting the current thread, because we often want to check which thread our tasks are executing on. Here we use NSThread's currentThread() method to obtain the current thread. The getCurrentThread() method below is the helper we extract for this; its content is simple, so I will not dwell on it.

  

The code snippet above is the method for getting the current thread. Next we implement a method that sleeps the current thread; in these examples, sleeping the current thread for a while is often used to simulate time-consuming operations. The currentThreadSleep() function in the snippet below is the sleep helper we extracted. It takes an NSTimeInterval parameter, the time to sleep; NSTimeInterval is actually a type alias for Double, so we pass a Double sleep time when calling currentThreadSleep(). You can also call sleep() to sleep the current thread, but note that sleep() takes a UInt32 integer. Below is the function that sleeps the current thread.
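Since the blog's Swift snippets are not reproduced here, a minimal conceptual sketch in Python follows; the names `get_current_thread` and `current_thread_sleep` are hypothetical stand-ins for the blog's helpers, using the standard `threading` and `time` modules:

```python
import threading
import time

def get_current_thread():
    """Return the thread executing the caller (cf. NSThread.currentThread())."""
    return threading.current_thread()

def current_thread_sleep(interval):
    """Sleep the calling thread for `interval` seconds (a float, like NSTimeInterval/Double)."""
    time.sleep(interval)

if __name__ == "__main__":
    print("running on:", get_current_thread().name)
    start = time.monotonic()
    current_thread_sleep(0.2)   # simulate a time-consuming task
    print("slept for about %.1f seconds" % (time.monotonic() - start))
```

As in the blog, the sleep helper takes a floating-point interval rather than an integer number of seconds.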

  

 

2. Obtain the main queue and the global queue

The getMainQueue() function encapsulated below obtains the main queue, because after processing time-consuming work (such as network requests) on other threads, the UI must be updated on the main queue. iOS has the concept of a RunLoop: touch events and screen refreshes are all driven by it. Because this blog's topic is GCD, I will not say much about RunLoop here; if you are not familiar with it, you can loosely think of it as a loop that runs about once every 1/60 of a second. Of course the real RunLoop is much more complicated than a plain loop, and I may write a dedicated post about it in the future. In short, the function below obtains the main queue, which we need because we want to update the UI there.

  

Next we encapsulate a function for obtaining a global queue. Before doing so, let's clarify what a global queue is: it is a queue provided by the system and shared across the app, and in terms of how it executes, a global queue is a concurrent (parallel) queue. The next section describes the serial/parallel distinction in detail. When obtaining a global queue we specify its priority; queues with higher priority tend to execute first. Of course, the priority is not absolute: the actual execution order also depends on the current state of the CPU. Most of the time execution follows the priority you specify, but there are exceptions, which the examples below will illustrate. Below is the function for getting a global queue; we pass in a priority, and the default is DISPATCH_QUEUE_PRIORITY_DEFAULT.

3. Create a serial queue and a parallel queue

Because the examples create a number of parallel queues and serial queues, we extract the creation of both kinds of queue into helpers. GCD uses the dispatch_queue_create() function to create the queue we want. It takes two parameters. The first is the queue identifier, a label for the queue object you create, conventionally written in reverse-domain form such as "cn.zeluli". The second is the type of queue to create: DISPATCH_QUEUE_CONCURRENT creates a parallel queue, while DISPATCH_QUEUE_SERIAL creates a serial queue. The difference between the two is covered in detail in the examples that follow.
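GCD itself is Apple-only, but the two queue types can be modeled portably. In the hedged Python sketch below (the helper names and the "cn.zeluli" labels follow the blog's convention; they are not a real GCD API), a serial queue is a pool with one worker thread and a parallel queue is a pool with several:

```python
from concurrent.futures import ThreadPoolExecutor

def make_serial_queue(label):
    # One worker thread: tasks run strictly one after another, FIFO.
    return ThreadPoolExecutor(max_workers=1, thread_name_prefix=label)

def make_concurrent_queue(label, width=4):
    # Several workers: the next task can start before the previous one finishes.
    return ThreadPoolExecutor(max_workers=width, thread_name_prefix=label)

serial = make_serial_queue("cn.zeluli.serial")
concurrent = make_concurrent_queue("cn.zeluli.concurrent")
print(serial.submit(lambda: "hello from the serial queue").result())
```

The `label` here plays the same role as dispatch_queue_create()'s first parameter: it only names the worker threads for debugging.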

  

 

 

2. Synchronous and asynchronous execution

Synchronous execution can be divided into synchronous execution on a serial queue and on a parallel queue, and likewise asynchronous execution divides into asynchronous execution on a serial queue and on a parallel queue. It may sound confusing, but the illustrations below will make the concepts clear. The previous part was preparation; this part is our real topic. In part 1 we implemented functions for getting the current thread, sleeping the current thread, obtaining the main queue and the global queue, and creating parallel and serial queues. In this section we use those functions to explore synchronous execution of serial and parallel queues, asynchronous execution of serial and parallel queues, and the differences between synchronous and asynchronous execution.

Before talking about synchronous and asynchronous execution, let's look at the difference between a Serial Queue and a Concurrent Queue. Both are queues, and every queue follows the First In, First Out (FIFO) rule: whoever enters first leaves first. However, in a serial queue the next task can start only after the previous task has left the queue and finished executing. A concurrent queue is different: as long as the task at the front has left the queue and a spare thread is available, the next task can leave the queue regardless of whether the previous task has finished.

We can compare serial and parallel queues to queuing for service at a bank. In a serial queue you are assigned to, say, window 1 only: you must wait for the person ahead of you to finish at window 1 before you can go up, and even if other windows are free you cannot use them, because you chose the serial queue. In a parallel queue, once the person ahead of you has stepped up to a window, you need not care whether their business is done; if another window is free, you can go and handle yours. To sum up: a serial queue sticks to one thread from start to finish, so it is more focused, while a parallel queue can use whatever threads are available, so it is more flexible. Next, let's look at the different ways these two kinds of queues can be executed.
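The bank-window analogy can be demonstrated with a small, hedged Python model (the `run_and_record` helper is invented for this sketch): one worker thread stands in for a serial queue and several workers for a parallel queue, and we record the order in which tasks finish.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_and_record(queue, durations):
    """Submit one task per duration (FIFO order) and record the order they finish."""
    finished = []
    def task(i, d):
        time.sleep(d)           # each task "handles its business" for d seconds
        finished.append(i)
    futures = [queue.submit(task, i, d) for i, d in enumerate(durations)]
    for f in futures:
        f.result()
    return finished

serial = ThreadPoolExecutor(max_workers=1)       # one teller window
parallel = ThreadPoolExecutor(max_workers=3)     # several windows

print(run_and_record(serial, [0.3, 0.1, 0.1]))   # [0, 1, 2]: each waits its turn
print(run_and_record(parallel, [0.3, 0.1, 0.1])) # short tasks overtake the long one
```

On the serial queue the finish order always matches the FIFO submission order; on the parallel queue a long-running first task is overtaken by the shorter ones behind it.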

1. synchronous execution

First, synchronous execution. Its examples correspond to the buttons "synchronous execution serial queue" and "synchronous execution parallel queue". Both a serial queue and a parallel queue can be executed synchronously. Setting the queue aside for a moment, let's see how code is executed synchronously: the function below encapsulates synchronous task submission using dispatch_sync(). In it, three blocks are added to the queue synchronously through a for-in loop. The function's parameter is of the queue type (dispatch_queue_t), so you can pass in either a serial queue or a parallel queue.

  

 

That is, to execute a serial queue synchronously, pass a serial queue object into the function; to execute a parallel queue synchronously, pass in a parallel queue object. Now we use the queue-creation helpers encapsulated in part 1. The code below is what the "synchronous execution serial queue" and "synchronous execution parallel queue" buttons do: clicking the first creates a serial queue object and passes it to the synchronous-execution function above, and clicking the second creates a parallel queue object and passes it to the same function.

  

 

Below are the results of clicking the two buttons. The red box shows the result of synchronously executing the serial queue: the tasks run in FIFO order on the current thread (the main thread). The green box shows the result of synchronously executing the parallel queue: it is not hard to see that the results are the same as in the red box, again FIFO order on the current thread.

Because synchronous execution runs tasks on the current thread, the queue effectively has only one thread available, so the results for the serial queue and the parallel queue are the same: the next task can execute only after the previous one has left the queue and finished. If the current thread is the main thread, synchronous execution blocks it, and a blocked main thread makes the UI freeze. We can use this property of synchronous execution to apply a synchronization lock around certain blocks of code. The figure shows the code above and its execution result.
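The same conclusion can be shown in a hedged Python model. Here `dispatch_sync_model` and `perform_queues_use_synchronization` are invented names for this sketch; note one simplification: real dispatch_sync typically runs the block on the calling thread, whereas this model runs it on a worker and merely blocks the caller until it is done, which is enough to reproduce the ordering behavior.

```python
from concurrent.futures import ThreadPoolExecutor

def dispatch_sync_model(queue, block):
    """Submit `block` to the queue and wait: the caller is blocked until it finishes."""
    queue.submit(block).result()

def perform_queues_use_synchronization(queue):
    order = []
    for i in range(3):
        dispatch_sync_model(queue, lambda i=i: order.append(i))
    return order

print(perform_queues_use_synchronization(ThreadPoolExecutor(max_workers=1)))  # serial
print(perform_queues_use_synchronization(ThreadPoolExecutor(max_workers=4)))  # parallel
```

Both calls print `[0, 1, 2]`: because the caller waits for each block, the parallel queue's extra threads never get a chance to run two blocks at once.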

 

2. asynchronous execution

Next, asynchronous execution, which likewise divides into asynchronous execution of serial queues and of parallel queues. In GCD, asynchronous execution uses the dispatch_async() function, whose parameters are the same as those of dispatch_sync(). However, dispatch_async() does not execute the block on the current thread: it opens up a new thread, so asynchronous execution does not block the current thread. The code below is the encapsulated asynchronous-execution function, built around dispatch_async(). We put three output statements inside the block submitted to the queue, so that within each block those statements execute in sequence.

  

 

(1) asynchronous execution of serial queues

With the above function, we can pass in a serial queue object and observe the result of executing it asynchronously. This corresponds to the "asynchronous execution serial queue" button shown earlier; below is the method run when the button is clicked. In it we call the asynchronous-execution function above and pass in a serial queue — that is, asynchronous execution of a serial queue.

  

Click the button to run the method above; below is the console output for "asynchronous execution of serial queue". From the output we can easily see that asynchronous execution does not block the current thread: dispatch_async() opens a new thread (thread number = 3) to execute the contents of the block, while everything outside the block still executes on the previous thread (the main thread in this example). The result shows that the main thread's work is finished once the for loop ends, and the blocks are handed over to the newly opened thread 3 for execution.

  

 

Based on the output, we can draw the diagram below for asynchronous execution of a serial queue. If a serial queue is executed asynchronously from thread 1, a new thread 2 is opened to execute the queue's block tasks. In the new thread the order is still FIFO, and each task starts only after the previous one has finished. As shown below.
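A hedged Python model of the same behavior (the name `dispatch_async_model` is invented for this sketch): submitting to the one-worker "serial queue" returns immediately, the caller is never blocked, and the single worker thread still runs the blocks in FIFO order.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

def dispatch_async_model(queue, block):
    """Hand `block` to the queue and return immediately (the caller is not blocked)."""
    return queue.submit(block)

serial = ThreadPoolExecutor(max_workers=1)
order = []
futures = [dispatch_async_model(serial, lambda i=i: (time.sleep(0.05), order.append(i)))
           for i in range(3)]
print("submitting returned immediately on", threading.current_thread().name)
for f in futures:
    f.result()
print(order)   # one worker thread, strictly FIFO
```

The final list is always `[0, 1, 2]`: asynchronous submission changes who waits, not the serial queue's ordering guarantee.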

(2) asynchronous execution of parallel queues

Next we discuss asynchronous execution of parallel queues. Combining a parallel queue with asynchronous execution can greatly improve efficiency, because multiple threads are opened to execute the queue's tasks at the same time. For example, if 10 threads are opened, the queue dequeues its 10 tasks in FIFO order and they execute on different threads; when each task finishes depends on its complexity. The characteristic of a parallel queue executed asynchronously is that as long as a thread is available, the next task is dequeued and executed, regardless of whether the earlier tasks (blocks) in the queue have finished. The method below is called by clicking the "asynchronous execution parallel queue" button; it calls the asynchronous-execution function above and passes in a parallel queue object.

  

Click the button to run the method above and the parallel queue is executed asynchronously; the output is shown below. Let's analyze it. The first red box shows the order in which tasks leave the parallel queue — 0, 1, 2 — followed by the output produced as each task completes. From the result, the completion order is 2, 1, 0, and each task executes on a new thread. If you click the button again, the completion order might be 2, 0, 1 or something else: with asynchronous execution of a parallel queue, when each task finishes is determined mainly by the complexity of the task.

  

Based on these results, we drew the illustration below. When a parallel queue is executed asynchronously, multiple new threads are opened to execute its tasks. Tasks still leave the queue in FIFO order, but a task does not need to wait for the previous one to finish: as long as there is a spare thread, the next task is dequeued onto it, in FIFO order.

  

 

 

3. Delayed execution

In GCD we use the dispatch_after() function to delay the execution of a task in a queue. dispatch_after() executes asynchronously: it does not block the current thread. When the delay is reached, a new thread is opened and the task in the queue is executed. The following describes how to use dispatch_after().

The code below creates the delay time in two ways: one with dispatch_time() and the other with dispatch_walltime(). The former uses the current device's clock, while the latter uses the wall clock, i.e. absolute time: if the device sleeps, the former sleeps with it, while the latter keeps to clock time regardless of the device's state. When creating the dispatch_time_t object there is a parameter NSEC_PER_SEC; from the naming convention we can tell what it means, namely the number of nanoseconds per second. Printing it shows NSEC_PER_SEC = 1,000,000,000, that is, one second equals one billion nanoseconds. If the time below were not multiplied by NSEC_PER_SEC, it would represent nanoseconds rather than seconds, because time here is counted in nanoseconds. Below is the code for delayed execution; the output is simple, so I will not dwell on it. Note that the delayed task executes on a newly opened thread and does not block the current thread.
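A hedged Python model of delayed asynchronous execution (the name `dispatch_after_model` is invented; `threading.Timer` stands in for dispatch_after(), and the NSEC_PER_SEC constant is included only to illustrate the unit conversion):

```python
import threading
import time

NSEC_PER_SEC = 1_000_000_000   # nanoseconds per second

def dispatch_after_model(delay_seconds, block):
    """Run `block` on a new thread after `delay_seconds`; the caller is not blocked."""
    t = threading.Timer(delay_seconds, block)
    t.start()
    return t

fired = threading.Event()
start = time.monotonic()
dispatch_after_model(0.3, fired.set)      # schedule the block 0.3 s out
print("caller continues immediately")     # printed before the block fires
fired.wait()
print("block fired after ~%.1f s" % (time.monotonic() - start))
```

As with dispatch_after(), scheduling returns at once; only the block itself waits out the delay, on its own thread.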

  

 

4. queue priority

Queues also have priorities, but a priority is not absolute in most cases, because the XNU kernel that GCD runs on is not a real-time system; the priority generally just influences the order in which queues get to execute. There are four priorities: High > Default > Low > Background. When obtaining a global queue we can specify its priority, and with the dispatch_set_target_queue() function we can assign the priority of one queue to another. Below, we first specify priorities for global queues, and then assign a priority to a queue of our own.

1. Specify a priority for the global queue

This section corresponds to the "set priority of global queue" button. Clicking it obtains four global queues with different priorities, executes tasks on them asynchronously, and observes the result. Below is the function executed by the click: it first obtains the four global queues with different priorities, then executes them asynchronously and prints the results.

  

 

The result of running the code above is shown below. Although the higher-priority dispatches are placed last in the code, they print first: the print order is High -> Default -> Low -> Background, which is also the execution order. From it, it is not hard to see that the higher-priority queues execute first — though, of course, this is not absolute.
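Python threads expose no GCD-style QoS priorities, so here is only a hedged conceptual model of what a priority-aware dispatcher does: a single worker draining a `queue.PriorityQueue`, serving High before Default before Low before Background regardless of submission order. (This is an illustration of the ordering idea, not of how libdispatch schedules.)

```python
import queue
import threading

tasks = queue.PriorityQueue()
HIGH, DEFAULT, LOW, BACKGROUND = 0, 1, 2, 3    # smaller number = served first
ran = []

# Enqueue lowest-priority first, mirroring the blog's code layout.
for prio, name in [(BACKGROUND, "background"), (LOW, "low"),
                   (DEFAULT, "default"), (HIGH, "high")]:
    tasks.put((prio, name))

def worker():
    while not tasks.empty():
        _, name = tasks.get()
        ran.append(name)

t = threading.Thread(target=worker)
t.start()
t.join()
print(ran)   # ['high', 'default', 'low', 'background']
```

Even though "high" was enqueued last, it is dequeued first, matching the print order seen in the blog's screenshot.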

 

2. Specify a priority for the self-created queue

In GCD, you can use the dispatch_set_target_queue() function to give a queue you created yourself the priority of another queue; this again involves the global queues. In the code below, we first create a serial queue, and then use dispatch_set_target_queue() to assign it the priority of the high-priority global queue, as shown below.

  

 

5. Task Group dispatch_group

GCD task groups are often used during development: when you want to perform some operation after a group of tasks has finished, a task group is exactly the right tool. The role of dispatch_group is to do something once all the tasks in its associated queues have completed — that is, when every task in a queue executed under the group has finished, a notification is delivered telling us that the group's work is done. There are two ways to run a queue under a task group. One is to use dispatch_group_async() to associate the queue with the group and execute its tasks automatically. The other is to associate them manually and then execute the queue asynchronously, using dispatch_group_enter() and dispatch_group_leave(). Both are introduced in detail below.

1. the queue and group are automatically associated and executed.

First, the dispatch_group_async() function, which associates a queue with the corresponding task group and executes it automatically. After all tasks in the queue associated with the group have executed, dispatch_group_notify() delivers a notification that everything in the group is done; this notification approach does not block the current thread. If instead you use dispatch_group_wait(), the current thread is blocked until all tasks in the group have completed.

The function encapsulated below uses dispatch_group_async() to associate a queue with a task group and execute it. First we create a parallel queue (concurrentQueue) and a task group of type dispatch_group_t, then associate and execute the two with dispatch_group_async(). We use dispatch_group_notify() to listen for the completion of the group's tasks; once they finish, we process the result on the main thread. dispatch_group_notify() takes two parameters: the group that sends the notification, and the queue on which to process the returned result.
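A hedged Python model of the dispatch_group_async + dispatch_group_notify pattern (`group_async_with_notify` is an invented name): the tasks run on a pool, and a watcher thread — not the caller — waits for them all and then fires the completion callback.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor, wait

def group_async_with_notify(queue, blocks, on_done):
    """Run all blocks on `queue`; call `on_done` once every block has finished,
    without blocking the caller (cf. dispatch_group_notify)."""
    futures = [queue.submit(b) for b in blocks]
    def watcher():
        wait(futures)        # the watcher thread, not the caller, blocks here
        on_done()
    threading.Thread(target=watcher).start()
    return futures

done = threading.Event()
q = ThreadPoolExecutor(max_workers=3)
group_async_with_notify(q, [lambda: time.sleep(0.1)] * 3, done.set)
print("caller was not blocked")
done.wait()
print("all tasks in the group finished; notification fired")
```

Swapping `done.wait()` onto the calling thread directly would model dispatch_group_wait() instead, which does block the caller.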

  

 

The output of calling the function above is shown below. From it we can easily see that executing the tasks in the queue and handling the notification result are both asynchronous and do not block the current thread. After all tasks in the task group have been processed, the closure passed to dispatch_group_notify() is executed on the main thread.

  

 

2. Manually associate a queue with a task group

Next we manage the relationship between the task group and the queue manually, without dispatch_group_async(). We use dispatch_group_enter() and dispatch_group_leave() to add each task in the queue to the group: first call dispatch_group_enter() to enter the group, execute the task in the queue asynchronously, and then call dispatch_group_leave() to leave the group. The function below also uses dispatch_group_wait(), which blocks the current thread while waiting for the group's tasks to complete. Its first parameter is the group to wait for and its second is a timeout; here we pass DISPATCH_TIME_FOREVER, meaning the wait never times out and lasts until every task in the group has finished.
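Under the hood a dispatch group behaves like a counter: enter() increments it, leave() decrements it, and wait() blocks until it reaches zero. The hedged Python sketch below (class name `GroupModel` is invented) makes that mechanism explicit:

```python
import threading

class GroupModel:
    """dispatch_group as a counter: enter() +1, leave() -1, wait() until zero."""
    def __init__(self):
        self._count = 0
        self._cond = threading.Condition()
    def enter(self):
        with self._cond:
            self._count += 1
    def leave(self):
        with self._cond:
            self._count -= 1
            if self._count == 0:
                self._cond.notify_all()
    def wait(self):
        with self._cond:
            while self._count:
                self._cond.wait()

group = GroupModel()
results = []

def task(i):
    results.append(i)
    group.leave()            # leave when the asynchronous task finishes

for i in range(3):
    group.enter()            # enter before handing the task to a thread
    threading.Thread(target=task, args=(i,)).start()

group.wait()                 # blocks, like dispatch_group_wait(..., DISPATCH_TIME_FOREVER)
print("all", len(results), "tasks done")
```

Pairing every enter() with exactly one leave() is the crucial invariant — an unbalanced pair leaves wait() blocked forever, just as with the real API.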

  

Below is the output after executing the function above. The print() below dispatch_group_wait() is not called until all tasks have executed, because dispatch_group_wait() blocks the current thread. And although the queue was associated with the task group manually, dispatch_group_notify() still works as before. The running result is as follows.

  

 

6. semaphore synchronization lock

Sometimes several threads operate on the same piece of data, and for data consistency we want only one thread at a time to operate on it. To guarantee that only one thread modifies our resource at a time, we use a semaphore as a synchronization lock. Think of the resource as living in a room with a lock on the door: when a thread enters the room, the door is locked, and other threads that want the resource must wait; after the thread has modified the resource it opens the lock, and another thread can take its place. Note that a thread must release the lock when it is done: if it holds the resource and never releases it, the threads waiting on it can never proceed. So do not hold resources you are not using.

For this we can use GCD's semaphore mechanism. In GCD there is something called dispatch_semaphore_t: this is our semaphore, and we operate on its value. When the value is zero, the lock is held and other threads must wait for the resource; when it is nonzero, the lock is free and a thread can take it and access the resource. The code below shows the concrete use of a semaphore.

In the first red box below, a semaphore is created with dispatch_semaphore_create(), which takes one parameter, the initial semaphore value; we specify 1. The second red box is the locking step: we operate on the semaphore with dispatch_semaphore_wait(), whose first parameter is the semaphore and whose second is a wait time. dispatch_semaphore_wait() subtracts one from the semaphore; when the value reaches zero, the resource is locked for the current thread, and other threads wait for up to the given time — here DISPATCH_TIME_FOREVER, meaning they wait until the current thread has finished operating on the resource. When it has, it calls dispatch_semaphore_signal() to add one to the semaphore and release the lock, so that one of the waiting threads can access the resource.
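The same pattern in a hedged Python sketch: `threading.Semaphore(1)` plays the role of dispatch_semaphore_create(1), `acquire()` of dispatch_semaphore_wait(), and `release()` of dispatch_semaphore_signal(), protecting a shared counter updated from four threads.

```python
import threading

counter = 0
sem = threading.Semaphore(1)     # initial value 1, like dispatch_semaphore_create(1)

def increment():
    global counter
    for _ in range(10_000):
        sem.acquire()            # value -> 0: other threads must now wait (wait / -1)
        counter += 1             # the protected resource
        sem.release()            # value -> 1: wake a waiting thread (signal / +1)

threads = [threading.Thread(target=increment) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 40000 every time; without the semaphore, updates could be lost
```

With the semaphore the four threads' 40,000 increments are never interleaved mid-update, so the final count is exact.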

  

 

7. Queue looping, suspension, and resumption

In part 7 of this blog we cover executing a queue's tasks repeatedly, and suspending and resuming a queue. This part is relatively simple, but quite commonly used. We use dispatch_apply() to execute tasks in a queue repeatedly, and note that dispatch_apply() blocks the current thread. If you use it with a parallel queue, multiple threads are opened to run the iterations, but the current thread is still blocked; if you use it with a serial queue, no new threads are opened, the iterations run on the current thread, and the current thread is likewise blocked. As for suspension and resumption, dispatch_suspend() suspends a queue and dispatch_resume() resumes it. See the examples below.

1. dispatch_apply () function

The dispatch_apply() function executes a task in a queue a given number of times; its usage is dispatch_apply(iteration count, queue) { task to repeat }. When used with a parallel queue, new threads are opened, though some iterations may still run on the current thread; when used with a serial queue, everything runs on the current thread. Either way, dispatch_apply() blocks the current thread until all iterations have completed. The code snippet below is an example of dispatch_apply():
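A hedged Python model of dispatch_apply (the name `dispatch_apply_model` is invented): the iterations run on the pool's workers, but the call itself returns only once every iteration has finished, so the caller is blocked for the whole duration, just as the blog describes.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

def dispatch_apply_model(iterations, queue, block):
    """Run block(i) for i in 0..iterations-1 on `queue`, returning only after
    every iteration has finished -- the caller is blocked, like dispatch_apply()."""
    list(queue.map(block, range(iterations)))

seen = []
with ThreadPoolExecutor(max_workers=3) as q:
    start = time.monotonic()
    dispatch_apply_model(6, q, lambda i: (time.sleep(0.05), seen.append(i)))
    elapsed = time.monotonic() - start

print(len(seen), "iterations finished; caller was blocked for %.2f s" % elapsed)
```

With three workers the six 0.05-second iterations overlap, so the blocked time is well under the 0.3 seconds a serial run would take, yet the caller still waits for all of them.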

  

Below is the running result of the above function. In the result, the thread used for each task execution is printed.

  

 

2. Queue suspension and Wakeup

Suspending and waking a queue is relatively simple. To suspend the execution of tasks in a queue, use dispatch_suspend(); to wake a suspended queue, use dispatch_resume(). Each function takes the queue to suspend or wake as its parameter. Since the point is simple, I will not belabor it. Below, a parallel queue being executed asynchronously is suspended; after the current thread sleeps for 2 seconds, the suspended queue is woken. The code is as follows:
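Python's thread pools have no suspend/resume switch, so here is only a hedged conceptual model: a cleared `threading.Event` stands in for the suspended state, and setting it stands in for dispatch_resume() — pending tasks do not start until the gate opens.

```python
import threading
import time

gate = threading.Event()        # cleared = "suspended", set = "resumed"
results = []

def worker(i):
    gate.wait()                 # tasks cannot start while the queue is suspended
    results.append(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()                   # submitted while "suspended": nothing runs yet

time.sleep(0.2)                 # sleep the current thread for a while
print("before resume:", len(results), "tasks ran")   # 0

gate.set()                      # "dispatch_resume": let the pending tasks run
for t in threads:
    t.join()
print("after resume:", len(results), "tasks ran")    # 3
```

Note that, like dispatch_suspend(), this gates tasks that have not started yet; it does not interrupt a task already running.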

  

 

8. dispatch_barrier_async ()

As the name implies, a task barrier fences off the tasks in a queue, with the batches on either side still executing asynchronously. The figure below illustrates the barrier's role. Suppose a parallel queue holds four tasks — 1.1, 1.2, 2.1, and 2.2 — with a barrier between the first two and the last two. Without the barrier, all four tasks would execute asynchronously and concurrently. With the barrier, the tasks in front of it execute first; only after they have all completed does the barrier's own block execute; and only then are the tasks behind the barrier executed asynchronously. This is somewhat similar to dispatch_group above: if we want to do something after a series of tasks has finished, we can also achieve it with dispatch_barrier_async().
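The ordering contract can be sketched in Python with a toy queue model. This is an invented illustration of the semantics, not how libdispatch implements barriers: ordinary tasks behind a barrier wait on the barrier's future, and the barrier itself waits on everything submitted before it.

```python
import threading
from concurrent.futures import ThreadPoolExecutor, wait

class BarrierQueueModel:
    """Toy model: barrier blocks wait for everything submitted before them,
    run alone, and only then let later tasks start."""
    def __init__(self):
        self._pool = ThreadPoolExecutor(max_workers=8)
        self._batch = []     # futures submitted since the last barrier
        self._gate = None    # future of the most recent barrier block
    def async_(self, block):
        gate = self._gate
        def task():
            if gate is not None:
                gate.result()        # tasks behind a barrier wait for it
            block()
        self._batch.append(self._pool.submit(task))
    def barrier_async(self, block):
        earlier = list(self._batch)
        def fence():
            wait(earlier)            # wait for every task in front of the fence
            block()
        self._gate = self._pool.submit(fence)
        self._batch = []
    def shutdown(self):
        self._pool.shutdown(wait=True)

order = []
q = BarrierQueueModel()
q.async_(lambda: order.append("1.1"))
q.async_(lambda: order.append("1.2"))
q.barrier_async(lambda: order.append("barrier"))
q.async_(lambda: order.append("2.1"))
q.async_(lambda: order.append("2.2"))
q.shutdown()
print(order)   # first batch (either order), then "barrier", then the second batch
```

Within each batch the order is nondeterministic, but the barrier always sits strictly between the two batches.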

  

The code snippet below uses dispatch_barrier_async(). The code in the upper red box is the first batch of tasks, executed asynchronously; in the middle is the task barrier we add to the queue. dispatch_barrier_async() takes the queue the barrier belongs to as a parameter, and its trailing closure runs only after all the tasks in front of the barrier have executed. The lower yellow box is the second batch of tasks, which execute after the barrier's trailing closure has completed.

  

Next, the running result of the code above. Clicking the "use task isolation fence" button runs the method above; the output of the snippet is shown below. It is not hard to see that the tasks before dispatch_barrier_async() — the first batch — execute asynchronously. After the first batch completes, the barrier's block executes on the thread of the last task to finish in the first batch. Once the barrier's block has executed, the first task of the second batch runs on that same thread, while the other tasks open new threads. As shown below.

  

We can explain how the barrier works with a diagram. Note in the figure that the last task of the first batch, the barrier task, and the first task of the second batch all execute on the same thread — this is part of how the barrier isolates the two batches. From the figure it is easy to see that task 1.3, the barrier task, and task 2.1 execute one after another on thread 5.

  

9. dispatch_source

dispatch_source is a flexible and powerful part of GCD. In short, its main function is to monitor certain types of events on objects; when an event occurs, the handler to be run is placed on an associated queue for execution. A dispatch source also supports cancellation, and we can handle the cancellation event as well. Below are the different types of dispatch source; space is limited, so I will not go through them all here — details on each type are easy to look up. Let's take DATA_ADD, DATA_OR, and TIMER as examples of how to use a source.

  

 

1. DATA_ADD and DATA_OR

DISPATCH_SOURCE_TYPE_DATA_ADD and DISPATCH_SOURCE_TYPE_DATA_OR are similar: they merge pending data by addition or by a bitwise OR, respectively. We will take addition as the example; the OR version is not shown in this blog, but the code shared on github contains a complete example. The function below uses a dispatch source of type DISPATCH_SOURCE_TYPE_DATA_ADD.

First, we obtain a global queue and create a dispatch source named dispatchSource. When creating the source, we specify its type and associate it with the queue that will process its events. We then use dispatch_source_set_event_handler() to attach a handler to our source. This handler fires when certain conditions are met, and what those conditions are depends on the type of the dispatch source. Because our source's type is DISPATCH_SOURCE_TYPE_DATA_ADD, calling dispatch_source_merge_data() triggers the handler we set above. A dispatch source is created in a suspended state, so we must call dispatch_resume() to resume it; only a resumed source can observe its trigger condition and fire its handler.

The following code snippet calls dispatch_source_merge_data() in a for loop. During execution we can also call dispatch_source_cancel() to cancel the dispatch source. When the source is cancelled, the cancellation handler we registered with dispatch_source_set_cancel_handler() is executed. An example of cancelling a dispatch source is given in the countdown section below.

The dispatch_source we created here is of the data-add type, meaning that while the handler for a previous event is still pending, subsequently merged data waits and is coalesced. Waiting data is merged by dispatch_source_merge_data(): a DISPATCH_SOURCE_TYPE_DATA_ADD source merges pending values by addition, while a DISPATCH_SOURCE_TYPE_DATA_OR source merges them with a bitwise OR. The merged value then triggers a single invocation of the handler we set.
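The coalescing behavior described above can be sketched with the Swift 3+ Dispatch API, where the C functions map onto methods of DispatchSourceUserDataAdd (the queue label and the values merged are illustrative, not the blog's original code):

```swift
import Dispatch

// Serial queue that will run the source's event handler; the label is illustrative.
let handlerQueue = DispatchQueue(label: "com.example.source")

// Swift 3+ spelling of a DISPATCH_SOURCE_TYPE_DATA_ADD source.
let source = DispatchSource.makeUserDataAddSource(queue: handlerQueue)

var received: UInt = 0
let done = DispatchSemaphore(value: 0)

// The handler receives the coalesced (summed) data pending since its last run.
source.setEventHandler {
    received += source.data          // `data` is the merged value for this firing
    if received >= 10 { done.signal() }
}
source.resume()                      // sources are created suspended

// Merge data from the current thread; if the handler has not yet run,
// successive values are coalesced by addition before a single firing.
for i: UInt in 1...4 {
    source.add(data: i)              // 1 + 2 + 3 + 4 = 10 in total
}

done.wait()                          // block until the full total has been delivered
source.cancel()
print(received)  // 10, regardless of how many times the handler actually fired
```

The handler may fire anywhere from one to four times depending on timing, but the values it receives always sum to 10, which is the point of add-style coalescing.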

  

The code snippet above tests a DATA_ADD dispatch source. We define a variable sum to simulate the merging ourselves, then check whether the value delivered to the handler matches our manually computed sum. After each handler invocation, sum is reset to zero and the next round of merging begins. Below is the output of the code above. From it we can see that across the 10 loop iterations the handler fired only four times, and the merged value delivered on each invocation matched the manually recorded sum. That is how DATA_ADD works; the running effect is shown below. DATA_OR behaves analogously and is not described in detail here.

  

 

2. Timer

There is also a timer type among GCD's dispatch sources. We can create a timer-type dispatch source and set its event handler with dispatch_source_set_event_handler(), then configure it with dispatch_source_set_timer(): the first parameter is the dispatch source, the second is the time at which the timer first fires, the third is the repeat interval, and the fourth is the allowed leeway. When the configured number of countdown ticks has elapsed, we call dispatch_source_cancel() to cancel the source; after cancellation, the trailing closure registered with dispatch_source_set_cancel_handler() is executed.

The following example uses a DISPATCH_SOURCE_TYPE_TIMER dispatch source to implement a 10-second countdown. When the handler we set has executed 10 times, we cancel the dispatch_source. In this example, the countdown starts when the dispatch source is resumed via dispatch_resume(), and the timer ends after counting down for 10 seconds.
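The countdown pattern can be sketched with the Swift 3+ timer-source API. To keep the sketch quick to run, the interval here is 10 milliseconds rather than the blog's 1-second tick; the queue label is also illustrative:

```swift
import Dispatch

// Queue that runs the timer's handlers; the label is illustrative.
let timerQueue = DispatchQueue(label: "com.example.timer")

// Swift 3+ spelling of a DISPATCH_SOURCE_TYPE_TIMER source.
let timer = DispatchSource.makeTimerSource(queue: timerQueue)

var ticks = 0
let finished = DispatchSemaphore(value: 0)

// Fire immediately, then every 10 ms, with 1 ms of allowed leeway.
timer.schedule(deadline: .now(), repeating: .milliseconds(10),
               leeway: .milliseconds(1))

timer.setEventHandler {
    ticks += 1
    if ticks == 10 {
        timer.cancel()   // stop after 10 ticks, like the blog's countdown
    }
}
timer.setCancelHandler {
    finished.signal()    // runs once, after the source has been cancelled
}

timer.resume()           // the countdown starts when the source is resumed
finished.wait()
print(ticks)  // 10
```

Because cancel() is called from the event handler itself, no further ticks are delivered after it, so the cancel handler observes exactly 10 ticks.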

  

 

The following figure shows the result of running the countdown code. From the output it is easy to see that when the countdown starts, new threads are opened to execute the countdown tasks in turn. Although a concurrent queue is used and the thread may differ from tick to tick, the countdown tasks themselves execute in order.

  

 

That is enough content for today's blog. All the code above is shared on github at the address below. If you have any questions, you are welcome to join the QQ group (573884471); the old group is full, so this is a newly created one.

Github share address: https://github.com/lizelu/GCDDemo-Swift
