iOS luge Development Notes (8): GCD Deadlocks and Solutions

What causes a GCD deadlock, and how do we avoid it? A deadlock, generally speaking, is two threads A and B stuck waiting for each other to finish some operation: A cannot finish because it is waiting for B, and B cannot finish because it is waiting for A. Neither can make progress, and the result is a DeadLock.

 

When using GCD, we put the work to be done into a block and append that block to a queue, called a Dispatch Queue. There are two kinds of dispatch queue: a Serial Dispatch Queue waits for the previous block to finish before starting the next one (a serial queue), while a Concurrent Dispatch Queue can start the next block without waiting for the previous one to finish (a concurrent, or parallel, queue). Both kinds dequeue blocks in FIFO order.

Serial versus concurrent is a property of queues, while synchronous versus asynchronous describes how a task is submitted from the calling thread. The key difference is whether the current thread is blocked: a synchronous dispatch does not return until the submitted task has finished executing, so the caller must wait before moving on to its next statement; an asynchronous dispatch returns immediately and does not wait.
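A minimal sketch of the four combinations (the queue labels here are invented for illustration):

Objective-C
// Serial queue: its blocks run one at a time, in FIFO order.
dispatch_queue_t serialQueue = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
// Concurrent queue: its blocks start in FIFO order but may run simultaneously.
dispatch_queue_t concurrentQueue = dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);

// Asynchronous submission: returns immediately, the caller is not blocked.
dispatch_async(serialQueue, ^{ NSLog(@"async on serial"); });
dispatch_async(concurrentQueue, ^{ NSLog(@"async on concurrent"); });

// Synchronous submission: blocks the caller until the block has finished.
dispatch_sync(concurrentQueue, ^{ NSLog(@"sync on concurrent"); });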



Case 1:

Objective-C
NSLog(@"1"); // Task 1
dispatch_sync(dispatch_get_main_queue(), ^{
    NSLog(@"2"); // Task 2
});
NSLog(@"3"); // Task 3

Result:

1
// Tasks 2 and 3 never print; the program hangs here (deadlock)

Analysis:

  1. dispatch_sync means a synchronous dispatch;
  2. dispatch_get_main_queue returns the main queue, which runs on the main thread;
  3. Task 2 is submitted to the main queue synchronously.

Task 1 executes first; no problem there. Then the program hits the synchronous dispatch and the main thread enters a waiting state: it cannot continue to Task 3 until Task 2 has finished. But the main queue is a serial queue, and any newly submitted task goes to the back of the queue, to be executed in FIFO order. Task 2 is therefore queued behind the block that is currently running on the main thread, the block that still has Task 3 left to execute. The problem is now clear:

Task 3 can run only after Task 2 finishes (dispatch_sync is blocking the main thread), while Task 2 can run only after the currently running block, Task 3 included, finishes (the main queue is serial). They wait on each other forever, and the program simply gets stuck here. This is a deadlock.
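If the goal is just to run Task 2 on the main queue rather than to wait for it, the usual fix is an asynchronous dispatch; in an app with a running main run loop this sketch prints 1, 3, 2 and does not deadlock:

Objective-C
NSLog(@"1"); // Task 1
dispatch_async(dispatch_get_main_queue(), ^{
    NSLog(@"2"); // Task 2 runs later, after the current block has finished
});
NSLog(@"3"); // Task 3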

Case 2:

Objective-C
NSLog(@"1"); // Task 1
dispatch_sync(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    NSLog(@"2"); // Task 2
});
NSLog(@"3"); // Task 3

Result:

1
2
3

Analysis:

Task 1 executes first; then the program hits the synchronous dispatch and the main thread waits: Task 3 can continue only after Task 2 completes. From dispatch_get_global_queue we can see that Task 2 is added to a global concurrent queue, not to the main queue, so nothing ahead of it is waiting on the main thread. Task 2 executes on the concurrent queue, the dispatch_sync call returns to the main queue, and Task 3 continues. No deadlock.

Case 3:

Objective-C
dispatch_queue_t queue = dispatch_queue_create("com.demo.serialQueue", DISPATCH_QUEUE_SERIAL);

NSLog(@"1"); // Task 1
dispatch_async(queue, ^{
    NSLog(@"2"); // Task 2
    dispatch_sync(queue, ^{
        NSLog(@"3"); // Task 3
    });
    NSLog(@"4"); // Task 4
});
NSLog(@"5"); // Task 5

Result:

1
5
2
// the order of 5 and 2 is not guaranteed; 3 and 4 never print

Analysis:

This time the queue is not one of the system-provided serial or concurrent queues; dispatch_queue_create with the DISPATCH_QUEUE_SERIAL attribute creates a new serial queue.

1. Task 1 executes.
2. The asynchronous dispatch adds one block, containing Task 2, the synchronous dispatch, and Task 4, to the serial queue. Because the dispatch is asynchronous, Task 5 on the main thread does not wait for that block to finish.
3. Because Task 5 does not wait, the output order of Task 2 and Task 5 is not determined.
4. After Task 2 executes, the synchronous dispatch adds Task 3 to the same serial queue and blocks until it finishes.
5. The block currently running on the serial queue, which still has Task 4 to execute, was enqueued before Task 3, so on a serial queue Task 3 cannot start until that block (and Task 4) completes. But the synchronous dispatch is blocking that block until Task 3 completes. The two wait on each other indefinitely, which is a deadlock. A non-deadlocking variant is sketched below.
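Assuming the inner block really does have to run synchronously, one way to avoid the deadlock is to send it to a different queue (or simply dispatch it asynchronously). This sketch uses a second, hypothetical serial queue:

Objective-C
dispatch_queue_t queue = dispatch_queue_create("com.demo.serialQueue", DISPATCH_QUEUE_SERIAL);
// A separate queue for the inner synchronous work (name invented for illustration).
dispatch_queue_t innerQueue = dispatch_queue_create("com.demo.innerQueue", DISPATCH_QUEUE_SERIAL);

NSLog(@"1"); // Task 1
dispatch_async(queue, ^{
    NSLog(@"2"); // Task 2
    dispatch_sync(innerQueue, ^{ // a different queue, so nothing waits on itself
        NSLog(@"3"); // Task 3
    });
    NSLog(@"4"); // Task 4
});
NSLog(@"5"); // Task 5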

Case 4:

Objective-C
NSLog(@"1"); // Task 1
dispatch_async(dispatch_get_global_queue(0, 0), ^{
    NSLog(@"2"); // Task 2
    dispatch_sync(dispatch_get_main_queue(), ^{
        NSLog(@"3"); // Task 3
    });
    NSLog(@"4"); // Task 4
});
NSLog(@"5"); // Task 5

Result:

1
2
5
3
4
// the order of 2 and 5 is not guaranteed

Analysis:

First, the main queue receives Task 1, the asynchronous dispatch, and Task 5. The block dispatched asynchronously contains [Task 2, the synchronous dispatch, Task 4].

So Task 1 executes first, and the asynchronous block is added to the Global Queue. Because the dispatch is asynchronous, Task 5 does not wait, so the output order of Task 2 and Task 5 is not determined.

Now look at the execution order inside the asynchronous block. After Task 2 executes, the synchronous dispatch adds Task 3 to the Main Queue, so Task 3 runs after Task 5.

After Task 3 finishes, the dispatch_sync returns and the block continues with Task 4; nothing is blocked.

From this analysis we get several guarantees: 1 prints first; the order of 2 and 5 is not fixed; 3 prints after 5; and 4 prints only after 3. There is no deadlock, because the code waiting for the main queue is not itself running on the main queue.

Case 5:

Objective-C
dispatch_async(dispatch_get_global_queue(0, 0), ^{
    NSLog(@"1"); // Task 1
    dispatch_sync(dispatch_get_main_queue(), ^{
        NSLog(@"2"); // Task 2
    });
    NSLog(@"3"); // Task 3
});
NSLog(@"4"); // Task 4
while (1) {}
NSLog(@"5"); // Task 5

Result:

1
4
// the order of 1 and 4 is not guaranteed; 2, 3, and 5 never print

Analysis:

As in the previous cases, list what ends up on the Main Queue: [the asynchronous dispatch, Task 4, the infinite loop, Task 5].

The block added asynchronously to the Global Queue contains: [Task 1, the synchronous dispatch, Task 3].

The asynchronous dispatch comes first and Task 4 does not wait for it, so the output order of Task 1 and Task 4 is not determined.

After Task 4 completes, the program enters the infinite loop and the main thread, and with it the main queue, is blocked. The block already running on the Global Queue is not affected, so after Task 1 it reaches the synchronous dispatch.

That synchronous dispatch adds Task 2 to the main queue, and Task 3 can run only after Task 2 completes. But the main thread is stuck in the infinite loop, so Task 2 never executes, which means Task 3 never executes, and Task 5, which comes after the loop, never executes either.

In the end, the only output is 1 and 4, in no guaranteed order.


Newcomers, and even developers with some GCD experience, tend to think of deadlock as a high-end, operating-system-level problem: far away, something they will never run into. That idea is quite wrong, because with only three lines of code (one line, if you insist) you can create a deadlock by hand.

#import <Foundation/Foundation.h>

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        dispatch_sync(dispatch_get_main_queue(), ^(void) {
            NSLog(@"deadlock here");
        });
    }
    return 0;
}

This simplest possible Objective-C command-line program deadlocks: run it and nothing is ever printed.

Before explaining why the deadlock occurs, let's first clarify two pairs of basic concepts: "synchronous & asynchronous" and "serial & concurrent".

Synchronous execution: for example, dispatch_sync. This function adds a block to the specified queue and waits until the block has finished executing. So until the block completes, the thread that called dispatch_sync is blocked.

The corresponding concept is asynchronous execution:

Asynchronous execution: typically dispatch_async. This function also adds a block to the specified queue, but unlike synchronous execution, it returns immediately after enqueuing the block; it does not wait for the block to execute.
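A small sketch of the difference, with illustrative log messages, using a global concurrent queue:

Objective-C
dispatch_queue_t q = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

dispatch_sync(q, ^{
    NSLog(@"runs before dispatch_sync returns"); // the caller is blocked until this finishes
});
NSLog(@"printed only after the sync block is done");

dispatch_async(q, ^{
    NSLog(@"runs whenever the queue gets to it"); // the caller did not wait for this
});
NSLog(@"printed immediately after enqueuing the async block");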

Next, let's look at the other pair of concepts: "serial & concurrent".

Serial queue: for example, dispatch_get_main_queue. All tasks in this queue are executed one at a time, in the order they arrive; before any task runs, every task enqueued ahead of it is guaranteed to have finished. The main queue is tied to the main thread; for other serial queues, GCD runs their blocks one at a time on a thread it manages for the queue.

The opposite is the concurrent queue:

Concurrent queue: for example, dispatch_get_global_queue. Tasks in this queue also start in the order they were enqueued; note that they only start in that order, while when each one finishes is not determined and depends on how long each task takes. And for n tasks on a concurrent queue, GCD does not necessarily create n threads; it manages a thread pool and optimizes as appropriate.
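A sketch of that start-order versus finish-order distinction (the queue label and sleep duration are invented for illustration):

Objective-C
dispatch_queue_t cq = dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);

// The blocks start in FIFO order, but the "slow" task may finish after
// the "fast" one even though it was enqueued first.
dispatch_async(cq, ^{
    [NSThread sleepForTimeInterval:1.0];
    NSLog(@"slow task done");
});
dispatch_async(cq, ^{
    NSLog(@"fast task done");
});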

Think of dispatch_sync itself as a task, say a critical, highly focused cash-delivery run. The run is important: once it starts, it must be carried through in one go, and nothing may interrupt it (the thread is blocked).

Now the main thread starts executing this cash-delivery task. Halfway through, the courier suddenly says: I'm tired, I've been working too long, I need to rest right now (a block is added to the main queue). Naively he thinks: I know the delivery is important, I really should wait until it is finished before resting (that is what serial means), but my body simply won't let me keep going.

But as we said, the delivery is important: once started, it cannot be broken off (the thread is blocked). So how can he possibly take a break in the middle? The only way he gets to rest (the only way the block can execute) is to first get the cash to a safe place, finishing the delivery, and then rest.

Mapping this back to the code: when we synchronously dispatch the block, we are telling the main thread, "finish what you are doing, then come and handle my block; I will wait right here until you do." But what the main thread is in the middle of doing is precisely this dispatch_sync call, which has not returned yet, so it never has a spare moment to execute the block. That is why the code never gets past this line: it is not stuck inside the block, the block simply never gets a chance to run.

Let's sum up what a deadlock is. Although we have been talking about queues, threads, and the relationship between them, a deadlock is ultimately a matter of threads; a queue is just an abstract data structure that GCD gives us. A deadlock always happens between one or more threads, and its relationship to blocking can be understood like this: blocking in one direction is an everyday occurrence in threaded code (blocking the main thread merely hurts the user experience), but once the blocking becomes mutual, you have a deadlock. Here the main queue is serial, its thread is blocked while it executes the current task, and that task (the dispatch_sync) in turn needs to block until the main thread executes something else. They block each other, and that is the deadlock.

Next, let's think about which situations lead to deadlock. It is hard to give a precise answer straight away, so let's work by exclusion and first see which situations cannot deadlock. For example, executing a block asynchronously certainly will not cause this deadlock. Change the code to the following:

dispatch_async(dispatch_get_global_queue(0, 0), ^(void) {
    NSLog(@"this won't deadlock");
});

We can even conclude that asynchronous execution never causes this kind of deadlock. Recall the cause of the earlier deadlock: the crucial point was that the main thread was executing dispatch_sync, a synchronous call that does not return until the block has run. With asynchronous execution the call returns immediately, so the main thread is not blocked; the mutual blocking cannot arise. The main thread is, of course, briefly busy later when it processes the block, but that does not cause a deadlock.

From this analysis, what we need to care about in GCD is whether the block is submitted synchronously or asynchronously, and what kind of queue it is added to (serial or concurrent).

So next we only need to ask when a synchronous dispatch deadlocks. One conclusion we can draw right away: synchronously adding a block to a concurrent queue does not cause this deadlock. Recall the earlier cause: the block was added to a serial queue and could not run until the task ahead of it, the very task that was waiting for it, had finished. If instead we synchronously add a block to a concurrent queue, GCD manages the threads for us: the current thread is busy (blocked inside the synchronous call), so the block can run on another thread; sooner or later it executes, the synchronous call returns, and there is no deadlock.
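A small sketch of that non-deadlocking case: synchronously dispatching to a global concurrent queue from the main thread.

Objective-C
NSLog(@"before");
dispatch_sync(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Runs on the concurrent queue (possibly another thread, possibly the
    // calling thread as an optimization); either way it is not queued
    // behind the code that is waiting for it.
    NSLog(@"inside the sync block");
});
NSLog(@"after"); // always reached; no deadlock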

Finally, consider synchronously adding a block to a serial queue. Will that cause a deadlock? Not necessarily. In fact, the condition that guarantees this deadlock is:

While running on a serial queue, synchronously add a block to that same queue.

The example at the beginning of the article is exactly this case. Synchronously adding a block to a different serial queue does not necessarily deadlock. For example:

dispatch_queue_t queue = dispatch_queue_create("serial", NULL);
dispatch_sync(queue, ^(void) {
    NSLog(@"This won't deadlock");
});

Analyzing this code: the task is added synchronously to the serial queue named "serial", and GCD executes the block on that queue (it may use another thread, or simply run the block on the calling thread). The main thread is blocked while the block runs, but the block is not queued behind anything that is itself waiting for dispatch_sync to return, so there is no mutual waiting and no deadlock.

Why do we say that synchronously adding a task to a different serial queue does not necessarily deadlock, rather than that it never deadlocks? Because dispatches can be nested. For example: synchronously submit task A to serial queue A; inside task A, synchronously submit task B to serial queue B; and inside task B, synchronously submit a task back to queue A. No block is added directly to the queue it was dispatched from, yet the condition "while running on a serial queue, synchronously add a block to that same queue" is satisfied indirectly, and the result is still a deadlock.
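A sketch of that indirect case (queue names invented for illustration); don't run this anywhere you care about, because it hangs:

Objective-C
dispatch_queue_t queueA = dispatch_queue_create("com.example.queueA", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t queueB = dispatch_queue_create("com.example.queueB", DISPATCH_QUEUE_SERIAL);

dispatch_sync(queueA, ^{             // task A, running on queue A
    dispatch_sync(queueB, ^{         // task B, running on queue B
        dispatch_sync(queueA, ^{     // waits for queue A, but queue A is still
            NSLog(@"never reached"); // busy running task A -> deadlock
        });
    });
});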

The simplest way to judge whether this deadlock will occur is to check whether, inside a task running on a serial queue (including, of course, the main queue), you synchronously add a task to that same queue. Since each serial queue's tasks execute on a thread one at a time, the rule amounts to: never call, from that thread, something that blocks waiting on that same thread.
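If you want to check this at runtime rather than by inspection, one common approach is to tag the queue with dispatch_queue_set_specific and test dispatch_get_specific before calling dispatch_sync. This helper is only a sketch; the key and queue names are invented:

Objective-C
static void *kQueueTagKey = &kQueueTagKey;

dispatch_queue_t queue = dispatch_queue_create("com.example.tagged", DISPATCH_QUEUE_SERIAL);
// Tag the queue so code can later ask "am I already on this queue?"
dispatch_queue_set_specific(queue, kQueueTagKey, (void *)1, NULL);

void (^safeSync)(dispatch_block_t) = ^(dispatch_block_t block) {
    if (dispatch_get_specific(kQueueTagKey) != NULL) {
        block();                     // already on the queue: just run the block
    } else {
        dispatch_sync(queue, block); // not on the queue: safe to wait
    }
};

safeSync(^{ NSLog(@"no deadlock either way"); });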

In fact, when we program with synchronous dispatches it is usually because we want a completely determined execution order between tasks, and GCD already provides plenty of powerful tools for that; synchronously adding tasks to a serial queue is rarely the right choice, since the queue is already serial, so just add them asynchronously. The simplest fix for the deadlock example at the start of the article is therefore to add the letter "a" in the right place, turning dispatch_sync into dispatch_async.
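For completeness, a sketch of that one-letter fix. Note that in a bare command-line tool with no run loop the block may never actually get a chance to execute before the process exits, but the program no longer deadlocks:

Objective-C
#import <Foundation/Foundation.h>

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        dispatch_async(dispatch_get_main_queue(), ^(void) {
            NSLog(@"no deadlock here"); // enqueued; main() does not wait for it
        });
    }
    return 0; // without a run loop or dispatch_main(), the block may simply never run
}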

 
