In iOS multithreaded development, we keep improving through continuous learning: reading books and documentation and working on real projects. I recently read "Pro Multithreading and Memory Management for iOS and OS X" (the Objective-C advanced programming book on iOS and OS X multithreading and memory management), which gave me a deeper understanding of multithreading, so I am writing this summary and record. I have uploaded the book to https://pan.baidu.com/s/1c2fX3EC; it is one of the must-read books for iOS developers and is well written, so feel free to download and read it. Because of its cover, it is also known as the Lion Book:
(1) Problems with multithreading
What problems can multithreading cause? When multiple threads modify the same data, the data can become inconsistent; when multiple threads wait for one another, a deadlock occurs; and creating too many threads consumes a large amount of memory. Multithreading bugs can therefore seriously affect both the functionality and the performance of an app.
(2) The role of multithreading
Why must we use multithreading? Isn't the main thread alone enough?

As can be seen, if a time-consuming operation executes on the main thread, it blocks the main thread, so the main interface cannot respond to the user's actions in time and the app appears frozen. With multithreading, we can move time-consuming operations such as network requests and image uploads/downloads to other threads, and return to the main thread to update the UI after they finish successfully. That is why multithreading is necessary.
(3) Queues in GCD: the dispatch queue
A dispatch queue is GCD's task queue. We simply append tasks to a queue; GCD then takes them out and executes them on threads, following the FIFO (first in, first out) principle.
(4) Types of Dispatch queue
There are two types of dispatch queue in GCD: the serial dispatch queue, which waits for the currently executing task to finish, and the concurrent dispatch queue, which does not.
Serial dispatch queue: waits for the currently executing task to finish before starting the next one.
Concurrent dispatch queue: does not wait for the currently executing task to finish; multiple threads are used to execute several tasks at the same time.
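As a minimal sketch (the queue labels are illustrative), both kinds of queue are generated with dispatch_queue_create; the second argument decides the type:

```objc
// Serial dispatch queue: blocks execute one at a time, in FIFO order.
dispatch_queue_t serialQueue =
    dispatch_queue_create("com.example.serialQueue", NULL);

// Concurrent dispatch queue: blocks are dequeued in FIFO order,
// but may execute in parallel on different threads.
dispatch_queue_t concurrentQueue =
    dispatch_queue_create("com.example.concurrentQueue", DISPATCH_QUEUE_CONCURRENT);

dispatch_async(serialQueue, ^{ NSLog(@"serial 1"); });      // always finishes before "serial 2" starts
dispatch_async(serialQueue, ^{ NSLog(@"serial 2"); });
dispatch_async(concurrentQueue, ^{ NSLog(@"concurrent a"); }); // a and b may overlap
dispatch_async(concurrentQueue, ^{ NSLog(@"concurrent b"); });
```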
Why does a serial dispatch queue have to wait for one task to finish before executing the next? Because a serial dispatch queue uses only one thread, and a thread can execute only one task at a time, so each successor must wait.
A concurrent dispatch queue can create multiple threads, so it does not need to wait for the previous task to finish: tasks are still taken out in order, but each is handed to a different thread to execute. It looks as if multiple tasks are executing at the same time, and they really are executing at the same time. Of course, there is a limit to this concurrency: the number of tasks executing concurrently is determined by the system according to its current state.
From the explanation above, we can see the relationship among serial dispatch queues, concurrent dispatch queues, and threads.
(5) Implementing concurrency with multiple serial dispatch queues, and the problem it causes
When multiple serial dispatch queues are generated, the queues execute in parallel with one another. Only one appended block runs at a time within a single serial dispatch queue, but if blocks are appended to 4 serial dispatch queues, each queue executes one, which means 4 blocks execute concurrently.
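A minimal sketch of this idea, with illustrative queue names: one block appended to each of four serial dispatch queues can execute concurrently:

```objc
for (int i = 0; i < 4; i++) {
    // Each serial dispatch queue drives its own thread.
    dispatch_queue_t serialQueue =
        dispatch_queue_create("com.example.serial", NULL);
    dispatch_async(serialQueue, ^{
        NSLog(@"block %d", i); // the four blocks may run in parallel
    });
}
```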
This is a clumsy way to achieve concurrency, and it has a big problem: it consumes a lot of memory, because each serial dispatch queue creates its own thread.
(6) Solving the problem of resource competition
When multiple threads operate on the same data, race conditions or inconsistent data can result. The simplest solution is to use a serial dispatch queue: a serial dispatch queue uses only one thread and executes only one task at a time, starting the next task only when the current one ends, so access to the contended resource is exclusive at any given moment.
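A sketch of this pattern (the names are illustrative): all updates to the shared array are funneled through one serial dispatch queue, so only one thread touches it at a time:

```objc
NSMutableArray *sharedArray = [NSMutableArray array];

// The single serial queue serializes every access to sharedArray.
dispatch_queue_t syncQueue =
    dispatch_queue_create("com.example.arraySync", NULL);

// Any thread may request an update; the update itself runs on syncQueue.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    dispatch_async(syncQueue, ^{ [sharedArray addObject:@1]; });
});
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    dispatch_async(syncQueue, ^{ [sharedArray addObject:@2]; });
});
```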
(7) A generated dispatch queue must be released by the programmer. This is because, unlike a Block, a dispatch queue is not handled as an Objective-C object (under the manual model; since iOS 6 / OS X 10.8, ARC manages dispatch objects automatically). A dispatch queue generated with the dispatch_queue_create function must be released with dispatch_release when you are finished with it. Consider the following example:
Is it a problem to release the queue immediately?
No. After dispatch_async appends the Block to the dispatch queue, the queue is held by the Block, so even if we call dispatch_release immediately, the queue is not discarded and the Block can still execute. When the Block finishes executing, it releases the dispatch queue; at that point nothing holds the queue any longer, so it is discarded.
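Under the manual model described here (before ARC began managing dispatch objects), the pattern looks like this sketch:

```objc
dispatch_queue_t queue =
    dispatch_queue_create("com.example.queue", NULL);

dispatch_async(queue, ^{
    NSLog(@"block on queue"); // the pending Block holds the queue alive
});

// Safe even though the Block may not have run yet:
// the queue is only discarded after the Block finishes and releases it.
dispatch_release(queue);
```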
(8) Standard dispatch queues provided by the system
--Main dispatch queue: the queue whose tasks execute on the main thread. Since there is only one main thread, the main dispatch queue is naturally a serial dispatch queue. Processing appended to the main dispatch queue is executed in the runloop of the main thread.
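A typical sketch of this pattern: do the time-consuming work on a background queue, then append the UI update to the main dispatch queue:

```objc
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // time-consuming work, e.g. a download, runs off the main thread

    dispatch_async(dispatch_get_main_queue(), ^{
        // executed in the main thread's runloop: safe to update the UI here
    });
});
```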
--Global dispatch queue: a concurrent dispatch queue available to the whole application. There is no need to generate a concurrent dispatch queue with the dispatch_queue_create function; just obtain the global dispatch queue. It comes in four priorities, but the threads used by the global dispatch queue are not guaranteed to be real-time, so the execution priority is only a rough guideline.
The dispatch_retain and dispatch_release functions have no effect on the main dispatch queue or the global dispatch queue, and cause no problems. This is also why obtaining and using a global dispatch queue is easier than generating, using, and releasing a concurrent dispatch queue yourself.
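The four priorities are obtained like this:

```objc
dispatch_queue_t highQueue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
dispatch_queue_t defaultQueue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_queue_t lowQueue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);
dispatch_queue_t backgroundQueue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0);
```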
(9) dispatch_set_target_queue: change the execution priority of a generated dispatch queue
The first parameter of the dispatch_set_target_queue function is the dispatch queue whose execution priority you want to change; the second parameter (the target) is a global dispatch queue with the priority you want to use. If the first parameter is the system-provided main dispatch queue or a global dispatch queue, the result is unknown, so never specify those.
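A sketch of lowering the priority of a generated serial queue by targeting the background global queue:

```objc
// A queue created with dispatch_queue_create gets default priority.
dispatch_queue_t serialQueue =
    dispatch_queue_create("com.example.lowPrioritySerial", NULL);

dispatch_queue_t backgroundQueue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0);

// serialQueue now executes with background priority.
dispatch_set_target_queue(serialQueue, backgroundQueue);
```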
(10) dispatch_after: deferred execution
NSEC_PER_SEC: nanoseconds per second
NSEC_PER_MSEC: nanoseconds per millisecond
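A minimal example; note that dispatch_after appends the Block to the queue after the delay, so execution is not guaranteed at exactly that instant:

```objc
// Append the Block to the main dispatch queue about 3 seconds from now.
dispatch_time_t time = dispatch_time(DISPATCH_TIME_NOW, 3ull * NSEC_PER_SEC);
dispatch_after(time, dispatch_get_main_queue(), ^{
    NSLog(@"at least 3 seconds have passed");
});
```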
(11) dispatch_barrier_async
Waits until all processing already appended to the concurrent dispatch queue has finished executing in parallel, then appends the specified processing to that concurrent dispatch queue. After the processing appended by dispatch_barrier_async finishes, the concurrent dispatch queue returns to its normal behavior, and the processing appended after the barrier begins to execute in parallel again.
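A sketch of the reader/writer pattern this enables (names are illustrative):

```objc
dispatch_queue_t queue =
    dispatch_queue_create("com.example.readerWriter", DISPATCH_QUEUE_CONCURRENT);

dispatch_async(queue, ^{ NSLog(@"read 1"); });
dispatch_async(queue, ^{ NSLog(@"read 2"); });

// Runs alone: only after read 1 and read 2 finish, and before read 3 and read 4 start.
dispatch_barrier_async(queue, ^{ NSLog(@"write"); });

dispatch_async(queue, ^{ NSLog(@"read 3"); });
dispatch_async(queue, ^{ NSLog(@"read 4"); });
```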
(12) dispatch_async
Appends the specified Block asynchronously to the specified dispatch queue; the dispatch_async function does not wait at all.
(13) Problems caused by dispatch_sync
Once the dispatch_sync function is called, it does not return until the specified processing has finished executing. Because of this, dispatch_sync can easily cause a deadlock.
This source code executes the specified Block on the main dispatch queue, i.e. on the main thread, and waits for it to finish. But the source code itself is already running on the main thread, so the Block appended to the main dispatch queue can never start executing. The following example deadlocks in the same way:
The Block executing on the main dispatch queue waits for the end of another Block that is also supposed to execute on the main dispatch queue. A serial dispatch queue can of course cause the same problem.
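Both deadlocks sketched in code:

```objc
// Deadlock 1: this code runs on the main thread and synchronously waits
// for a Block that can only run on that same main thread.
dispatch_sync(dispatch_get_main_queue(), ^{
    NSLog(@"never reached");
});

// Deadlock 2: the same problem on a serial dispatch queue.
dispatch_queue_t queue = dispatch_queue_create("com.example.serial", NULL);
dispatch_async(queue, ^{
    dispatch_sync(queue, ^{ // waits on the queue it is already occupying
        NSLog(@"never reached");
    });
});
```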
(14) dispatch_apply
The dispatch_apply function is an API related to the dispatch_sync function and dispatch groups. It appends the specified Block to the specified dispatch queue a specified number of times, and waits until all of that processing has finished.
Because the iterations run on the global dispatch queue, the execution time of each iteration varies and their order is not guaranteed. But the final log statement always appears last, because the dispatch_apply function waits for all iterations to finish.
Because dispatch_apply, like dispatch_sync, waits until the processing has finished executing, it is recommended to call dispatch_apply asynchronously, inside a dispatch_async block.
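A sketch (the array contents are illustrative):

```objc
NSArray *array = @[@11, @22, @33];
dispatch_queue_t queue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

// Run dispatch_apply asynchronously so the current thread is not blocked.
dispatch_async(queue, ^{
    dispatch_apply([array count], queue, ^(size_t index) {
        NSLog(@"%zu: %@", index, array[index]); // iteration order is undefined
    });
    NSLog(@"done"); // always logged last: dispatch_apply waits for all iterations
});
```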
(15) dispatch_suspend / dispatch_resume
When many blocks have been appended to a dispatch queue, you sometimes want the appended but not-yet-executed blocks not to run for a while, and to resume them when execution becomes possible again. In that case, simply suspend the dispatch queue and resume it later.
--The dispatch_suspend function suspends the specified dispatch queue:
dispatch_suspend(queue);
--The dispatch_resume function resumes the specified dispatch queue:
dispatch_resume(queue);
These functions have no effect on processing that has already started. Suspension stops the execution of blocks that are appended to the dispatch queue but not yet started; resuming allows them to continue executing.
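A minimal sketch of the pair in use:

```objc
dispatch_queue_t queue =
    dispatch_queue_create("com.example.serial", NULL);

dispatch_suspend(queue);                          // blocks appended from now on will not start
dispatch_async(queue, ^{ NSLog(@"deferred"); });  // appended, but waits

dispatch_resume(queue);                           // "deferred" may now execute
```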
(16) Dispatch semaphore
First, a bug that occurs when no semaphore is used:
Here the NSMutableArray object is updated from multiple threads via the global dispatch queue, so there is a high probability of a memory error that terminates the application unexpectedly.
A dispatch semaphore is a counting semaphore that holds a count value for use in multithreaded programming. When the count is 0, wait; when the count is 1 or greater, subtract 1 and continue without waiting.
Create the semaphore with dispatch_semaphore_create; the parameter represents the initial value of the count.
The dispatch_semaphore_wait function waits until the count value of the dispatch semaphore is greater than or equal to 1. When the count is 1 or greater, either immediately or after reaching 1 during the wait, it subtracts 1 from the count and returns.
The return value of dispatch_semaphore_wait can also be used for branching: when it returns 0, the processing that requires exclusive control can be performed safely. When that processing ends, the dispatch_semaphore_signal function adds 1 back to the count value of the dispatch semaphore.
Example:
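A sketch of the NSMutableArray code above, corrected with a dispatch semaphore:

```objc
dispatch_queue_t queue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

// Initial count 1: at most one thread may update the array at a time.
dispatch_semaphore_t semaphore = dispatch_semaphore_create(1);

NSMutableArray *array = [NSMutableArray array];
for (int i = 0; i < 100000; i++) {
    dispatch_async(queue, ^{
        // Wait until the count is >= 1, then subtract 1 and continue.
        dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);

        [array addObject:@(i)]; // exclusive access: safe

        dispatch_semaphore_signal(semaphore); // add 1 back to the count
    });
}
```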
(17) dispatch_once
The dispatch_once function is an API that guarantees the specified processing is executed only once during the lifetime of the application. Commonly written initialization code can be simplified with the dispatch_once function.
On multicore CPUs, code that checks and updates a flag variable to mark initialization can end up performing the initialization more than once. With the dispatch_once function there is no such worry. This is exactly the singleton pattern: it is used when generating a singleton object.
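A sketch of the singleton pattern with dispatch_once (MyManager is an illustrative class name):

```objc
+ (instancetype)sharedInstance {
    static MyManager *sharedInstance = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        // Executed exactly once during the application's lifetime,
        // even when called simultaneously from multiple cores.
        sharedInstance = [[MyManager alloc] init];
    });
    return sharedInstance;
}
```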
The basic implementation of GCD
Apple's official note says that, typically, the thread-management code you would otherwise write in an application is implemented at the system level.
What does system-level mean? It means implemented at the level of the XNU kernel, the core of iOS and macOS. Therefore, no matter how hard a programmer tries to write thread-management code, it cannot outperform GCD, which is implemented at the XNU kernel level.
Software components used to implement the dispatch queue:
A dispatch queue has no concept of "cancellation". Once processing has been appended to a dispatch queue, there is no way to remove it, nor to cancel it while it is executing.
Next: iOS Multithreading Development with GCD (II): usage and multithreaded development