iOS offers three multithreading technologies:
1. NSThread — each NSThread object corresponds to one thread; it is the most lightweight option (true multithreading).
2. NSOperation/NSOperationQueue — an object-oriented threading technology.
3. GCD (Grand Central Dispatch) — a C-based framework that takes full advantage of multiple cores; it is Apple's recommended multithreading technology.
The latter two are Apple's "concurrency" technologies, developed specifically so that programmers no longer have to worry about thread-level details.
Listed from top to bottom, these three go from a low level of abstraction to a high one; the higher the abstraction, the simpler the technology is to use, and the more Apple recommends it. In practice, the many frameworks in a project use different multithreading technologies.
Comparison of the three multithreading technologies:

NSThread:
- Pros: more lightweight than the other two, and easy to use.
- Cons: you must manage the thread's life cycle yourself, along with thread synchronization, locking, sleeping, and waking. Locking data for synchronization carries a certain overhead.

NSOperation:
- You do not need to care about thread management or data synchronization, and can focus on the operation you want to perform.
- NSOperation is object-oriented.

GCD:
- Grand Central Dispatch is a multi-core programming solution developed by Apple, available on iOS 4.0 and later. It is an efficient and powerful replacement for NSThread and NSOperation.
- GCD is based on the C language.
What is GCD?
Grand Central Dispatch, or GCD, is a low-level API that provides a new way to write concurrent programs. In terms of basic function, GCD is a bit like NSOperationQueue: both let a program split work into multiple single tasks and submit them to a work queue for concurrent or serial execution. GCD is more efficient than NSOperationQueue, and it is not part of the Cocoa framework.
In addition to executing code in parallel, GCD provides a highly integrated event control system. You can set handlers to respond to file descriptors, Mach ports (used for interprocess communication on OS X), processes, timers, signals, and user-generated events. These handlers are executed concurrently through GCD.
GCD's API is largely block-based, though GCD can also be used without blocks, for example via the traditional C mechanism of providing function pointers and context pointers. Practice shows that when used with blocks, GCD is very easy to use and delivers its full capabilities.
You can get GCD's documentation by typing "man dispatch" in Terminal on your Mac.
Why use?
GCD offers many advantages beyond traditional multithreaded programming:
- Ease of use: GCD is easier to use than raw threads. Because GCD is based on work units rather than on operations the way threads are, it can handle tasks such as waiting for task completion, monitoring file descriptors, executing code periodically, and suspending work. The block-based design makes it extremely simple to pass context between different code scopes.
- Efficiency: GCD is implemented so lightly and elegantly that in many places it is more practical and faster than creating dedicated, resource-intensive threads. This ties into ease of use: part of what makes GCD easy to use is that you can simply use it without worrying too much about efficiency.
- Performance: GCD automatically increments the number of threads based on system load, which reduces context switching and increases computational efficiency.
Dispatch Objects
Although GCD is pure C, it is composed in an object-oriented style. GCD objects are called dispatch objects. Dispatch objects are reference-counted, like Cocoa objects. You use the dispatch_release and dispatch_retain functions to manipulate a dispatch object's reference count for memory management. Unlike Cocoa objects, however, dispatch objects do not participate in the garbage collection system, so even with GC enabled you must manage the memory of GCD objects manually.
Dispatch queues and dispatch sources (described later) can be suspended and resumed, can have an arbitrary associated context pointer, and can have an associated finalizer function that is triggered on completion. You can refer to "man dispatch_object" for more information on these features.
Dispatch Queues
The basic concept in GCD is the dispatch queue. A dispatch queue is an object that accepts tasks and executes them in first-in, first-out order. A dispatch queue can be concurrent or serial: a concurrent queue executes tasks more or less in parallel, like NSOperationQueue, depending on system load, while a serial queue executes only one task at a time.
There are three types of queues in GCD:
- The main queue: plays the same role as the main thread. In fact, tasks submitted to the main queue execute on the main thread. The main queue is obtained by calling dispatch_get_main_queue(). Because it is associated with the main thread, the main queue is a serial queue.
- Global queues: the global queues are concurrent queues shared by the entire process. The process has three global queues, with high, default, and low priority. You access them by calling the dispatch_get_global_queue function, passing in the desired priority.
- User queues: user queues (GCD does not use this term, but there is no specific name for these queues, so we call them user queues) are queues created with the dispatch_queue_create function. These queues are serial. Because of this, they can be used as a synchronization mechanism, somewhat like a mutex in traditional threading.
There are several ways to create a dispatch queue:
1. dispatch_queue_t queue = dispatch_queue_create("com.dispatch.serial", DISPATCH_QUEUE_SERIAL);
Creates a serial queue. Blocks in the queue execute in first-in, first-out (FIFO) order, effectively on a single thread. The first parameter is the queue's name, which is useful when debugging a program, so give each queue a meaningful, distinct name.
2. dispatch_queue_t queue = dispatch_queue_create("com.dispatch.concurrent", DISPATCH_QUEUE_CONCURRENT);
Creates a concurrently executing queue; its blocks are distributed across multiple threads for execution.
3. dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
Gets one of the concurrent queues the process provides by default; the priority argument selects the high-, default-, or low-priority queue. Because these queues are generated by the system, you cannot call dispatch_resume() or dispatch_suspend() to continue or interrupt their execution. It is important to note that the three queues do not represent three threads; there may be more. A concurrent queue automatically spawns a reasonable number of threads based on actual conditions; you can think of it as a thread pool managed by the dispatch queue and transparent to the program logic.
The official documentation describes three global concurrent queues, but there is actually a fourth, lower-priority queue, DISPATCH_QUEUE_PRIORITY_BACKGROUND. The individual dispatch queues in use can be observed while debugging in Xcode.
4. dispatch_queue_t queue = dispatch_get_main_queue();
Gets the main thread's dispatch queue, which is in fact a serial queue. As with the global queues, you cannot suspend or resume the main queue's execution.
Next, we can use the dispatch_async or dispatch_sync function to submit the block to be run:
dispatch_async(queue, ^{
    // block-specific code
});
Executes the block asynchronously; the function returns immediately.

dispatch_sync(queue, ^{
    // block-specific code
});
Executes the block synchronously; the function does not return until the block has finished. The compiler optimizes the code according to the actual situation, so you will sometimes find that the block actually executes on the current thread without a new thread being created.
Practical programming experience says to avoid dispatch_sync as much as possible; when calls are nested, it can easily deadlock the program.
If queue1 is a serial queue, this code deadlocks immediately:
dispatch_sync(queue1, ^{
    dispatch_sync(queue1, ^{
        ......
    });
    ......
});
In practice, code is generally written with dispatch_async. A common multithreaded pattern for requesting network data:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // start the network request for data on a background thread
    // update the data model
    dispatch_sync(dispatch_get_main_queue(), ^{
        // update the UI on the main thread
    });
});
The program's background work and its UI-update code sit compactly together, and the logic is clear at a glance.
Dispatch queues themselves are thread-safe, and a serial queue can be used to implement a lock. For example, when multiple threads write to the same database, the write order and the integrity of each write must be preserved; a serial queue achieves this simply:
dispatch_queue_t queue1 = dispatch_queue_create("com.dispatch.writedb", DISPATCH_QUEUE_SERIAL);

- (void)writeDB:(NSData *)data
{
    dispatch_async(queue1, ^{
        // write to the database
    });
}
The next call to writeDB: cannot begin until the previous call has completed, which makes writeDB: thread-safe.
Dispatch queues also provide some other commonly used functions, including:
void dispatch_apply(size_t iterations, dispatch_queue_t queue, void (^block)(size_t));
Executes a block repeatedly. Note that this function returns synchronously, that is, only after all iterations have finished executing; if an asynchronous return is needed, nest the call inside dispatch_async. Whether the iterations run concurrently or serially depends on whether the queue is concurrent or serial.
void dispatch_barrier_async(dispatch_queue_t queue, dispatch_block_t block);
Submits a barrier block: the barrier block waits until every block submitted to the queue before it has finished executing, and blocks submitted after it wait until the barrier block itself has finished. The function itself returns immediately.
void dispatch_barrier_sync(dispatch_queue_t queue, dispatch_block_t block);
Same as above, except that the function returns synchronously, after the barrier block has executed.
void dispatch_after(dispatch_time_t when, dispatch_queue_t queue, dispatch_block_t block);
Executes the block after the given delay.
Finally, let's look at a very special function of the dispatch queue:
void dispatch_set_target_queue(dispatch_object_t object, dispatch_queue_t queue);
It reassigns the object whose tasks need to run to a different queue for processing; the object can be a dispatch queue or a dispatch source (described in a later post). The process can be dynamic, which enables dynamic scheduling and queue management. For example, with two queues dispatchA and dispatchB, assign dispatchA to dispatchB:
dispatch_set_target_queue(dispatchA, dispatchB);
Blocks not yet run on dispatchA will now run via dispatchB. If at this point you suspend dispatchA:
dispatch_suspend(dispatchA);
only the execution of dispatchA's remaining blocks is halted; dispatchB's blocks are unaffected. If dispatchB is suspended, however, dispatchA's execution is suspended as well.
For reference:
http://www.dreamingwish.com/dream-2012/gcd%E4%BB%8B%E7%BB%8D%EF%BC%88%E4%B8%80%EF%BC%89-%E5%9F%BA%E6%9C%AC%E6%A6%82%e5%bf%b5%e5%92%8cdispatch-queue.html
http://www.cnblogs.com/sunfrog/p/3305614.html
Reprinted from: http://www.iliunian.com/2923.html
iOS Development GCD Introduction: Basic Concepts and dispatch Queue