GCD multithreading in iOS

GCD stands for Grand Central Dispatch.
Grand Central Dispatch (GCD) is Apple's technology for multi-core programming. It is mainly used to optimize applications for multi-core processors and other symmetric multiprocessing systems, and it executes tasks in parallel on top of a thread-pool model.
GCD provides an easy-to-use concurrency model that improves application responsiveness by deferring expensive computation off the main thread.
GCD provides dispatch queues to process blocks of code. A queue manages the tasks you hand to GCD and executes them in FIFO order: the first task added to the queue is the first to start, the second task added starts second, and so on to the end of the queue. GCD gives you at least five ready-made queues to choose from, depending on the queue type.
The main queue runs on the main thread, which is the only thread allowed to update the UI. When you need to operate on the UI, put that work on the main queue, and keep heavy work off it to reduce lag.
There are also four global dispatch queues with different priorities: high, default, low, and background. In addition, you can create your own queues.
func dispatch_async(_ queue: dispatch_queue_t, _ block: dispatch_block_t)
This is the most commonly used function in GCD. It takes two parameters: the first is the queue on which the closure should be executed, and the second is a closure containing the work to perform. The function returns immediately, without waiting for the closure to finish.
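A minimal sketch of this behavior in modern Swift, where `DispatchQueue.global().async` replaced the C-style `dispatch_async` (the semaphore exists only so this command-line demo can observe the result; the async call itself returns right away):

```swift
import Dispatch

// Asynchronous dispatch: submit a closure to a global concurrent queue
// and return immediately. The semaphore is demo-only scaffolding.
let done = DispatchSemaphore(value: 0)
var result = 0

DispatchQueue.global(qos: .userInitiated).async {
    result = (1...1_000).reduce(0, +)   // expensive work off the main thread
    done.signal()
}

done.wait()                             // demo-only: wait for the closure
print(result)                           // 500500
```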
func dispatch_after(_ when: dispatch_time_t, _ queue: dispatch_queue_t, _ block: dispatch_block_t)
This function is a delayed version of dispatch_async. The first parameter is a dispatch_time_t that specifies, in nanoseconds, how long to wait before the closure is dispatched.
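A sketch of delayed dispatch in modern Swift, where `asyncAfter` replaced `dispatch_after` and the deadline is `DispatchTime.now()` plus a duration (internally a nanosecond count):

```swift
import Dispatch

// Delayed dispatch: the closure runs on the queue once the deadline
// passes. The semaphore is demo-only scaffolding.
let done = DispatchSemaphore(value: 0)
var fired = false

DispatchQueue.global().asyncAfter(deadline: .now() + .milliseconds(200)) {
    fired = true      // runs roughly 200 ms later, off the main thread
    done.signal()
}

done.wait()           // demo-only: keep the program alive until it fires
print(fired)          // true
```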
Using an if check to ensure a singleton is instantiated only once is not reliable under concurrency. The following function guarantees that the block runs exactly once:
func dispatch_once(_ predicate: UnsafeMutablePointer<dispatch_once_t>, _ block: dispatch_block_t)
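As a side note, `dispatch_once` was removed in Swift 3; a `static let` stored property gives the same run-exactly-once, thread-safe guarantee. A sketch (the `Downloader` type is illustrative):

```swift
// `static let` is initialized lazily, exactly once, and thread-safely --
// the compiler backs it with a one-time-initialization primitive.
final class Downloader {
    static let shared = Downloader()   // created once, on first access
    private init() {}                  // forbid additional instances
}

// Every access yields the same object:
let same = Downloader.shared === Downloader.shared   // true
```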
Being initialized more than once is not the only problem a singleton faces: when an instance is read and written at the same time, it is not thread-safe. On a custom concurrent queue, you can solve this with the following function:
func dispatch_barrier_async(_ queue: dispatch_queue_t, _ block: dispatch_block_t)
Dispatch barriers are a family of functions that act as a serial bottleneck inside a concurrent queue. Using the barrier API guarantees that the submitted block is the only thing executing on that queue at that moment: all blocks submitted before the barrier must finish before the barrier block starts, the barrier block then runs alone, and once it completes the queue returns to its default concurrent behavior. GCD provides both synchronous and asynchronous barrier functions.
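A reader/writer sketch of this pattern in modern Swift, where the `.barrier` flag on `async` replaced `dispatch_barrier_async` (the `SafeSet` type and queue label are illustrative):

```swift
import Dispatch

// Reads run concurrently via sync; writes pass .barrier so they have
// the queue to themselves while they mutate shared state.
final class SafeSet {
    private var items = Set<Int>()
    private let queue = DispatchQueue(label: "safe.set",
                                      attributes: .concurrent)

    func insert(_ x: Int) {
        // Waits for in-flight reads, runs alone, then the queue
        // returns to normal concurrent execution.
        queue.async(flags: .barrier) { self.items.insert(x) }
    }

    func contains(_ x: Int) -> Bool {
        return queue.sync { items.contains(x) }   // concurrent reads
    }
}

let set = SafeSet()
set.insert(7)
print(set.contains(7))   // true: the read queues up behind the barrier
```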
When you need to guarantee that the code after the call does not run until the barrier block itself has finished, use the synchronous variant:
func dispatch_barrier_sync(_ queue: dispatch_queue_t, _ block: dispatch_block_t)
dispatch_sync() submits work synchronously and waits for it to finish before returning. Calling it on the main queue, or on the serial queue you are already running on, can deadlock; the safest choice is a concurrent queue that you are not currently on.
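A sketch in modern Swift, where `queue.sync` replaced `dispatch_sync` (the queue label is illustrative):

```swift
import Dispatch

// Synchronous dispatch: the caller blocks until the closure finishes.
// Never call sync on the queue you are currently running on -- e.g.
// DispatchQueue.main.sync {} from the main thread deadlocks, because
// the closure waits for the queue and the queue waits for the closure.
let worker = DispatchQueue(label: "worker", attributes: .concurrent)
var value = 0

worker.sync {
    value = 42        // completes before the sync call returns
}
print(value)          // 42
```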
The purpose of dispatch_group is to monitor the completion of multiple asynchronous tasks.
A dispatch group notifies you, either synchronously or asynchronously, once every task in the group has completed. The tasks can be synchronous or asynchronous, and can even run on different queues; because the monitored tasks live on different queues, a single dispatch_group_t instance is used to keep track of them all.
Some methods take a completion block, but if the method performs asynchronous work internally, that completion block may be called before all of the work has actually finished.
To solve this problem, we can use dispatch_group_wait.
Because dispatch_group_wait is synchronous and blocks the current thread, use dispatch_async to move the whole method onto a background queue so the main thread is not blocked. Inside that method:

  • use dispatch_group_create() to create a new dispatch_group_t instance;

  • call dispatch_group_enter(group) when a piece of work starts;

  • call dispatch_group_leave(group) when that piece of work finishes;

  • call dispatch_group_wait(group, timeout) to block until every operation in the group has completed.

After the work completes, dispatch_async can be used to switch back to the main thread for any follow-up operations.
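The steps above can be sketched in modern Swift, where `DispatchGroup` replaced `dispatch_group_t` (the queue label and the squaring work are stand-ins):

```swift
import Dispatch

// enter/leave/wait pattern: every enter() is paired with one leave();
// wait() blocks until the pending count drops back to zero.
let group = DispatchGroup()
let lock = DispatchQueue(label: "results.lock")   // serializes the appends
var results = [Int]()

for i in 1...3 {
    group.enter()                                 // register one pending task
    DispatchQueue.global().async {
        let value = i * i                         // stand-in for async work
        lock.sync { results.append(value) }
        group.leave()                             // mark the task finished
    }
}

group.wait()                                      // blocks this thread
print(results.sorted())                           // [1, 4, 9]
```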
If dispatch_group_notify is used instead of dispatch_group_wait, the initial dispatch_async that moves the method onto another queue is unnecessary, because dispatch_group_notify is itself asynchronous. The operations from the original completion closure can be written directly inside the dispatch_group_notify closure, but they should still target the main queue if they touch the UI.
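A sketch of the notify variant in modern Swift; a plain serial queue stands in for the main queue so the snippet runs anywhere, and the semaphore exists only to keep this demo alive:

```swift
import Dispatch
import Foundation

// Instead of blocking with wait(), the group runs a closure on a queue
// of your choice once every task submitted to it has finished.
let group = DispatchGroup()
let done = DispatchSemaphore(value: 0)
var allDone = false

for delayMs in [50, 10, 30] {
    DispatchQueue.global().async(group: group) {
        usleep(UInt32(delayMs) * 1_000)   // simulate async work
    }
}

group.notify(queue: DispatchQueue(label: "ui.stand-in")) {
    allDone = true                        // e.g. update the UI here
    done.signal()
}

done.wait()                               // demo-only
print(allDone)                            // true
```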
func dispatch_apply(_ iterations: Int, _ queue: dispatch_queue_t!, _ block: (Int) -> Void)
Calling this function on a custom concurrent queue executes the iterations concurrently, which can improve performance. For small collections, however, the cost of creating and scheduling extra threads may outweigh whatever the concurrency saves.
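A sketch of the modern form, `DispatchQueue.concurrentPerform`, which replaced `dispatch_apply` (the small serial queue guarding the shared array is illustrative):

```swift
import Dispatch

// concurrentPerform runs the closure once per index, in parallel, and
// returns only after every iteration has finished -- so no extra
// synchronization is needed after the call itself.
let lock = DispatchQueue(label: "acc")
var squares = [Int]()

DispatchQueue.concurrentPerform(iterations: 8) { i in
    let s = i * i                      // per-iteration work
    lock.sync { squares.append(s) }    // serialize the shared write
}

print(squares.sorted())                // [0, 1, 4, 9, 16, 25, 36, 49]
```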

