Multithreading with GCD (1)

Source: Internet
Author: User


Grand Central Dispatch (GCD) is a technology developed by Apple and is a very good solution for writing concurrent code on multi-core devices. GCD has two core concepts:

Queue: a queue manages the tasks submitted by the developer. GCD queues always dequeue tasks in FIFO order; however, the task that starts first does not necessarily finish first. A queue can be either a serial queue or a concurrent queue. A serial queue processes only one task at a time; the next task starts only after the previous one has completed. A concurrent queue can process multiple tasks at the same time, so tasks may run concurrently. Under the hood, each queue maintains a thread pool that executes the submitted tasks: the pool behind a serial queue needs only one thread, while the pool behind a concurrent queue maintains multiple threads.

Task: a task is the unit of work the user submits to a queue. Tasks are executed on the threads of the pool maintained by the queue, so tasks submitted to a concurrent queue may execute on multiple threads at once.

Using GCD takes two steps:
1. Create a queue.
2. Submit the task to the queue.

Create a queue:
- Serial queue: the underlying thread pool needs only one thread, so only one task executes at a time; a task can start only after the previous task has finished.
- Concurrent queue: the thread pool provides multiple threads, so multiple tasks can be started in FIFO order and executed concurrently.
(1) Obtain the system's default global concurrent queue:
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
(2) Obtain the serial queue associated with the system's main thread:
dispatch_queue_t queue = dispatch_get_main_queue();
Submitting a task to this queue is equivalent to executing it on the application's main thread.
(3) Create a serial queue:
dispatch_queue_t queue = dispatch_queue_create("ios.queue", DISPATCH_QUEUE_SERIAL);
If multiple tasks are submitted to a serial queue, they can only be executed one by one in sequence.
(4) Create a concurrent queue:
dispatch_queue_t queue = dispatch_queue_create("ios.queue", DISPATCH_QUEUE_CONCURRENT);
If you submit multiple tasks to a concurrent queue, they are started in FIFO order but may run concurrently.
After obtaining a queue, you can submit tasks to it; the thread pool managed by the queue then executes them.

Submit a task

Submit a task to the queue with one of the following functions:
void dispatch_async(dispatch_queue_t queue, dispatch_block_t block):
Submits a code block to the specified queue asynchronously; the call returns immediately.
void dispatch_async_f(dispatch_queue_t queue, void *context, dispatch_function_t work):
Submits a function to the specified queue asynchronously.
void dispatch_sync(dispatch_queue_t queue, dispatch_block_t block):
Submits a code block to the specified queue synchronously; the call does not return until the block has finished.
void dispatch_sync_f(dispatch_queue_t queue, void *context, dispatch_function_t work):
Submits a function to the specified queue synchronously.
void dispatch_after(dispatch_time_t when, dispatch_queue_t queue, dispatch_block_t block):
Submits a code block to the specified queue asynchronously; the queue's underlying thread pool executes the block at (or shortly after) the time specified by when.
void dispatch_after_f(dispatch_time_t when, dispatch_queue_t queue, void *context, dispatch_function_t work):
Submits a function to the specified queue asynchronously; the queue's underlying thread pool executes the function at (or shortly after) the time specified by when.
void dispatch_apply(size_t iterations, dispatch_queue_t queue, void (^block)(size_t)):
Submits a code block to the specified queue to be executed the given number of times; unlike dispatch_async, this call waits until all iterations have completed before returning. On a concurrent queue the iterations may run in parallel.
void dispatch_apply_f(size_t iterations, dispatch_queue_t queue, void *context, void (*work)(void *, size_t)):
Submits a function to the specified queue to be executed the given number of times; like dispatch_apply, it returns only after all iterations have completed.
void dispatch_once(dispatch_once_t *predicate, dispatch_block_t block):
Executes the code block at most once for the lifetime of the application. The predicate parameter is a pointer to a dispatch_once_t variable (essentially a long integer) that records whether the block has already been executed. Note that this function does not take a queue: the block runs on the calling thread, and any concurrent callers wait until the first invocation has completed.
