GCD in Objective-C

Source: Internet
Author: User

    1. Synchronous execution
    2. Asynchronous execution
    3. Concurrent queues
    4. Serial queues
    5. Dispatch groups
    6. Waiting

There are two types of Dispatch queues:

    • 1. Serial Dispatch Queue: a serial queue executes only one of the tasks appended to it at a time, in the order they were enqueued. (You can still run tasks in parallel by creating multiple serial queues, but creating too many degrades performance.)
      • A serial queue resolves the data race that occurs when multiple threads update the same resource: submit every task that manipulates that resource to the same serial queue.
    • 2. Concurrent Dispatch Queue: tasks appended to a concurrent queue can execute concurrently.

There are two ways to obtain a Dispatch Queue (dispatch_queue_t) object:

  • 1. Through the C function dispatch_queue_create("QueueName", NULL):
    • When the second argument is NULL (or DISPATCH_QUEUE_SERIAL), the created queue is a serial queue.
    • When the second argument is the DISPATCH_QUEUE_CONCURRENT attribute, the created queue is a concurrent queue.
    • A queue produced by dispatch_queue_create (like any GCD object returned by a function whose name contains "create") must be released by calling dispatch_release(queue) yourself under manual reference counting. (Since iOS 6 / OS X 10.8, ARC manages dispatch objects automatically and dispatch_release must not be called.)
    • dispatch_async(queue, ^{ task block }) causes the block to retain the queue, so under MRC you may call dispatch_release(queue) immediately after the call; the queue is destroyed only after the block finishes executing and releases it.
    • The dispatch_retain function lets another owner hold on to a queue.
  • 2. Get the standard queues provided by the system: the main Dispatch queue (runs on the main thread, serial) or a global Dispatch queue (concurrent, with four execution priorities: high DISPATCH_QUEUE_PRIORITY_HIGH, default DISPATCH_QUEUE_PRIORITY_DEFAULT, low DISPATCH_QUEUE_PRIORITY_LOW, background DISPATCH_QUEUE_PRIORITY_BACKGROUND).
    • Get the main Dispatch queue: dispatch_queue_t mainDispatchQueue = dispatch_get_main_queue();
    • Get a global Dispatch queue: dispatch_queue_t globalDispatchQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
    • The first parameter of dispatch_get_global_queue specifies the queue's execution priority (the second is reserved and should be 0).
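A minimal sketch pulling the creation and fetch calls above together (queue labels and variable names are illustrative, not from the original):

```objc
#import <Foundation/Foundation.h>

int main(void) {
    // 1. Create queues with dispatch_queue_create.
    dispatch_queue_t serialQueue =
        dispatch_queue_create("com.example.serial", NULL);                          // serial
    dispatch_queue_t concurrentQueue =
        dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT); // concurrent

    // 2. Fetch the system-provided queues.
    dispatch_queue_t mainDispatchQueue = dispatch_get_main_queue();
    dispatch_queue_t globalDispatchQueue =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);

    dispatch_async(serialQueue, ^{ NSLog(@"runs alone, in enqueue order"); });
    dispatch_async(concurrentQueue, ^{ NSLog(@"may run alongside other blocks"); });
    dispatch_async(globalDispatchQueue, ^{ NSLog(@"high-priority global queue"); });
    (void)mainDispatchQueue; // appending to it requires a running main run loop

    dispatch_main(); // keeps the process alive so the async blocks can run
}
```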


To change the execution priority of a dispatch_queue_t object created with dispatch_queue_create:

    • Call dispatch_set_target_queue(queue whose priority should change, queue whose priority it should adopt).
    • Among all queues, only the global Dispatch queues let you specify a priority when fetching them, so the second argument to dispatch_set_target_queue is normally a global Dispatch queue object; the queue passed as the first argument then executes at that priority.

To build an execution hierarchy among dispatch queues:

    • Also by calling dispatch_set_target_queue (typically targeting one or more serial queues at a single serial target queue, so that their tasks execute one at a time on the target).
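A sketch of both uses of dispatch_set_target_queue described above (queue labels are illustrative):

```objc
// 1. Adopt the priority of a global queue: myQueue's blocks now run
//    at background priority.
dispatch_queue_t myQueue = dispatch_queue_create("com.example.work", NULL);
dispatch_set_target_queue(myQueue,
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0));

// 2. Build a hierarchy: targeting several serial queues at one serial
//    target queue makes their blocks execute one at a time overall.
dispatch_queue_t targetQueue = dispatch_queue_create("com.example.target", NULL);
dispatch_queue_t q1 = dispatch_queue_create("com.example.q1", NULL);
dispatch_queue_t q2 = dispatch_queue_create("com.example.q2", NULL);
dispatch_set_target_queue(q1, targetQueue);
dispatch_set_target_queue(q2, targetQueue);
```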


To append a block to a queue after a specified delay (the task is enqueued at that time, not necessarily executed then, because the queue may still have other work pending):

    • Call dispatch_after(dispatch_time_t value, dispatch_queue_t queue, ^{ task block }).
    • Obtain the first parameter with dispatch_time(start time, such as DISPATCH_TIME_NOW, delay, such as 3ull * NSEC_PER_SEC).
    • dispatch_time is typically used to compute a relative time; the dispatch_walltime function computes an absolute wall-clock time (details omitted here).
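The delayed enqueue described above can be sketched as follows (the queue choice is illustrative):

```objc
// Enqueue a block on the main queue three seconds from now. It is *appended*
// at that moment; it runs whenever the main queue next gets to it.
dispatch_time_t when = dispatch_time(DISPATCH_TIME_NOW, 3ull * NSEC_PER_SEC);
dispatch_after(when, dispatch_get_main_queue(), ^{
    NSLog(@"enqueued after 3 seconds");
});
```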

To perform a specified operation after multiple tasks (running on a concurrent queue, or on several serial queues at once) have all finished, without suspending the calling thread (useful for dependencies between tasks):

    • Manage the tasks with a dispatch group. Specifically:
      • 1. When appending each block to a queue, specify the group it belongs to (the queues the blocks are appended to need not be the same, but the group must be) by calling dispatch_group_async(dispatch_group_t group, queue the task is appended to, ^{ task block }); call it once per task.
        • Obtain the dispatch_group_t object with dispatch_group_create().
      • 2. Then call dispatch_group_notify(dispatch_group_t group, queue for the completion block, which need not match the queues used by the group's tasks, ^{ completion block }) to append a block that runs only after every task in the group has completed.
      • 3. Finally, release the dispatch_group_t object obtained from dispatch_group_create() by calling dispatch_release(group) (under MRC; ARC releases it automatically).
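The three steps above can be sketched like this (task contents are illustrative):

```objc
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_t group = dispatch_group_create();

// 1. Append each task to the same group (the queues may differ).
dispatch_group_async(group, queue, ^{ NSLog(@"task 0"); });
dispatch_group_async(group, queue, ^{ NSLog(@"task 1"); });
dispatch_group_async(group, queue, ^{ NSLog(@"task 2"); });

// 2. Runs only after tasks 0-2 have all completed; their relative order is undefined.
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    NSLog(@"all done");
});

// 3. Under MRC you would now call dispatch_release(group); ARC does it for you.
```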


To wait until all tasks in a group complete (the calling thread suspends until every task finishes or a timeout expires), again using a dispatch group:

    • 1. As above, append each block with dispatch_group_async(dispatch_group_t group, queue the task is appended to, ^{ task block }), once per task.
      • Obtain the dispatch_group_t object with dispatch_group_create().
    • 2. Then call dispatch_group_wait(dispatch_group_t group, dispatch_time_t wait time, or the macro DISPATCH_TIME_FOREVER to wait indefinitely) to wait for the group to finish executing (the wait cannot be canceled halfway).
      • A return value of 0 means every task in the group completed within the wait time; a nonzero return means some task in the group was still executing when the wait time expired.
      • The thread that calls dispatch_group_wait stops at the call and resumes either after the wait time elapses or after all tasks in the group have finished, whichever comes first.
    • 3. Finally release the dispatch_group_t object obtained from dispatch_group_create() by calling dispatch_release(group) (under MRC).
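A sketch of the waiting variant (the one-second timeout is illustrative):

```objc
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_t group = dispatch_group_create();
dispatch_group_async(group, queue, ^{ /* some work */ });

// Wait up to one second; the calling thread suspends here.
dispatch_time_t timeout = dispatch_time(DISPATCH_TIME_NOW, 1ull * NSEC_PER_SEC);
long result = dispatch_group_wait(group, timeout);
if (result == 0) {
    NSLog(@"every task in the group finished within the wait time");
} else {
    NSLog(@"timed out; some task in the group is still running");
}
// dispatch_group_wait(group, DISPATCH_TIME_FOREVER) would block until completion.
```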

When reading a file or manipulating a database, it is more efficient not to funnel every operation through a single serial queue, but to let all reads run in parallel while guaranteeing that a write executes only when no read is in progress, and that no read starts until the write completes. This handles the resource-contention problem both safely and efficiently:

    • Use dispatch_barrier_async / dispatch_barrier_sync together with a concurrent queue created by dispatch_queue_create (the barrier functions are meaningful only on such concurrent queues). Specifically:
      • 1. Create a concurrent queue with dispatch_queue_create("name", DISPATCH_QUEUE_CONCURRENT).
      • 2. For each read, call dispatch_async(concurrent queue, ^{ read task block }) to append the read to the queue.
      • 3. When a write is required, call dispatch_barrier_async / dispatch_barrier_sync(the same concurrent queue, ^{ write task block }).
      • 4. Subsequent reads are again appended with dispatch_async(concurrent queue, ^{ read task block }).
      • (Principle: when the concurrent queue finds that the next block was appended with a barrier function, it waits for the blocks already executing to finish, executes the barrier block alone, and only then resumes executing the remaining blocks concurrently.)
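The read/barrier-write pattern above can be sketched as follows (the shared string stands in for the file or database resource):

```objc
dispatch_queue_t rwQueue =
    dispatch_queue_create("com.example.readwrite", DISPATCH_QUEUE_CONCURRENT);
__block NSString *shared = @"initial";

dispatch_async(rwQueue, ^{ NSLog(@"read: %@", shared); });  // may overlap other reads
dispatch_async(rwQueue, ^{ NSLog(@"read: %@", shared); });

dispatch_barrier_async(rwQueue, ^{  // waits for the reads above to finish,
    shared = @"updated";            // then runs with the queue to itself
});

dispatch_async(rwQueue, ^{ NSLog(@"read: %@", shared); });  // sees the updated value
```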


To append a specified block to a specified dispatch_queue_t (serial or concurrent) a specified number of times, and wait until all iterations finish executing:

    • Call dispatch_apply(iteration count, queue, ^(size_t index){ task block }).
    • The thread that calls dispatch_apply waits until every iteration of the block has finished executing, so it should normally be called from a background thread, e.g. inside a block appended asynchronously with dispatch_async.
    • Calling dispatch_apply synchronously against a queue that the calling code is already running on (for example, on the main thread with the main queue) causes a deadlock.
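A sketch of the recommended pattern, wrapping the blocking dispatch_apply call in dispatch_async (iteration count is illustrative):

```objc
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(queue, ^{
    // dispatch_apply blocks this (background) thread until all ten
    // iterations have run; their order is undefined on a concurrent queue.
    dispatch_apply(10, queue, ^(size_t index) {
        NSLog(@"iteration %zu", index);
    });
    // Reached only after all ten iterations have finished.
    NSLog(@"dispatch_apply done");
});
```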


To suspend a queue while it still has pending work, or to resume it later:

    • Suspend the specified queue by calling dispatch_suspend(dispatch_queue_t object).
    • Resume the specified queue by calling dispatch_resume(dispatch_queue_t object).
    • Suspension affects blocks that have not yet started; a block already executing runs to completion.
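A small sketch of suspend/resume behavior (queue label is illustrative):

```objc
dispatch_queue_t queue = dispatch_queue_create("com.example.pausable", NULL);
dispatch_async(queue, ^{ NSLog(@"may already be running when suspend is called"); });

dispatch_suspend(queue);  // blocks not yet started stay pending
dispatch_async(queue, ^{ NSLog(@"runs only after dispatch_resume"); });
dispatch_resume(queue);   // every suspend must be balanced by a resume
```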


To guarantee that a specified block executes only once during the lifetime of the application:

    • Call dispatch_once(&onceToken, ^{ specified block }).
    • The first parameter, a dispatch_once_t variable, must be static; e.g. declare static dispatch_once_t token; and pass &token.
    • This is typically used to create a singleton class, with the following pattern:

      + (id)sharedInstance {
          static MyObject *myObjectManager = nil;
          static dispatch_once_t onceToken;
          dispatch_once(&onceToken, ^{
              myObjectManager = [[self alloc] init];
          });
          return myObjectManager;
      }


Ways to perform finer-grained exclusive control:

    • Use a dispatch semaphore (details omitted here).
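The original omits the details; as a hedged illustration, a dispatch semaphore created with an initial count of 1 can act as a mutex guarding a shared container (the array and loop are purely illustrative):

```objc
dispatch_semaphore_t semaphore = dispatch_semaphore_create(1);
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
NSMutableArray *array = [NSMutableArray array];

for (int i = 0; i < 100; i++) {
    dispatch_async(queue, ^{
        // Count 1 -> 0; any other thread arriving here blocks until signal.
        dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
        [array addObject:@(i)];  // only one thread mutates the array at a time
        // Count 0 -> 1; wakes one waiting thread if any.
        dispatch_semaphore_signal(semaphore);
    });
}
```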

Ways to improve file read speed:

    • Use dispatch I/O (details omitted here).

There are two ways to append tasks to a dispatch queue:

  • 1. Asynchronous append: call the C function dispatch_async(dispatch_queue_t queue, dispatch_block_t block ^{ task body to append to the queue }).
    • The thread that appends the task does not wait for the appended block to finish executing.
  • 2. Synchronous append: call the C function dispatch_sync(dispatch_queue_t queue, dispatch_block_t block ^{ task body to append to the queue }).
    • The thread that appends the task waits for the appended block to complete; dispatch_sync is effectively a simplified dispatch_group_wait.
  • Synchronous append is prone to deadlock. For example, synchronously appending a task to the main queue from the main thread deadlocks: the main thread is busy executing the code that called dispatch_sync, so the appended task never runs, while dispatch_sync makes the main thread wait forever for that task to complete.
  • In general, never call dispatch_sync targeting a serial queue from code that is already running on that queue, whether directly or through nesting (see the deadlock examples below).
    • An example where a synchronous wait is what you want:
    • Some thread, e.g. the main thread, executes the following code:

      NSString *_astring = @"XXXXX";

      - (NSString *)getAString {
          dispatch_queue_t myqueue = dispatch_queue_create("Handleastringqueue", NULL);
          // The serial queue protects _astring from multi-threaded contention, so every
          // operation on _astring must run on the Handleastringqueue serial queue. We
          // cannot return _astring directly from inside the block, because "return"
          // inside a block only returns from the block, so a __block substitute
          // variable is needed.
          __block NSString *astring;
          // The calling thread hangs here until myqueue finishes the appended block,
          // because the block is appended synchronously.
          dispatch_sync(myqueue, ^{ astring = _astring; });
          // Because the append above is synchronous, the block has certainly executed
          // by now, so astring has been assigned and can be returned. If the block were
          // appended asynchronously, the method would return before the assignment ran,
          // defeating the purpose.
          return astring;
      }
    • Example that causes a deadlock (direct deadlock):

      // If this code runs on the main thread, it deadlocks.
      dispatch_sync(dispatch_get_main_queue(), ^{
          NSLog(@"content...");
      });
    • Example that causes a deadlock (nested deadlock):

      dispatch_queue_t queueA = dispatch_queue_create("QueueA Name", NULL);
      dispatch_queue_t queueB = dispatch_queue_create("Queueb Name", NULL);
      // Even if the following code is called on some other thread:
      dispatch_sync(queueA, ^{            // execute the following on queueA
          dispatch_sync(queueB, ^{        // queueA waits for queueB to execute the following
              dispatch_sync(queueA, ^{    // queueB waits for queueA to execute the following
                  // queueA never gets here: it is suspended in the outer
                  // dispatch_sync above, so this is a deadlock.
                  NSLog(@"content...");
              });
          });
      });

