Learn GCD a Minute at a Time


2014

What is GCD?

Grand Central Dispatch (GCD) is one of the technologies for executing tasks asynchronously. The thread-management code that would normally be written at the application level is instead implemented at the system level. Because thread management is part of the operating system, threads can be managed uniformly and tasks can be executed more efficiently than with earlier threading approaches.

In other words, GCD lets us express extremely complex multi-threaded programming in a remarkably concise way.

dispatch_async(queue, ^{
    // Long-running processing
    // e.g. database access
    // e.g. image recognition
    // When this long processing ends, the main thread uses the result
    dispatch_async(dispatch_get_main_queue(), ^{
        // Processing that can only be executed on the main thread
        // e.g. updating the user interface
    });
});

The long-running work above is performed on a background thread; when it finishes, the main thread uses the result.


Multi-Thread Programming

Although CPUs incorporate many technologies, a single CPU core can execute only one stream of CPU instructions at a time. So how can one CPU execute instructions along multiple paths (threads)?

The XNU kernel at the core of OS X and iOS switches the execution path when an operating-system event occurs (for example, a system call or a timer interrupt). The state of the currently executing path, such as the CPU register contents, is saved to a memory block dedicated to that path; the register contents are then restored from the memory block dedicated to the destination path, and the CPU continues executing that path's instruction stream. This is called a "context switch".

Because a multi-threaded program can perform context switching multiple times between a thread and other threads, it looks like one CPU core can execute multiple threads concurrently.

However, multithreading is a programming technique that is prone to various problems. For example, when multiple threads update the same resource, data inconsistency may occur; threads that stop and wait for each other can end up waiting forever (deadlock); and using too many threads consumes large amounts of memory.

When an application starts, the first thread that runs, the main thread, draws the user interface and handles touch events. If long-running processing is performed on the main thread, it blocks the main thread's execution. This is why long-running processing should be executed on other threads rather than on the main thread.


GCD API

Here is the essence of Apple's official description of GCD:

The developer only needs to define the task to be executed and append it to the Dispatch Queue.

dispatch_async(queue, ^{
    // The task to be executed
});
The dispatch_async function appends the Block to the Dispatch Queue held in the variable queue. In this way, the specified Block can be executed on another thread.

What is a Dispatch Queue? As the name suggests, it is a queue of processing waiting to be executed. Using APIs such as the dispatch_async function, the programmer describes the processing to execute in Block syntax and appends it to a Dispatch Queue. Processing is executed in the order it was appended: first in, first out (FIFO).

In addition, there are two kinds of Dispatch Queue: a Serial Dispatch Queue, which waits for the currently executing processing to finish before starting the next, and a Concurrent Dispatch Queue, which does not wait for the currently executing processing to finish.

So how can we get these Dispatch Queue? There are two methods.

Dispatch_queue_create

The first method is to generate a Dispatch Queue through the GCD API dispatch_queue_create.

dispatch_queue_t mySerialDispatchQueue = dispatch_queue_create("com.example.gcd.mySerialDispatchQueue", NULL);
Before explaining the dispatch_queue_create function, let's look at a caveat regarding how many Serial Dispatch Queues are generated.

As mentioned above, a Concurrent Dispatch Queue executes multiple appended tasks in parallel, while a Serial Dispatch Queue executes only one appended task at a time. Although both kinds are constrained by system resources, the dispatch_queue_create function can generate any number of Dispatch Queues.

When multiple Serial Dispatch Queues are generated, they execute in parallel with one another. Within a single Serial Dispatch Queue only one appended task executes at a time, but if tasks are appended to four Serial Dispatch Queues, each queue executes one task, so four tasks execute simultaneously.
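
A minimal sketch of this behavior (the queue labels are made-up examples): four Serial Dispatch Queues, each given one Block, can run those four Blocks at the same time.

for (int i = 0; i < 4; i++) {
    NSString *label = [NSString stringWithFormat:@"com.example.gcd.serial%d", i];
    dispatch_queue_t serialQueue = dispatch_queue_create([label UTF8String], NULL);
    dispatch_async(serialQueue, ^{
        // Each Serial Dispatch Queue runs only one Block at a time,
        // but the four queues can each be running their Block simultaneously.
        NSLog(@"block on %@", label);
    });
}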

Once a Serial Dispatch Queue is generated and processing is appended to it, the system generates and uses one thread per Serial Dispatch Queue. If 2,000 Serial Dispatch Queues are generated, 2,000 threads are generated.

Like the multi-threaded programming problems listed earlier, if you use too many threads, a large amount of memory will be consumed, resulting in a large number of context switches, greatly reducing the system's response performance.

Next, let's continue with the dispatch_queue_create function. Its first parameter specifies the name of the Serial Dispatch Queue. You can pass NULL if naming feels like a chore, but you may regret having an unnamed queue when debugging.

To generate a Serial Dispatch Queue, specify NULL as the second parameter, as in the source code above. To generate a Concurrent Dispatch Queue, specify DISPATCH_QUEUE_CONCURRENT.

dispatch_queue_t myConcurrentDispatchQueue = dispatch_queue_create("com.example.gcd.MyConcurrentDispatchQueue", DISPATCH_QUEUE_CONCURRENT);
The return value of the dispatch_queue_create function is of type dispatch_queue_t, which represents a Dispatch Queue. The queue variable in the earlier source code is also of type dispatch_queue_t.

dispatch_queue_t myConcurrentDispatchQueue = dispatch_queue_create("com.example.gcd.MyConcurrentDispatchQueue", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(myConcurrentDispatchQueue, ^{
    NSLog(@"block on myConcurrentDispatchQueue");
});
This code executes the specified Block in the Concurrent Dispatch Queue.

In addition, if your deployment target is iOS 6 or later, ARC manages the Dispatch Queue automatically; otherwise, you need to release it yourself with

dispatch_release(myConcurrentDispatchQueue);
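
For reference, under manual reference counting a common pattern (shown here as a sketch, not taken from the original text) is to release the queue right after appending the Block, because dispatch_async keeps its own reference to the queue while the Block is pending:

// Sketch for manual reference counting (pre-iOS 6 / pre-OS X 10.8 deployment targets).
dispatch_queue_t myConcurrentDispatchQueue =
    dispatch_queue_create("com.example.gcd.MyConcurrentDispatchQueue", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(myConcurrentDispatchQueue, ^{
    NSLog(@"block on myConcurrentDispatchQueue");
});
// dispatch_async holds the queue until the Block has run, so releasing here is safe.
dispatch_release(myConcurrentDispatchQueue);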

Main Dispatch Queue/Global Dispatch Queue

The second method is to obtain the Dispatch Queue provided by the system standard.

In fact, we do not always need to generate Dispatch Queues ourselves; the system provides several for us, namely the Main Dispatch Queue and the Global Dispatch Queues.

The Main Dispatch Queue is, as its name says, the Dispatch Queue executed on the main thread. Because there is only one main thread, the Main Dispatch Queue is a Serial Dispatch Queue.

Processing appended to the Main Dispatch Queue is executed in the main thread's RunLoop. Therefore, operations that must run on the main thread, such as user-interface updates, should be appended to the Main Dispatch Queue.

The other kind, the Global Dispatch Queue, is a Concurrent Dispatch Queue that the whole application can use. There is no need to generate Concurrent Dispatch Queues one by one with dispatch_queue_create; you only need to obtain a Global Dispatch Queue.

In addition, the Global Dispatch Queue has four execution priorities: high priority, default priority, low priority, and background priority.

However, the XNU kernel threads used for the Global Dispatch Queues cannot guarantee real-time behavior, so the execution priority is only a rough guideline. For example, use the background-priority Global Dispatch Queue for processing whose timing hardly matters.

// Obtain the Main Dispatch Queue
dispatch_queue_t mainDispatchQueue = dispatch_get_main_queue();
// Obtain the Global Dispatch Queue (high priority)
dispatch_queue_t globalDispatchQueueHigh = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
// Obtain the Global Dispatch Queue (default priority)
dispatch_queue_t globalDispatchQueueDefault = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
// Obtain the Global Dispatch Queue (low priority)
dispatch_queue_t globalDispatchQueueLow = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);
// Obtain the Global Dispatch Queue (background priority)
dispatch_queue_t globalDispatchQueueBackground = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0);
The following code uses the Main Dispatch Queue and Global Dispatch Queue.

// Execute a Block in the Global Dispatch Queue (default priority)
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Processing that can execute in parallel

    // Execute a Block in the Main Dispatch Queue
    dispatch_async(dispatch_get_main_queue(), ^{
        // Processing that can only be executed on the main thread
    });
});

Dispatch_set_target_queue

A Dispatch Queue generated by the dispatch_queue_create function, whether a Serial Dispatch Queue or a Concurrent Dispatch Queue, uses a thread with the same execution priority as the default-priority Global Dispatch Queue. To change the execution priority of a generated Dispatch Queue, use the dispatch_set_target_queue function. The following generates a Serial Dispatch Queue that executes its processing at background priority.

dispatch_queue_t mySerialDispatchQueue = dispatch_queue_create("com.example.gcd.mySerialDispatchQueue", NULL);
dispatch_queue_t globalDispatchQueueBackground = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0);
dispatch_set_target_queue(mySerialDispatchQueue, globalDispatchQueueBackground);
Specify the Dispatch Queue whose execution priority you want to change as the first parameter of dispatch_set_target_queue, and specify a Global Dispatch Queue with the desired priority as the second parameter. The system-provided Main Dispatch Queue and Global Dispatch Queues cannot be specified as the first parameter.

Specifying a Dispatch Queue as the second parameter of dispatch_set_target_queue not only changes the execution priority but also makes that queue the target in the queue hierarchy. If dispatch_set_target_queue is used on multiple Serial Dispatch Queues to target a single Serial Dispatch Queue, then those Serial Dispatch Queues, which would otherwise execute in parallel, can only execute one task at a time on the target Serial Dispatch Queue.

When you must append processing that must not run in parallel to multiple Serial Dispatch Queues, using dispatch_set_target_queue to target a single Serial Dispatch Queue prevents the processing from executing in parallel, as the sketch below shows.
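
A minimal sketch (the queue labels are made up): two Serial Dispatch Queues that target the same Serial Dispatch Queue end up executing their Blocks one at a time rather than in parallel.

dispatch_queue_t targetQueue = dispatch_queue_create("com.example.gcd.target", NULL);
dispatch_queue_t queue1 = dispatch_queue_create("com.example.gcd.queue1", NULL);
dispatch_queue_t queue2 = dispatch_queue_create("com.example.gcd.queue2", NULL);

dispatch_set_target_queue(queue1, targetQueue);
dispatch_set_target_queue(queue2, targetQueue);

// Without the shared target queue these two Blocks could run in parallel;
// with it, only one of them executes at a time.
dispatch_async(queue1, ^{ NSLog(@"block on queue1"); });
dispatch_async(queue2, ^{ NSLog(@"block on queue2"); });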

Dispatch_after

It is common to want to execute processing after a delay, say 3 seconds. The dispatch_after function performs processing after a specified time.

The source code for appending the specified Block to the Main Dispatch Queue after 3 seconds is as follows:

dispatch_time_t time = dispatch_time(DISPATCH_TIME_NOW, 3ull * NSEC_PER_SEC);
dispatch_after(time, dispatch_get_main_queue(), ^{
    NSLog(@"waited at least three seconds.");
});
Note that the dispatch_after function does not execute the processing after the specified time; it appends the processing to the Dispatch Queue at the specified time. This source code behaves the same as calling the dispatch_async function three seconds later to append the Block to the Main Dispatch Queue.

Because the Main Dispatch Queue is executed in the main thread's RunLoop, which might run, for example, every 1/60 of a second, the Block executes at the earliest 3 seconds later and at the latest about 3 seconds plus 1/60 of a second later. If the Main Dispatch Queue has a large amount of appended processing, or the main thread itself is delayed, this time becomes even longer.

Although this can be a problem when timing requirements are strict, the function is very effective when you only need a rough delay.

The second parameter specifies the Dispatch Queue to be appended, and the third parameter specifies the Block to be processed.

The first parameter is a dispatch_time_t value specifying the time, created with the dispatch_time function or the dispatch_walltime function.

The dispatch_time function returns the time obtained by adding the interval given in the second parameter (in nanoseconds) to the dispatch_time_t value given in the first parameter. The first parameter is most often DISPATCH_TIME_NOW, which represents the current time.

Multiplying a number by NSEC_PER_SEC yields that many seconds expressed in nanoseconds ("ull" is the C literal suffix for unsigned long long). Using NSEC_PER_MSEC instead lets you express the interval in milliseconds.
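
For example, a delay expressed in milliseconds could be written with NSEC_PER_MSEC as in the sketch below (the 150 ms interval is an arbitrary illustration):

dispatch_time_t time = dispatch_time(DISPATCH_TIME_NOW, 150ull * NSEC_PER_MSEC);
dispatch_after(time, dispatch_get_main_queue(), ^{
    NSLog(@"waited at least 150 milliseconds.");
});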

Dispatch Group

It is common to want to perform some finishing work once all of the processing appended to a Dispatch Queue has ended. If only one Serial Dispatch Queue is used, you can simply append everything you want to execute to that queue and append the finishing work last. But when a Concurrent Dispatch Queue, or several Dispatch Queues at once, are used, the source code becomes quite complex.

In this case, use a Dispatch Group. For example, the following code appends three Blocks to a Global Dispatch Queue; when all of them have finished, a Block that performs the finishing work is executed on the Main Dispatch Queue.

dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_t group = dispatch_group_create();

dispatch_group_async(group, queue, ^{
    NSLog(@"blk0");
});
dispatch_group_async(group, queue, ^{
    NSLog(@"blk1");
});
dispatch_group_async(group, queue, ^{
    NSLog(@"blk2");
});

dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    NSLog(@"done");
});
The execution result is as follows:

2014-10-06 14:49:29.929 GCD[2197:111113] blk2
2014-10-06 14:49:29.929 GCD[2197:111112] blk1
2014-10-06 14:49:29.929 GCD[2197:111111] blk0
2014-10-06 14:49:29.941 GCD[2197:111019] done
Because the Global Dispatch Queue is a Concurrent Dispatch Queue and multiple threads execute in parallel, the order in which the appended Blocks execute is not fixed and changes from run to run, but "done" is always the last output.

Whatever Dispatch Queue the processing is appended to, a Dispatch Group can monitor when all of it has ended. Once it detects that everything has finished, it can append the finishing processing to a Dispatch Queue. This is the reason for using a Dispatch Group.

First, the dispatch_group_create function generates a Dispatch Group of type dispatch_group_t, as the "create" in its name suggests.

The dispatch_group_async function, like dispatch_async, appends a Block to the specified Dispatch Queue. The difference is that its first parameter is the generated Dispatch Group; the specified Block then belongs to that Dispatch Group.

The dispatch_group_notify function used in the source code takes the Dispatch Group to monitor as its first parameter. When all processing appended to that Dispatch Group has finished executing, it appends the Block of the third parameter to the Dispatch Queue of the second parameter. Whatever kind of Dispatch Queue is specified, at the moment the Block is appended, all processing belonging to the Dispatch Group is guaranteed to have completed.

In addition, you can also use the dispatch_group_wait function in the Dispatch Group to wait until all processing ends.

dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_t group = dispatch_group_create();

dispatch_group_async(group, queue, ^{
    NSLog(@"blk0");
});
dispatch_group_async(group, queue, ^{
    NSLog(@"blk1");
});
dispatch_group_async(group, queue, ^{
    NSLog(@"blk2");
});

dispatch_group_wait(group, DISPATCH_TIME_FOREVER);

The second parameter of dispatch_group_wait specifies the wait time (timeout) as a dispatch_time_t value. Here it is DISPATCH_TIME_FOREVER, which means waiting forever: as long as processing belonging to the Dispatch Group has not finished, the wait continues and cannot be canceled.

As with the dispatch_after function described earlier, a wait interval of 1 second is specified as follows.

dispatch_time_t time = dispatch_time(DISPATCH_TIME_NOW, 1ull * NSEC_PER_SEC);
long result = dispatch_group_wait(group, time);
if (result == 0) {
    // All processing belonging to the Dispatch Group has finished
} else {
    // Some processing belonging to the Dispatch Group is still executing
}

If the return value of dispatch_group_wait is not 0, some processing belonging to the Dispatch Group is still executing even though the specified time has elapsed. If the return value is 0, all processing has finished. When the wait time is DISPATCH_TIME_FOREVER, dispatch_group_wait returns only after all processing belonging to the Dispatch Group has ended, so the return value is always 0.

If you specify DISPATCH_TIME_NOW, you can determine whether the processing of a Dispatch Group is completed without waiting.

long result = dispatch_group_wait(group, DISPATCH_TIME_NOW);
With DISPATCH_TIME_NOW you can check in each iteration of the main thread's RunLoop whether execution has finished, without consuming any extra waiting time. Even so, appending the finishing Block to the Main Dispatch Queue with the dispatch_group_notify function is recommended, because dispatch_group_notify simplifies the source code.

Dispatch_barrier_async

When accessing a database or a file, using a Serial Dispatch Queue, as described above, can avoid data races.
Write processing must not execute in parallel with other write processing, nor with read processing. Read processing executing in parallel with other read processing, however, causes no problem.

In other words, for efficient access, read processing should be appended to a Concurrent Dispatch Queue, and write processing should execute only while no read processing is executing.

Although a Dispatch Group and the dispatch_set_target_queue function could also achieve this, the source code would be complex.

GCD provides us with a smarter solution-the dispatch_barrier_async function. This function is used together with the Concurrent Dispatch Queue generated by the dispatch_queue_create function.

First, generate a Concurrent Dispatch Queue with the dispatch_queue_create function and append the read processing with dispatch_async.

dispatch_queue_t queue = dispatch_queue_create("com.example.gcd.ForBarrier", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(queue, blk0_for_reading);
dispatch_async(queue, blk1_for_reading);
dispatch_async(queue, blk2_for_reading);
dispatch_async(queue, blk3_for_reading);
Now suppose we want to perform write processing between blk1 and blk2, and have blk2 and the later reads see the written content.
dispatch_queue_t queue = dispatch_queue_create("com.example.gcd.ForBarrier", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(queue, blk0_for_reading);
dispatch_async(queue, blk1_for_reading);

dispatch_barrier_async(queue, blk_for_writing);

dispatch_async(queue, blk2_for_reading);
dispatch_async(queue, blk3_for_reading);
As shown above, the method is very simple: just use the dispatch_barrier_async function instead of dispatch_async for the write processing. The barrier waits for the reads appended before it to finish, executes the write processing on its own, and only then lets the reads appended after it execute as usual.

Dispatch_sync

The "async" in dispatch_async means that the specified Block is appended to the specified Dispatch Queue asynchronously; the dispatch_async function does not wait at all.

Since there is async, there is also sync: the dispatch_sync function appends the Block synchronously and waits until the appended Block has finished executing.

For example, suppose that while executing on the Main Dispatch Queue we want to run some processing on a Global Dispatch Queue, on another thread, and use its result immediately after it completes. In such a case, use the dispatch_sync function.

dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_sync(queue, ^{
    NSLog(@"processing");
});
In addition, just as dispatch_barrier_async contains "async", there is also a dispatch_barrier_sync function. Like dispatch_barrier_async, it waits for the processing already appended to the Dispatch Queue to finish before executing the appended processing; like dispatch_sync, it then also waits until that appended processing itself has finished.
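
As an illustration of how the read/write rules above can be packaged up (the class and method names here are hypothetical, not from the original text), reads can go through dispatch_sync on a Concurrent Dispatch Queue while writes go through dispatch_barrier_async:

#import <Foundation/Foundation.h>

// Hypothetical example: a thread-safe counter combining dispatch_sync for reads
// with dispatch_barrier_async for writes on one Concurrent Dispatch Queue.
@interface Counter : NSObject
- (NSInteger)value;
- (void)addValue:(NSInteger)delta;
@end

@implementation Counter {
    dispatch_queue_t _queue;
    NSInteger _value;
}

- (instancetype)init {
    if ((self = [super init])) {
        _queue = dispatch_queue_create("com.example.gcd.Counter", DISPATCH_QUEUE_CONCURRENT);
    }
    return self;
}

- (NSInteger)value {
    __block NSInteger result;
    // Reads may run in parallel with other reads.
    dispatch_sync(_queue, ^{
        result = _value;
    });
    return result;
}

- (void)addValue:(NSInteger)delta {
    // The barrier Block runs alone on the queue, so no read overlaps a write.
    dispatch_barrier_async(_queue, ^{
        _value += delta;
    });
}
@end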

Dispatch_apply

The dispatch_apply function is an API related to the dispatch_sync function and Dispatch Groups. It appends the specified Block to the specified Dispatch Queue the specified number of times and waits until all of that processing has finished.

dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_apply(10, queue, ^(size_t index) {
    NSLog(@"%zu", index);
});
NSLog(@"done");
The execution result is

2014-10-07 10:18:18.510 GCD[612:60b] 0
2014-10-07 10:18:18.510 GCD[612:3503] 2
2014-10-07 10:18:18.513 GCD[612:60b] 4
2014-10-07 10:18:18.510 GCD[612:3403] 3
2014-10-07 10:18:18.513 GCD[612:60b] 6
2014-10-07 10:18:18.514 GCD[612:60b] 8
2014-10-07 10:18:18.514 GCD[612:60b] 9
2014-10-07 10:18:18.513 GCD[612:3503] 5
2014-10-07 10:18:18.510 GCD[612:1303] 1
2014-10-07 10:18:18.513 GCD[612:3403] 7
2014-10-07 10:18:18.515 GCD[612:60b] done
Because the processing is executed in a Global Dispatch Queue, the execution timing of each iteration is not fixed. However, "done" is always last in the output, because the dispatch_apply function waits until all processing has finished.

The first parameter is the number of repetitions, the second is the target Dispatch Queue, and the third is the processing to append. Unlike the earlier examples, the Block of the third parameter takes a parameter: because the Block is appended repeatedly according to the first parameter, this index is used to tell the individual invocations apart.
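
For instance, the index parameter makes it easy to process every element of an NSArray, as in this sketch (the array contents are arbitrary):

NSArray *array = @[@"a", @"b", @"c", @"d", @"e"];
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_apply([array count], queue, ^(size_t index) {
    // Each Block handles one element, identified by its index.
    NSLog(@"%zu: %@", index, array[index]);
});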

Dispatch_suspend/dispatch_resume

When appending a large number of tasks to a Dispatch Queue, you sometimes want the already appended tasks not to start executing yet.

In this case, simply suspend the Dispatch Queue, and resume it once execution should proceed.

To suspend:

dispatch_suspend(queue);
To resume:

dispatch_resume(queue);
These functions have no effect on processing that has already started executing. After a suspend, processing that has been appended to the Dispatch Queue but has not yet started stops executing; resuming allows that processing to continue.
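
A small sketch of the effect (the queue label is illustrative): Blocks appended while the queue is suspended sit in the queue and only begin executing after dispatch_resume is called.

dispatch_queue_t queue = dispatch_queue_create("com.example.gcd.suspendable", NULL);

dispatch_suspend(queue);
for (int i = 0; i < 5; i++) {
    dispatch_async(queue, ^{
        NSLog(@"task %d", i);   // Not executed until the queue is resumed
    });
}
dispatch_resume(queue);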

Dispatch_once

The dispatch_once function is an API that guarantees the specified processing is executed only once during the lifetime of the application. The frequently seen initialization code below can be simplified with dispatch_once.

static int initialized = NO;
if (initialized == NO) {
    // Initialization
    initialized = YES;
}
If the dispatch_once function is used, the source code is

static dispatch_once_t pred;
dispatch_once(&pred, ^{
    // Initialization
});
The source code does not look very different, but with the dispatch_once function it is completely safe even when executed in a multithreaded environment.
The earlier code is safe in most cases. On a multi-core CPU, however, the initialization may run more than once if the flag variable is read at the very moment it is being updated.
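
A typical use is the singleton pattern; the class name in this sketch is hypothetical:

// Hypothetical example: a shared-instance accessor whose initialization
// runs exactly once, even when called from multiple threads at the same time.
+ (instancetype)sharedManager {
    static MyManager *sharedManager = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        sharedManager = [[MyManager alloc] init];
    });
    return sharedManager;
}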

GCD implements Dispatch Source

In addition to the Dispatch Queue, GCD has a less prominent component, the Dispatch Source. It is a wrapper around kqueue, which is used by the BSD kernel.

kqueue is a mechanism that lets application programmers handle the various events occurring in the XNU kernel. Its CPU load is very small, and it occupies as few resources as possible.

The following is an example of a timer using DISPATCH_SOURCE_TYPE_TIMER. A pattern like this can be used, for example, for communication timeouts in network programming.

// Create a Dispatch Source of type DISPATCH_SOURCE_TYPE_TIMER,
// with the Main Dispatch Queue as the queue on which its handlers run
dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, dispatch_get_main_queue());

// Fire the timer 2 seconds from now; it does not repeat (DISPATCH_TIME_FOREVER),
// and a leeway of 1 second is allowed
dispatch_source_set_timer(timer,
                          dispatch_time(DISPATCH_TIME_NOW, 2ull * NSEC_PER_SEC),
                          DISPATCH_TIME_FOREVER,
                          1ull * NSEC_PER_SEC);

// Processing executed when the timer fires
dispatch_source_set_event_handler(timer, ^{
    NSLog(@"wakeup!");
    dispatch_source_cancel(timer);
});

// Processing executed when the Dispatch Source is canceled
dispatch_source_set_cancel_handler(timer, ^{
    NSLog(@"canceled");
    // dispatch_release(timer);   // needed only under manual reference counting
});

// Start the Dispatch Source
dispatch_resume(timer);
In fact, a Dispatch Queue has no concept of "cancel". Once processing is appended to a Dispatch Queue, there is no way to remove it or to cancel it while it is executing. If you need cancellation, consider NSOperationQueue or other approaches.
A Dispatch Source, unlike a Dispatch Queue, can be canceled, and the processing to perform upon cancellation can be specified as a callback Block. Therefore, using a Dispatch Source to handle XNU-kernel events is much easier than using kqueue directly.
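
For example, a Dispatch Source of type DISPATCH_SOURCE_TYPE_READ can run a handler whenever a file descriptor has data available. The sketch below assumes a descriptor fd that has already been opened (the variable name and buffer size are illustrative), and uses read() and close() from <unistd.h>:

// A minimal sketch: handle readable data on an existing file descriptor `fd`.
dispatch_source_t readSource = dispatch_source_create(DISPATCH_SOURCE_TYPE_READ, fd, 0,
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0));

dispatch_source_set_event_handler(readSource, ^{
    // Estimated number of bytes currently available to read
    size_t available = dispatch_source_get_data(readSource);
    char buffer[1024];
    ssize_t length = read(fd, buffer, MIN(available, sizeof(buffer)));
    if (length > 0) {
        NSLog(@"read %zd bytes", length);
    }
});

dispatch_source_set_cancel_handler(readSource, ^{
    // Clean up the descriptor once the source is canceled
    close(fd);
});

// Start the Dispatch Source
dispatch_resume(readSource);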

Example

Http://download.csdn.net/detail/gwh111/8008367



