GCD introduction (II): multi-core performance


Concept

To take full advantage of multiple cores within a single process, we need multithreading (multiple processes are irrelevant to GCD, so we will not discuss them). Underneath, GCD's global dispatch queues are simply an abstraction over a pool of worker threads: as soon as blocks in these queues become available, they are dispatched to worker threads. Blocks submitted to a custom (user) queue ultimately end up in the same worker thread pool by way of a global queue (unless your custom queue targets the main thread, but for performance we would never do that).
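To make that relationship concrete, here is a minimal sketch using the same manual-retain-era GCD API as the rest of this article; the queue label and the -doBackgroundWork method are invented for illustration:

dispatch_queue_t myQueue = dispatch_queue_create("com.example.worker", NULL);
// A custom queue targets a global queue by default; setting it explicitly here
// only illustrates that its blocks are serviced by the same worker thread pool.
dispatch_set_target_queue(myQueue, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0));
dispatch_async(myQueue, ^{ [self doBackgroundWork]; });   // -doBackgroundWork is hypothetical
dispatch_release(myQueue);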

There are two ways to "squeeze" performance out of a multi-core system with GCD: run a single task (or a group of related tasks) concurrently on a global queue, or run several largely independent tasks concurrently, each on its own custom queue.

Global queue

Imagine the following loop:

 
for (id obj in array)
    [self doSomethingIntensiveWith:obj];

Assume that -doSomethingIntensiveWith: is thread-safe and can safely be run on several elements at the same time. Since an array usually contains more than one element, it is easy to use GCD to process the elements in parallel:

 
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
for (id obj in array)
    dispatch_async(queue, ^{ [self doSomethingIntensiveWith:obj]; });

It is that simple, and this code now runs across multiple cores.

Of course this code is not perfect. Sometimes we have a piece of code that loops over an array like this, but once the loop is finished we also need to do something with the results:

 
for (id obj in array)
    [self doSomethingIntensiveWith:obj];
[self doSomethingWith:array];

In this case, GCD's dispatch_async is a problem, and we can't simply switch to dispatch_sync to solve it: that would block on every iteration and completely destroy the parallelism.

One way to solve this problem is to use a dispatch group. A dispatch group groups multiple blocks together so that we can wait for them, or be notified when all of them have completed. Use the dispatch_group_create function to create a group, then use dispatch_group_async to submit blocks to a dispatch queue while also adding them to the group. Now we can rewrite the code:

dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_t group = dispatch_group_create();
for (id obj in array)
    dispatch_group_async(group, queue, ^{ [self doSomethingIntensiveWith:obj]; });
dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
dispatch_release(group);
[self doSomethingWith:array];

If everything can happen asynchronously, we can even push -doSomethingWith: into the background. We use the dispatch_group_notify function to submit a block that executes once all of the blocks in the group have completed:

 
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_t group = dispatch_group_create();
for (id obj in array)
    dispatch_group_async(group, queue, ^{ [self doSomethingIntensiveWith:obj]; });
dispatch_group_notify(group, queue, ^{ [self doSomethingWith:array]; });
dispatch_release(group);

Not only are all the array elements processed in parallel, but the follow-up work also runs asynchronously, and all of it stays out of the way of the rest of the program. Note that if -doSomethingWith: needs to run on the main thread, for example to update the GUI, we simply pass the main queue instead of the global queue to dispatch_group_notify.
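As a minimal sketch of that last point (assuming -doSomethingWith: touches the user interface), only the queue passed to dispatch_group_notify changes:

// Deliver the completion block on the main queue so it can safely touch the GUI.
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    [self doSomethingWith:array];
});
dispatch_release(group);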

 

For the synchronous case, GCD provides a simplified API called dispatch_apply. This function invokes a single block multiple times in parallel and waits until all of the invocations have completed, which is exactly what we want:

 
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_apply([array count], queue, ^(size_t index) {
    [self doSomethingIntensiveWith:[array objectAtIndex:index]];
});
[self doSomethingWith:array];

This is great, but what about the asynchronous case? dispatch_apply has no asynchronous version. But we are using an asynchronous API! So we just need one dispatch_async call to push all of that code into the background:

dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(queue, ^{
    dispatch_apply([array count], queue, ^(size_t index) {
        [self doSomethingIntensiveWith:[array objectAtIndex:index]];
    });
    [self doSomethingWith:array];
});

It's easy!

 

The key to this approach is being sure that the code performs the same kind of operation on different pieces of data each time through the loop. If you are sure your task is thread-safe (a topic not covered in this article), you can use GCD to rewrite your loops and make them run in parallel, which is rather cool.

To see a performance improvement, each unit of work has to be large enough. Compared with threads, GCD is lightweight and low-overhead, but submitting a block to a queue still costs something: the block has to be copied and enqueued, and an appropriate worker thread has to be notified. Do not submit every pixel of an image as its own block, or the overhead will eat up most of GCD's advantage. If you are not sure, measure. Parallelizing a program is an optimization, so think twice before modifying the code, and make sure the changes actually help (and are still correct).
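As a hedged illustration of block granularity (the width and height variables and the -processPixelAtX:y: method are hypothetical stand-ins), submit one block per row rather than one per pixel:

// Coarse-grained: one block per row, not one block per pixel.
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_apply(height, queue, ^(size_t row) {
    for (size_t x = 0; x < width; x++)
        [self processPixelAtX:x y:row];   // hypothetical per-pixel work
});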

Concurrency across subsystems

In the previous section we discussed how to take advantage of multiple cores within a single subsystem of a program. Now we want to spread that across multiple subsystems.

For example, imagine a program that opens a document containing metadata. The document data needs to be parsed and converted into model objects for display, and the metadata needs to be parsed and converted as well, but the two jobs never need to interact. We can create one dispatch queue for the document and one for the metadata and run them concurrently. The parsing code for the document and for the metadata each executes serially on its own queue, so thread safety within each parser is not a concern (as long as no data is shared between them), and yet the two still run in parallel.
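A minimal sketch of that arrangement might look like this (the queue labels, the -parseDocumentData: and -parseMetadata: methods, and the documentData and metadataData variables are invented for illustration):

// One serial queue per subsystem: each parser runs serially within itself
// but concurrently with the other.
dispatch_queue_t documentQueue = dispatch_queue_create("com.example.document", NULL);
dispatch_queue_t metadataQueue = dispatch_queue_create("com.example.metadata", NULL);

dispatch_async(documentQueue, ^{ [self parseDocumentData:documentData]; });
dispatch_async(metadataQueue, ^{ [self parseMetadata:metadataData]; });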

Once the document is open, the program must respond to user actions. It may, for example, need to perform spell checking, syntax highlighting, word-count statistics, autosaving, or anything else. If each of these tasks is implemented to run on its own dispatch queue, they all execute concurrently and stay out of each other's way, without most of the usual trouble of multi-threaded programming. A sketch follows below.
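Under the same assumptions as before (the queue labels and the -runSpellCheck and -autosave methods are hypothetical), each feature simply gets its own serial queue:

// One serial queue per feature; the features run concurrently with each other
// and with the rest of the program.
dispatch_queue_t spellCheckQueue = dispatch_queue_create("com.example.spellcheck", NULL);
dispatch_queue_t autosaveQueue = dispatch_queue_create("com.example.autosave", NULL);

dispatch_async(spellCheckQueue, ^{ [self runSpellCheck]; });
dispatch_async(autosaveQueue, ^{ [self autosave]; });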

Using dispatch sources (which I will cover next time), we can have GCD deliver events directly to a custom queue. For example, the code that monitors a socket connection can live on its own dispatch queue, so it runs asynchronously and its work stays out of the way of the rest of the program. And because a custom queue runs its blocks serially, that module effectively remains single-threaded, which keeps the code simple.

Conclusion

We have discussed how to use GCD to improve program performance and take advantage of multi-core systems. Although concurrent programs still have to be written with care, GCD makes it easy to use the computing resources the system has available.

In the next article, we will discuss dispatch sources, GCD's mechanism for monitoring internal and external events.

 

Http://www.dreamingwish.com/dream-2012/of-of-of-performance-of-of-of-of-of-of-of-gcd-introduced-ba-the-multi-core.html
