An In-depth Introduction to Cocoa Multithreaded Programming: Blocks and Dispatch Queues
Luo Chaohui (http://blog.csdn.net/kesalin, CC license; please credit the source when reproducing)
Blocks are a syntax extension that Apple added to GCC 4.2 to support multi-core parallel programming. Combining blocks with dispatch queues makes multithreaded programming much more convenient.
1. Test Project Preparation
In Xcode 4.0, create a Mac OS X Application project of type Command Line Tool, and select Foundation in the Type field. Name the project StudyBlocks. The default main.m looks like this:
int main (int argc, const char * argv[])
{
    NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];

    // insert code here...
    NSLog(@"Hello, World!");

    [pool drain];
    return 0;
}
2. How to Write a block
The automatically generated project code prints "Hello, World!" by default. Can this task be implemented using block syntax? It can. See:
void (^aBlock)(void) = ^(void) {
    NSLog(@"Hello, World!");
};
aBlock();
Replace the NSLog(@"Hello, World!"); statement with this code, then compile and run; the result is the same.
What do these two statements mean? On the left of the equals sign, void (^aBlock)(void) declares a block named aBlock that takes no parameters (the second void) and returns nothing (the first void). On the right, the ^(void) { ... } construct is the block's implementation body; it holds whatever work the block is meant to do, which here is just printing a statement. So the first statement declares a block and assigns to it, and the second statement invokes the block to do the actual work, just like calling a function. A block is somewhat like a lambda expression in C++0x.
We can also write:
void (^aBlock)(void) = 0;
aBlock = ^(void) {
    NSLog(@" >> Hello, World!");
};
aBlock();
Now we know how to write a block. What about an array of blocks? That is also simple:
void (^blocks[2])(void) = {
    ^(void) { NSLog(@" >> This is block 1!"); },
    ^(void) { NSLog(@" >> This is block 2!"); }
};
blocks[0]();
blocks[1]();
Remember!
Blocks are allocated on the stack, which means we must keep the block's lifetime in mind.
For example, the following code is incorrect: each block is allocated on the stack and is valid inside its if or else branch, but it may no longer be valid once execution leaves those braces:
dispatch_block_t block;
if (x) {
    block = ^{ printf("true\n"); };
} else {
    block = ^{ printf("false\n"); };
}
block();
The above code is equivalent to the following unsafe code:
if (x) {
    struct Block __tmp_1 = ...; // setup details
    block = &__tmp_1;
} else {
    struct Block __tmp_2 = ...; // setup details
    block = &__tmp_2;
}
To keep a block alive beyond the scope that created it, copy it to the heap with Block_copy() (or, for Objective-C block objects, the -copy method) and release it when done.
3. How to modify external variables in a block
The purpose of blocks is to support parallel programming, so we cannot freely modify ordinary local variables inside a block. (The reason is simple: a block may be run in parallel by multiple threads, so if you try to assign to an ordinary captured local variable inside a block, the compiler reports an error.) How, then, can we modify external variables? There are two ways: the first is to modify static (or global) variables, and the second is to modify variables marked with the new __block storage qualifier. See:
// global is an int defined at file scope elsewhere in the project, e.g.: int global = 100;
__block int blockLocal = 100;
static int staticLocal = 100;

void (^aBlock)(void) = ^(void) {
    NSLog(@" >> Sum: %d\n", global + staticLocal);
    global++;
    blockLocal++;
    staticLocal++;
};
aBlock();

NSLog(@"After modified, global: %d, block local: %d, static local: %d\n",
      global, blockLocal, staticLocal);
Similarly, we can reference a block itself through a static or __block variable, which lets us write recursive blocks:
// 1
void (^aBlock)(int) = 0;
static void (^ const staticBlock)(int) = ^(int i) {
    if (i > 0) {
        NSLog(@" >> static %d", i);
        staticBlock(i - 1);
    }
};
aBlock = staticBlock;
aBlock(5);

// 2
__block void (^blockBlock)(int);
blockBlock = ^(int i) {
    if (i > 0) {
        NSLog(@" >> block %d", i);
        blockBlock(i - 1);
    }
};
blockBlock(5);
4. Blocks and Dispatch Queues
We have introduced blocks and their basic usage, but nothing so far involves parallel programming. Combined with dispatch queues, blocks become a tool for concurrent programming on Mac OS X and iOS. See the code:
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];

initData();

// create dispatch queue
//
dispatch_queue_t queue = dispatch_queue_create("StudyBlocks", NULL);
dispatch_async(queue, ^(void) {
    int sum = 0;
    for (int i = 0; i < Length; i++)
        sum += data[i];
    NSLog(@" >> Sum: %d", sum);

    flag = YES;
});

// wait until work is done.
//
while (!flag)
    ;

dispatch_release(queue);

[pool drain];
The block above simply sums an array. We first create a serial dispatch queue, then add a block task to it with dispatch_async. The block runs on another thread, while control returns immediately to the main thread. Note the use of flag: it is static, so we are allowed to modify it inside the block. The while (!flag); statement ensures that the main thread does not exit before the thread running the block has finished.
dispatch_block_t is defined as follows:
typedef void (^dispatch_block_t)(void);
This means that a block added to a dispatch queue must take no parameters and return no value.
dispatch_queue_create is declared as follows:
dispatch_queue_t dispatch_queue_create(const char *label, dispatch_queue_attr_t attr);
This function takes two parameters: a string that labels the dispatch queue, and a dispatch queue attribute that is currently reserved and should be set to NULL.
We can also use
dispatch_queue_t dispatch_get_global_queue(long priority, unsigned long flags);
to obtain a global dispatch queue. The priority parameter indicates the queue's priority. Note that we must not modify the dispatch queue returned by this function.
The dispatch_async function is declared as follows:
void dispatch_async(dispatch_queue_t queue, dispatch_block_t block);
It adds a block to a dispatch queue and returns immediately; the block then runs asynchronously when the queue schedules it.
The corresponding dispatch_sync function submits the block and waits for it to finish; it is used less often. For example, if we changed the code above to use dispatch_sync, we would not need the flag synchronization code at all.
5. Mechanism of dispatch_queue and synchronization between threads
With dispatch_async we can submit many blocks to a dispatch queue. Blocks are dequeued in FIFO (first-in, first-out) order: a block added earlier starts executing before one added later. On a serial queue the blocks also run one at a time, whereas on a concurrent queue several blocks may be executing simultaneously at any given moment.
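The FIFO behavior of a serial queue can be sketched in a few lines of Python (an illustrative analogy, not GCD itself): a single worker thread draining a FIFO queue guarantees that submitted tasks start and finish in submission order, while submission itself returns immediately, just like dispatch_async.

```python
import queue
import threading

class SerialQueue:
    """A minimal serial-queue analogy: one worker thread drains a FIFO queue."""
    def __init__(self):
        self._tasks = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def _run(self):
        while True:
            task = self._tasks.get()   # blocks until a task arrives
            if task is None:           # sentinel value: shut down
                return
            task()                     # tasks run one at a time, FIFO

    def async_submit(self, task):
        self._tasks.put(task)          # returns immediately, like dispatch_async

    def join(self):
        self._tasks.put(None)
        self._worker.join()

order = []
q = SerialQueue()
for i in range(3):
    q.async_submit(lambda i=i: order.append(i))
q.join()
print(order)  # tasks ran in submission order: [0, 1, 2]
```

A concurrent queue would instead hand tasks to several workers, so dequeue order is still FIFO but completions may interleave.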
In the example above, the main thread keeps polling flag to learn whether the block's thread has finished. This is very inefficient and wastes CPU time. We can use a proper communication mechanism instead, such as a semaphore. The principle of a semaphore is simple: it is the producer-consumer pattern. When no resource is available, the consumer waits and does nothing until a resource is ready. Look at the code below:
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];

initData();

// Create a semaphore with 0 resource
//
__block dispatch_semaphore_t sem = dispatch_semaphore_create(0);

// create dispatch queue
//
dispatch_queue_t queue = dispatch_queue_create("StudyBlocks", NULL);
dispatch_async(queue, ^(void) {
    int sum = 0;
    for (int i = 0; i < Length; i++)
        sum += data[i];
    NSLog(@" >> Sum: %d", sum);

    // signal the semaphore: add 1 resource
    //
    dispatch_semaphore_signal(sem);
});

// wait for the semaphore: wait until resource is ready.
//
dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);

dispatch_release(sem);
dispatch_release(queue);

[pool drain];
First we create a semaphore whose initial resource count is 0 (the initial value must not be less than 0): at this point the task is not yet finished, so the main thread has no resource available. When the block finishes its work, it calls dispatch_semaphore_signal to increment the semaphore count (which you can think of as the number of available resources), meaning the task is done and the main thread may proceed. On the main thread, dispatch_semaphore_wait decrements the semaphore count; if the count would drop below 0, the resource is not ready, so the caller blocks, waiting in FIFO order, until a resource becomes available and it is scheduled to continue.
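The same signal/wait handshake can be sketched with Python's threading.Semaphore (an analogy for illustration only; dispatch_semaphore_signal and dispatch_semaphore_wait correspond to release() and acquire() here):

```python
import threading

data = list(range(1, 11))       # stand-in for the article's data array
result = []
sem = threading.Semaphore(0)    # initial count 0: no resource available yet

def worker():
    result.append(sum(data))    # do the work on another thread
    sem.release()               # signal: count 0 -> 1, main thread may proceed

threading.Thread(target=worker).start()

sem.acquire()                   # wait: blocks until the worker has signaled
print(result[0])                # 55, the sum of 1..10
```

Unlike the polling loop with flag, the waiting thread sleeps inside acquire() and consumes no CPU until it is signaled.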
6. Example
Next, an example that uses the FIFO order of a serial queue together with semaphores to synchronize data: first sum the array, then subtract every element back off in turn.
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];

initData();

__block int sum = 0;

// Create semaphores with 0 resource
//
__block dispatch_semaphore_t sem = dispatch_semaphore_create(0);
__block dispatch_semaphore_t taskSem = dispatch_semaphore_create(0);

// create dispatch queue
//
dispatch_queue_t queue = dispatch_queue_create("StudyBlocks", NULL);

dispatch_block_t task1 = ^(void) {
    int s = 0;
    for (int i = 0; i < Length; i++)
        s += data[i];
    sum = s;
    NSLog(@" >> after add: %d", sum);

    dispatch_semaphore_signal(taskSem);
};

dispatch_block_t task2 = ^(void) {
    dispatch_semaphore_wait(taskSem, DISPATCH_TIME_FOREVER);

    int s = sum;
    for (int i = 0; i < Length; i++)
        s -= data[i];
    sum = s;
    NSLog(@" >> after subtract: %d", sum);

    dispatch_semaphore_signal(sem);
};

dispatch_async(queue, task1);
dispatch_async(queue, task2);

// wait for the semaphore: wait until resource is ready.
//
dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);

dispatch_release(taskSem);
dispatch_release(sem);
dispatch_release(queue);

[pool drain];
In the code above, the FIFO property of the serial dispatch queue guarantees that task 1 starts before task 2; taskSem makes task 2 wait until task 1 has finished before doing its own work; and sem makes the main thread wait until task 2 has finished. Together these guarantee that we first add, then subtract, and only then let the main thread run on to the end, preserving the required order.
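The same chaining scheme, as a Python sketch (an analogy, with made-up variable names): task_sem hands control from task 1 to task 2, and done hands control from task 2 back to the main thread. Because the ordering comes from the semaphores, it holds even if the threads are started in the "wrong" order.

```python
import threading

data = [1, 2, 3, 4]
state = {"sum": 0}
task_sem = threading.Semaphore(0)   # task2 waits on this, like taskSem
done = threading.Semaphore(0)       # the main thread waits on this, like sem
log = []

def task1():
    state["sum"] = sum(data)        # add everything up
    log.append("add")
    task_sem.release()              # let task2 proceed

def task2():
    task_sem.acquire()              # wait until task1 has finished
    state["sum"] -= sum(data)       # subtract everything back off
    log.append("subtract")
    done.release()                  # let the main thread proceed

threading.Thread(target=task2).start()  # start order does not matter here
threading.Thread(target=task1).start()
done.acquire()
print(log, state["sum"])  # ['add', 'subtract'] 0
```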
7. Use dispatch_apply for concurrent iteration:
For the sum operation above, we can also use dispatch_apply to simplify code writing:
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];

initData();

dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

__block int sum = 0;
__block int *pArray = data;

// iterations
//
dispatch_apply(Length, queue, ^(size_t i) {
    // NOTE: concurrent increments of a shared variable race with one
    // another; real code should accumulate per-chunk partial sums instead.
    sum += pArray[i];
});

NSLog(@" >> sum: %d", sum);

// Note: a global queue must NOT be passed to dispatch_release().

[pool drain];
Note that a global dispatch queue is used here, and global queues must never be released.
dispatch_apply is declared as follows:
void dispatch_apply(size_t iterations, dispatch_queue_t queue, void (^block)(size_t));
The iterations parameter is the number of iterations, and void (^block)(size_t) is the loop body, which receives the current index. What advantage does this function have over a for loop? The answer is parallelism: on a concurrent queue the iterations can run in parallel rather than strictly in sequence, and dispatch_apply returns only after every iteration has completed.
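A Python sketch of the same idea (using concurrent.futures as a stand-in for dispatch_apply, purely as an analogy): each iteration computes an independent partial result, which avoids the shared-counter data race that sum += pArray[i] risks when iterations truly run in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(1, 101))   # stand-in for the article's data array
CHUNKS = 4
chunk = len(data) // CHUNKS

def partial_sum(i):
    # each "iteration" handles its own slice and returns a private result
    return sum(data[i * chunk:(i + 1) * chunk])

with ThreadPoolExecutor() as pool:
    # like dispatch_apply, this blocks until every iteration has finished
    total = sum(pool.map(partial_sum, range(CHUNKS)))

print(total)  # 5050, the sum of 1..100
```

Combining the per-chunk results on one thread at the end is what keeps the reduction race-free.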
8. Dispatch Group
We can add the blocks that make up a group of related tasks to a dispatch group, and then do other work only after all the block tasks in the group have completed. For example, the example from section 6 can be rewritten with a dispatch group:
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];

initData();

__block int sum = 0;

// Create a semaphore with 0 resource
//
__block dispatch_semaphore_t taskSem = dispatch_semaphore_create(0);

// create dispatch queue and group
//
dispatch_queue_t queue = dispatch_queue_create("StudyBlocks", NULL);
dispatch_group_t group = dispatch_group_create();

dispatch_block_t task1 = ^(void) {
    int s = 0;
    for (int i = 0; i < Length; i++)
        s += data[i];
    sum = s;
    NSLog(@" >> after add: %d", sum);

    dispatch_semaphore_signal(taskSem);
};

dispatch_block_t task2 = ^(void) {
    dispatch_semaphore_wait(taskSem, DISPATCH_TIME_FOREVER);

    int s = sum;
    for (int i = 0; i < Length; i++)
        s -= data[i];
    sum = s;
    NSLog(@" >> after subtract: %d", sum);
};

// Fork
dispatch_group_async(group, queue, task1);
dispatch_group_async(group, queue, task2);

// Join
dispatch_group_wait(group, DISPATCH_TIME_FOREVER);

dispatch_release(taskSem);
dispatch_release(queue);
dispatch_release(group);

[pool drain];
In the code above, we create a dispatch_group_t with dispatch_group_create, and then use dispatch_group_async(group, queue, task1); to add each block task to the queue while associating it with the group. This lets us call dispatch_group_wait(group, DISPATCH_TIME_FOREVER); to block until every block task in the group has completed before continuing.
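The fork/join structure of a dispatch group maps naturally onto thread join in Python (again an analogy sketch, not GCD): dispatch_group_async corresponds to starting a thread in the group, and dispatch_group_wait to joining them all before moving on.

```python
import threading

results = {}

def task(name, value):
    results[name] = value * value   # each task records its own result

# Fork: start every task in the "group" (like dispatch_group_async)
threads = [threading.Thread(target=task, args=(n, n)) for n in range(3)]
for t in threads:
    t.start()

# Join: block until all tasks in the group have completed
# (like dispatch_group_wait with DISPATCH_TIME_FOREVER)
for t in threads:
    t.join()

print(sorted(results.items()))  # [(0, 0), (1, 1), (2, 4)]
```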
At this point we have covered parallel programming with dispatch queues and blocks, and we can start putting them to use in real projects.
References:
Concurrency Programming Guide:
http://developer.apple.com/library/ios/#documentation/General/Conceptual/ConcurrencyProgrammingGuide/Introduction/Introduction.html