[52 Effective Ways to Write High-Quality iOS Code] (10) Grand Central Dispatch (GCD)
Reference book: Effective Objective-C 2.0, by Matt Galloway
Preview
41. Prefer dispatch queues to synchronization locks
42. Prefer GCD to performSelector
43. Know when to use GCD and when to use operation queues
44. Use the dispatch group mechanism to perform tasks according to available system resources
45. Use dispatch_once to execute thread-safe code that only needs to run once
46. Don't use dispatch_get_current_queue
Item 41: Prefer dispatch queues to synchronization locks
In Objective-C, problems can arise when multiple threads execute the same code concurrently. Locks are usually used to implement some kind of synchronization. Before GCD appeared, there were two common approaches:
// Using a built-in synchronization block (@synchronized)
- (void)synchronizedMethod {
    @synchronized(self) {
        // Safe
    }
}

// Using an NSLock object
_lock = [[NSLock alloc] init];

- (void)synchronizedMethod {
    [_lock lock];
    // Safe
    [_lock unlock];
}
Both approaches work, but each has drawbacks. Synchronization blocks can hurt performance: in this example the lock is taken on the self object, so the current code may have to wait for some completely unrelated piece of code that happens to hold the same lock. Lock objects, on the other hand, are very troublesome to deal with once a deadlock occurs.
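For example, here is a minimal sketch (the method names are hypothetical) of how an ordinary NSLock can deadlock: NSLock is not recursive, so trying to reacquire it on the same thread blocks forever.
// Hypothetical example: NSLock is not recursive, so this deadlocks.
- (void)synchronizedMethodA {
    [_lock lock];
    [self synchronizedMethodB];   // tries to take _lock again on the same thread
    [_lock unlock];
}

- (void)synchronizedMethodB {
    [_lock lock];                 // never returns: the lock is already held
    // ...
    [_lock unlock];
}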
The alternative is GCD. Take, as an example, implementing an atomic property yourself:
// Implemented with a synchronization block
- (NSString *)someString {
    @synchronized(self) {
        return _someString;
    }
}

- (void)setSomeString:(NSString *)someString {
    @synchronized(self) {
        _someString = someString;
    }
}
// Implemented with GCD
// Create a serial queue
_syncQueue = dispatch_queue_create("com.effectiveobjectivec.syncQueue", NULL);

- (NSString *)someString {
    __block NSString *localSomeString;
    dispatch_sync(_syncQueue, ^{
        localSomeString = _someString;
    });
    return localSomeString;
}

- (void)setSomeString:(NSString *)someString {
    dispatch_sync(_syncQueue, ^{
        _someString = someString;
    });
}
If many properties all use @synchronized(self), every property's synchronization block has to wait for all the others to finish, yet this still may not give the thread safety you actually want: calling the getter several times on the same thread may return a different value each time, because another thread may write a new value between two accesses.
In the GCD version, a serial synchronization queue is used: reads and writes are both arranged on the same queue, which keeps access to the data synchronized.
You can optimize this further to suit your needs, for example by allowing reads of the property to proceed concurrently while writes still execute exclusively:
// Create a private concurrent queue; barrier blocks are honored only on
// concurrent queues you create yourself, not on the global concurrent queues
_syncQueue = dispatch_queue_create("com.effectiveobjectivec.syncQueue", DISPATCH_QUEUE_CONCURRENT);
- (NSString *)someString {
    __block NSString *localSomeString;
    dispatch_sync(_syncQueue, ^{
        localSomeString = _someString;
    });
    return localSomeString;
}

- (void)setSomeString:(NSString *)someString {
    // Perform the write inside an asynchronous barrier block.
    // A barrier block waits until all blocks currently running on the concurrent
    // queue have finished, then executes alone; once it completes, the queue
    // resumes processing blocks concurrently as before.
    dispatch_barrier_async(_syncQueue, ^{
        _someString = someString;
    });
}
Item 42: Prefer GCD to performSelector
Objective-C is an extremely dynamic language. NSObject defines several methods that let you invoke any method at will; these can defer the invocation or specify the thread on which the method should run. The simplest of them is performSelector:
- (id)performSelector:(SEL)selector;

// performSelector: is equivalent to calling the selector directly,
// so the following two lines do the same thing
[object performSelector:@selector(selectorName)];
[object selectorName];
The power of performSelector shows when the selector is determined only at runtime; it effectively adds another layer of dynamic binding on top of dynamic binding.
SEL selector;
if (/* some condition */) {
    selector = @selector(foo);
} else {
    selector = @selector(bar);
}
[object performSelector:selector];
If you compile this code under ARC, though, the compiler issues a warning that performSelector may cause a memory leak. The reason is that the compiler does not know which selector will be called, so it cannot apply ARC's memory-management rules to decide whether the returned object should be released. ARC therefore plays it safe and adds no release; if the method happens to return an object that has already been retained, that object leaks.
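As an illustration, here is a hypothetical sketch (not code from the book) of the leak: a selector from the copy/new family returns an object the caller owns, but ARC has no way of knowing that when the selector is only a runtime value.
// Hypothetical illustration: the selector is not known at compile time,
// so ARC cannot tell that -copy returns an object the caller owns.
SEL selector = @selector(copy);
id copied = [object performSelector:selector];
// ARC assumes the returned object is not owned by the caller, so the extra
// retain from -copy is never balanced and the object leaks.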
Another limitation of performSelector is that it accepts at most two parameters, and those parameters must be objects. The most common methods in the performSelector family are:
// Delayed execution
- (void)performSelector:(SEL)selector withObject:(id)argument afterDelay:(NSTimeInterval)delay;

// Execution on a given thread
- (void)performSelector:(SEL)selector onThread:(NSThread *)thread withObject:(id)argument waitUntilDone:(BOOL)wait;

// Execution on the main thread
- (void)performSelectorOnMainThread:(SEL)selector withObject:(id)argument waitUntilDone:(BOOL)wait;
These methods can be replaced by GCD:
// Delayed execution
// Using performSelector
[self performSelector:@selector(doSomething) withObject:nil afterDelay:5.0];

// Using GCD
dispatch_time_t time = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(5.0 * NSEC_PER_SEC));
dispatch_after(time, dispatch_get_main_queue(), ^(void){
    [self doSomething];
});
// Executing a method on the main thread
// Using performSelector
[self performSelectorOnMainThread:@selector(doSomething) withObject:nil waitUntilDone:NO];

// Using GCD (if waitUntilDone were YES, dispatch_sync would be used instead)
dispatch_async(dispatch_get_main_queue(), ^{
    [self doSomething];
});
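For completeness, a minimal sketch of the waitUntilDone:YES equivalent mentioned in the comment above; note that dispatch_sync onto the main queue will deadlock if the calling code is already running on the main queue.
// Synchronous equivalent (waitUntilDone:YES). Caution: calling this while
// already on the main queue blocks the main queue waiting for itself -> deadlock.
dispatch_sync(dispatch_get_main_queue(), ^{
    [self doSomething];
});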
Item 43: Know when to use GCD and when to use operation queues
GCD is great, but sometimes a component of the standard system library is the better fit. GCD's synchronization mechanisms are excellent, and it is the most convenient choice for code that should execute only once. For background tasks, however, an operation queue (NSOperationQueue) can serve just as well.
There are many differences between the two. The biggest is that GCD is a pure C API, while operation queues are Objective-C objects. In GCD a task is represented by a block, a lightweight data structure; an operation (NSOperation) is a heavier-weight Objective-C object. To decide which technology to use, weigh that object overhead against the benefits of having a full object.
Advantages of operation queues (a short sketch follows the list):
1. Cancellation: calling cancel on an NSOperation object cancels the operation, whereas a task dispatched with GCD cannot be cancelled once it has been submitted.
2. Dependencies: you can specify that a given operation must not start until another operation has finished.
3. Key-value observing (KVO): you can observe changes to NSOperation properties such as isCancelled and isFinished.
4. Priorities: you can specify an operation's priority relative to other operations in the queue.
5. Reuse: you can subclass NSOperation to build richer, reusable operation objects.
To determine which solution works best in a given case, test the performance of both.
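Here is a minimal sketch (the operation names and the work inside the blocks are invented for illustration) that shows several of these features together: a dependency between two block operations, a queue priority, a completion block, and cancellation.
// Hypothetical illustration of operation-queue features
NSOperationQueue *queue = [[NSOperationQueue alloc] init];

NSBlockOperation *download = [NSBlockOperation blockOperationWithBlock:^{
    // fetch data ...
}];
NSBlockOperation *parse = [NSBlockOperation blockOperationWithBlock:^{
    // parse the downloaded data ...
}];

// parse must not start until download has finished
[parse addDependency:download];

// raise the download's priority relative to other operations in the queue
download.queuePriority = NSOperationQueuePriorityHigh;

// run when the parse operation finishes (or is cancelled)
parse.completionBlock = ^{
    NSLog(@"parsing done");
};

[queue addOperations:@[download, parse] waitUntilFinished:NO];

// unlike a dispatched GCD block, an operation can still be cancelled
[download cancel];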
Item 44: Use the dispatch group mechanism to perform tasks according to available system resources
Dispatch groups are a GCD feature that lets you group tasks together. The caller can either wait for the whole group of tasks to finish, or supply a callback block and continue; the caller is then notified when the group of tasks completes.
If you want to perform a task for each object in a container and wait until all of those tasks have finished, this GCD feature does the job:
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

// Create the dispatch group
dispatch_group_t group = dispatch_group_create();

for (id object in collection) {
    // Dispatch each task as part of the group
    dispatch_group_async(group, queue, ^{
        [object performTask];
    });
}

// Wait (blocking) until every task in the group has finished
dispatch_group_wait(group, DISPATCH_TIME_FOREVER);

// Alternatively, use the notify function instead of waiting.
// The queue used for the notify callback can be chosen as needed; here it is the main queue.
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    // Continue once all the tasks have completed
});
You can also run some of the tasks on a higher-priority queue while all of them still belong to the same group:
// Create two queues with different priorities
dispatch_queue_t lowPriorityQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);
dispatch_queue_t highPriorityQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);

// Create the dispatch group
dispatch_group_t group = dispatch_group_create();

for (id object in lowPriorityCollection) {
    dispatch_group_async(group, lowPriorityQueue, ^{
        [object performTask];
    });
}

for (id object in highPriorityCollection) {
    dispatch_group_async(group, highPriorityQueue, ^{
        [object performTask];
    });
}

dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    // Continue once all the tasks have completed
});
A dispatch group is not strictly required, though. To iterate over a container and perform a task on each of its elements, you can use dispatch_apply. Here is an example with an array:
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_apply(array.count, queue, ^(size_t i) {
    id object = array[i];
    [object performTask];
});
However, dispatch_apply blocks until all of the tasks have finished. This means that if the block is dispatched to the current queue (or to a serial queue that targets the current queue), a deadlock results. If the tasks should run in the background, use a dispatch group instead.
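One way around the blocking, sketched below under the assumption that the surrounding code runs on the main queue and that performTask is the placeholder method from the examples above, is to move the blocking dispatch_apply call itself onto a background concurrent queue:
// Run the blocking dispatch_apply off the current (e.g. main) queue
dispatch_queue_t background = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(background, ^{
    dispatch_apply(array.count, background, ^(size_t i) {
        [array[i] performTask];
    });
    // Hop back to the main queue once every iteration has finished
    dispatch_async(dispatch_get_main_queue(), ^{
        // Continue after all the tasks are done
    });
});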
Item 45: Use dispatch_once to execute thread-safe code that only needs to run once
A common implementation of the singleton pattern is a class method, usually named sharedInstance, that returns a single instance shared by the whole class rather than creating a new instance on each call. Here is a singleton implemented with a synchronization block:
@implementation EOCClass

+ (id)sharedInstance {
    static EOCClass *sharedInstance = nil;
    @synchronized(self) {
        if (!sharedInstance) {
            sharedInstance = [[self alloc] init];
        }
    }
    return sharedInstance;
}

@end
GCD introduces a feature that makes singletons much easier to implement:
@implementation EOCClass

+ (id)sharedInstance {
    static EOCClass *sharedInstance = nil;
    // Every call must use exactly the same token, so it is declared static
    static dispatch_once_t onceToken;
    // The block will execute only once
    dispatch_once(&onceToken, ^{
        sharedInstance = [[self alloc] init];
    });
    return sharedInstance;
}

@end
dispatch_once simplifies the code and fully guarantees thread safety; all the synchronization is handled by GCD underneath. It is also more efficient, because it avoids heavyweight synchronization mechanisms.
Item 46: Don't use dispatch_get_current_queue
When using GCD, you often want to know which queue the current code is executing on. The dispatch_get_current_queue function returns that queue, but it has been deprecated on both iOS and Mac OS X and should be avoided.
Deadlocks can occur when dispatching synchronously onto a queue. Recall the accessors implemented with a serial synchronization queue:
_syncQueue = dispatch_queue_create("com.effectiveobjectivec.syncQueue", NULL);

- (NSString *)someString {
    __block NSString *localSomeString;
    dispatch_sync(_syncQueue, ^{
        localSomeString = _someString;
    });
    return localSomeString;
}

- (void)setSomeString:(NSString *)someString {
    dispatch_sync(_syncQueue, ^{
        _someString = someString;
    });
}
If the getter happens to be called from the synchronization queue itself (_syncQueue), dispatch_sync will not return until its block has finished executing. But the target queue that should run that block is the current queue, which is already blocked waiting for the block to finish. The result is a deadlock.
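A minimal sketch of how that deadlock gets triggered (the outer block stands in for any code already dispatched synchronously onto _syncQueue):
dispatch_sync(_syncQueue, ^{
    // Already on _syncQueue; the getter now calls dispatch_sync(_syncQueue, ...)
    // again, which waits for a queue that is busy running this very block.
    NSString *value = [self someString];   // deadlock
    NSLog(@"%@", value);                   // never reached
});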
You might think dispatch_get_current_queue can solve this (although in this case it would be better simply to ensure that code running on the synchronization queue never calls back into the property accessor):
- (NSString *)someString {
    __block NSString *localSomeString;
    dispatch_block_t block = ^{
        localSomeString = _someString;
    };
    // If the current queue is already _syncQueue, run the block directly instead of dispatching it
    if (dispatch_get_current_queue() == _syncQueue) {
        block();
    } else {
        dispatch_sync(_syncQueue, block);
    }
    return localSomeString;
}
However, this trick only handles the simple cases. With nested queues like the following, the risk of deadlock remains:
dispatch_queue_t queueA = dispatch_queue_create("com.effectiveobjectivec.queueA", NULL);
dispatch_queue_t queueB = dispatch_queue_create("com.effectiveobjectivec.queueB", NULL);

dispatch_sync(queueA, ^{
    dispatch_sync(queueB, ^{
        dispatch_sync(queueA, ^{
            // Deadlock
        });
    });
});
The innermost dispatch targets queueA, so it must wait for the outermost dispatch_sync to finish; but the outermost dispatch_sync cannot finish, because it is waiting for the innermost one to complete. Deadlock.
If you try to solve it with dispatch_get_current_queue:
dispatch_sync(queueA, ^{
    dispatch_sync(queueB, ^{
        dispatch_block_t block = ^{ /* ... */ };
        if (dispatch_get_current_queue() == queueA) {
            block();
        } else {
            dispatch_sync(queueA, block);
        }
    });
});
This still deadlocks, because dispatch_get_current_queue() returns the current queue, which is queueB. The check therefore fails, the synchronous dispatch onto queueA goes ahead anyway, and the deadlock occurs just as before.
Dispatch queues are also organized into a hierarchy: blocks enqueued on one queue are ultimately executed on its parent (target) queue. Because of these hierarchical relationships, checking whether the current queue is the one you are about to dispatch onto synchronously does not always work.
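For instance, a minimal sketch (the queue names here are invented) of the hierarchy problem: a block submitted to the child queue actually runs by way of its target queue, yet dispatch_get_current_queue inside that block reports the child, not the target.
// Hypothetical illustration of queue hierarchies
dispatch_queue_t parentQueue = dispatch_queue_create("com.effectiveobjectivec.parent", NULL);
dispatch_queue_t childQueue  = dispatch_queue_create("com.effectiveobjectivec.child", NULL);

// Blocks submitted to childQueue are ultimately executed via parentQueue
dispatch_set_target_queue(childQueue, parentQueue);

dispatch_sync(childQueue, ^{
    // dispatch_get_current_queue() here returns childQueue, not parentQueue,
    // so a check against parentQueue would fall through to dispatch_sync...
    dispatch_sync(parentQueue, ^{
        // ...which deadlocks, because parentQueue is already busy
        // executing the childQueue block above.
    });
});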
The best way to solve this problem is to use GCD's queue-specific data functions, which let you associate arbitrary data with a queue as key-value pairs. Crucially, if no value is found for a key on the current queue, the lookup walks up the queue hierarchy until it either finds a value or reaches the root queue.
dispatch_queue_t queueA = dispatch_queue_create("com.effectiveobjectivec.queueA", NULL);
dispatch_queue_t queueB = dispatch_queue_create("com.effectiveobjectivec.queueB", NULL);
// queueB targets queueA, so blocks on queueB ultimately execute via queueA
dispatch_set_target_queue(queueB, queueA);

static int kQueueSpecific;
CFStringRef queueSpecificValue = CFSTR("queueA");

// Attach the queue-specific value to queueA
dispatch_queue_set_specific(queueA, &kQueueSpecific, (void *)queueSpecificValue, (dispatch_function_t)CFRelease);

dispatch_sync(queueB, ^{
    dispatch_block_t block = ^{ NSLog(@"No deadlock!"); };
    // The lookup starts on queueB, finds nothing there, and walks up to queueA
    CFStringRef retrievedValue = dispatch_get_specific(&kQueueSpecific);
    if (retrievedValue) {
        block();
    } else {
        dispatch_sync(queueA, block);
    }
});
The first parameter of dispatch_queue_set_specific is the queue to configure; the next two are the key and the value (keys are compared by pointer, not by content). The last parameter is a destructor function for the value; the dispatch_function_t type is defined as follows:
typedef void (*dispatch_function_t)(void *);
This example passes CFRelease as a parameter to clean up the old value.
This simple mechanism of queue-specific data avoids the pitfalls commonly encountered with dispatch_get_current_queue. That said, dispatch_get_current_queue can still be useful when debugging; just remember that it is a deprecated function and must not be compiled into release builds.