Objective-C 2.0 Study Notes (6)


The following are my study notes for Effective Objective-C 2.0. The book I bought is the original English edition, so owing to my limited English there may be gaps or misunderstandings in these notes.

Original book purchase address: Amazon


6. Blocks and Grand Central Dispatch


Item 37: Understand Blocks

Some knowledge points:

(1) Block Structure

return_type (^block_name)(parameters)

(2) If a block references self and the block is itself retained by self, a retain cycle is created. One solution is to replace the direct reference to self with a __unsafe_unretained local variable (someObject); a fuller sketch follows these knowledge points:

__unsafe_unretained SomeClass *someObject = self;
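
A minimal sketch tying the two points above together, assuming an illustrative class and property name: the callback block follows the syntax from point (1), and it captures self only through the __unsafe_unretained local from point (2), so the stored block does not retain its owner.

@interface SomeClass : NSObject
@property (nonatomic, copy) void (^callback)(void);
- (void)setup;
@end

@implementation SomeClass
- (void)setup {
    // Capture self through an unretained local so that the block stored
    // in self.callback does not retain self and create a cycle.
    __unsafe_unretained SomeClass *someObject = self;
    self.callback = ^{
        NSLog(@"callback fired for %@", someObject);
    };
}
@end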

Remember:

(1) Blocks are lexical closures available in C, C++, and Objective-C.

(2) A block can optionally take parameters and return a value.

(3) A block may be allocated on the stack, on the heap, or in global storage. A stack-allocated block can be copied to the heap; once copied, the block is reference counted and managed like an ordinary Objective-C object (see the sketch below).
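
A small sketch of point (3), with an illustrative method name: the block literal lives on the stack of the current frame, and copying it moves it to the heap so it can safely outlive that frame.

- (void (^)(void))makeLoggerForValue:(int)value {
    // This block literal is allocated on the stack of the current frame.
    void (^stackBlock)(void) = ^{
        NSLog(@"value = %d", value);
    };
    // Copying moves the block to the heap, where it is reference counted
    // like any other Objective-C object. Under ARC the copy happens
    // automatically on return, but the explicit copy is harmless.
    return [stackBlock copy];
}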


Item 38: Create typedefs for common block types

Some knowledge points:

(1) block typedef

typedef int (^EOCSomeBlock)(BOOL flag, int value);

EOCSomeBlock block = ^(BOOL flag, int value){
    // The block must return an int to match the typedef'd signature.
    return value;
};

Remember:

(1) Use typedefs to make block variables easier to declare and use.

(2) Follow naming conventions (for example, a class-style prefix) when defining new block typedefs, so that they do not clash with other types.

(3) Don't be afraid to define multiple typedefs for the same block signature. If a block type needs to be refactored later, only the relevant typedef (and its users) has to change, as in the sketch below.
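
A hedged sketch of points (2) and (3): two typedefs that happen to share a signature but are named, with a class-style prefix, for the role each plays, so refactoring one later does not touch users of the other. The names are illustrative.

// Same underlying signature, but each typedef names a distinct role.
typedef void (^EOCCompletionHandler)(NSData *data, NSError *error);
typedef void (^EOCUploadProgressHandler)(NSData *data, NSError *error);

// If completion handlers later need an extra parameter, only the
// EOCCompletionHandler typedef and its users have to change.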


Item 39: Use handler blocks to reduce code separation

Remember:

(1) Handler blocks are most useful when the business logic of the handler can be declared inline, right where the object is created.

(2) Handler blocks have the advantage over delegation of being associated directly with the relevant object; a single delegate, by contrast, often has to distinguish between several monitored objects.

(3) When designing an API that uses handler blocks, consider adding a queue parameter so that the caller can designate the queue on which each block should be enqueued (see the sketch below).
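
A hedged sketch of point (3), with illustrative method and type names: the API accepts both a handler block and the queue on which to invoke it, so the caller decides where the callback runs.

typedef void (^EOCCompletionHandler)(NSData *data, NSError *error);

- (void)fetchDataWithCompletionQueue:(dispatch_queue_t)queue
                             handler:(EOCCompletionHandler)handler {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSData *data = [NSData data];   // stand-in for the real work
        // Invoke the handler on whichever queue the caller asked for.
        dispatch_async(queue, ^{
            handler(data, nil);
        });
    });
}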


Item 40: Avoid retain cycles introduced by blocks referencing the object owning them

Remember:

(1) Be aware of the retain cycles that can arise when a block captures an object that directly or indirectly retains the block itself.

(2) Ensure that retain cycles are broken at an opportune moment, and never leave that responsibility to the consumer of your API (see the sketch below).
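
A hedged sketch of point (2), with illustrative ivar and method names: the object that owns the handler block clears it as soon as the work completes, so the cycle is broken at a well-defined time and the API's caller never has to think about it.

// Called internally when the request finishes.
- (void)p_requestCompleted {
    if (_completionHandler) {
        _completionHandler(_downloadedData);
    }
    // The stored block (directly or indirectly) captured self, and self
    // owned the block; releasing it here breaks the retain cycle.
    _completionHandler = nil;
}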


Item 41: Prefer dispatch queues to locks for synchronization

Some knowledge points:

(1) Synchronization with a concurrent queue and barrier blocks

// Making the concurrent approach work with barriers.
// Note: barriers only take effect on a concurrent queue you create yourself;
// on a global concurrent queue, dispatch_barrier_async behaves like dispatch_async.
_syncQueue = dispatch_queue_create("com.effectiveobjectivec.syncQueue", DISPATCH_QUEUE_CONCURRENT);
// …

- (NSString*)someString {
    __block NSString *localSomeString;
    dispatch_sync(_syncQueue, ^{
        localSomeString = _someString;
    });
    return localSomeString;
}

- (void)setSomeString:(NSString*)someString {
    dispatch_barrier_async(_syncQueue, ^{
        _someString = someString;
    });
}

Remember:

(1) Dispatch queues can be used to provide synchronization semantics and offer a simpler alternative to @synchronized blocks and NSLock objects (a sketch of the @synchronized version follows below for comparison).

(2) Mixing synchronous and asynchronous dispatch can provide the same synchronized behavior as ordinary locking, but without blocking the calling thread for the asynchronous dispatches (the setter above, for example).

(3) Concurrent queues and barrier blocks can make synchronized behavior more efficient.
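
For comparison with point (1), a sketch of the @synchronized accessors that the queue-and-barrier approach above replaces; every reader and writer here contends for the same lock on self.

- (NSString*)someString {
    @synchronized(self) {
        return _someString;
    }
}

- (void)setSomeString:(NSString*)someString {
    @synchronized(self) {
        _someString = someString;
    }
}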


Item 42: Prefer GCD to performSelector and friends

Some knowledge points:

(1) Use dispatch_after

[self performSelector:@selector(doSomething) withObject:nil afterDelay:5.0];

double delayInSeconds = 5.0;
dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(delayInSeconds * NSEC_PER_SEC));
dispatch_after(popTime, dispatch_get_main_queue(), ^(void){
    [self doSomething];
});

(2) Use dispatch_async

[self performSelectorOnMainThread:@selector(doSomething) withObject:nil waitUntilDone:NO];

dispatch_async(dispatch_get_main_queue(), ^{
    [self doSomething];
});

(3) Use dispatch_sync

[self performSelectorOnMainThread:@selector(doSomething) withObject:nil waitUntilDone:YES];

dispatch_sync(dispatch_get_main_queue(), ^{
    [self doSomething];
});

Remember:

(1) The performSelector family of methods is potentially dangerous with respect to memory management. When there is no way to determine which selector will be performed, ARC cannot insert the appropriate memory-management calls.

(2) The performSelector family of methods is also very limited with respect to the return type and the number of parameters that can be passed.

(3) The performSelector variants that run a selector on a different thread, or after a delay, are better replaced with the corresponding GCD calls using blocks, as shown above.


Item 43: Know when to use GCD and when to use operation queues

Remember:

(1) Dispatch queues are not the only solution to multithreading and task management.

(2) Operation queues (NSOperationQueue) provide a higher-level, Objective-C API on top of what plain GCD offers. They also support more complex behavior, such as dependencies between operations and cancellation, which would require extra code to build with GCD (see the sketch below).
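
A hedged sketch of point (2), with illustrative operation names: dependencies and cancellation come built into NSOperationQueue, whereas plain GCD would need extra bookkeeping code to achieve the same thing.

NSOperationQueue *queue = [[NSOperationQueue alloc] init];

NSBlockOperation *download = [NSBlockOperation blockOperationWithBlock:^{
    NSLog(@"Downloading…");
}];
NSBlockOperation *parse = [NSBlockOperation blockOperationWithBlock:^{
    NSLog(@"Parsing…");
}];

// 'parse' will not start until 'download' has finished.
[parse addDependency:download];

[queue addOperations:@[download, parse] waitUntilFinished:NO];

// Operations that have not started yet can also be cancelled:
// [parse cancel];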


Item 44: Use dispatch groups to take advantage of platform scaling

Some knowledge points:

dispatch_group_async    // submit a task to a queue as part of a group
dispatch_group_wait     // block until all tasks in the group have completed
dispatch_group_notify   // run a block when all tasks in the group have completed
dispatch_apply          // run a block a given number of times (iteration)
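
A short sketch using the functions listed above: a set of tasks is submitted to a concurrent queue as one group, and a notification block runs once every task in the group has finished.

dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_t group = dispatch_group_create();

for (NSUInteger i = 0; i < 10; i++) {
    dispatch_group_async(group, queue, ^{
        NSLog(@"task %lu running", (unsigned long)i);
    });
}

// Either block the current thread until every task has finished…
// dispatch_group_wait(group, DISPATCH_TIME_FOREVER);

// …or, better, be notified asynchronously on a chosen queue.
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    NSLog(@"all tasks in the group have finished");
});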

Remember:

(1) Dispatch groups are used to group a set of tasks; you can be notified, or wait, until the whole group has finished executing.

(2) A dispatch group can run its tasks concurrently on a concurrent queue, in which case GCD schedules the tasks according to available system resources. Implementing that scheduling yourself would require a lot of code.


Item 45: Use dispatch_once for thread-safe single-time code execution

Some knowledge points:

// dispatch_once singleton initialisation
+ (id)sharedInstance {
    static EOCClass *sharedInstance = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        sharedInstance = [[self alloc] init];
    });
    return sharedInstance;
}

Remember:

(1) Thread-safe single-time code execution, such as singleton initialization, is a common requirement; GCD provides an easy-to-use tool for it in the dispatch_once function.

(2) The token (dispatch_once_t) should be declared in static or global scope so that the same token is passed in on every call for a given block that must execute only once.


Item 46: Avoid dispatch_get_current_queue

Some knowledge points:

// Queue-specific data example
dispatch_queue_t queueA = dispatch_queue_create("com.effectiveobjectivec.queueA", NULL);
dispatch_queue_t queueB = dispatch_queue_create("com.effectiveobjectivec.queueB", NULL);
dispatch_set_target_queue(queueB, queueA);

static int kQueueSpecific;
CFStringRef queueSpecificValue = CFSTR("queueA");
dispatch_queue_set_specific(queueA,
                            &kQueueSpecific,
                            (void*)queueSpecificValue,
                            (dispatch_function_t)CFRelease);

dispatch_sync(queueB, ^{
    dispatch_block_t block = ^{ NSLog(@"No deadlock!"); };
    CFStringRef retrievedValue = dispatch_get_specific(&kQueueSpecific);
    if (retrievedValue) {
        block();
    } else {
        dispatch_sync(queueA, block);
    }
});

Remember:

(1) The dispatch_get_current_queue function does not, in general, behave the way you might expect. It has been deprecated and should now be used only for debugging.

(2) Dispatch queues are organized into a hierarchy (via target queues), so the "current queue" cannot be described by a single queue object.

(3) Queue-specific data can be used to solve the usual problem that dispatch_get_current_queue was used for: avoiding deadlocks in non-reentrant code, as in the example above.
