iOS Multithreading: GCD


What is GCD?

GCD (Grand Central Dispatch) is Apple's C-level API for multithreaded programming. Combined with blocks, GCD simplifies multithreaded code, making thread operations safer and more efficient.

Before GCD appeared, the Cocoa framework provided NSObject methods such as

performSelectorInBackground:withObject:

performSelectorOnMainThread:withObject:waitUntilDone:

to simplify multithreaded programming.

GCD can solve the following problems frequently encountered in multithreaded programming:
1. Data races (for example, multiple threads updating the same memory address at the same time)

2. Deadlock (threads waiting on each other)

3. Too many threads consuming a large amount of memory

In iOS, if you run long operations on the main thread, they block the main thread's run loop (the main event loop). As a result, the user interface cannot update and the application appears frozen.

 

Dispatch Queue

A dispatch queue is GCD's abstraction of a FIFO queue of tasks.

There are two types of queue:

Serial Dispatch Queue: waits for the currently executing task to finish before starting the next one.

Concurrent Dispatch Queue: does not wait for the currently executing task to finish before starting the next one.

In other words, a Serial Dispatch Queue is serial, and a Concurrent Dispatch Queue is concurrent.

Specifically, a Serial Dispatch Queue uses only one thread to process its tasks in order, while a Concurrent Dispatch Queue may use multiple threads; the exact number of threads created is decided by the operating system based on available resources.

Therefore, code processed on a Serial Dispatch Queue completes in order, while code on a Concurrent Dispatch Queue may complete out of order, but it runs more efficiently.

 

API

dispatch_queue_create

Used to create a queue on which tasks are executed.

Parameter List

const char *label: the name of the queue, which serves as its unique identifier. The name is displayed as the dispatch queue name in the Xcode debugger and in Instruments.

dispatch_queue_attr_t attr: sets the queue type, that is, concurrent queue or serial queue. If the value is NULL, the queue defaults to serial.

Return Value

A dispatch_queue_t variable.
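
A minimal sketch of creating one serial and one concurrent queue (the queue labels are illustrative):

    // NULL (or DISPATCH_QUEUE_SERIAL) creates a serial queue.
    dispatch_queue_t serialQueue = dispatch_queue_create("com.example.serial", NULL);
    // DISPATCH_QUEUE_CONCURRENT creates a concurrent queue.
    dispatch_queue_t concurrentQueue = dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);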

In addition to queues you create yourself, the system provides two kinds of queues: the main dispatch queue and the global dispatch queues.

The main queue is obtained with

dispatch_get_main_queue()

A global queue is obtained with

dispatch_get_global_queue(). The global queues are divided into four priority levels:

HIGH, DEFAULT, LOW, and BACKGROUND
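
A minimal sketch of obtaining the system queues:

    // The serial queue bound to the main thread, used for UI work.
    dispatch_queue_t mainQueue = dispatch_get_main_queue();
    // A concurrent global queue at default priority; the second argument is reserved and should be 0.
    dispatch_queue_t globalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);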

 

dispatch_async

Adds a block to the specified queue for asynchronous execution. This hides the implementation details of multithreading and automatically manages the threads that run the work for us.
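
A typical pattern, sketched below, is to run slow work on a global queue and then hop back to the main queue to update the UI (the work inside the blocks is a placeholder):

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Long-running work off the main thread.
        NSLog(@"doing background work");
        dispatch_async(dispatch_get_main_queue(), ^{
            // Back on the main thread; safe to update the UI here.
            NSLog(@"update UI");
        });
    });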

 

dispatch_after

Similar to a delay function: it submits a block to a specified queue after a given delay.

    dispatch_time_t time = dispatch_time(DISPATCH_TIME_NOW, 3ull * NSEC_PER_SEC);
    dispatch_after(time, dispatch_get_main_queue(), ^{
        NSLog(@"wait 3 seconds");
    });

 

dispatch_group_notify

Used to listen for a group of tasks submitted to a queue; a callback is performed after all of the tasks have completed.

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();
    dispatch_group_async(group, queue, ^{
        NSLog(@"1");
    });
    dispatch_group_async(group, queue, ^{
        NSLog(@"2");
    });
    dispatch_group_async(group, queue, ^{
        NSLog(@"3");
    });
    dispatch_group_notify(group, queue, ^{
        NSLog(@"finish");
    });

For a series of blocks submitted to the same queue: if the queue is serial, execution is sequential, so you can simply perform the finishing work in the last task. A concurrent queue, however, runs the blocks in parallel, so if you want to be notified when all of them have finished, you must use this method.

dispatch_group_wait is similar to dispatch_group_notify, except that a wait time can be specified. If the group's tasks have not all completed when the timeout expires, the wait returns and the tasks keep running; you can also pass DISPATCH_TIME_FOREVER, in which case it does not return until everything has finished, playing the same role as dispatch_group_notify.
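
A minimal sketch of dispatch_group_wait with a timeout (the task and log messages are illustrative):

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();
    dispatch_group_async(group, queue, ^{
        NSLog(@"task");
    });
    // Wait at most 1 second; dispatch_group_wait returns 0 if all tasks finished in time.
    dispatch_time_t timeout = dispatch_time(DISPATCH_TIME_NOW, 1ull * NSEC_PER_SEC);
    long result = dispatch_group_wait(group, timeout);
    if (result == 0) {
        NSLog(@"all tasks finished");
    } else {
        NSLog(@"timed out; tasks are still running");
    }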

 

dispatch_barrier_async

This operation is mainly used to prevent resource contention. In a concurrent queue, blocks run in parallel, in no particular order, on however many threads are created. If a concurrent queue contains both write operations and read operations, the write operations can race with each other, and the read operations can read dirty data. For a block that must not run in parallel with the other blocks in a concurrent queue, use the dispatch_barrier_async method.
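
A minimal sketch of the reader/writer pattern with a barrier, assuming a custom concurrent queue (barriers have no special effect on the global queues); the shared dictionary and queue label are illustrative:

    dispatch_queue_t rwQueue = dispatch_queue_create("com.example.rw", DISPATCH_QUEUE_CONCURRENT);
    NSMutableDictionary *cache = [NSMutableDictionary new];

    // Reads may run concurrently with each other.
    dispatch_async(rwQueue, ^{
        NSLog(@"read: %@", cache[@"key"]);
    });

    // The barrier block waits for earlier blocks to finish and runs alone,
    // so the write cannot race with any read or another write.
    dispatch_barrier_async(rwQueue, ^{
        cache[@"key"] = @"value";
        NSLog(@"write");
    });

    dispatch_async(rwQueue, ^{
        NSLog(@"read: %@", cache[@"key"]);
    });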

 

dispatch_sync

Unlike dispatch_async, dispatch_sync blocks the calling thread until the submitted block has finished, so it is used for synchronization between threads. For example, if thread A must wait for something done on thread B before continuing, dispatch_sync can be used.

Note that you must not call dispatch_sync targeting the serial queue you are currently executing on, because this causes a deadlock. For example:

    // Deadlock 1: dispatch_sync onto the main queue from code running on the main queue.
    dispatch_queue_t queue1 = dispatch_get_main_queue();
    dispatch_sync(queue1, ^{
        NSLog(@"Synchronize main queue operations in main queue");
    });

    // Deadlock 2: dispatch_sync onto a serial queue from a block already running on it.
    dispatch_queue_t queue = dispatch_queue_create("com.queue.www", NULL);
    dispatch_async(queue, ^{
        dispatch_sync(queue, ^{
            NSLog(@"Synchronize serial queue operations in new serial queue");
        });
    });

Therefore, when using a serial queue, do not dispatch_sync onto the queue you are already running on.
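
As a sketch of correct usage, synchronizing from one queue onto a different serial queue is fine (the queue label here is illustrative):

    dispatch_queue_t workQueue = dispatch_queue_create("com.example.work", NULL);
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Safe: the calling block is not running on workQueue.
        dispatch_sync(workQueue, ^{
            NSLog(@"work done before the caller continues");
        });
        NSLog(@"caller continues after the synchronous block");
    });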

 

dispatch_apply

The dispatch_apply function combines dispatch_sync with the idea of a dispatch group: it submits the specified block to the specified dispatch queue a specified number of times and waits until all of the iterations have finished. For example:

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_apply(10, queue, ^(size_t index) {
        NSLog(@"%ld", index);
    });
    NSLog(@"apply finish");

Sample output:

    2015-08-02 09:38:18.296 Dispatch[7388:2035125] 4
    2015-08-02 09:38:18.296 Dispatch[7388:2035244] 2
    2015-08-02 09:38:18.296 Dispatch[7388:2035241] 1
    2015-08-02 09:38:18.296 Dispatch[7388:2035259] 6
    2015-08-02 09:38:18.296 Dispatch[7388:2035243] 0
    2015-08-02 09:38:18.296 Dispatch[7388:2035257] 3
    2015-08-02 09:38:18.296 Dispatch[7388:2035258] 5
    2015-08-02 09:38:18.296 Dispatch[7388:2035260] 7
    2015-08-02 09:38:18.296 Dispatch[7388:2035125] 8
    2015-08-02 09:38:18.296 Dispatch[7388:2035244] 9
    2015-08-02 09:38:18.296 Dispatch[7388:2035125] apply finish

As the output shows, the iterations run concurrently on the queue, but dispatch_apply synchronizes with the calling thread: execution does not continue past the call until every iteration has completed.

 

dispatch_semaphore

dispatch_barrier_async can avoid the data-race problems described above, but sometimes finer-grained control is needed.

For example, suppose 10000 objects are added to an array from a concurrent queue. A concurrent queue spawns multiple threads, and multiple threads accessing the array at the same time is prone to problems. We need to ensure that only one thread operates on the array at a time, as shown below:

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_semaphore_t semaphore = dispatch_semaphore_create(1);
    NSMutableArray *array = [NSMutableArray new];
    for (int i = 0; i < 10000; i++)
    {
        dispatch_async(queue, ^{
            dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
            [array addObject:[NSNumber numberWithInt:i]];
            NSLog(@"add %d", i);
            dispatch_semaphore_signal(semaphore);
        });
    }

A brief description of the semaphore: the parameter of dispatch_semaphore_create specifies the semaphore's initial counter value. When the counter is greater than 0, a thread calling dispatch_semaphore_wait may proceed, and the counter is decremented by 1. When the counter is 0, the waiting thread blocks until a dispatch_semaphore_signal call restores the counter. In other words, an initial value of N allows at most N threads to access the protected resource concurrently.
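
For instance, a minimal sketch in which at most two blocks run the protected section at the same time (the queue, loop count, and log messages are illustrative):

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    // Initial value 2: at most two threads may hold the semaphore at once.
    dispatch_semaphore_t semaphore = dispatch_semaphore_create(2);
    for (int i = 0; i < 6; i++)
    {
        dispatch_async(queue, ^{
            dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
            NSLog(@"task %d running", i);
            // ... protected work here ...
            dispatch_semaphore_signal(semaphore);
        });
    }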

For example, if one thread adds data while another thread removes it, two semaphores can be used so that the threads cooperate (each removal waits for the corresponding addition):

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_semaphore_t semaphoreAdd = dispatch_semaphore_create(1);
    dispatch_semaphore_t semaphoreRemove = dispatch_semaphore_create(0);
    NSMutableArray *array = [NSMutableArray new];
    for (int i = 0; i < 2; i++)
    {
        dispatch_async(queue, ^{
            dispatch_semaphore_wait(semaphoreAdd, DISPATCH_TIME_FOREVER);
            [array addObject:[NSNumber numberWithInt:i]];
            NSLog(@"add %lu", [array count]);
            dispatch_semaphore_signal(semaphoreRemove);
        });
        dispatch_async(queue, ^{
            dispatch_semaphore_wait(semaphoreRemove, DISPATCH_TIME_FOREVER);
            [array removeObject:[NSNumber numberWithInt:i]];
            NSLog(@"remove %lu", [array count]);
            dispatch_semaphore_signal(semaphoreAdd);
        });
    }

 

 

dispatch_once

dispatch_once marks an operation so that it is executed only once. It is generally used to create singleton objects. Creating a singleton without dispatch_once is not thread-safe unless you add locking; dispatch_once solves this problem cleanly.

    + (instancetype)sharedInstance {
        static CustomObject *obj;
        static dispatch_once_t once;
        dispatch_once(&once, ^{
            obj = [[CustomObject alloc] init];
        });
        return obj;
    }

 
