iOS Multithreading: GCD

Source: Internet
Author: User

What is GCD

GCD (Grand Central Dispatch) is Apple's abstraction for multithreaded programming: a set of C-level APIs that, combined with blocks, simplify multithreading and let us perform threading operations more safely and efficiently.

Before GCD appeared, the Cocoa framework provided NSObject methods such as

performSelectorInBackground:withObject:

performSelectorOnMainThread:withObject:waitUntilDone:

to simplify multithreaded programming.

GCD helps address problems that frequently occur in multithreaded programming:

1. Data races (e.g., multiple threads updating the same memory address at the same time)

2. Deadlocks (threads waiting on each other)

3. Excessive memory consumption caused by creating too many threads

In iOS, if time-consuming work runs on the main thread, it blocks the main thread's run loop (the main event loop), so the user interface cannot be updated and the application's screen appears frozen.

Dispatch Queue

A dispatch queue is GCD's abstraction for a FIFO queue of tasks to be executed.

Queues come in two types:

A serial dispatch queue waits for the currently executing task to finish before starting the next one.

A concurrent dispatch queue does not wait for the current task to finish before starting the next.

In other words, a serial dispatch queue executes tasks serially, while a concurrent dispatch queue executes them concurrently.

In terms of threads, a serial dispatch queue creates only one thread and processes tasks in sequence, while a concurrent dispatch queue may use multiple threads; how many are actually created is decided by the operating system based on available resources.

So code processed on a serial dispatch queue runs in order, while code on a concurrent dispatch queue runs out of order but is relatively more efficient.

API

dispatch_queue_create

Creates a queue on which tasks are executed.

Parameter list

const char *label: the name of the queue, which serves as its unique identifier and is displayed as the dispatch queue name in the Xcode debugger and in Instruments.

dispatch_queue_attr_t attr: sets the type of queue, i.e., concurrent queue or serial queue; passing NULL creates a serial queue by default.

Return value

A dispatch_queue_t variable.

The system also provides two built-in queues: the main dispatch queue and the global dispatch queue.

The main queue is obtained with

dispatch_get_main_queue()

and the global queue with

dispatch_get_global_queue(), whose priority is divided into

four levels: HIGH, DEFAULT, LOW, and BACKGROUND.

dispatch_async

Adds a block to the specified queue and executes it asynchronously; it hides the implementation details of multithreading and automatically provides threads to run our work.

dispatch_after

Similar to a delay function; you can specify the queue on which the deferred operation runs:

    dispatch_time_t time = dispatch_time(DISPATCH_TIME_NOW, 3ull * NSEC_PER_SEC);
    dispatch_after(time, dispatch_get_main_queue(), ^{
        NSLog(@"Wait 3 seconds");
    });

dispatch_group_notify

Monitors execution on a queue; a callback runs once all tasks in the group have completed:

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();
    dispatch_group_async(group, queue, ^{
        NSLog(@"1");
    });
    dispatch_group_async(group, queue, ^{
        NSLog(@"2");
    });
    dispatch_group_async(group, queue, ^{
        NSLog(@"3");
    });
    dispatch_group_notify(group, queue, ^{
        NSLog(@"Finish");
    });

For a series of blocks executed on the same queue: if the queue is serial, the blocks run in order, so any finishing work can simply be placed in the last task. A concurrent queue, however, runs blocks in parallel, so if you want to be notified when everything has finished, you need this method.

dispatch_group_wait is similar to notify, except that the wait method takes a timeout. If the group's tasks have not all finished within that time, execution continues anyway; you can also pass DISPATCH_TIME_FOREVER to wait indefinitely, which makes it play the same role as notify.

dispatch_barrier_async

This exists primarily to prevent resource contention. On a concurrent queue, blocks execute out of order across however many threads are created. If a concurrent queue contains two write operations and a read operation, the two writes can contend with each other, and the read can pick up dirty data. A block that must not run in parallel with other operations on a concurrent queue should therefore be submitted with dispatch_barrier_async to prevent resource contention.

dispatch_sync

Unlike dispatch_async, dispatch_sync is used for synchronization between threads: if thread A must not proceed until some work has finished on thread B, dispatch_sync is what you need.

Note, however, that a queue must not synchronously dispatch onto itself from within its own execution; that causes a deadlock. For example:

    dispatch_queue_t queue1 = dispatch_get_main_queue();
    dispatch_sync(queue1, ^{
        NSLog(@"Main queue synchronously dispatching to the main queue"); // deadlocks when run on the main thread
    });

    dispatch_queue_t queue = dispatch_queue_create("com.queue.www", NULL);
    dispatch_async(queue, ^{
        dispatch_sync(queue, ^{
            NSLog(@"Synchronously dispatching to the same serial queue"); // deadlocks
        });
    });

So never synchronously dispatch a serial queue onto itself.

dispatch_apply

The dispatch_apply function is a combination of dispatch_sync and a dispatch group: it submits the specified block to the specified dispatch queue the given number of times and waits for all executions to finish. For example:

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_apply(10, queue, ^(size_t index) {
        NSLog(@"%ld", index);
    });
    NSLog(@"Apply Finish");

Sample output:

    2015-08-02 09:38:18.296 dispatch[7388:2035125] 4
    2015-08-02 09:38:18.296 dispatch[7388:2035244] 2
    2015-08-02 09:38:18.296 dispatch[7388:2035241] 1
    2015-08-02 09:38:18.296 dispatch[7388:2035259] 6
    2015-08-02 09:38:18.296 dispatch[7388:2035243] 0
    2015-08-02 09:38:18.296 dispatch[7388:2035257] 3
    2015-08-02 09:38:18.296 dispatch[7388:2035258] 5
    2015-08-02 09:38:18.296 dispatch[7388:2035260] 7
    2015-08-02 09:38:18.296 dispatch[7388:2035125] 8
    2015-08-02 09:38:18.296 dispatch[7388:2035244] 9
    2015-08-02 09:38:18.296 dispatch[7388:2035125] Apply finish

As you can see, this function synchronizes with the calling thread: it waits until all iterations on the queue have finished executing.

dispatch_semaphore

dispatch_barrier_async can avoid such data-contention problems, but sometimes finer-grained control is required.

For example, suppose we add 10,000 objects to an array using a concurrent queue. A concurrent queue may spawn multiple threads, and several of them are likely to access the array at once, which easily causes problems. We need to ensure that only one thread manipulates the array at a time, as follows:

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_semaphore_t semaphore = dispatch_semaphore_create(1);
    NSMutableArray *array = [NSMutableArray new];
    for (int i = 0; i < 10000; i++)
    {
        dispatch_async(queue, ^{
            dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
            [array addObject:[NSNumber numberWithInt:i]];
            NSLog(@"add %d", i);
            dispatch_semaphore_signal(semaphore);
        });
    }

A quick note on the semaphore count, which is the parameter passed to dispatch_semaphore_create. It specifies the initial value of the semaphore; threads may proceed as long as the semaphore is greater than 0. Each successful access decrements the semaphore by 1, and once it reaches 0 any further dispatch_semaphore_wait call blocks until a dispatch_semaphore_signal call restores the count. So, roughly speaking, a semaphore created with a count of n allows n threads to access the resource concurrently.

As another example, suppose there are two threads, one adding data and one removing it; two semaphores can then coordinate the threads:

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_semaphore_t semaphoreAdd = dispatch_semaphore_create(1);
    dispatch_semaphore_t semaphoreRemove = dispatch_semaphore_create(0);
    NSMutableArray *array = [NSMutableArray new];
    for (int i = 0; i < 2; i++)
    {
        dispatch_async(queue, ^{
            dispatch_semaphore_wait(semaphoreAdd, DISPATCH_TIME_FOREVER);
            [array addObject:[NSNumber numberWithInt:i]];
            NSLog(@"add %lu", (unsigned long)[array count]);
            dispatch_semaphore_signal(semaphoreRemove);
        });
        dispatch_async(queue, ^{
            dispatch_semaphore_wait(semaphoreRemove, DISPATCH_TIME_FOREVER);
            [array removeObject:[NSNumber numberWithInt:i]];
            NSLog(@"remove %lu", (unsigned long)[array count]);
            dispatch_semaphore_signal(semaphoreAdd);
        });
    }

dispatch_once

dispatch_once marks an operation as to be performed only once and is typically used to create singleton objects. Creating a singleton without dispatch_once is not thread-safe unless you add a lock; dispatch_once solves this cleanly.

    + (instancetype)sharedInstance
    {
        static CustomObject *obj;
        static dispatch_once_t once;
        dispatch_once(&once, ^{
            obj = [[CustomObject alloc] init];
        });
        return obj;
    }

