dispatch_semaphore_t (dispatch group and semaphore) in iOS multithreading


In Windows, thread synchronization can be implemented with Critical Section, Mutex, Semaphore, Event, and other primitives.

On the iOS platform, dispatch_semaphore_t can be used for synchronization when doing simple multithreaded programming with GCD.

There is no corresponding Event primitive on the iOS platform. For scenarios that suit the Event pattern, dispatch_semaphore_t can be used to simulate an auto-reset event.
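A minimal sketch of this simulation (the variable names are illustrative, not from the original):

// A semaphore created with a count of 0 behaves like an unsignaled auto-reset event:
// a wait blocks until someone signals, and the wait consumes the signal ("auto-reset").
dispatch_semaphore_t eventSemaphore = dispatch_semaphore_create(0);

// Waiting side
dispatch_async(dispatch_get_global_queue(0, 0), ^{
    dispatch_semaphore_wait(eventSemaphore, DISPATCH_TIME_FOREVER);
    // ... handle the "event" ...
});

// Signaling side: wakes exactly one waiter, like SetEvent on an auto-reset event
dispatch_semaphore_signal(eventSemaphore);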


A dispatch source is similar in concept to a RunLoop source, but easier to use. A dispatch source is essentially a special producer-consumer arrangement: the source represents the produced data, and when new data arrives it automatically runs the corresponding block on the queue specified at creation time (the consumer queue). Synchronization between production and consumption is managed automatically by the dispatch source.

To use a dispatch source, follow these steps:

1. dispatch_source_t source = dispatch_source_create(type, handle, mask, queue); // Create a dispatch source. In the example below the type is DISPATCH_SOURCE_TYPE_DATA_ADD, so data merged into the source is combined by addition. The last parameter specifies the dispatch queue on which the event handler runs.

2. dispatch_source_set_event_handler(source, ^{ // Set the block that responds to dispatch source events; it runs on the queue specified when the source was created
    // dispatch_source_get_data(source) can be used here to obtain the source's data
});

3. dispatch_resume(source); // The dispatch source is created in a suspended state, so it must be resumed before it will fire

4. dispatch_source_merge_data(source, value); // Merge data into the dispatch source. In the event handler block, dispatch_source_get_data(source) returns the merged value.

Simple, isn't it? No synchronization code needs to be written by hand. For example, a network request that then updates the UI can be written as follows:

dispatch_source_t source = dispatch_source_create(DISPATCH_SOURCE_TYPE_DATA_ADD, 0, 0, dispatch_get_global_queue(0, 0));

dispatch_source_set_event_handler(source, ^{
    dispatch_sync(dispatch_get_main_queue(), ^{
        // Update the UI
    });
});

dispatch_resume(source);

dispatch_async(dispatch_get_global_queue(0, 0), ^{
    // Network request
    dispatch_source_merge_data(source, 1); // Notify the source
});

The dispatch source also supports other system-provided source types, including timers, monitoring files for reading or writing, monitoring the file system, and monitoring signals or processes. The calling pattern is basically the same as above, except that the system triggers the event automatically. For example, a dispatch timer:

dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);

dispatch_source_set_timer(timer, dispatch_walltime(NULL, 0), 10 * NSEC_PER_SEC, 1 * NSEC_PER_SEC); // Fire every 10 seconds, with a leeway of 1 second

dispatch_source_set_event_handler(timer, ^{
    // Timed processing
});

dispatch_resume(timer);

The other dispatch source types will not be covered one by one here; see the official documentation for details: https://developer.apple.com/library/ios/documentation/General/Conceptual/ConcurrencyProgrammingGuide/GCDWorkQueues/GCDWorkQueues.html#//apple_ref/doc/uid/TP40008091-CH103-SW1

Finally, some other dispatch source functions are listed below:

uintptr_t dispatch_source_get_handle(dispatch_source_t source); // Obtains the second parameter passed to dispatch_source_create.

unsigned long dispatch_source_get_mask(dispatch_source_t source); // Obtains the third parameter passed to dispatch_source_create.

void dispatch_source_cancel(dispatch_source_t source); // Cancels the dispatch source's event handling, so its block is no longer called. By contrast, dispatch_suspend only pauses the dispatch source.

long dispatch_source_testcancel(dispatch_source_t source); // Checks whether the dispatch source has been cancelled. A non-zero return value means it has been cancelled.

void dispatch_source_set_cancel_handler(dispatch_source_t source, dispatch_block_t cancel_handler); // Sets the block called when the dispatch source is cancelled. It is generally used to close files or sockets and release related resources (see the sketch after this list).

void dispatch_source_set_registration_handler(dispatch_source_t source, dispatch_block_t registration_handler); // Sets a block that is called once the dispatch source starts running; after it runs, the block is released. This function can be called at any time while the dispatch source exists.
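As a hedged illustration of the cancel handler, here is a sketch of a DISPATCH_SOURCE_TYPE_READ source that closes its file descriptor when cancelled (the path and variable names are assumptions, not from the original):

// Sketch: monitor a file descriptor for readable data; the cancel handler releases the descriptor.
// Requires <fcntl.h> and <unistd.h>.
int fd = open("/tmp/sample.txt", O_RDONLY);
dispatch_source_t readSource = dispatch_source_create(DISPATCH_SOURCE_TYPE_READ, fd, 0, dispatch_get_global_queue(0, 0));

dispatch_source_set_event_handler(readSource, ^{
    char buffer[4096];
    read(fd, buffer, sizeof(buffer)); // consume the available bytes
    // ... process the data ...
});

dispatch_source_set_cancel_handler(readSource, ^{
    close(fd); // safe to close only after the source is cancelled and will no longer fire
});

dispatch_resume(readSource);
// Later, when monitoring is no longer needed:
// dispatch_source_cancel(readSource);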


A Preliminary Study on iOS multithreading (8) -- dispatch queue

The core of GCD programming is the dispatch queue: every dispatched block is eventually placed on a queue for execution. It is similar to NSOperationQueue, but more complex and powerful, and queues can be nested. Combined with blocks, GCD fully realizes the features of function closures.

The following methods can be used to generate a dispatch queue:

1. dispatch_queue_t queue = dispatch_queue_create("com.dispatch.serial", DISPATCH_QUEUE_SERIAL); // Creates a serial queue. Blocks in the queue are executed one at a time in first-in-first-out (FIFO) order, effectively on a single thread. The first parameter is the queue label, which is useful when debugging; it is best not to reuse the same label.

2. dispatch_queue_t queue = dispatch_queue_create("com.dispatch.concurrent", DISPATCH_QUEUE_CONCURRENT); // Creates a concurrent queue; its blocks are distributed across multiple threads for execution.

3. dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0); // Obtains one of the global concurrent queues created by the process. The priority parameter selects the high-, default-, or low-priority queue. Because these queues are created by the system, dispatch_resume() and dispatch_suspend() cannot be used to control them. Note that three queues do not mean three threads; more threads may exist, since a concurrent queue automatically creates a reasonable number of threads for the actual workload. You can also think of a dispatch queue as managing a thread pool that is transparent to the program logic.

The official documentation describes three global concurrent queues, but there is actually a fourth, lower-priority queue obtained with DISPATCH_QUEUE_PRIORITY_BACKGROUND. While debugging in Xcode you can observe the various dispatch queues in use.

4. dispatch_queue_t queue = dispatch_get_main_queue(); // Obtains the dispatch queue of the main thread, which is a serial queue. As with the global queues, its execution cannot be suspended or resumed.

Next, dispatch_async or dispatch_sync can be used to submit the block to be run.

dispatch_async(queue, ^{
    // Block code
}); // Executes the block asynchronously; the function returns immediately

dispatch_sync(queue, ^{
    // Block code
}); // Executes the block synchronously; the function does not return until the block has finished. As an optimization, GCD may run the block directly on the current thread, so sometimes no new thread is created.

Practical experience suggests avoiding dispatch_sync as much as possible; when nested, it can easily deadlock the program.

If queue1 is a serial queue, this code deadlocks immediately:

dispatch_sync(queue1, ^{
    dispatch_sync(queue1, ^{
        // ...
    });
    // ...
});

Consider why the following code always deadlocks when it is executed on the main thread:

dispatch_sync(dispatch_get_main_queue(), ^{
    // ...
});
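One common way to guard against this particular deadlock is to check whether the caller is already on the main thread; a minimal sketch (the helper function name is an illustrative assumption):

// Sketch: run a block synchronously on the main queue without deadlocking
// when the caller is already the main thread.
void runOnMainQueueSync(dispatch_block_t block) {
    if ([NSThread isMainThread]) {
        block(); // already on the main thread: just run it
    } else {
        dispatch_sync(dispatch_get_main_queue(), block);
    }
}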

In practice, the following is a common multithreaded pattern for requesting data from the network and then updating the UI:

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Request data from the network on a background thread
    // Update the data model
    dispatch_sync(dispatch_get_main_queue(), ^{
        // Update the UI on the main thread
    });
});

The background work and the UI update code sit side by side, so the logic is clear at a glance.

The dispatch queue is thread-safe and can be used to implement locking. For example, when multiple threads write to the same database, each write must be kept in order and complete. This can be achieved simply with a serial queue:

dispatch_queue_t queue1 = dispatch_queue_create("com.dispatch.writedb", DISPATCH_QUEUE_SERIAL);

- (void)writeDB:(NSData *)data
{
    dispatch_async(queue1, ^{
        // Write to the database
    });
}

Each call to writeDB: is queued behind the previous one and does not start until it has finished, so the writeDB: method is thread-safe.
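A matching read can use dispatch_sync on the same serial queue, so a read does not start until every previously queued write has finished; a sketch (the readDB method is an illustrative assumption, not from the original):

// Sketch: a synchronous read serialized behind any pending writes on queue1.
- (NSData *)readDB
{
    __block NSData *result = nil;
    dispatch_sync(queue1, ^{
        // result = ... read from the database ...
    });
    return result;
}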

The dispatch queue API also provides other commonly used functions, including:

void dispatch_apply(size_t iterations, dispatch_queue_t queue, void (^block)(size_t)); // Executes the block repeatedly for the given number of iterations. Note that this function returns synchronously: it does not return until all iterations have finished. If an asynchronous return is needed, nest the call inside dispatch_async. Whether the iterations run concurrently or serially depends on whether the queue is concurrent or serial.

void dispatch_barrier_async(dispatch_queue_t queue, dispatch_block_t block); // Submits a barrier block: it does not start until all blocks submitted to the queue before it have finished, and blocks submitted after it do not start until the barrier block has finished (see the sketch after this list).

void dispatch_barrier_sync(dispatch_queue_t queue, dispatch_block_t block); // Same as above, except that it returns synchronously.

void dispatch_after(dispatch_time_t when, dispatch_queue_t queue, dispatch_block_t block); // Executes the block after a delay.
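As referenced above, here is a sketch of the classic reader-writer use of a barrier (the queue label and cache variable are illustrative assumptions); note that barriers only act as barriers on concurrent queues you create yourself, not on the global queues:

// Sketch: concurrent reads, exclusive writes protected by a barrier.
dispatch_queue_t cacheQueue = dispatch_queue_create("com.dispatch.cache", DISPATCH_QUEUE_CONCURRENT);
NSMutableDictionary *cache = [NSMutableDictionary dictionary];

// Read: runs concurrently with other reads
__block id value = nil;
dispatch_sync(cacheQueue, ^{
    value = cache[@"key"];
});

// Write: waits for in-flight reads to finish, and later reads wait for it
dispatch_barrier_async(cacheQueue, ^{
    cache[@"key"] = @"newValue";
});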

Finally, let's take a look at a very distinctive function of the dispatch queue:

void dispatch_set_target_queue(dispatch_object_t object, dispatch_queue_t queue);

It redirects the given object's work to a different queue. The object can be a dispatch queue or a dispatch source (covered in another post in this series). The retargeting can also be done dynamically, allowing queues to be scheduled and managed at runtime. For example, given two queues, dispatchA and dispatchB, we can target dispatchA at dispatchB:

dispatch_set_target_queue(dispatchA, dispatchB);

Then blocks that have not yet run on dispatchA will run on dispatchB. If dispatchA is suspended:

dispatch_suspend(dispatchA);

the blocks queued on dispatchA stop executing, while dispatchB's own blocks are not affected. Conversely, if dispatchB is suspended, dispatchA's blocks are suspended as well.

This simple example illustrates the flexibility of dispatch queue manipulation; its potential becomes apparent in practical applications.
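Another common use, sketched here with an assumed queue named logQueue, is lowering a private queue's priority by targeting it at the background global queue:

// Sketch: a private serial queue whose blocks run at background priority.
dispatch_queue_t logQueue = dispatch_queue_create("com.dispatch.logging", DISPATCH_QUEUE_SERIAL);
dispatch_set_target_queue(logQueue, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0));

dispatch_async(logQueue, ^{
    // Low-priority work, still executed one block at a time because logQueue is serial
});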

The dispatch queue does not support cancellation; there is no dispatch_cancel() function for removing queued blocks. Compared with NSOperationQueue, this is a small shortcoming.

A Preliminary Study on iOS multithreading (10) -- dispatch Synchronization


GCD supports dispatch queue synchronization in two ways: dispatch group and semaphore.

I. Dispatch group

1. Create a dispatch Group

dispatch_group_t group = dispatch_group_create();

2. Start the block in the dispatch queue and associate it with the group.

dispatch_group_async(group, queue, ^{
    //...
});

3. Wait until all blocks associated with the group have finished executing. A timeout parameter can also be specified.

dispatch_group_wait(group, DISPATCH_TIME_FOREVER);

4. Set a notification block for the group. After all blocks associated with the group have finished, this block is called. Similar in spirit to dispatch_barrier_async.

dispatch_group_notify(group, queue, ^{
    //...
});

5. Manually manage the running count of blocks associated with the group. The number of enter and leave calls must match.

dispatch_group_enter(group);

dispatch_group_leave(group);

Therefore, the following two usages are actually equivalent:

A)

dispatch_group_async(group, queue, ^{
    //...
});

B)

dispatch_group_enter(group);
dispatch_async(queue, ^{
    //...
    dispatch_group_leave(group);
});

Therefore, dispatch_group_enter, dispatch_group_leave, and dispatch_group_wait can be combined to synchronize with work that is not submitted through dispatch_group_async, such as a callback-based asynchronous API; a sketch follows.
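A sketch of that pattern, assuming a hypothetical callback-based method fetchDataWithCompletion: that is not part of the original article:

// Sketch: waiting for a callback-based asynchronous API with a dispatch group.
dispatch_group_t group = dispatch_group_create();

dispatch_group_enter(group);
[self fetchDataWithCompletion:^(NSData *data, NSError *error) { // hypothetical async API
    // ... store the result ...
    dispatch_group_leave(group); // must balance the enter above
}];

// Block until the callback fires (or use dispatch_group_notify to stay asynchronous).
dispatch_group_wait(group, DISPATCH_TIME_FOREVER);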

II. Dispatch semaphore

1. Create a semaphore. The initial value is the number of available resources; 0 means no resource is available, so a subsequent dispatch_semaphore_wait will block immediately.

dispatch_semaphore_t semaphore = dispatch_semaphore_create(0);

2. Wait for the signal. A timeout can be specified. A return value of 0 means the semaphore was signaled; a non-zero return value means the wait timed out.

dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);

3. Signal the semaphore. Returns non-zero if a waiting thread was woken up; otherwise returns 0.

dispatch_semaphore_signal(semaphore);

Finally, let's return to the producer-consumer example and see how synchronization can be achieved with a dispatch semaphore:

dispatch_semaphore_t sem = dispatch_semaphore_create(0);

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{ // Consumer
    while (condition) {
        if (dispatch_semaphore_wait(sem, dispatch_time(DISPATCH_TIME_NOW, 10 * NSEC_PER_SEC))) // Wait up to 10 seconds
            continue;
        // Consume the data
    }
});

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{ // Producer
    while (condition) {
        if (!dispatch_semaphore_signal(sem)) {
            sleep(1); // No consumer was waiting; back off for a moment
            continue;
        }
        // A consumer was notified
    }
});
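A common practical use of dispatch_semaphore_t, sketched here with a hypothetical sendRequestWithCompletion: method and client object that are not part of the original article, is waiting synchronously for an asynchronous callback:

// Sketch: block the current (non-main) thread until an asynchronous completion fires.
dispatch_semaphore_t done = dispatch_semaphore_create(0);
__block NSData *response = nil;

[client sendRequestWithCompletion:^(NSData *data) { // hypothetical async API
    response = data;
    dispatch_semaphore_signal(done); // wake the waiting thread
}];

dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER); // do not block the main thread like this
// response is now available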

