GCD Internal implementation


Dispatch Queue

The dispatch queue should be very familiar to developers and is used in many scenarios, but how does its internal implementation work?

    • A FIFO queue, implemented at the C level, for managing appended blocks
    • A lightweight signal for exclusive control, implemented with atomic functions
    • Some C-level containers for managing threads
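The second item — lightweight exclusive control built on atomic operations — can be sketched in portable C11. This is an illustrative model only; the names are invented and this is not libdispatch's actual code:

```c
#include <stdatomic.h>

/* Illustrative sketch (names invented, not libdispatch source): a lightweight
 * "signal" for exclusive control built from a C11 atomic flag. */
typedef struct {
    atomic_flag locked;
} lightweight_lock;

static void ll_init(lightweight_lock *l) {
    atomic_flag_clear(&l->locked);
}

static void ll_lock(lightweight_lock *l) {
    /* spin until the previous holder clears the flag */
    while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
        ;
}

static void ll_unlock(lightweight_lock *l) {
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}
```

A real implementation avoids busy-waiting under contention, but the acquire/release pairing around a single atomic flag is the essential idea.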

It is not difficult to imagine that implementing GCD requires these tools, but if that were all it took, a kernel-level implementation would not be needed. (In fact, there are ports of GCD that run on generic Linux systems.)

Some may even think that with enough effort spent writing thread-management code themselves, GCD would not be needed at all. Is that really so?

Let's review Apple's official notes first:

Typically, the code for thread management written in an application is implemented at the system level.

In fact, as that sentence says, thread management is implemented at the system level — in XNU, the kernel at the core of iOS and OS X. So no matter how hard programmers try to write thread-management code, it cannot outperform GCD, which is implemented at the XNU kernel level.

Using GCD is better than using generic multithreading APIs such as pthreads and NSThread directly. Moreover, by not having to write the same recurring boilerplate for operating threads, and by centralizing that logic in GCD, we gain a great deal. We should use GCD as much as possible, or APIs such as the Cocoa framework's NSOperationQueue class, which uses GCD internally.

First, let's confirm the software components used to implement the dispatch queue:

    • libdispatch — dispatch queue (user-level library)
    • Libc (pthreads) — pthread_workqueue
    • XNU kernel — workqueue

The GCD API that programmers use consists entirely of C functions in the libdispatch library. The dispatch queue itself is implemented as a FIFO queue built from structs and linked lists. This FIFO queue manages the blocks appended by functions such as dispatch_async, so blocks added one after another in a program are processed in first-in-first-out order (dispatch_after being the exception).
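The FIFO structure described above can be modeled as a singly linked list of structs with head and tail pointers. This is a simplified sketch of the scheme, not libdispatch's actual definitions:

```c
#include <stddef.h>

/* Simplified model of the FIFO queue: a singly linked list with head (dequeue
 * end) and tail (enqueue end) pointers, holding items in append order. */
typedef struct work_item {
    void (*func)(void *ctxt);   /* the work to run */
    void *ctxt;                 /* its argument */
    struct work_item *next;
} work_item;

typedef struct {
    work_item *head;            /* oldest item, popped first */
    work_item *tail;            /* newest item, pushed last */
} fifo_queue;

static void fifo_push(fifo_queue *q, work_item *item) {
    item->next = NULL;
    if (q->tail)
        q->tail->next = item;
    else
        q->head = item;
    q->tail = item;
}

static work_item *fifo_pop(fifo_queue *q) {
    work_item *item = q->head;
    if (item) {
        q->head = item->next;
        if (q->head == NULL)
            q->tail = NULL;
    }
    return item;
}
```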

However, a block is not added to the FIFO queue directly. It is first wrapped in a Dispatch Continuation, a struct of type dispatch_continuation_t, and that is what enters the FIFO queue. The struct records the dispatch group the block belongs to and other information — in effect, the familiar execution context.
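The wrapping step can be pictured like this — a hedged sketch with invented field names; the real dispatch_continuation_t carries considerably more state:

```c
#include <stdlib.h>

/* Sketch of the continuation idea: the queued unit is not the bare block but
 * a struct that also records its execution context. Field names are
 * illustrative, not libdispatch's. */
typedef struct continuation {
    void (*invoke)(void *ctxt);   /* the work itself */
    void *ctxt;                   /* captured context / argument */
    void *group;                  /* owning dispatch group, if any */
    struct continuation *next;    /* link in the queue's FIFO list */
} continuation;

/* Wrap a function and its context before queueing, the way dispatch_async
 * conceptually wraps a block. */
static continuation *wrap(void (*fn)(void *), void *ctxt, void *group) {
    continuation *c = malloc(sizeof *c);
    c->invoke = fn;
    c->ctxt = ctxt;
    c->group = group;
    c->next = NULL;
    return c;
}
```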

A dispatch queue's target queue — the queue on which its processing is actually performed — can be set with the dispatch_set_target_queue function. Targets can be chained like beads on a string, connecting multiple dispatch queues, but the end of the chain must be the main dispatch queue, a global dispatch queue of some priority, or one of the various-priority global dispatch queues prepared for serial dispatch queues.
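The chaining behavior can be modeled as walking a chain of target pointers until the root — a toy model with invented names, not GCD's actual structures:

```c
#include <stddef.h>

/* Toy model of target-queue chaining: each queue points at its target, and
 * following the chain ends at a root queue (a global queue in real GCD). */
typedef struct mock_queue {
    const char *label;
    struct mock_queue *target;   /* would be set by dispatch_set_target_queue */
} mock_queue;

static mock_queue *root_target(mock_queue *q) {
    while (q->target != NULL)
        q = q->target;
    return q;
}
```

Chaining several serial queues onto one serial target this way is also a common technique for serializing work coming from multiple queues.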

The main dispatch queue executes its blocks in the run loop. This is nothing new.

The Global Dispatch queue has the following 8 types:

Global Dispatch Queue (High priority)
Global Dispatch Queue (Default priority)
Global Dispatch Queue (Low priority)
Global Dispatch Queue (Background priority)
Global Dispatch Queue (High overcommit priority)
Global Dispatch Queue (Default overcommit priority)
Global Dispatch Queue (Low overcommit priority)
Global Dispatch Queue (Background overcommit priority)

Note the difference between the first four and the last four priority queues: overcommit. An overcommit queue forces a thread to be created regardless of the state of the system.

Each of the 8 global dispatch queues uses one pthread_workqueue. When GCD initializes, it creates these with the pthread_workqueue_create_np function.

pthread_workqueue is part of the pthreads API provided by Libc. It uses the bsdthread_register and workq_open system calls to initialize a workqueue in the XNU kernel and obtain information about it.

The XNU kernel has 4 kinds of workqueue:

WORKQUEUE_HIGH_PRIOQUEUE
WORKQUEUE_DEFAULT_PRIOQUEUE
WORKQUEUE_LOW_PRIOQUEUE
WORKQUEUE_BG_PRIOQUEUE

These are workqueues with 4 execution priorities, matching the 4 execution priorities of the global dispatch queues.
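The one-to-one correspondence can be written down as a simple mapping. The enum values here are illustrative stand-ins, not the real constants:

```c
/* Illustrative mapping: each global dispatch queue priority corresponds to
 * one XNU workqueue priority. Enum names mimic, but are not, the real ones. */
typedef enum {
    DQ_PRIORITY_HIGH,
    DQ_PRIORITY_DEFAULT,
    DQ_PRIORITY_LOW,
    DQ_PRIORITY_BACKGROUND
} dq_priority;

typedef enum {
    WQ_HIGH_PRIOQUEUE,
    WQ_DEFAULT_PRIOQUEUE,
    WQ_LOW_PRIOQUEUE,
    WQ_BG_PRIOQUEUE
} wq_priority;

static wq_priority workqueue_for(dq_priority p) {
    switch (p) {
    case DQ_PRIORITY_HIGH:       return WQ_HIGH_PRIOQUEUE;
    case DQ_PRIORITY_LOW:        return WQ_LOW_PRIOQUEUE;
    case DQ_PRIORITY_BACKGROUND: return WQ_BG_PRIOQUEUE;
    case DQ_PRIORITY_DEFAULT:
    default:                     return WQ_DEFAULT_PRIOQUEUE;
    }
}
```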

Now let's look at how a block in a dispatch queue is executed. When a block is to run on a global dispatch queue, libdispatch takes a Dispatch Continuation from that global dispatch queue's own FIFO queue and calls the pthread_workqueue_additem_np function, passing as parameters the global dispatch queue itself, the workqueue information matching its priority, and the callback function that executes the Dispatch Continuation.

The pthread_workqueue_additem_np function uses the workq_kernreturn system call to notify the workqueue that there is an item to execute. Based on this notification, the XNU kernel decides, according to system state, whether to create a thread. If the queue is an overcommit-priority global dispatch queue, the workqueue always creates a thread.

Although such a thread is roughly the same as the threads commonly used in iOS and OS X, a subset of the pthreads API is not available on it. For details, see the "Compatibility with POSIX Threads" section of Apple's official "Concurrency Programming Guide".

In addition, because threads created by the workqueue run under the kernel-level thread scheduler that implements the workqueue, their context switching differs greatly from that of ordinary threads. This is exactly why we use GCD.

A workqueue thread executes the pthread_workqueue function, which calls the libdispatch callback function. That callback executes the block, and then the next block added to the global dispatch queue.
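The drain performed by the callback can be sketched as a simple loop — a single-threaded toy model of what is, in reality, a concurrent process:

```c
#include <stddef.h>

/* Single-threaded sketch of the workqueue callback: pop each queued item in
 * FIFO order and invoke it until the queue is empty. */
typedef struct item {
    void (*fn)(void *arg);
    void *arg;
    struct item *next;
} item;

static void drain(item **head) {
    while (*head != NULL) {
        item *it = *head;
        *head = it->next;   /* dequeue first, then run */
        it->fn(it->arg);
    }
}

/* Example work: count an invocation, so execution can be observed. */
static void bump(void *arg) {
    ++*(int *)arg;
}
```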

The above is the approximate process of dispatch queue execution.

Thread-management code written by programmers themselves cannot match the performance of the native GCD implementation.

Original link: http://blog.csdn.net/mobanchengshuang/article/details/10839049

