A detailed explanation of the IO multiplexing mechanism


An analysis of high-performance IO models

Server-side programming often requires building high-performance IO models. There are four common IO models:

(1) Synchronous blocking IO (blocking IO): the traditional IO model.

(2) Synchronous non-blocking IO (non-blocking IO): sockets are created in blocking mode by default; non-blocking IO requires the socket to be set to NONBLOCK. Note that the NIO here is not the same thing as Java's NIO (New IO) library.

(3) IO multiplexing (IO multiplexing): the classic Reactor design pattern, sometimes called asynchronous blocking IO. Selector in Java and epoll in Linux are both instances of this model.

(4) Asynchronous IO (asynchronous IO): the classic Proactor design pattern, also known as asynchronous non-blocking IO.


The concepts of synchronous and asynchronous describe how user threads interact with the kernel: synchronous means the user thread initiates an IO request and then waits for, or polls, the kernel until the IO operation completes; asynchronous means the user thread initiates the IO request and continues execution, and is notified when the kernel completes the IO operation (or the callback function registered by the user thread is invoked).

The concepts of blocking and non-blocking describe how the user thread invokes a kernel IO operation: blocking means the call does not return to user space until the IO operation has fully completed; non-blocking means the call returns a status value to the user immediately, without waiting for the IO operation to complete.


In addition, the signal-driven IO model mentioned in Stevens' "Unix Network Programming" Volume 1 is not commonly used and is not covered in this article. Next, we analyze the implementation principles of the four common IO models in detail. For ease of description, we use the IO read operation as an example.


1. Synchronous blocking IO


The synchronous blocking IO model is the simplest IO model: the user thread is blocked while the kernel performs the IO operation.

Figure 1 Synchronous blocking IO

As shown in Figure 1, the user thread initiates an IO read operation through the read system call, which transfers control from user space to kernel space. The kernel waits for the packet to arrive and then copies the received data into user space, completing the read operation.

The pseudo code of the user thread using the synchronous blocking IO model is described as:

{
    read(socket, buffer);
    process(buffer);
}

That is, the user waits until read has copied the data in the socket into the buffer before going on to process the received data. During the entire IO request the user thread is blocked, so it can do nothing else while the request is in flight, leaving CPU resources underutilized.
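Assuming Python for illustration, the blocking flow above can be sketched with a local socket pair standing in for a real network socket; recv() blocks the calling thread until the kernel has data to copy into the buffer:

```python
import socket

# A minimal sketch of the blocking-read flow; the socket pair is a
# stand-in for a real network connection.
a, b = socket.socketpair()

b.sendall(b"hello")          # the peer sends data

# recv() here blocks the calling thread until the kernel has received
# data and copied it into our buffer -- the read(socket, buffer) step.
buffer = a.recv(1024)

def process(buf):
    return buf.decode()

result = process(buffer)     # process(buffer) step
a.close(); b.close()
```

If the peer had not yet sent anything, the recv() call would simply park the thread, which is exactly the underutilization the text describes.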


2. Synchronous non-blocking IO


Synchronous non-blocking IO sets the socket to NONBLOCK on top of the synchronous blocking IO model. This allows the user thread to return immediately after initiating an IO request.

Figure 2 Synchronous Non-blocking IO

As shown in Figure 2, because the socket is non-blocking, the user thread returns immediately when it initiates an IO request. However, since no data has been read yet, the user thread must keep initiating IO requests until data arrives and is actually read; only then can it continue.

The pseudo code of the user thread using the synchronous non-blocking IO model is described as:

{
    while (read(socket, buffer) != SUCCESS)
        ;                    /* keep polling until data is read */
    process(buffer);
}

That is, the user keeps calling read, trying to read the data in the socket, and only proceeds to process the received data once read succeeds. Although the user thread returns immediately after each IO request, it consumes a large amount of CPU polling and repeating requests while waiting for data. This model is rarely used directly; instead, the non-blocking IO feature is used within other IO models.
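The polling loop can be sketched in Python, again assuming a local socket pair in place of a network connection; in non-blocking mode, recv() raises an error immediately instead of waiting when no data has arrived:

```python
import socket

a, b = socket.socketpair()
a.setblocking(False)              # the equivalent of setting NONBLOCK

attempts = 0
buffer = None
while buffer is None:
    try:
        # In non-blocking mode recv() returns immediately; if no data
        # has arrived yet it raises BlockingIOError instead of blocking.
        buffer = a.recv(1024)
    except BlockingIOError:
        attempts += 1
        if attempts == 1:
            b.sendall(b"hello")   # simulate data arriving mid-poll

a.close(); b.close()
```

Each failed recv() burns a trip into the kernel and back, which is the CPU waste the text attributes to this model.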


3. IO multiplexing

The IO multiplexing model is built on top of the select function (a multiplexing, or "multi-channel separation," facility) provided by the kernel; using select avoids the polling problem of the synchronous non-blocking IO model.

Figure 3 The multiplexing function select

As shown in Figure 3, the user first adds the sockets that need IO operations to select, then blocks waiting for the select system call to return. When data arrives, the socket is activated and the select function returns; the user thread then formally initiates the read request, reads the data, and continues execution.

From a flow standpoint, making IO requests through select is not much different from the synchronous blocking model; in fact it adds the extra work of monitoring sockets and calling select, so per request it is even less efficient. The biggest advantage of select, however, is that the user can handle IO requests for multiple sockets simultaneously in a single thread: the user registers multiple sockets and then repeatedly calls select to read the activated ones, servicing multiple IO requests in one thread. Achieving the same in the synchronous blocking model requires multithreading.

The user thread uses the pseudo code of the Select function to describe:

{
    select(socket);                  /* add socket to the select monitor */
    while (1) {
        sockets = select();          /* block until some sockets are active */
        for (socket in sockets) {
            if (can_read(socket)) {
                read(socket, buffer);
                process(buffer);
            }
        }
    }
}

Here the socket is first added to the select monitor; select is then called inside the while loop to obtain the activated sockets, and once a socket is readable, the read function is invoked to read its data.
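This pseudo code maps almost directly onto Python's select module. A runnable sketch, using two local socket pairs to stand in for multiple client sockets handled by one thread:

```python
import select
import socket

# Two socket pairs stand in for two client connections.
pairs = [socket.socketpair() for _ in range(2)]
monitored = [a for a, _ in pairs]          # sockets added to select

for i, (_, peer) in enumerate(pairs):
    peer.sendall(f"msg{i}".encode())       # both peers send data

# select() blocks until at least one monitored socket is readable,
# then returns the subset that is ready -- no busy polling needed.
results = []
readable, _, _ = select.select(monitored, [], [])
for sock in readable:
    results.append(sock.recv(1024))

for a, b in pairs:
    a.close(); b.close()
```

Both sockets are serviced in the same thread, which under the synchronous blocking model would have required one thread per socket.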


The advantages of select are not limited to this, however. Although the approach above allows multiple IO requests to be handled in a single thread, each individual request can still block (on the select call), and its average latency may even be longer than in the synchronous blocking IO model. If instead the user thread only registers the sockets or IO requests it is interested in and then goes about its own business, handling the data when it arrives, CPU utilization can be improved.

The IO multiplexing model implements this mechanism using the reactor design pattern.

Figure 4 Reactor design pattern

As shown in Figure 4, the EventHandler abstract class represents an IO event handler. It owns an IO file handle, handle (obtained via get_handle), and a handle_event method for operating on the handle (read, write, and so on). Subclasses of EventHandler can customize the event handler's behavior. The Reactor class manages EventHandlers (registration, removal, etc.) and implements the event loop in handle_events, which continuously calls the synchronous event demultiplexer (typically the kernel's) multiplexing function select: as soon as a file handle is activated (readable, writable, etc.), select returns from its blocking call, and handle_events invokes the handle_event of the event handler associated with that file handle.

Figure 5 IO Multiplexing

As shown in Figure 5, the work of polling IO status, previously done by the user thread, is delegated uniformly to the handle_events event loop through the Reactor. After registering its event handler, the user thread can continue with other work (asynchronously), while the Reactor thread calls the kernel's select function to check socket status. When a socket is activated, the corresponding user thread is notified (or its registered callback is executed) to perform the handle_event work of reading and processing the data. Because the select function blocks, the IO multiplexing model is also called the asynchronous blocking IO model. Note that the blocking here refers to the thread being blocked in the select call, not on the socket. When using the IO multiplexing model, the socket is generally set to NONBLOCK, but this has no adverse effect, because by the time the user initiates the IO request the data has already arrived, so the user thread will not be blocked.

The pseudo code description of the user thread using the IO multiplexing model is:

void UserEventHandler::handle_event() {
    if (can_read(socket)) {
        read(socket, buffer);
        process(buffer);
    }
}

{
    Reactor.register(new UserEventHandler(socket));
}

The user overrides EventHandler's handle_event function to do the work of reading and processing data; the user thread then only needs to register its EventHandler with the Reactor. The pseudo code for the handle_events event loop in the Reactor is roughly as follows:

Reactor::handle_events() {
    while (1) {
        sockets = select();
        for (socket in sockets) {
            get_event_handler(socket).handle_event();
        }
    }
}

The event loop repeatedly calls select to obtain the activated sockets, then looks up the EventHandler associated with each socket and executes its handle_event function.
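As an illustration, Python's selectors module (a thin wrapper over select/epoll) makes this reactor loop easy to sketch; the handler class and names below are hypothetical stand-ins for the pseudo code's UserEventHandler and Reactor:

```python
import selectors
import socket

# A hypothetical event handler, mirroring UserEventHandler above.
class UserEventHandler:
    def __init__(self, sock):
        self.sock = sock
        self.received = []
    def handle_event(self):
        # Called by the event loop only when the socket is readable.
        self.received.append(self.sock.recv(1024))

sel = selectors.DefaultSelector()
a, b = socket.socketpair()
handler = UserEventHandler(a)
# Reactor.register(new UserEventHandler(socket)) in the pseudo code:
sel.register(a, selectors.EVENT_READ, data=handler)

b.sendall(b"ping")

# One iteration of Reactor::handle_events(): block in select, then
# dispatch to the handler associated with each ready socket.
for key, _ in sel.select(timeout=1):
    key.data.handle_event()

sel.close(); a.close(); b.close()
```

The user code only supplies handle_event and the registration call; waiting and dispatching live entirely in the loop, as Figure 5 describes.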

IO multiplexing is the most commonly used IO model, but its degree of asynchrony is not "thorough," because it relies on the select system call, which blocks the thread. So IO multiplexing can only be called asynchronous blocking IO, not true asynchronous IO.


4. Asynchronous IO


"True" asynchronous IO requires stronger support from the operating system. In the IO multiplexing model, the event loop notifies the user thread of status events on file handles, and the user thread reads and processes the data itself. In the asynchronous IO model, by the time the user thread receives the notification, the data has already been read by the kernel and placed in the buffer the user thread specified; the kernel notifies the user thread that the IO is complete and the data can be used directly.

The asynchronous IO model implements this mechanism using the proactor design pattern.

Figure 6 Proactor Design pattern

As shown in Figure 6, the Proactor pattern is structurally similar to the Reactor pattern, but the two differ significantly in how the user (client) employs them. In the Reactor pattern, the user thread registers with the Reactor an event handler for the events it is interested in, and that handler function is invoked when the event fires. In the Proactor pattern, the user thread registers an AsynchronousOperation (read, write, etc.) with the AsynchronousOperationProcessor, together with the Proactor and the CompletionHandler to run when the operation completes. The AsynchronousOperationProcessor uses the Facade pattern to provide a set of asynchronous operation APIs (read, write, etc.); after calling one of these asynchronous APIs, the user thread continues with its own tasks. The AsynchronousOperationProcessor opens a separate kernel thread to perform the asynchronous operation, achieving true asynchrony. When the asynchronous IO operation completes, the AsynchronousOperationProcessor retrieves the Proactor and CompletionHandler that the user thread registered along with the AsynchronousOperation, and forwards the CompletionHandler, together with the IO operation's result data, to the Proactor; the Proactor is responsible for calling back each completed asynchronous operation's handler function handle_event. Although each asynchronous operation in the Proactor pattern can be bound to its own Proactor object, the Proactor is generally implemented as a singleton in the operating system, to facilitate centralized dispatch of completion events.

Figure 7 Asynchronous IO

As shown in Figure 7, in the asynchronous IO model the user thread initiates the read request directly via the asynchronous IO API provided by the kernel and returns immediately, continuing to execute user code. At this point the user thread has already registered the AsynchronousOperation and CompletionHandler with the kernel, and the operating system opens a separate kernel thread to handle the IO operation. When the read request's data arrives, the kernel reads the data in the socket and writes it into the user-specified buffer. Finally, the kernel forwards the read data, together with the CompletionHandler registered by the user thread, to the internal Proactor; the Proactor notifies the user thread that the IO has completed (typically by invoking the completion handler the user thread registered), finishing the asynchronous IO.

The pseudo code for the user thread using the asynchronous IO model is described as:

void UserCompletionHandler::handle_event(buffer) {
    process(buffer);
}

{
    aio_read(socket, new UserCompletionHandler);
}
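Python has no built-in POSIX aio binding, so the following is only a sketch of the completion-handler idea: a worker thread plays the kernel's role, performing the read and then invoking the user's handler with the already-filled buffer, so the user thread never calls read itself. The names aio_read and UserCompletionHandler mirror the pseudo code and are purely illustrative:

```python
from concurrent.futures import ThreadPoolExecutor
import socket

# Hypothetical completion handler, mirroring UserCompletionHandler.
class UserCompletionHandler:
    def __init__(self):
        self.result = None
    def handle_event(self, buffer):
        self.result = buffer.decode()      # the process(buffer) step

# Illustrative aio_read: hands the read off to a worker thread (our
# stand-in for the kernel) and returns immediately to the caller.
def aio_read(sock, handler, pool):
    def work():
        data = sock.recv(1024)             # blocking read, off-thread
        handler.handle_event(data)         # completion callback
    return pool.submit(work)

a, b = socket.socketpair()
handler = UserCompletionHandler()
with ThreadPoolExecutor(max_workers=1) as pool:
    fut = aio_read(a, handler, pool)       # returns immediately
    b.sendall(b"hello")                    # data arrives later
    fut.result(timeout=5)                  # wait here only for the demo

a.close(); b.close()
```

The key contrast with the Reactor sketch earlier is that the handler receives the data itself, not a readiness notification: by the time handle_event runs, the buffer is already filled.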
