Analysis of High-performance I/O models


Server programming often calls for a high-performance I/O model. There are four common I/O models:

(1) Synchronous blocking IO (Blocking IO): the traditional IO model.

(2) Synchronous non-blocking IO (Non-blocking IO): by default, a newly created socket is blocking. Non-blocking IO requires the socket to be set to NONBLOCK. Note that the NIO here does not refer to Java's NIO (New IO) library.

(3) IO multiplexing (IO Multiplexing): the classic Reactor design pattern, sometimes called asynchronous blocking IO. Java's Selector and Linux's epoll both use this model.

(4) Asynchronous IO (Asynchronous IO): the classic Proactor design pattern, also known as asynchronous non-blocking IO.

Synchronous and asynchronous describe the interaction between the user thread and the kernel: synchronous means that after initiating an IO request, the user thread must wait for, or keep polling, the kernel IO operation until it completes before continuing; asynchronous means that the user thread continues executing immediately after initiating the IO request, and when the kernel IO operation completes, the kernel notifies the user thread or invokes a callback function the user thread registered.

Blocking and non-blocking describe how the user thread invokes a kernel IO operation: blocking means the IO call does not return to user space until the operation has fully completed; non-blocking means the IO call returns to the user immediately, without waiting for the operation to finish.

In addition, the Signal-Driven IO model described by Richard Stevens in volume 1 of Unix Network Programming is not covered here, because it is rarely used in practice. Next, we analyze the implementation principles of the four common IO models in detail, using IO read operations as the running example for ease of description.

I. Synchronous Blocking IO

The synchronous blocking IO model is the simplest IO model: the user thread is blocked while the kernel performs the IO operation.

Figure 1 synchronous blocking IO

As shown in Figure 1, the user thread calls read to initiate an IO read operation, and control transfers from user space to kernel space. After the data packet arrives, the kernel copies the received data into user space, completing the read operation.

Pseudocode for a user thread using the synchronous blocking IO model:

{
    read(socket, buffer);
    process(buffer);
}

That is, the thread must wait for read to copy the data from the socket into the buffer before it can process the received data. The user thread is blocked for the entire IO request, so it can do nothing else while the request is in flight, and CPU utilization suffers.
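The blocking read above can be sketched in Python. This is a self-contained illustration: a socketpair stands in for a real network connection, and process() is a made-up helper that just uppercases the payload.

```python
import socket

def process(buffer: bytes) -> bytes:
    # Stand-in for application logic; here it just uppercases the payload.
    return buffer.upper()

# A socketpair replaces a real network connection so the example runs anywhere.
server, client = socket.socketpair()
client.sendall(b"hello")

# recv() blocks the calling thread until the kernel has copied the received
# data into user space -- this is the "synchronous blocking" step.
buffer = server.recv(1024)
result = process(buffer)
print(result)  # b'HELLO'

server.close()
client.close()
```

If the peer had not yet sent anything, the recv() call would simply park the thread until data arrived, which is exactly the behavior the model describes.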

II. Synchronous Non-blocking IO

Synchronous non-blocking IO builds on synchronous blocking IO by setting the socket to NONBLOCK, so the user thread returns immediately after initiating an IO request.

Figure 2 synchronous non-blocking IO

As shown in Figure 2, because the socket is non-blocking, the user thread returns immediately when it initiates an IO request. However, no data has been read yet, so the user thread must keep re-initiating the IO request until the data arrives.

Pseudocode for a user thread using the synchronous non-blocking IO model:

{
    while (read(socket, buffer) != SUCCESS)
        ;
    process(buffer);
}

That is, the thread must keep calling read, attempting to read data from the socket, until a read succeeds, and only then can it process the received data. Although each call returns immediately, the user thread still polls and re-issues the request continuously while waiting for the data, which wastes a great deal of CPU. This model is rarely used on its own, but non-blocking IO is a building block of the other IO models.
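The polling loop above can be sketched in Python by putting the socket into non-blocking mode (again using a socketpair as a stand-in connection; Python raises BlockingIOError where a C read would return EWOULDBLOCK):

```python
import socket

server, client = socket.socketpair()
server.setblocking(False)  # the NONBLOCK setting from the text

attempts = 0
data = None
while data is None:
    try:
        # Returns immediately; raises BlockingIOError (EWOULDBLOCK)
        # when no data has arrived yet.
        data = server.recv(1024)
    except BlockingIOError:
        attempts += 1
        if attempts == 1:
            client.sendall(b"ping")  # data finally arrives mid-poll

server.close()
client.close()
```

Each failed recv() burns a loop iteration; on a real connection with a slow peer, this busy-wait is where the CPU waste described above comes from.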

III. IO Multiplexing

The IO multiplexing model is built on the select function provided by the kernel. Using select avoids the polling loop of the synchronous non-blocking IO model.

Figure 3 select

As shown in Figure 3, the user first adds the sockets that need IO operations to select, then blocks waiting for the select system call to return. When data arrives, the socket is activated and select returns. The user thread then formally initiates a read request, reads the data, and continues execution.

From the perspective of a single request, an IO request that goes through select is not very different from the synchronous blocking model; it even adds the extra work of monitoring the socket and calling select, so it is less efficient. The great advantage of select, however, is that it lets the user handle IO requests on multiple sockets in a single thread: register multiple sockets, then repeatedly call select to fetch the activated sockets and read from them, processing many IO requests in one thread. Under the synchronous blocking model, this would require multiple threads.

Pseudocode for a user thread using the select function:

{
    select(socket);
    while (1) {
        sockets = select();
        for (socket in sockets) {
            if (can_read(socket)) {
                read(socket, buffer);
                process(buffer);
            }
        }
    }
}

Before the while loop, the socket is added to select's watch list; inside the loop, select is called to obtain the activated sockets. Once a socket is readable, read is called to fetch its data.
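The select loop above maps closely onto Python's stdlib selectors module (a wrapper over select/epoll). This sketch registers three stand-in connections and reads whichever becomes readable, all in one thread:

```python
import selectors
import socket

sel = selectors.DefaultSelector()  # uses epoll on Linux, else select/kqueue

# Three socketpairs stand in for three client connections.
pairs = [socket.socketpair() for _ in range(3)]
for i, (reader, writer) in enumerate(pairs):
    reader.setblocking(False)
    sel.register(reader, selectors.EVENT_READ, data=i)  # data= is our tag
    writer.sendall(b"msg-%d" % i)

received = {}
while len(received) < 3:
    # The thread blocks only here, on the selector -- never on one socket.
    for key, _mask in sel.select():
        received[key.data] = key.fileobj.recv(1024)

print(sorted(received.items()))  # [(0, b'msg-0'), (1, b'msg-1'), (2, b'msg-2')]

sel.close()
for reader, writer in pairs:
    reader.close()
    writer.close()
```

One thread serves all three sockets, which under the synchronous blocking model would have needed three threads.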

The advantages of select do not end there. Although the approach above lets a single thread handle multiple IO requests, each individual IO request is still blocking (blocked on the select call), and its average latency can even be longer than in the synchronous blocking IO model. If instead the user thread could simply register the sockets or IO requests it cares about, go off and do its own work, and handle the data only when it arrives, CPU utilization would improve.

The IO multiplexing model uses the Reactor design pattern to implement this mechanism.

Figure 4 Reactor design pattern

As shown in Figure 4, the EventHandler abstract class represents an IO event handler. It owns an IO file handle Handle (obtained through get_handle) and the handle_event method for performing reads/writes on that Handle. Subclasses of EventHandler can customize the handler's behavior. The Reactor class manages EventHandlers (registration, removal, and so on) and implements the event loop in handle_events, which repeatedly calls the select function of the synchronous event demultiplexer (usually the kernel). As soon as a file handle is activated (readable/writable), select returns, and handle_events calls the handle_event of the event handler associated with that file handle to carry out the corresponding operation.

Figure 5 IO multiplexing

As shown in Figure 5, with the Reactor approach the polling of IO operation status is moved out of the user thread and into the handle_events event loop. After registering its event handler, the user thread can continue with other work (asynchronously), while the Reactor thread calls the kernel's select function to check socket status. When a socket is activated, the corresponding user thread is notified (or its registered callback is invoked) to execute handle_event, which reads and processes the data. Because the select call blocks the thread, the IO multiplexing model is also called asynchronous blocking IO. Note that the blocking here means the thread blocks in the select call, not in the socket operations. In practice the sockets are set to NONBLOCK under the IO multiplexing model, but this causes no problem, because by the time the user thread initiates its IO request the data has already arrived, so the user thread does not block.

Pseudocode for a user thread using the IO multiplexing model:

void UserEventHandler::handle_event() {
    if (can_read(socket)) {
        read(socket, buffer);
        process(buffer);
    }
}

{
    Reactor.register(new UserEventHandler(socket));
}

The user overrides EventHandler's handle_event function to read and process the data, and then only needs to register the UserEventHandler with the Reactor. The handle_events event loop inside the Reactor looks roughly like this:

Reactor::handle_events() {
    while (1) {
        sockets = select();
        for (socket in sockets) {
            get_event_handler(socket).handle_event();
        }
    }
}

The event loop repeatedly calls select to obtain the activated sockets, then looks up the EventHandler corresponding to each socket and executes its handle_event function.
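A minimal runnable Reactor in Python, mirroring the pseudocode above. The class and method names follow the article's pseudocode, not any real framework, and handle_events takes a max_events cap only so the example terminates:

```python
import selectors
import socket

class Reactor:
    """Maps sockets to event handlers and dispatches from a select loop."""

    def __init__(self):
        self._sel = selectors.DefaultSelector()

    def register(self, sock, handler):
        sock.setblocking(False)  # sockets are NONBLOCK under this model
        self._sel.register(sock, selectors.EVENT_READ, data=handler)

    def handle_events(self, max_events):
        # A real reactor loops forever; we stop after max_events for the demo.
        handled = 0
        while handled < max_events:
            for key, _mask in self._sel.select():
                key.data.handle_event(key.fileobj)  # dispatch to the handler
                handled += 1

class UserEventHandler:
    """User-supplied handler: reads the socket and 'processes' the data."""

    def __init__(self):
        self.result = None

    def handle_event(self, sock):
        self.result = sock.recv(1024).upper()  # read + toy processing

server, client = socket.socketpair()
reactor = Reactor()
handler = UserEventHandler()
reactor.register(server, handler)

client.sendall(b"event")            # activates the socket
reactor.handle_events(max_events=1)  # select fires, handler runs
print(handler.result)  # b'EVENT'

server.close()
client.close()
```

Note that handle_event still performs the read itself, in the user's code: the reactor only tells it *when* to read. That distinction is exactly what separates this model from true asynchronous IO below.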

IO multiplexing is the most commonly used IO model, but its asynchrony is not "thorough", because the select system call still blocks a thread. IO multiplexing can therefore only be called asynchronous blocking IO, not true asynchronous IO.

IV. Asynchronous IO

"True" asynchronous IO requires stronger support from the operating system. In the IO multiplexing model, the event loop merely notifies the user thread of state events on file handles, and the user thread reads and processes the data itself. In the asynchronous IO model, by the time the user thread receives the notification, the kernel has already read the data and placed it in the buffer the user thread specified; the kernel notifies the user thread that the IO is complete, and the thread can use the data directly.

The asynchronous IO model uses the Proactor design pattern to implement this mechanism.

Figure 6 Proactor design pattern

As shown in Figure 6, the Proactor pattern is structurally similar to the Reactor pattern, but the way the user (Client) uses it differs greatly. In the Reactor pattern, the user thread registers an event listener of interest with the Reactor object, and the event-handling function is called when the event fires. In the Proactor pattern, the user thread registers the AsynchronousOperation (read, write, etc.), the Proactor, and the CompletionHandler to run on completion with the AsynchronousOperationProcessor. The AsynchronousOperationProcessor uses the Facade pattern to expose a set of asynchronous operation APIs (read, write, etc.) to the user. After calling an asynchronous API, the user thread continues with its own tasks, while the AsynchronousOperationProcessor starts independent kernel threads to perform the asynchronous operations, achieving true asynchrony. When an asynchronous IO operation completes, the AsynchronousOperationProcessor retrieves the Proactor and CompletionHandler that were registered along with the AsynchronousOperation, and forwards the CompletionHandler together with the IO result data to the Proactor, which is responsible for calling back each completion handler's handle_event function. Although each asynchronous operation in the Proactor pattern can be bound to its own Proactor object, in practice the operating system implements the Proactor as a Singleton, to centrally dispatch completion events.

Figure 7 asynchronous IO

As shown in Figure 7, in the asynchronous IO model the user thread initiates a read request directly through the asynchronous IO API provided by the kernel, returns immediately after the request is issued, and continues executing its own code. At this point the user thread has registered the AsynchronousOperation and CompletionHandler with the kernel, and the operating system starts an independent kernel thread to handle the IO operation. When the data for the read request arrives, the kernel reads the data from the socket and writes it into the user-specified buffer. Finally, the kernel hands the read data and the user thread's registered CompletionHandler to the internal Proactor, which notifies the user thread that the IO is complete (generally by calling the event-handling function the user thread registered), completing the asynchronous IO.

Pseudocode for a user thread using the asynchronous IO model:

void UserCompletionHandler::handle_event(buffer) {
    process(buffer);
}

{
    aio_read(socket, new UserCompletionHandler);
}

The user overrides CompletionHandler's handle_event function to process the data; its buffer parameter holds the data the Proactor has already prepared. The user thread simply calls the asynchronous IO API provided by the kernel and registers the overridden CompletionHandler.
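Python has no kernel-level aio_read for sockets, so the completion-handler flow can only be sketched by letting a helper thread play the role of the kernel's AsynchronousOperationProcessor: it performs the read itself and then invokes the user's CompletionHandler with the already-filled buffer. All names here mirror the article's pseudocode and are illustrative, not a real API:

```python
import socket
import threading

class UserCompletionHandler:
    """Invoked only after the data is already in user space."""

    def __init__(self):
        self.done = threading.Event()
        self.buffer = None

    def handle_event(self, buffer):
        self.buffer = buffer  # no read here -- the data arrives pre-filled
        self.done.set()

def aio_read(sock, handler):
    # The helper thread stands in for the kernel: it does the read and
    # then delivers the completion notification.
    def worker():
        data = sock.recv(1024)
        handler.handle_event(data)
    threading.Thread(target=worker, daemon=True).start()

server, client = socket.socketpair()
handler = UserCompletionHandler()
aio_read(server, handler)        # returns immediately; user thread continues
client.sendall(b"async-data")    # meanwhile, the "kernel" completes the IO
handler.done.wait(timeout=5)     # simulate being notified of completion
print(handler.buffer)  # b'async-data'

server.close()
client.close()
```

Contrast this with the Reactor handler above, which had to call recv itself: here handle_event receives the finished buffer, which is the defining property of the Proactor pattern.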

Compared with the IO multiplexing model, asynchronous IO is not widely used: many high-performance concurrent server programs find that IO multiplexing plus a multi-threaded task-processing architecture meets their needs. Moreover, current operating systems' support for asynchronous IO is still imperfect, and a common approach is to simulate asynchronous IO on top of the IO multiplexing model (when an IO event fires, instead of notifying the user thread directly, the data is first read or written into the user-specified buffer). Java has supported asynchronous IO since Java 7, and interested readers can try it.

This article has described the structure and principles of the four common high-performance IO models at three levels: basic concepts, workflow, and pseudocode, and has clarified the easily confused concepts of synchronous, asynchronous, blocking, and non-blocking. Understanding these models helps you choose the IO model that best matches your actual business characteristics when developing server programs, thereby improving service quality. I hope this article has been helpful.
