High-Performance I/O Design Patterns: Blocking/Non-blocking and Synchronous/Asynchronous Explained



Everyone is drawn to the idea of high performance. Today we will go through several key concepts in high-performance I/O design; the most important first step in anything is to get the concepts clear, isn't it? The concepts are: blocking, non-blocking, synchronous, and asynchronous.

OK. Now let's take a look.

1. Blocking and non-blocking describe how a process accesses data depending on whether the I/O operation is ready. To put it bluntly, they are about how a read or write function is implemented: in blocking mode, a read or write call waits until the read or write can actually complete before returning, while in non-blocking mode the call returns a status value immediately. (A minimal sketch of this difference follows the list below.)

2. Synchronous and asynchronous describe the interaction between the application and the kernel. Synchronous means the user process triggers the I/O operation and then waits, or polls, to check whether the operation is ready; asynchronous means the user process goes off to do its own work after triggering the I/O operation, and receives a notification when the I/O completes (notification is the defining trait of asynchrony).
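
As a hedged illustration (not from the original article), here is a minimal C sketch of that difference at the function level, assuming sockfd is an already-connected socket: the same recv() call either sleeps until data arrives, or, once the descriptor has been switched to non-blocking mode with fcntl(), returns -1 with errno set to EAGAIN/EWOULDBLOCK when nothing is ready.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Illustrative only: sockfd is assumed to be an already-connected socket. */
void demo(int sockfd) {
    char buf[1024];

    /* Blocking mode (the default): recv() sleeps until data arrives. */
    ssize_t n = recv(sockfd, buf, sizeof(buf), 0);

    /* Switch the descriptor to non-blocking mode. */
    int flags = fcntl(sockfd, F_GETFL, 0);
    fcntl(sockfd, F_SETFL, flags | O_NONBLOCK);

    /* Non-blocking mode: recv() returns at once; -1 with EAGAIN/EWOULDBLOCK
       just means "no data ready yet", not a real error. */
    n = recv(sockfd, buf, sizeof(buf), 0);
    if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK))
        printf("no data ready, the call returned immediately\n");
}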

Generally, I/O models can be divided into synchronous blocking, synchronous non-blocking, asynchronous blocking, and asynchronous non-blocking I/O.

 

Synchronous blocking IO:
In this model, after initiating an I/O operation the user process must wait for the operation to complete; only once the I/O has finished can the process continue running. This is how the traditional Java I/O model works.
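
As a rough sketch (an assumed example, not taken from the article), a classic synchronous blocking server in C looks like this: both accept() and recv() put the calling process to sleep until a connection or data actually arrives.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Minimal synchronous blocking echo server (error handling omitted). */
int main(void) {
    int listenfd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);          /* assumed port */

    bind(listenfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listenfd, 16);

    for (;;) {
        /* accept() blocks until a client connects. */
        int connfd = accept(listenfd, NULL, NULL);

        char buf[1024];
        /* recv() blocks until this client sends data (or closes). */
        ssize_t n = recv(connfd, buf, sizeof(buf), 0);
        if (n > 0)
            send(connfd, buf, (size_t)n, 0);   /* echo it back */
        close(connfd);
    }
}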

Synchronous non-blocking IO:
In this model, the user process can go back to other work after initiating an I/O operation, but it has to ask from time to time whether the I/O operation is ready. This constant asking introduces unnecessary waste of CPU time. Java NIO is a synchronous non-blocking I/O.
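
A hedged C sketch of that polling pattern, assuming sockfd has already been put into non-blocking mode (for example with fcntl() as shown earlier) and that do_other_work() is a hypothetical placeholder for whatever else the process does between polls:

#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

void do_other_work(void);   /* hypothetical: work done between polls */

void poll_one_socket(int sockfd) {
    char buf[1024];
    for (;;) {
        ssize_t n = recv(sockfd, buf, sizeof(buf), 0);
        if (n >= 0) {
            /* Data arrived (or the peer closed); process it and stop polling. */
            break;
        }
        if (errno == EAGAIN || errno == EWOULDBLOCK) {
            /* Not ready yet: do something else, then ask again. */
            do_other_work();
            continue;
        }
        break;   /* a real error */
    }
}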


Asynchronous blocking IO:
This model means that after the application initiates an I/O operation, it does not itself wait for the kernel to finish the I/O; the kernel notifies the application once the I/O is ready. That notification is in fact the key difference between synchronous and asynchronous: a synchronous caller must wait, or actively ask, whether the I/O is done. So why is this still called blocking? Because here the application sits in a select() system call, and select() is implemented as a blocking call. The benefit of select() is that it can watch many file descriptors at the same time (from the UNP point of view, select() is still a synchronous operation, because after select() returns the process still has to read and write the data itself), which improves the concurrency of the system.
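
A rough, assumed C sketch of this select()-based pattern: the process blocks inside select() while watching several descriptors at once, and only calls recv() on the ones reported ready.

#include <sys/select.h>
#include <sys/socket.h>

/* fds: an array of nfds already-connected sockets we want to watch. */
void serve_with_select(int *fds, int nfds) {
    for (;;) {
        fd_set readset;
        FD_ZERO(&readset);
        int maxfd = -1;
        for (int i = 0; i < nfds; i++) {
            FD_SET(fds[i], &readset);
            if (fds[i] > maxfd)
                maxfd = fds[i];
        }

        /* Blocks here until at least one descriptor becomes readable. */
        int ready = select(maxfd + 1, &readset, NULL, NULL, NULL);
        if (ready <= 0)
            continue;

        for (int i = 0; i < nfds; i++) {
            if (FD_ISSET(fds[i], &readset)) {
                char buf[1024];
                /* The data copy is still done by this process: select() only
                   reports readiness, which is why UNP calls it synchronous. */
                recv(fds[i], buf, sizeof(buf), 0);
            }
        }
    }
}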


Asynchronous non-blocking IO:
In this mode, the user process only has to initiate the I/O operation and can return immediately. When the I/O operation finishes, the application is notified that it has completed; at that point the user process only needs to handle the data and does not perform any actual I/O read or write, because the real read or write has already been done by the kernel. At the time this article was written, Java did not support this I/O model (asynchronous channels were only added later, in Java 7's NIO.2).
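
As an assumed illustration of the asynchronous non-blocking idea (the article itself names no specific API), here is a minimal POSIX AIO sketch in C: aio_read() returns immediately, the kernel (or the AIO library) performs the read, and the process later checks for completion with aio_error()/aio_return(). In real code a completion signal or callback would replace the spin loop, and on older systems the program may need to be linked with -lrt.

#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    int fd = open("data.txt", O_RDONLY);   /* hypothetical input file */
    static char buf[4096];

    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof(buf);
    cb.aio_offset = 0;

    /* Returns immediately; the actual read is carried out for us. */
    aio_read(&cb);

    /* ... the process is free to do other work here ... */

    /* Later, check whether the operation has completed. */
    while (aio_error(&cb) == EINPROGRESS) {
        /* Still in progress; real code would do useful work or wait
           for a completion notification instead of spinning. */
    }
    ssize_t n = aio_return(&cb);
    printf("read %zd bytes without issuing a blocking read ourselves\n", n);
    return 0;
}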

 

When we talk about blocking, there is always an I/O wait involved. I/O waiting is unavoidable, and where there is waiting there can be blocking; note, though, that by blocking we mean that the process that initiated the I/O operation is the one being blocked.
Synchronous blocking I/O means that when a process calls a system call or library function that involves an I/O operation, such as accept() (note that accept() counts as an I/O operation too), send() or recv(), the process is suspended and only resumes after the I/O operation completes. This is a simple and effective I/O model; combined with multiple processes it can use CPU resources effectively, but the cost is the high memory overhead of many processes.
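
As a hedged sketch of that "blocking I/O plus multiple processes" combination (an assumed example that builds on the blocking server shown earlier): the parent blocks in accept() and fork()s a child per connection, so each child can block in recv() independently, at the cost of one process per client.

#include <sys/socket.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* listenfd is assumed to be a listening socket set up as in the earlier sketch. */
void serve_one_process_per_client(int listenfd) {
    for (;;) {
        int connfd = accept(listenfd, NULL, NULL);   /* parent blocks here */
        pid_t pid = fork();
        if (pid == 0) {
            /* Child: free to block in recv() for this one client. */
            close(listenfd);
            char buf[1024];
            ssize_t n;
            while ((n = recv(connfd, buf, sizeof(buf), 0)) > 0)
                send(connfd, buf, (size_t)n, 0);
            close(connfd);
            _exit(0);
        }
        close(connfd);                    /* parent no longer needs this fd */
        waitpid(-1, NULL, WNOHANG);       /* reap a finished child, if any */
    }
}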


Synchronous blocking is like one process sitting there watching the water boil: it cannot cook the porridge at the same time.
Synchronous non-blocking is like one process checking on the water and the porridge in turn, something like while (true) { if (...) ... if (...) ... }. The advantage is that one process handles multiple I/O requests; the disadvantage is that it has to keep polling.

The difference is that the call does not wait for the data to become ready, and waiting for the data to become ready accounts for roughly 80% of the total waiting time. The advantage of synchronous non-blocking I/O is that one process can handle multiple I/O operations at the same time.

In synchronous blocking I/O, the time a process actually waits may consist of two parts: waiting for the data to become ready, and copying the data between the kernel and the process. For network I/O, waiting for the data to become ready is usually the longer of the two.
Synchronous non-blocking I/O differs in that the call does not wait for the data to become ready: if the data cannot be read or written yet, control returns to the process immediately.

For example, when we use a non-blocking recv() to receive network data and there is no data in the receive buffer, the function returns right away and tells the process that nothing is readable yet. Compared with blocking I/O, this kind of non-blocking I/O, combined with repeated polling of whether the data is ready, keeps the process from being blocked. The biggest benefit is that one process can handle multiple I/O operations at the same time. But precisely because the process has to poll again and again to check whether the data is ready, it consumes a great deal of CPU time and sits in a busy-waiting state.

Non-blocking I/O is generally only useful for network I/O. We only need to set the O_NONBLOCK flag on the socket, and subsequent send() or recv() calls on it then use non-blocking mode.
If a server wants to receive data from multiple TCP connections at the same time, it has to call the receiving function, such as recv(), on each socket in turn. Whether or not a socket has data to receive, it still has to be asked; if most sockets have nothing to receive, the process wastes a great deal of CPU time just checking them, which is clearly not what we want.
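
A hedged C sketch of that round-robin polling (an assumed example): every connection is asked in turn with a non-blocking recv(), even though most of them usually have nothing to deliver.

#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

/* conns: nconns connected sockets, each already set to O_NONBLOCK. */
void poll_all_connections(int *conns, int nconns) {
    char buf[1024];
    for (;;) {
        for (int i = 0; i < nconns; i++) {
            ssize_t n = recv(conns[i], buf, sizeof(buf), 0);
            if (n > 0) {
                /* Handle the data for connection i. */
            } else if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
                /* Nothing to read on this socket, yet we still paid for the call. */
            }
        }
        /* Loop again immediately: most iterations find nothing, wasting CPU. */
    }
}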

Synchronous/asynchronous and blocking/non-blocking are often mixed up, but they are not the same thing, and they qualify different objects.
Blocking versus non-blocking is about whether the process has to wait when the data it wants to access is not yet ready. In short, it is a difference inside the function's implementation: when the data is not ready, does the call return immediately, or does it wait for the ready state?

Synchronous versus asynchronous refers to the mechanism for accessing the data. Synchronous generally means actively requesting the I/O and waiting for it to complete; even once readiness is reported, the actual reading and writing of the data still blocks (readiness and the read/write are two separate stages, and a synchronous read/write must block during the second one). Asynchronous means that after actively requesting the I/O, the process can handle other tasks and then simply waits for the notification that the I/O operation has completed, which lets it read and write the data without blocking. (It waits for a "notification".)


Multiplexed readiness notification: 1. it emphasizes multiple channels, that is, many descriptors at once; 2. it only checks whether the requested data is ready, and does not itself read or write the I/O data.
epoll targets exactly this scenario.
With select and epoll the process only has to passively receive the "data is ready" notification. In that sense it fits the definition of asynchrony: the process neither waits (synchronous blocking) nor polls round-robin (synchronous non-blocking).
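
A hedged, Linux-specific C sketch of the epoll pattern (an assumed example): the process registers its descriptors once with epoll_ctl() and then blocks in epoll_wait(), which hands back only the descriptors that are actually ready; the reads themselves are still performed by the process.

#include <sys/epoll.h>
#include <sys/socket.h>

#define MAX_EVENTS 64

/* fds: nfds connected sockets we want readiness notifications for. */
void serve_with_epoll(int *fds, int nfds) {
    int epfd = epoll_create1(0);

    for (int i = 0; i < nfds; i++) {
        struct epoll_event ev;
        ev.events = EPOLLIN;
        ev.data.fd = fds[i];
        epoll_ctl(epfd, EPOLL_CTL_ADD, fds[i], &ev);   /* register once */
    }

    struct epoll_event events[MAX_EVENTS];
    for (;;) {
        /* Blocks until at least one registered descriptor is readable;
           unlike the round-robin loop above, idle sockets cost nothing here. */
        int ready = epoll_wait(epfd, events, MAX_EVENTS, -1);
        for (int i = 0; i < ready; i++) {
            char buf[1024];
            /* The data copy is still performed by this process. */
            recv(events[i].data.fd, buf, sizeof(buf), 0);
        }
    }
}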

 

Conclusion: both blocking and non-blocking belong under the heading of synchronization; with asynchrony, blocking simply does not arise, because after receiving the asynchronous notification the user process can operate directly on data already sitting in its own user-space memory. Synchronous versus asynchronous describes the interaction between the application and the kernel: a synchronous caller must actively ask whether the I/O is done, while in the asynchronous model the kernel notifies the application when the I/O event occurs. Blocking versus non-blocking is only about how the function behaves when the system call is made.

