There are five basic I/O models in UNIX:


1. Blocking I/O
2. Non-blocking I/O
3. I/O multiplexing (select and poll)
4. Signal-driven I/O (SIGIO)
5. Asynchronous I/O (the POSIX.1 aio_ family of functions)

An input operation in UNIX generally has two distinct stages:
1. Wait for the data to be ready.
2. Copy the data from the kernel to the process.
For an input operation on a socket, the first step is to wait for data to arrive on the network. When the datagram arrives, it is copied into a buffer in the kernel; the second step is to copy the data from that kernel buffer to the application buffer.

The following sections describe the five I/O models listed above.

In this article we use UDP as the example and treat the function recvfrom as a system call, so that our attention stays focused on the I/O model.

Blocking I/O model

The most widely used I/O model is the blocking I/O model; by default, all sockets are blocking. This means that when a socket call cannot be completed immediately, the process is put to sleep and waits until the operation completes:

 


Figure 1 blocking I/O model

In Figure 1, the process calls recvfrom, which returns only when the datagram has arrived and been copied into the application buffer, or when an error occurs. The most common error is the system call being interrupted by a signal. The process is blocked for the entire period from the time recvfrom is called until it returns. When recvfrom returns successfully, the application process starts to process the datagram.

The following is simple server-side code written with the blocking I/O model (the code is taken from UNIX Network Programming). It echoes each datagram received from a client back to that client; in this example, the process blocks in recvfrom.

#include "unp.h"

void dg_echo(int sockfd, SA *pcliaddr, socklen_t clilen);

int
main(int argc, char **argv)
{
    int sockfd;
    struct sockaddr_in servaddr, cliaddr;

    sockfd = socket(AF_INET, SOCK_DGRAM, 0);

    bzero(&servaddr, sizeof(servaddr));
    servaddr.sin_family = AF_INET;
    servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
    servaddr.sin_port = htons(SERV_PORT);

    bind(sockfd, (SA *) &servaddr, sizeof(servaddr));

    dg_echo(sockfd, (SA *) &cliaddr, sizeof(cliaddr));
}

void
dg_echo(int sockfd, SA *pcliaddr, socklen_t clilen)
{
    int n;
    socklen_t len;
    char mesg[MAXLINE];

    for ( ; ; ) {
        len = clilen;
        /* block here until a datagram arrives and has been
         * copied into the application buffer */
        n = recvfrom(sockfd, mesg, MAXLINE, 0, pcliaddr, &len);

        /* echo the datagram back to the client */
        sendto(sockfd, mesg, n, 0, pcliaddr, len);
    }
}

Non-blocking I/O model

When we set a socket to be non-blocking, we are telling the kernel: when a requested I/O operation cannot be completed without putting the process to sleep, do not put the process to sleep; return an error instead:

 

Figure 2 non-blocking I/O model

As shown in Figure 2, the first three times recvfrom is called there is no data to return, so the kernel immediately returns an EWOULDBLOCK error. The fourth time recvfrom is called, the datagram is ready and is copied into the application buffer; recvfrom returns successfully, and we then process the datagram.
When an application process sits in a loop calling recvfrom on a non-blocking socket like this, we call it polling: the application process continuously queries the kernel to see whether an operation is ready yet. This is usually a great waste of CPU time, which is why this model is only occasionally encountered.
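The polling loop just described can be sketched as follows. This is an illustration rather than code from the article, and it assumes sockfd is an already-bound UDP socket: fcntl sets O_NONBLOCK, and recvfrom is retried until the kernel has a datagram ready.

#include <errno.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Poll a non-blocking UDP socket until one datagram has been
 * copied into buf; this busy loop is the CPU waste noted above. */
ssize_t poll_recv(int sockfd, char *buf, size_t len)
{
    /* tell the kernel never to put us to sleep on this descriptor */
    int flags = fcntl(sockfd, F_GETFL, 0);
    fcntl(sockfd, F_SETFL, flags | O_NONBLOCK);

    for ( ; ; ) {
        ssize_t n = recvfrom(sockfd, buf, len, 0, NULL, NULL);
        if (n >= 0)
            return n;                              /* datagram received */
        if (errno != EWOULDBLOCK && errno != EAGAIN)
            return -1;                             /* a real error */
        /* EWOULDBLOCK: no data yet, so ask again (polling) */
    }
}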

I/O multiplexing model

With I/O multiplexing, we are notified when one or more I/O conditions are ready (for example, input is ready to be read, or a descriptor is capable of taking more output). I/O multiplexing is supported by select and poll, and also by the newer POSIX.1g pselect.
I/O multiplexing is typically used in the following networking situations:
1. When a client handles multiple descriptors (for example, interactive input and a network socket).
2. When a client handles multiple sockets at the same time.
3. When a TCP server handles both a listening socket and its connected sockets.
4. When a server handles both TCP and UDP.
5. When a server handles multiple services, or perhaps multiple protocols (for example, the inetd daemon).

I/O multiplexing is not limited to network programming. Many applications also need this technology.

With I/O multiplexing, we call select or poll and block in one of these two system calls instead of blocking in the actual I/O system call. Figure 3 summarizes the I/O multiplexing model.

 

 

Figure 3 I/O multiplexing Model

We block in a call to select, waiting for the datagram socket to become readable. When select returns that the socket is readable, we call recvfrom to copy the datagram into the application buffer.
Comparing Figure 3 with Figure 1, there does not appear to be any advantage; in fact, there is a slight disadvantage, because using select requires two system calls instead of one. But the benefit of select is that we can wait for more than one descriptor to be ready.
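A minimal sketch of the two steps in Figure 3 (an illustration, not code from the article) might look like this, assuming sockfd is a bound UDP socket: the process blocks in select rather than in recvfrom, and calls recvfrom only once the socket is reported readable.

#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Block in select until the socket is readable, then read one datagram. */
ssize_t wait_and_recv(int sockfd, char *buf, size_t len)
{
    fd_set rset;
    FD_ZERO(&rset);
    FD_SET(sockfd, &rset);

    /* step 1: block in select (not in recvfrom) waiting for data;
     * with more descriptors in rset we could wait on all of them */
    if (select(sockfd + 1, &rset, NULL, NULL, NULL) < 0)
        return -1;

    /* step 2: the socket is readable; recvfrom copies the datagram
     * from the kernel into the application buffer */
    if (FD_ISSET(sockfd, &rset))
        return recvfrom(sockfd, buf, len, 0, NULL, NULL);

    return 0;
}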

Signal-driven I/O model

With the signal-driven I/O model, the kernel can notify us with the SIGIO signal when the descriptor is ready. Figure 4 shows an example:

 

 

Figure 4 signal-driven I/O model
First, we enable signal-driven I/O on the socket and install a signal handler with a call to sigaction. This system call returns immediately and the process keeps running; it is not blocked. When the datagram is ready to be read, the SIGIO signal is generated for the process. We can then either read the datagram by calling recvfrom inside the signal handler and notify the main loop that the data is ready to be processed, or simply notify the main loop and let it read the datagram.

Regardless of how we handle the SIGIO signal, the advantage of this model is that we are not blocked while waiting for the datagram to arrive. The main loop can keep executing and just wait for the notification from the signal handler that the datagram is either ready to be processed or ready to be read.
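The setup just described can be sketched as below; this is an illustration rather than code from the article, and it assumes sockfd is a bound UDP socket. sigaction installs the SIGIO handler, F_SETOWN names the process that should receive the signal, and O_ASYNC turns on signal-driven I/O. Here the handler only sets a flag; the main loop calls recvfrom when it sees the flag.

#include <fcntl.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t datagram_ready = 0;

/* signal handler: just tell the main loop that a datagram is readable */
static void sigio_handler(int signo)
{
    datagram_ready = 1;
}

/* enable signal-driven I/O on an already-bound UDP socket */
void enable_sigio(int sockfd)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = sigio_handler;
    sigaction(SIGIO, &sa, NULL);             /* install the SIGIO handler */

    fcntl(sockfd, F_SETOWN, getpid());       /* deliver SIGIO to this process */

    int flags = fcntl(sockfd, F_GETFL, 0);
    fcntl(sockfd, F_SETFL, flags | O_ASYNC); /* turn on signal-driven I/O */
}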

 

Asynchronous I/O model

The asynchronous I/O model was introduced in the 1993 edition of POSIX.1. We tell the kernel to start the operation and to notify us when the entire operation, including the copy of the datagram from the kernel into our own buffer, is complete. The main difference from the signal-driven model is that with signal-driven I/O the kernel tells us when an I/O operation can be started, whereas with asynchronous I/O the kernel tells us when the I/O operation has completed. Figure 5 shows an example.

 

Figure 5 asynchronous I/O model

We call aio_read (POSIX asynchronous I/O functions begin with aio_ or lio_) and pass the kernel the descriptor, buffer pointer, and buffer size (the same three arguments as read), the file offset (similar to lseek), and how the kernel should notify us when the entire operation is complete. This system call returns immediately, and our process is not blocked waiting for the I/O to complete. In this example we assume that we ask the kernel to generate a signal when the operation is complete. The signal is not generated until the data has been copied into the application buffer, which is what distinguishes this model from the signal-driven I/O model.
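A sketch of such a call follows; it is an illustration, not code from the article. It fills in a struct aiocb with the descriptor, buffer, size, and offset, and asks for SIGUSR1 (an arbitrary choice here) on completion. Note that support for POSIX AIO on socket descriptors is implementation-dependent; many systems only support it on regular files.

#include <aio.h>
#include <signal.h>
#include <string.h>

#define MAXLINE 4096

static char buf[MAXLINE];
static struct aiocb cb;
static volatile sig_atomic_t aio_complete = 0;

/* by the time this handler runs, the kernel has already
 * copied the data into buf */
static void aio_done(int signo)
{
    aio_complete = 1;
}

/* start an asynchronous read on fd and return immediately */
void start_aio_read(int fd)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = aio_done;
    sigaction(SIGUSR1, &sa, NULL);

    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;            /* descriptor */
    cb.aio_buf    = buf;           /* buffer pointer */
    cb.aio_nbytes = MAXLINE;       /* buffer size */
    cb.aio_offset = 0;             /* file offset, as with lseek */

    /* ask the kernel to send SIGUSR1 when the whole operation,
     * including the copy into buf, has completed */
    cb.aio_sigevent.sigev_notify = SIGEV_SIGNAL;
    cb.aio_sigevent.sigev_signo  = SIGUSR1;

    aio_read(&cb);                 /* returns immediately */
}

The main loop can later check aio_complete and call aio_return(&cb) to obtain the number of bytes read.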

Comparison of various I/O models

 

Figure 6 Comparison of various I/O models

Figure 6 compares the five I/O models described above. It shows that the differences among the first four models lie in the first stage, because their second stage is the same: the process is blocked in the call to recvfrom while the data is copied from the kernel into the caller's buffer. Asynchronous I/O, however, differs from the first four models in both stages.

Synchronous I/O and asynchronous I/O

 

POSIX.1 defines these two terms as follows:

1. A synchronous I/O operation causes the requesting process to be blocked until that I/O operation completes.

2. An asynchronous I/O operation does not cause the requesting process to be blocked.

According to these definitions, the first four I/O models are all synchronous I/O models, because the actual I/O operation (recvfrom) blocks the process; only the asynchronous I/O model matches the definition of asynchronous I/O.

 

--------------------------------------------------------------------------------

Synchronous I/O and asynchronous I/O; blocking I/O and non-blocking I/O

When reading data, if no data is readable at that moment, a blocking read waits until data arrives and returns only after the data has been copied from the kernel into the caller's buffer, whereas a non-blocking read returns immediately. If data is readable, however, a non-blocking read also returns only after the data has been copied from the kernel into the caller's buffer.

 

That is the difference between blocking and non-blocking I/O. Yet the read operations in both of the examples above are synchronous. Does that seem strange? The definition of synchronous makes it clear:

• A synchronous I/O operation causes the requesting process to be blocked until that I/O operation completes.

• An asynchronous I/O operation does not cause the requesting process to be blocked.

 

Now consider an asynchronous example of the same read operation. An asynchronous read returns immediately, even before the data is readable; only after the kernel has copied the data into the caller's buffer is the program notified, for example by an event or a signal, that the read operation has completed.

The key is whether the process blocks while the data is copied from the kernel into the caller's buffer: blocking (and non-blocking) reads block during that copy, while an asynchronous read does not.

 

This article is from a CSDN blog; when reproducing it, please credit the source: http://blog.csdn.net/outsinre/archive/2010/06/17/5675264.aspx
