Socket programming practices in Linux (7): I/O multiplexing with the select model


Before entering the topic of today's select model, let's take a look at the five I/O models:

(1) blocking I/O (this method is used by default)

In server socket programming, the familiar accept and recv calls are blocking. Take recv as an example: when the application calls recv and the peer has not sent anything (the kernel buffer is empty), the application blocks. Once the peer's data arrives in the kernel buffer, the kernel copies it into user space, the blocking call returns, and the application moves on to its next step.
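
A minimal sketch of this default behaviour (the connected socket connfd is assumed to come from an accept() call elsewhere):

#include <sys/types.h>
#include <sys/socket.h>
#include <stdio.h>

// Blocking read: the call sleeps inside the kernel until the peer sends data
// (or closes the connection); only then does the application continue.
void blockingRead(int connfd)
{
    char buf[1024];
    ssize_t n = recv(connfd, buf, sizeof(buf), 0);  // blocks while the kernel buffer is empty
    if (n > 0)
        printf("received %zd bytes\n", n);
    else if (n == 0)
        printf("peer closed the connection\n");
    else
        perror("recv");                             // error, e.g. interrupted by a signal
}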

(2) non-blocking I/O (not recommended)

If the socket is in non-blocking mode, the application calls recv in a loop to receive data. When there is no data in the buffer, the call does not block: recv returns -1 and the error code is EWOULDBLOCK (EAGAIN). The application therefore keeps polling for data, busy-waiting and burning CPU. For this reason, non-blocking mode is rarely used on its own; it has a narrow range of application and is generally combined with I/O multiplexing.
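
For reference, a minimal sketch of this polling style (fcntl() with O_NONBLOCK switches the socket to non-blocking mode; connfd is again assumed to be a connected socket):

#include <sys/types.h>
#include <sys/socket.h>
#include <fcntl.h>
#include <errno.h>
#include <stdio.h>

// Non-blocking read: recv() returns -1 with errno == EWOULDBLOCK/EAGAIN when no
// data is available, so the application must poll in a loop (busy waiting).
void nonBlockingPoll(int connfd)
{
    int flags = fcntl(connfd, F_GETFL, 0);
    fcntl(connfd, F_SETFL, flags | O_NONBLOCK);     // put the socket into non-blocking mode

    char buf[1024];
    while (true)
    {
        ssize_t n = recv(connfd, buf, sizeof(buf), 0);
        if (n >= 0)
            break;                                  // data arrived (or the peer closed)
        if (errno == EWOULDBLOCK || errno == EAGAIN)
            continue;                               // no data yet: keep polling, wasting CPU
        perror("recv");                             // a real error
        break;
    }
}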

(3) signal-driven I/O model (not frequently used)

The application installs a SIGIO signal handler. When data arrives in the kernel buffer, the kernel sends the signal to the application; on receiving it, the application calls recv. Because the buffer already holds data, recv normally does not block. Even so, this model is rarely used. It is a typical "pull" model: the application still has to call recv to pull the data from kernel space, which introduces a delay, and new signals arriving during that delay cannot be avoided. That is its main defect.
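
A hedged sketch of this setup (F_SETOWN directs SIGIO to this process and O_ASYNC enables signal-driven mode; both are standard fcntl operations on Linux):

#include <sys/types.h>
#include <sys/socket.h>
#include <fcntl.h>
#include <signal.h>
#include <unistd.h>

static int g_sockfd = -1;

// SIGIO handler: by the time the signal fires, data is already in the kernel
// buffer, so this recv() normally does not block. recv() and write() are
// async-signal-safe, so they may be called here.
static void onSigio(int /* signo */)
{
    char buf[1024];
    ssize_t n = recv(g_sockfd, buf, sizeof(buf), 0);
    if (n > 0)
        write(STDOUT_FILENO, buf, n);
}

void setupSignalDrivenIO(int sockfd)
{
    g_sockfd = sockfd;
    signal(SIGIO, onSigio);                  // install the SIGIO handler
    fcntl(sockfd, F_SETOWN, getpid());       // deliver SIGIO to this process
    int flags = fcntl(sockfd, F_GETFL, 0);
    fcntl(sockfd, F_SETFL, flags | O_ASYNC); // enable signal-driven I/O on the socket
}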

(4) asynchronous I/O (not commonly used)

The application calls aio_read and hands in a user-space buffer buf. The call returns immediately without blocking, and the application can carry on with other work. When data arrives in the TCP/IP protocol buffer, Linux actively copies the kernel data into user space and then sends a signal to the application, telling it that the data has arrived and is ready to be processed.

Asynchronous I/O is a typical "push" model and the most efficient of the five. With kernel support, the application works asynchronously: it handles other tasks while the I/O completes in the background, similar to IOCP on the Windows platform.
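
Purely as an illustration of the call pattern (POSIX AIO from <aio.h>; glibc implements it with helper threads, it is linked with -lrt, and it is most commonly used with regular files rather than sockets):

#include <aio.h>
#include <string.h>
#include <errno.h>
#include <stdio.h>

// Submit an asynchronous read: aio_read() returns at once and the buffer is
// filled in the background; completion can be polled with aio_error() or
// reported through cb->aio_sigevent.
static char buf[1024];

int submitAsyncRead(int fd, struct aiocb *cb)
{
    memset(cb, 0, sizeof(*cb));
    cb->aio_fildes = fd;
    cb->aio_buf    = buf;
    cb->aio_nbytes = sizeof(buf);
    cb->aio_offset = 0;

    if (aio_read(cb) == -1)                  // submit the request and return immediately
    {
        perror("aio_read");
        return -1;
    }

    /* ... do other work here; eventually check for completion: */
    while (aio_error(cb) == EINPROGRESS)
        ;                                    // still pending (a real program would not spin)
    return (int)aio_return(cb);              // number of bytes actually read
}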

(5) I/O multiplexing with the select model (the focus of this article)

Now imagine the server runs into the following problems:

1) Besides serving its clients, the server must also accept management commands from standard input.

If the blocking approach above is used, accept and read have to be called one after the other in a single thread, and both of them block. If accept is called first and then read, the server stays blocked in accept whenever no client is connecting, never gets a chance to call read, and therefore cannot respond to commands from standard input.

2) The server must serve a large number of clients.

With the blocking approach, accept and recv both block in a single thread. After a client is accepted it may be slow to send its message, so the server blocks in recv; in the meantime, other clients that connect get no response at all.

This is where select comes in. select plays the role of a manager: it watches multiple I/O descriptors at once. As soon as one or more of them reports an event we are interested in, select returns; the return value is the number of ready descriptors, and the 2nd to 4th parameters report which descriptors the events occurred on, so we can traverse them and handle each one.

Some will ask: why not just use multiple processes or threads? On UNIX platforms the multi-process model handles long-lived concurrent connections well, but it is a poor fit for connections that are opened and closed frequently. Of course, select is not the most efficient mechanism either: it has O(N) time complexity. I will cover the more efficient epoll in a later post; thanks for reading.

#include <sys/types.h>
#include <sys/time.h>
#include <sys/select.h>
#include <unistd.h>

int select(int nfds, fd_set *readfds, fd_set *writefds,
           fd_set *exceptfds, struct timeval *timeout);

nfds: the highest-numbered file descriptor in any of the three sets, plus 1 [i.e. the maximum file descriptor across the read, write, and exception sets, + 1].

fd_set [four macros are used to operate on an fd_set; a short usage sketch follows this list]:

void FD_CLR(int fd, fd_set *set);
int  FD_ISSET(int fd, fd_set *set);
void FD_SET(int fd, fd_set *set);
void FD_ZERO(fd_set *set);
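
A trivial sketch of the four macros around a single descriptor fd (the client and server examples further down use exactly this pattern):

#include <sys/select.h>

// Minimal use of the fd_set macros with one descriptor.
void fdSetDemo(int fd)
{
    fd_set readfds;
    FD_ZERO(&readfds);                       // empty the set
    FD_SET(fd, &readfds);                    // add fd before calling select
    if (select(fd + 1, &readfds, NULL, NULL, NULL) > 0
        && FD_ISSET(fd, &readfds))           // after select: is fd readable?
    {
        /* read from fd here */
    }
    FD_CLR(fd, &readfds);                    // remove fd from the set again
}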

timeout [the maximum time that may elapse from the start of the select call until it returns; note that this is a relative time]

The timeval structure:

struct timeval {
    long tv_sec;     /* seconds */
    long tv_usec;    /* microseconds */
};

Some programs call select with all three sets empty, nfds equal to 0, and a non-NULL timeout as a way to sleep with better-than-second precision (sketched after the list of timeout values below).

On Linux, select modifies the timeout to reflect the amount of time not slept, but many other implementations leave it unchanged. For better portability, the timeout is usually re-initialized to its starting value on each iteration of a loop.
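
A sketch of that portable pattern (the 5-second value is arbitrary):

#include <sys/select.h>

// Reset the timeout before every select() call: on Linux the struct may have
// been overwritten with the time that was left when select returned.
void waitLoop(int fd)
{
    fd_set rset;
    while (true)
    {
        struct timeval timeout;
        timeout.tv_sec  = 5;                 // re-assign the initial value each iteration
        timeout.tv_usec = 0;
        FD_ZERO(&rset);
        FD_SET(fd, &rset);
        int n = select(fd + 1, &rset, NULL, NULL, &timeout);
        if (n > 0)
            break;                           // fd is readable
        // n == 0: timed out; n == -1: error (check errno); here we simply retry
    }
}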

Timeout values:

timeout == NULL
Wait indefinitely; select returns -1 if interrupted by a signal, with errno set to EINTR.

timeout->tv_sec == 0 && timeout->tv_usec == 0
Do not wait; return immediately (polling).

timeout->tv_sec != 0 || timeout->tv_usec != 0
Wait at most the specified length of time; select returns 0 on timeout.
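
The "more accurate sleep" trick mentioned above, sketched with select alone:

#include <sys/select.h>

// Sleep with microsecond granularity by calling select() with no descriptors:
// nfds = 0, all three sets NULL, and only the timeout filled in.
void preciseSleep(long sec, long usec)
{
    struct timeval tv;
    tv.tv_sec  = sec;
    tv.tv_usec = usec;
    select(0, NULL, NULL, NULL, &tv);        // returns 0 once the interval elapses
}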

NOTE: For more on using select with a timeout, see my other blog post: http://blog.csdn.net/nk_test/article/details/49050379

Return Value:

On success, the total number of ready descriptors contained in the three sets is returned. If the call times out before any descriptor becomes ready, 0 is returned. On error, -1 is returned.


The following shows how to use select to improve the client and server programs and solve the two problems described above.

Client:

/* Example 1: use select to improve the echo client (echoClient) so that multiple
   file descriptors can be monitored at the same time in a single process. */
void echoClient(int sockfd)
{
    char buf[512];
    fd_set rset;
    // note: this assumes standard input has not been redirected
    int fd_stdin = fileno(stdin);
    int maxfd = (fd_stdin > sockfd) ? fd_stdin : sockfd;
    while (true)
    {
        FD_ZERO(&rset);
        // monitor the two I/O channels: stdin and the socket
        FD_SET(fd_stdin, &rset);
        FD_SET(sockfd, &rset);
        // the write/exception sets and the timeout are not needed, so they are NULL
        int nReady = select(maxfd + 1, &rset, NULL, NULL, NULL);
        if (nReady == -1)
            err_exit("select error");
        else if (nReady == 0)
            continue;

        /* nReady > 0: a readable event was detected */
        if (FD_ISSET(fd_stdin, &rset))
        {
            memset(buf, 0, sizeof(buf));
            if (fgets(buf, sizeof(buf), stdin) == NULL)
                break;
            if (writen(sockfd, buf, strlen(buf)) == -1)
                err_exit("write socket error");
        }
        if (FD_ISSET(sockfd, &rset))
        {
            memset(buf, 0, sizeof(buf));
            int readBytes = readline(sockfd, buf, sizeof(buf));
            if (readBytes == 0)
            {
                cerr << "server connect closed..." << endl;
                exit(EXIT_FAILURE);
            }
            else if (readBytes == -1)
                err_exit("read-line socket error");
            cout << buf;
        }
    }
}

Server:

/* Example 2: use select to improve the echo server's accept/connection handling.
   A single process can then serve multiple client connections; on a single-core CPU,
   one process using select to handle both the listening socket and the connections
   is not necessarily less efficient than multiple processes/threads. */
struct sockaddr_in clientAddr;
socklen_t addrLen;
int maxfd = listenfd;
fd_set rset;
fd_set allset;
FD_ZERO(&rset);
FD_ZERO(&allset);
FD_SET(listenfd, &allset);

int client[FD_SETSIZE];                 // saves the connected client sockets
for (int i = 0; i < FD_SETSIZE; ++i)
    client[i] = -1;
int maxi = 0;                           // highest index in use, for traversing the array

while (true)
{
    rset = allset;
    int nReady = select(maxfd + 1, &rset, NULL, NULL, NULL);
    if (nReady == -1)
    {
        if (errno == EINTR)
            continue;
        err_exit("select error");
    }
    // nReady == 0 would mean timeout, which cannot happen here (no timeout was set)
    else if (nReady == 0)
        continue;

    if (FD_ISSET(listenfd, &rset))
    {
        addrLen = sizeof(clientAddr);
        int connfd = accept(listenfd, (struct sockaddr *)&clientAddr, &addrLen);
        if (connfd == -1)
            err_exit("accept error");

        int i;
        for (i = 0; i < FD_SETSIZE; ++i)
        {
            if (client[i] < 0)
            {
                client[i] = connfd;
                if (i > maxi)
                    maxi = i;
                break;
            }
        }
        if (i == FD_SETSIZE)
        {
            cerr << "too many clients" << endl;
            exit(EXIT_FAILURE);
        }

        // print the client's IP address and port number
        cout << "Client information: " << inet_ntoa(clientAddr.sin_addr)
             << ", " << ntohs(clientAddr.sin_port) << endl;

        // add the connected socket to allset and update maxfd
        FD_SET(connfd, &allset);
        if (connfd > maxfd)
            maxfd = connfd;
        if (--nReady <= 0)
            continue;
    }

    /* a read event occurred on one of the connected sockets */
    for (int i = 0; i <= maxi; ++i)
    {
        if ((client[i] != -1) && FD_ISSET(client[i], &rset))
        {
            char buf[512] = {0};
            int readBytes = readline(client[i], buf, sizeof(buf));
            if (readBytes == -1)
                err_exit("readline error");
            else if (readBytes == 0)
            {
                cerr << "client connect closed..." << endl;
                FD_CLR(client[i], &allset);
                close(client[i]);
                client[i] = -1;
            }
            else
            {
                // note: the server does not echo back immediately after reading;
                // it deliberately waits four seconds before echoing
                sleep(4);
                cout << buf;
                if (writen(client[i], buf, readBytes) == -1)
                    err_exit("writen error");
            }
            if (--nReady <= 0)
                break;
        }
    }
}
