Simple, select, poll and epoll network programming models: implementation and analysis -- the naïve model


Linux network development generally revolves around the several network programming models named in the title. There are many good analytical articles about them on the Internet, and their basic arguments are much the same, but I don't think they go into the details, and some key points lack supporting data. So I decided to study these models myself. (If you reprint this, please credit the CSDN blog of breaksoftware.)

Before studying these models, I decided to proceed in the following steps:

    1. Implement a naïve model
    2. Implement a test program that sends requests
    3. Implement the select model and test its efficiency
    4. Implement the poll model and test its efficiency
    5. Implement the epoll model and test its efficiency
    6. Analyze the performance of each model and compare their source code
    7. Modify the above programs according to each model's characteristics, then test and analyze again
The naïve model is the simplest model we can use when programming. Since there is no standard name for it, I simply call it the naïve model. I chose to implement it first for two reasons: to go from easy to hard, and to follow the way these models actually evolved and experience that progression. After implementing the naïve model, we will write a test program for sending requests, which lets us generate a large number of requests so that we can later test how well each model holds up. We then implement the select, poll, and epoll models. This order also matches the order in which the techniques were developed: we can analyze the strengths and weaknesses of the earlier model, then see in the later model how those weaknesses were addressed, and appreciate the progress that was made.

To make comparison across the models easier, I reuse code as much as possible: modules that play the same role are implemented with the same function, and where that is not possible a parameter distinguishes the variants, with the differing fragments kept as small as possible. As a result, most of the important code fragments appear in this article. To observe each model's execution directly, we start a statistics-printing thread before each model runs:
        err = init_print_thread();
        if (err < 0) {
                perror("create print thread error");
                exit(EXIT_FAILURE);
        }
The init_print_thread function is used by every model, and wait_print_thread waits for the printing thread to exit. Since I do not intend to let this thread exit, wait_print_thread is mostly used to block the main thread.
pthread_t g_print_thread;

int init_print_thread()
{
    return pthread_create(&g_print_thread, NULL, print_count, NULL);
}

void wait_print_thread()
{
    pthread_join(g_print_thread, NULL);
}
The print_count function is the thread routine; it prints one line of statistics every second.
static int g_request_count = 0;
static int g_server_suc = 0;
static int g_client_suc = 0;
static int g_read_suc = 0;
static int g_write_suc = 0;
static int g_server_fai = 0;
static int g_client_fai = 0;
static int g_read_fai = 0;
static int g_write_fai = 0;

void* print_count(void* arg)
{
    struct timeval cur_time;
    int index = 0;
    fprintf(stderr, "index\tseconds_micro_seconds\tac\tst\tsr\tsw\tft\tfr\tfw\n");
    while (1) {
        sleep(1);
        gettimeofday(&cur_time, NULL);
        fprintf(stderr, "%d\t%ld\t%d\t%d\t%d\t%d\t%d\t%d\t%d\n", index,
                cur_time.tv_sec * 1000000 + cur_time.tv_usec,
                g_request_count,
                g_server_suc > g_client_suc ? g_server_suc : g_client_suc,
                g_read_suc, g_write_suc,
                g_server_fai > g_client_fai ? g_server_fai : g_client_fai,
                g_read_fai, g_write_fai);
        index++;
    }
}
The counters above have the following meanings:
    • g_request_count records the total number of requests;
    • g_server_suc records the number of successful server actions: reading the client request succeeded and sending the reply back succeeded;
    • g_server_fai records the number of failed server actions: (1) reading the client request failed; (2) reading succeeded but sending the reply back failed;
    • g_client_suc records the number of successful client actions: sending the request succeeded and reading the server's reply succeeded;
    • g_client_fai records the number of failed client actions: (1) sending the request failed; (2) sending succeeded but reading the server's reply failed;
    • g_read_suc records the number of successful reads: (1) the server read the client's request successfully; (2) the client read the server's reply successfully;
    • g_read_fai records the number of failed reads: (1) the server failed to read the client's request; (2) the client failed to read the server's reply;
    • g_write_suc records the number of successful sends: (1) the client sent its request to the server successfully; (2) the server sent its reply to the client successfully;
    • g_write_fai records the number of failed sends: (1) the client failed to send its request to the server; (2) the server failed to send its reply to the client;
From this output we can see how the server and the client are executing, whether the connections are having problems, and whether the server is dropping requests. Next, we create the socket that clients will connect to.
listen_sock = make_socket(0);
We pass 0 to make_socket because we do not need the listening socket to be non-blocking (asynchronous).
int make_socket(int asyn)
{
    int listen_sock = -1;
    int rc = -1;
    int on = 1;
    struct sockaddr_in name;

    listen_sock = socket(AF_INET, SOCK_STREAM, 0);
    if (listen_sock < 0) {
        perror("create socket error");
        exit(EXIT_FAILURE);
    }
    rc = setsockopt(listen_sock, SOL_SOCKET, SO_REUSEADDR, (char*)&on, sizeof(on));
    if (rc < 0) {
        perror("setsockopt error");
        exit(EXIT_FAILURE);
    }
    if (asyn) {
        rc = ioctl(listen_sock, FIONBIO, (char*)&on);
        if (rc < 0) {
            perror("ioctl failed");
            exit(EXIT_FAILURE);
        }
    }
    name.sin_family = AF_INET;
    name.sin_port = htons(PORT);
    name.sin_addr.s_addr = htonl(INADDR_ANY);
    if (bind(listen_sock, (struct sockaddr*)&name, sizeof(name)) < 0) {
        perror("bind error");
        exit(EXIT_FAILURE);
    }
    return listen_sock;
}
In this function we create a TCP socket with socket and bind it to a specific local port with bind. In the naïve model the server works synchronously, and the accepted connections do not need to be non-blocking, so we pass 0 when creating the socket and the listening socket stays blocking. In the select, poll, and epoll models introduced later, we want client connections to be handled asynchronously, so we pass 1 to make the listening socket non-blocking and treat the connections accepted through it asynchronously. Once the socket is bound, the server starts listening for incoming clients:
if (listen(listen_sock, SOMAXCONN) < 0) {
    perror("listen error");
    exit(EXIT_FAILURE);
}
SOMAXCONN is the maximum backlog of pending connections the listening socket will queue; it is a system macro, and on my system its value is 128. Finally, we accept and handle client requests in an endless loop:
while (1) {
    int new_sock;
    new_sock = accept(listen_sock, NULL, NULL);
    if (new_sock < 0) {
        perror("accept error");
        exit(EXIT_FAILURE);
    }
accept gives us the socket of the incoming connection. If the descriptor is valid, we increment the count of received requests by 1:
    request_add(1);
The request_add function will later be called from different models and from the test program, and from different threads, so it introduces a multithreading concern. Instead of locks or similar mechanisms, I simply use an atomic operation:
void request_add(int count)
{
    __sync_fetch_and_add(&g_request_count, count);
}
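__sync_fetch_and_add is one of GCC's legacy atomic builtins. For comparison only, and not part of the original code, a lock-based version of the same counter update could look like this minimal sketch:

#include <pthread.h>

/* Hypothetical lock-based alternative to the atomic version above;
 * g_request_count is the counter defined earlier in this article. */
static pthread_mutex_t g_count_lock = PTHREAD_MUTEX_INITIALIZER;

void request_add_locked(int count)
{
    pthread_mutex_lock(&g_count_lock);
    g_request_count += count;   /* protected by the mutex instead of an atomic op */
    pthread_mutex_unlock(&g_count_lock);
}

Compared with this, the single atomic builtin is shorter and avoids a lock/unlock round trip on every request.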
Because the naïve model we designed is synchronous, the accepted socket is not non-blocking. When something unusual happens, a subsequent read from or write to the socket may block, which would stall the whole service, and that is something we do not want to see. So we set operation timeouts on the blocking socket:
    set_block_filedes_timeout(new_sock);
void set_block_filedes_timeout(int filedes)
{
    struct timeval tv_out, tv_in;

    tv_in.tv_sec = READ_TIMEOUT_S;
    tv_in.tv_usec = READ_TIMEOUT_US;
    if (setsockopt(filedes, SOL_SOCKET, SO_RCVTIMEO, &tv_in, sizeof(tv_in)) < 0) {
        perror("set rcv timeout error");
        exit(EXIT_FAILURE);
    }
    tv_out.tv_sec = WRITE_TIMEOUT_S;
    tv_out.tv_usec = WRITE_TIMEOUT_US;
    if (setsockopt(filedes, SOL_SOCKET, SO_SNDTIMEO, &tv_out, sizeof(tv_out)) < 0) {
        perror("set snd timeout error");
        exit(EXIT_FAILURE);
    }
}
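The fragments in this article rely on several macros that it never reproduces (PORT, MAXMSG, WAIT_COUNT_MAX and the read/write timeout values). The block below is only my guess at plausible definitions so that the fragments hang together; the author's real values are unknown.

/* Assumed constants -- not from the original article, values are illustrative only. */
#define PORT             8086    /* server listening port */
#define MAXMSG           1024    /* receive buffer size used by read_data */
#define WAIT_COUNT_MAX   100     /* retry limit for non-blocking read/write */

#define READ_TIMEOUT_S   2       /* SO_RCVTIMEO: seconds */
#define READ_TIMEOUT_US  0       /* SO_RCVTIMEO: microseconds */
#define WRITE_TIMEOUT_S  2       /* SO_SNDTIMEO: seconds */
#define WRITE_TIMEOUT_US 0       /* SO_SNDTIMEO: microseconds */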
A note here: I have seen many people online asking why setting the timeout with the method above has no effect. In fact, their mistake is that they also set the socket to non-blocking. If a socket is non-blocking and a timeout is set, operations return immediately anyway, so the timeout setting naturally appears to be ignored.

Another problem is that some people run into a "deadlock" (not a deadlock in the strict sense) when designing their server and client. That happens because both the server and the client are synchronous and neither socket has a timeout. The client calls write and then moves into read while the server also ends up sitting in read, and the two wait on each other. In practice this does not come up that often, because most people hand recv a large buffer and assume everything arrives in a single call, without considering that one read may not return all of the data. The trouble starts with an implementation that does loop, such as the following:
while (nbytes > 0) {
    nbytes = recv(filedes, buffer, sizeof(buffer) - 1, 0);
    if (nbytes > 0) {
        total_length_recv += nbytes;
    }
    buffer[nbytes] = 0;
    //fprintf(stderr, "%s", buffer);
}
This server-side read loop does account for the possibility of not getting all the data at once. But if the client has already sent everything, the server may read all of it in the first recv. Since that read returned more than 0 bytes, the loop calls recv again; by this time the client has already moved on to reading the server's reply. Because the sockets are synchronous and no timeout is set, the server stays stuck in that second recv, and the "deadlock" appears. This process is actually quite interesting: when we harden a piece of code that was not robust, we often fall into another pit. But as long as we keep climbing out of the pits, we become more enlightened and notice many problems that others overlook.

Back to the point. After setting the socket timeouts, we let the server read the client's input, and if the read succeeds we send a reply back to the client. Finally the server closes this connection:
    if (0 == server_read(new_sock)) {
        server_write(new_sock);
    }
    close(new_sock);
}
server_read calls read_data underneath, and read_data is one of the two key pieces of behavior in this whole codebase:
int is_nonblock(int fd)
{
    int flags = fcntl(fd, F_GETFL);
    if (flags == -1) {
        perror("get fd flags error");
        exit(EXIT_FAILURE);
    }
    return (flags & O_NONBLOCK) ? 1 : 0;
}

int read_data(int filedes, int from_server)
{
    char buffer[MAXMSG];
    int nbytes;
    int total_len_recv;
    int wait_count = 0;
    int rec_suc = 0;

    total_len_recv = 0;
    while (1) {
        nbytes = recv(filedes, buffer, sizeof(buffer) - 1, 0);
        if (nbytes < 0) {
            if (is_nonblock(filedes)) {
                if (EAGAIN == errno || EWOULDBLOCK == errno || EINTR == errno) {
                    if (wait_count < WAIT_COUNT_MAX) {
                        wait_count++;
                        usleep(wait_count);
                        continue;
                    }
                }
            }
            break;
        }
        if (nbytes == 0) {
            //fprintf(stderr, "read end\n");
        } else if (nbytes > 0) {
            total_len_recv += nbytes;
            //buffer[nbytes] = 0;
            //fprintf(stderr, "%s", buffer);
        }
        if ((from_server && is_server_recv_finish(total_len_recv)) ||
            (!from_server && is_client_recv_finish(total_len_recv))) {
            rec_suc = 1;
            break;
        }
    }
read_data comes in a client flavor and a server flavor, but the basic logic is the same. Because one recv call may not return everything, we keep trying in a while loop. For a non-blocking socket, when recv returns a value less than 0 we treat the various would-block error values as a signal to retry with a gradually growing wait. If the socket is blocking, we leave the loop as soon as recv returns a negative value. The total_len_recv variable accumulates the total number of bytes read, and together with whether we are acting as the server or the client it decides whether the read is complete. Once the read loop finishes, we record the outcome of the read and of this side's overall behavior:
    if (from_server) {
        if (rec_suc) {
            __sync_fetch_and_add(&g_read_suc, 1);
            return 0;
        } else {
            __sync_fetch_and_add(&g_read_fai, 1);
            __sync_fetch_and_add(&g_server_fai, 1);
            return -1;
        }
    } else {
        if (rec_suc) {
            __sync_fetch_and_add(&g_read_suc, 1);
            __sync_fetch_and_add(&g_client_suc, 1);
            return 0;
        } else {
            __sync_fetch_and_add(&g_read_fai, 1);
            __sync_fetch_and_add(&g_client_fai, 1);
            return -1;
        }
    }
}
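The article never shows server_read or the is_*_recv_finish helpers that read_data depends on. The sketch below is my own reconstruction under the assumption that both sides exchange fixed-length messages; the helper names come from the fragments above, but the lengths are invented for illustration.

/* Assumed message sizes -- not from the original article. */
#define CLIENT_REQUEST_LEN  128   /* bytes a client sends per request */
#define SERVER_REPLY_LEN    128   /* bytes the server sends back per reply */

/* A read is considered finished once the expected number of bytes has arrived. */
int is_server_recv_finish(int total_len_recv)
{
    return total_len_recv >= CLIENT_REQUEST_LEN;
}

int is_client_recv_finish(int total_len_recv)
{
    return total_len_recv >= SERVER_REPLY_LEN;
}

/* server_read is presumably a thin wrapper that marks the call as server side,
 * mirroring client_read shown near the end of the article. */
int server_read(int filedes)
{
    return read_data(filedes, 1);
}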
If the read succeeds, the send follows. server_write calls write_data underneath.
int write_data(int filedes, int from_server)
{
    int nbytes;
    int total_len_send;
    int wait_count = 0;
    int index;
    int send_suc = 0;

    total_len_send = 0;
    index = 0;
    while (1) {
        if (from_server) {
            nbytes = send(filedes, get_server_send_ptr(index), get_server_send_len(index), 0);
        } else {
            nbytes = send(filedes, get_client_send_ptr(index), get_client_send_len(index), 0);
        }
        if (nbytes < 0) {
            if (is_nonblock(filedes)) {
                if (EAGAIN == errno || EWOULDBLOCK == errno || EINTR == errno) {
                    if (wait_count < WAIT_COUNT_MAX) {
                        wait_count++;
                        usleep(wait_count);
                        continue;
                    }
                }
            }
            break;
        } else if (nbytes == 0) {
            break;
        } else if (nbytes > 0) {
            total_len_send += nbytes;
        }
        if ((from_server && is_server_send_finish(total_len_send)) ||
            (!from_server && is_client_send_finish(total_len_send))) {
            send_suc = 1;
            break;
        }
    }
Its implementation follows the same idea as read_data: it also allows for the send not completing in one call and handles both blocking and non-blocking sockets. When the write loop finishes, the related counters are updated:
    if (from_server) {
        if (send_suc) {
            __sync_fetch_and_add(&g_write_suc, 1);
            __sync_fetch_and_add(&g_server_suc, 1);
            return 0;
        } else {
            __sync_fetch_and_add(&g_write_fai, 1);
            __sync_fetch_and_add(&g_server_fai, 1);
            return -1;
        }
    } else {
        if (send_suc) {
            __sync_fetch_and_add(&g_write_suc, 1);
            return 0;
        } else {
            __sync_fetch_and_add(&g_write_fai, 1);
            __sync_fetch_and_add(&g_client_fai, 1);
            return -1;
        }
    }
}
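write_data likewise depends on helpers that are not reproduced in the article: functions that return the next chunk of the outgoing message and functions that decide when the send is complete. Under the same fixed-length assumption as the previous sketch, they might look like this; the buffers, names and values are mine, not the author's.

/* Assumed outgoing payloads -- illustrative only. */
static char g_client_request[CLIENT_REQUEST_LEN];   /* what the client sends */
static char g_server_reply[SERVER_REPLY_LEN];       /* what the server sends back */

/* index is the number of bytes already sent; return the remaining chunk. */
const char* get_client_send_ptr(int index) { return g_client_request + index; }
int         get_client_send_len(int index) { return CLIENT_REQUEST_LEN - index; }
const char* get_server_send_ptr(int index) { return g_server_reply + index; }
int         get_server_send_len(int index) { return SERVER_REPLY_LEN - index; }

int is_server_send_finish(int total_len_send) { return total_len_send >= SERVER_REPLY_LEN; }
int is_client_send_finish(int total_len_send) { return total_len_send >= CLIENT_REQUEST_LEN; }

/* server_write, like server_read, presumably just tags the call as server side. */
int server_write(int filedes)
{
    return write_data(filedes, 1);
}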
Finally, let us look at the test program. To make testing convenient, it accepts at least two arguments: the first is how many sender threads to start, and the second is how many microseconds each thread waits before sending a request. A third, optional argument specifies how many requests to send in total. With these parameters we can control the test program's behavior.
#define MAXREQUESTCOUNT 100000

static int g_total = 0;
static int g_max_total = 0;

void* send_data(void* arg)
{
    int wait_time;
    int client_sock;

    wait_time = *(int*)arg;
    while (__sync_fetch_and_add(&g_total, 1) < g_max_total) {
        usleep(wait_time);
        client_sock = make_client_socket();
        connect_server(client_sock);
        request_add(1);
        set_block_filedes_timeout(client_sock);
        if (0 == client_write(client_sock)) {
            client_read(client_sock);
        }
        close(client_sock);
        client_sock = 0;
    }
}

int main(int argc, char* argv[])
{
    int thread_count;
    int index;
    int err;
    int wait_time;
    pthread_t thread_id;

    if (argc < 3) {
        fprintf(stderr, "error! example: client 50\n");
        return 0;
    }
    err = init_print_thread();
    if (err < 0) {
        perror("create print thread error");
        exit(EXIT_FAILURE);
    }
    thread_count = atoi(argv[1]);
    wait_time = atoi(argv[2]);
    g_max_total = MAXREQUESTCOUNT;
    if (argc > 3) {
        g_max_total = atoi(argv[3]);
    }
    for (index = 0; index < thread_count; index++) {
        err = pthread_create(&thread_id, NULL, send_data, &wait_time);
        if (err != 0) {
            perror("can't create send thread");
            exit(EXIT_FAILURE);
        }
    }
    wait_print_thread();
    return 0;
}
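As a usage note (the executable name here is my own placeholder, since the article never says how the test program is built or named): a run along the lines of "client 1000 1 300000" would start 1000 sender threads, have each thread usleep for 1 microsecond before sending a request, and stop after 300,000 requests in total, which matches the test described at the end of this article.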
Inside each sender thread, the socket is first created with make_client_socket and bound to a local port:
int make_client_socket()
{
    int client_sock = -1;
    struct sockaddr_in client_addr;

    client_sock = socket(AF_INET, SOCK_STREAM, 0);
    if (client_sock < 0) {
        perror("create socket error");
        exit(EXIT_FAILURE);
    }
    bzero(&client_addr, sizeof(client_addr));
    client_addr.sin_family = AF_INET;
    client_addr.sin_addr.s_addr = htonl(INADDR_ANY);
    client_addr.sin_port = htons(0);
    if (bind(client_sock, (struct sockaddr*)&client_addr, sizeof(client_addr)) < 0) {
        perror("bind error");
        exit(EXIT_FAILURE);
    }
    return client_sock;
}
Then it connects to the server with connect_server:
void connect_server(int client_sock)
{
    struct sockaddr_in server_addr;

    bzero(&server_addr, sizeof(server_addr));
    server_addr.sin_family = AF_INET;
    if (inet_aton("127.0.0.1", &server_addr.sin_addr) == 0) {
        perror("set server ip error");
        exit(EXIT_FAILURE);
    }
    server_addr.sin_port = htons(PORT);
    if (connect(client_sock, (struct sockaddr*)&server_addr, sizeof(server_addr)) < 0) {
        perror("client connect server error");
        exit(EXIT_FAILURE);
    }
}
Finally it talks to the server through client_write and client_read. Both simply call the write_data and read_data described above, so there is nothing new to add:
int client_read(int filedes)
{
    return read_data(filedes, 0);
}

int client_write(int filedes)
{
    return write_data(filedes, 0);
}
We started 1000 threads and sent 300,000 requests to see what the naïve model can handle. First, the output printed by the server:
We can see that the stable processing rate is about 14,000 to 15,000 requests per second. Now let's look at the client's output:
The client's sending rate is also roughly 14,000 to 16,000 per second. Note that because both the client and the server are synchronous, this rate is effectively the server's processing peak; otherwise, with a 1 microsecond wait, 1000 threads would certainly send more than 15,000 requests per second. I also ran two test processes at the same time to apply more load, and this likewise confirmed that the maximum processing capacity is around 14,000 to 15,000 (in my environment). We found that the naïve model makes network communication very easy to implement. But its obvious drawback is that only one request can be handled at a time: accepting a connection, reading the socket, and writing the socket all happen serially. Unless the work is handed off to other threads, for example a thread pool, a single-threaded design cannot get around this (a rough sketch of a threaded variant follows below). Technology keeps moving forward, and in the next part we will look at the select model, which addresses exactly this problem.
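Since the paragraph above mentions pushing the work onto other threads as a way out, here is a minimal sketch of the simplest such variant: one detached handler thread per accepted connection, rather than a real thread pool. This is only my illustration, not what the article does, and the next part of the series solves the problem with select instead. It assumes <pthread.h> and <unistd.h> are included and that the functions shown earlier are in scope.

/* Hypothetical per-connection handler: reuses the server helpers shown above. */
static void* handle_client(void* arg)
{
    int new_sock = (int)(long)arg;    /* fd passed through the void* argument */
    set_block_filedes_timeout(new_sock);
    if (0 == server_read(new_sock)) {
        server_write(new_sock);
    }
    close(new_sock);
    return NULL;
}

/* Replacement accept loop: hand each connection to its own detached thread. */
while (1) {
    pthread_t tid;
    int new_sock = accept(listen_sock, NULL, NULL);
    if (new_sock < 0) {
        perror("accept error");
        continue;
    }
    request_add(1);
    if (pthread_create(&tid, NULL, handle_client, (void*)(long)new_sock) == 0) {
        pthread_detach(tid);
    } else {
        close(new_sock);
    }
}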
