The purpose of multi-processing and multi-threading is to make full use of CPU resources. When a process does not need much CPU but does need I/O, I/O multiplexing can be used instead: the basic idea is to let the kernel suspend the process until an I/O event occurs and then return control to the program. The efficiency of this event-driven model comes from eliminating the overhead of process and thread context switches: the entire program runs in a single process context, and all logical flows share that process's address space. The disadvantage is that coding is complex, and the complexity keeps rising as the granularity of each logical flow gets finer.
Typical applications of I/O multiplexing (excerpted from UNP §6.1)
The select model is an implementation in which each client request is placed in an event queue and the main thread handles them through non-blocking I/O.
For detailed usage of select and the fd_set structure, see UNP Chapter 6.
Several tips
1. select can be interrupted by a signal while it is waiting; strictly speaking, the resulting EINTR error should be handled.
2. The time resolution the kernel actually supports is coarser than the microsecond granularity of the timeval structure.
3. Each call to select returns the total number of ready descriptors and clears the bits for the non-ready ones (the three fd_set arguments are value-result parameters), so every set must be re-initialized before each subsequent call to select; a minimal sketch illustrating points 1 and 3 follows this list.
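As a minimal sketch of points 1 and 3 (the function name watch_loop is purely illustrative, and sockfd is assumed to be an already-listening socket created elsewhere): the watch set is rebuilt from a master copy before every call, and an EINTR return simply restarts the loop.

/* Minimal sketch of a select() loop applying tips 1 and 3.
 * Assumes sockfd is an already-bound, listening TCP socket. */
#include <errno.h>
#include <stdio.h>
#include <sys/select.h>

void watch_loop(int sockfd)
{
    fd_set master, watchset;
    int maxfd = sockfd;

    FD_ZERO(&master);
    FD_SET(sockfd, &master);

    for (;;) {
        watchset = master;              /* tip 3: select() overwrites its fd_set
                                           arguments, so start from a fresh copy */
        int nready = select(maxfd + 1, &watchset, NULL, NULL, NULL);
        if (nready < 0) {
            if (errno == EINTR)         /* tip 1: interrupted by a signal, not a real error */
                continue;
            perror("select");
            break;
        }
        if (FD_ISSET(sockfd, &watchset)) {
            /* the listening socket is readable: accept() would not block here */
        }
    }
}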
An example of a server-side program:
#include "simon_socket.h" #define Serv_port 12345#define fdset_size 32typedef struct clientinfo{int fd;struct sockaddr_ In addr;} clientinfo;typedef struct clientpool{int count; Clientinfo cinfo_set[fdset_size];} Clientpool;void Init_clientpool (Clientpool *pool) {int i;pool->count = 0;memset (pool->cinfo_set, 0, sizeof (pool- >cinfo_set)); for (i = 0; i < fdset_size; i++) (Pool->cinfo_set[i]). FD =-1;} void Add_clientinfo (Clientpool *pool, int newfd, struct sockaddr_in client)//Change{int i;for (i = 0; i < fdset_size; i++) {if (POOL->CINFO_SET[I].FD < 0) {POOL->CINFO_SET[POOL->COUNT].FD = newfd;memcpy ((char*) & (pool- >CINFO_SET[POOL->COUNT].ADDR), (char*) &client, sizeof (struct sockaddr_in));p Ool->count++;break;}}} int process_cli (Clientinfo cli) {int recv_bytes, send_bytes;if ((recv_bytes = recv (CLI.FD, Recv_buf, max_buf_size, 0)) < ; 0) {perror ("Fail to recieve Data");} else if (!recv_bytes) return-1;printf ("Success to recieve%d bytes data from%s:%d\n%S\n ", Recv_bytes, Inet_ntoa (CLI.ADDR.SIN_ADDR), Ntohs (Cli.addr.sin_port), recv_buf); if (send_bytes = Send (CLI.FD, Recv_buf, recv_bytes, 0)) < 0) {perror ("Fail to send Data");} printf ("Success to send%d bytes data to%s:%d\n%s\n", Recv_bytes, Inet_ntoa (CLI.ADDR.SIN_ADDR), Ntohs (cli.addr.sin_port ), recv_buf); return 0;} int main () {int sockfd, retval, CONNFD, I, maxfd; size_t Addr_len; struct sockaddr_in client_addr;fd_set fdset, Watchset; Clientpool cpool;addr_len = sizeof (struct sockaddr); Init_clientpool (&cpool); sockfd = Init_tcp_psock (SERV_PORT); Fd_zero (&fdset); Fd_set (SOCKFD, &fdset); maxfd = Sockfd;for (;;) {watchset = Fdset; The SELECT call returns will modify Fdsetretval = Select (maxfd+1, &watchset, NULL, NULL, and NULL); Two simultaneous connections, will not queue? if (retval < 0) {perror ("select Error"); continue;} Else{while (retval--) {if (Fd_isset (SOCKFD, &watchset)) {if (CONNFD = accept (sockfd, struct sockaddr*) &client_ addr, &addr_len)) = =-1) {perror ("Fail to accept the connection"); continue;} PrinTF ("Get a connetion from%s:%d\n", Inet_ntoa (CLIENT_ADDR.SIN_ADDR), Ntohs (Client_addr.sin_port)); Fd_set (CONNFD, &fdset); Add_clientinfo (&cpool, CONNFD, client_addr); if (Connfd > maxfd) maxfd = CONNFD; Mark}else {for (i = 0; i < Cpool.count; i++) {if (CPOOL.CINFO_SET[I].FD < 0)//markcontinue;if (Fd_isset (cpool.cinf O_SET[I].FD, &watchset) {if (PROCESS_CLI (Cpool.cinfo_set[i]) < 0) {printf ("%s:%d quit the connection\n", inet_ Ntoa (CPOOL.CINFO_SET[I].ADDR.SIN_ADDR), Ntohs (Cpool.cinfo_set[i].addr.sin_port)); FD_CLR (CPOOL.CINFO_SET[I].FD, &fdset); close (CPOOL.CINFO_SET[I].FD); cpool.count--;cpool.cinfo_set[i].fd =-1;}}}}}} return 0;}
Because select scans the entire set of monitored descriptors on every call, efficiency suffers badly when the set is large: as the number of online connections grows, performance drops sharply.
epoll appeared as an upgraded version of select and is supported on Linux 2.6 and later. It takes an event-driven approach: only the descriptors that the kernel's asynchronous I/O wake-ups have placed on the ready queue are traversed. With a large number of connections but only a small number of active users, this significantly reduces the system's CPU utilization.
The epoll interface is simple, with only three functions:
#include <sys/epoll.h>int epoll_create (int size), int epoll_ctl (int epfd, int op, intfd, struct epoll_event *event); int epoll_wait (int epfd, struct epoll_event *events, int maxevents, int timeout);
For details, refer to the epoll(7) man page.
The idea behind using epoll is this: when you want to perform an I/O operation, you first ask epoll whether the descriptor is readable or writable; once it is, epoll_wait returns to deliver the notification, and you then go on to recv or send.
epoll is only an asynchronous event-notification mechanism; it performs no I/O itself. It is responsible solely for reporting whether a descriptor is readable or writable, while the actual read and write are carried out by the application layer. This separation keeps event notification and I/O operations independent of each other.
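As a minimal sketch of this division of labour (the function name echo_once is illustrative, and connfd is assumed to be an already-connected socket): epoll only reports readiness; the recv and send are performed by the application itself.

/* Sketch: epoll delivers the readiness notification, the application does the I/O.
 * Assumes connfd is an already-connected socket. */
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

void echo_once(int connfd)
{
    char buf[4096];
    struct epoll_event ev, events[1];
    int epfd = epoll_create(1);                /* the size argument just needs to be > 0 */

    ev.events = EPOLLIN;                       /* level-triggered by default */
    ev.data.fd = connfd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, connfd, &ev);

    if (epoll_wait(epfd, events, 1, -1) > 0 && (events[0].events & EPOLLIN)) {
        ssize_t n = recv(connfd, buf, sizeof(buf), 0);   /* the actual read is ours to do */
        if (n > 0)
            send(connfd, buf, n, 0);                     /* ...and so is the write */
    }
    close(epfd);
}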
epoll has two modes: ET (edge-triggered) and LT (level-triggered).
In ET mode the kernel notifies you only when a descriptor's state changes; in LT mode, similar to select, the kernel keeps notifying you as long as an event remains unhandled. ET mode is therefore designed to improve efficiency by cutting down the number of system calls. On the other hand, ET demands more careful programming and meticulous handling of every request, otherwise events are easily lost. For example, with ET, when accept returns you must not go straight back to epoll_wait after establishing the current connection; you have to keep calling accept in a loop until it returns -1 with errno == EAGAIN, and only then stop accepting (see the sketch below). LT is less demanding on the server code: as long as the data has not been fetched, the kernel keeps notifying, so there is no need to worry about losing events. You can call accept once, establish the connection if one is returned, and then call epoll_wait to wait for the next notification, much as with select.
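Below is a minimal sketch of that ET accept loop; it assumes listenfd is a non-blocking listening socket already registered with EPOLLIN | EPOLLET on the epoll instance epfd, and that each new connection is also made non-blocking before being used in ET mode (e.g. with a helper like the set_fd_nonblocking in the example further down).

/* Sketch of ET-style accept handling: one EPOLLET notification may cover several
 * pending connections, so drain the accept queue until EAGAIN. */
#include <errno.h>
#include <stdio.h>
#include <netinet/in.h>
#include <sys/epoll.h>
#include <sys/socket.h>

void accept_all(int epfd, int listenfd)
{
    struct sockaddr_in cli_addr;
    socklen_t addr_len;
    struct epoll_event ev;
    int connfd;

    for (;;) {
        addr_len = sizeof(cli_addr);
        connfd = accept(listenfd, (struct sockaddr *)&cli_addr, &addr_len);
        if (connfd == -1) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                break;                   /* queue drained: safe to go back to epoll_wait */
            perror("accept");
            break;
        }
        /* register the new connection, also edge-triggered (make it non-blocking first) */
        ev.data.fd = connfd;
        ev.events = EPOLLIN | EPOLLET;
        epoll_ctl(epfd, EPOLL_CTL_ADD, connfd, &ev);
    }
}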
An example of a server-side program:
#include "simon_socket.h" #include <fcntl.h> #include <sys/epoll.h> #define Serv_port 12345#define max_ EPOLLFD 100#define event_size 90int set_fd_nonblocking (int fd) {int flag;flag = Fcntl (fd, F_GETFL, 0); if (flag = =-1) {Perr or ("Fcntl error:"); return-1;} Flag |= o_nonblock;if (Fcntl (FD, F_SETFD, flag) = =-1) {perror ("Fcntl error:"); return-1;} return 0;} int main () {int I, LISTENFD, Contfd, EPFD, READYFD, curfd = 1, recv_bytes;struct sockaddr_in cli_addr;struct epoll_event Ev _tmp, events[event_size];size_t addr_len = sizeof (struct sockaddr); epfd = Epoll_create (MAX_EPOLLFD); listenfd = Init_tcp _psock (serv_port); ev_tmp.data.fd = Listenfd;ev_tmp.events = Epollin | Epollet;if (Epoll_ctl (EPFD, Epoll_ctl_add, LISTENFD, &ev_tmp) = =-1) {perror ("ADD event failed:"); return 1;} printf ("Epoll server startup at Port%5d\n", Serv_port) and while (1) {READYFD = Epoll_wait (EPFD, events, Event_size,-1); for (i = 0; i < READYFD; i++) {if (events[i].events & Epollerr) | | (Events[i].events & EPollhup) {perror ("Epoll error:"); close (events[i].data.fd); continue;} else if (events[i].data.fd = = LISTENFD) {if (Contfd = Accept (LISTENFD, (struct sockaddr *) &cli_addr, &addr_len)) = =-1) {perror ("Accept request failed:"); return 1;} elseprintf ("Get a connection from%s:%5d\n", Inet_ntoa (CLI_ADDR.SIN_ADDR), Ntohs (Cli_addr.sin_port)); if (Curfd > Event_size) {printf ("Too many connections, more than%d\n", event_size); continue;} Set_fd_nonblocking (CONTFD); ev_tmp.data.fd = Contfd;ev_tmp.events = Epollin | Epollet;epoll_ctl (EPFD, Epoll_ctl_add, CONTFD, &ev_tmp); curfd++;continue;} else if (events[i].events & Epollin) {if ((Recv_bytes = recv (EVENTS[I].DATA.FD, Recv_buf, max_buf_size, 0)) <= 0) {EP Oll_ctl (EPFD, Epoll_ctl_del, EVENTS[I].DATA.FD, NULL); Getpeername (EVENTS[I].DATA.FD, (struct sockaddr *) &cli_addr, &addr_len);p rintf ("%s:%5d quit the connection\n" , Inet_ntoa (CLI_ADDR.SIN_ADDR), Ntohs (Cli_addr.sin_port)); close (events[i].data.fd); curfd--;} else{ EV_TMP.DATA.FD = EVENTS[I].DATA.FD; ev_tmp.events = Epollout | Epollet; Epoll_ctl (EPFD, Epoll_ctl_mod, EVENTS[I].DATA.FD, &ev_tmp);}} else if (events[i].events & epollout) {Send (EVENTS[I].DATA.FD, recv_buf, recv_bytes, 0); ev_tmp.data.fd = Events[i]. data.fd;ev_tmp.events = Epollin | Epollet;epoll_ctl (EPFD, Epoll_ctl_mod, EVENTS[I].DATA.FD, &ev_tmp);}}} Close (LISTENFD); return 0;}