First, the basic concept of I/O multiplexing:
select, poll, and epoll are all mechanisms for I/O multiplexing: a single call can monitor multiple descriptors, and as soon as any descriptor becomes ready (usually read-ready or write-ready), the program is notified so it can perform the corresponding read or write. Note that select, poll, and epoll are all essentially synchronous I/O, because the process itself must still perform the read or write once the readiness notification arrives, and that read or write blocks the process. With asynchronous I/O, by contrast, the process is not responsible for the reading and writing at all: the asynchronous I/O implementation copies the data from the kernel into user space on the process's behalf.
The key is to understand the relationship between, and differences among, blocking vs. non-blocking and synchronous vs. asynchronous I/O, and then to understand the common I/O multiplexing methods.
epoll is a Linux-specific, newer I/O multiplexing technique. Its main advantages over the traditional select/poll approach can be understood as follows:
The major drawbacks of select:
(1) Every call to select copies the FD set from user space into the kernel; this copy is expensive when there are many FDs.
(2) Every call to select also makes the kernel traverse all of the FDs passed in, which is likewise expensive when there are many FDs.
(3) select supports only a small number of file descriptors; the default limit is 1024.
poll is basically similar to select; the main difference is the data structure exchanged with the kernel (an array of struct pollfd instead of fd_set bitmaps), so poll has no fixed 1024-descriptor limit, though the per-call copy and traversal costs remain.
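To make the struct pollfd interface concrete, here is a minimal sketch that watches one end of a pipe for readability. The helper name poll_demo is illustrative, not from any library:

```cpp
#include <poll.h>
#include <unistd.h>

// Sketch: watch one pipe descriptor for readability with poll().
// Unlike select(), poll() takes an array of struct pollfd, so descriptor
// values are not capped by FD_SETSIZE (1024) -- but the kernel still
// scans the whole array on every call, which is drawback (2) above.
int poll_demo() {
    int fds[2];
    if (pipe(fds) < 0) return -1;

    struct pollfd pfd;
    pfd.fd = fds[0];        // descriptor to watch
    pfd.events = POLLIN;    // interested in readability

    write(fds[1], "x", 1);  // make the read end ready

    int n = poll(&pfd, 1, 1000);                 // 1-second timeout
    int ok = (n == 1) && (pfd.revents & POLLIN); // one fd ready, readable

    close(fds[0]);
    close(fds[1]);
    return ok ? 0 : -1;
}
```

After poll() returns, the kernel reports results in the revents field of each entry, so the same array can be reused on the next call.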
Features of epoll:
Since epoll is an improvement on select and poll, it should avoid the three drawbacks above. epoll provides three functions: epoll_create, epoll_ctl, and epoll_wait. epoll_create creates an epoll handle; epoll_ctl registers the event types to listen for; epoll_wait waits for events to occur.
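The three calls fit together as in this minimal sketch, again using a pipe instead of a real socket so it is self-contained (epoll_create1(0) is the modern equivalent of epoll_create; epoll_demo is an illustrative name):

```cpp
#include <sys/epoll.h>
#include <unistd.h>

// Sketch: the three epoll calls on a pipe. epoll_create1 makes the epoll
// handle, epoll_ctl registers interest in a descriptor once, and
// epoll_wait blocks until some registered descriptor is ready.
int epoll_demo() {
    int fds[2];
    if (pipe(fds) < 0) return -1;

    int epfd = epoll_create1(0);            // create the epoll handle
    if (epfd < 0) return -1;

    struct epoll_event ev;
    ev.events = EPOLLIN;                    // watch for readability
    ev.data.fd = fds[0];
    epoll_ctl(epfd, EPOLL_CTL_ADD, fds[0], &ev);  // register once

    write(fds[1], "x", 1);                  // make the read end ready

    struct epoll_event out[4];
    int n = epoll_wait(epfd, out, 4, 1000); // wait up to 1 s for events

    int ok = (n == 1) && (out[0].data.fd == fds[0]) && (out[0].events & EPOLLIN);
    close(epfd);
    close(fds[0]);
    close(fds[1]);
    return ok ? 0 : -1;
}
```

Note that registration happens once, up front; subsequent epoll_wait calls pass no FD set at all, which is exactly how epoll avoids select's per-call copy.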
For the first drawback, epoll's solution is in the epoll_ctl function. Each time a new event is registered in the epoll handle (by specifying EPOLL_CTL_ADD in epoll_ctl), the FD is copied into the kernel then, rather than being copied again on every epoll_wait. epoll thus guarantees that each FD is copied only once over its whole lifetime.
For the second drawback, epoll's solution is not to add the current process to each FD's device wait queue on every call, as select and poll do. Instead, it enqueues current only once, at epoll_ctl time (which is unavoidable), and registers a callback function for each FD. When a device becomes ready and wakes the waiters on its queue, this callback is invoked, and it appends the ready FD to a ready list. The job of epoll_wait is then simply to check whether that ready list has anything in it (sleeping for a while via schedule_timeout() and checking again, similar in spirit to step 7 of the select implementation).
For the third drawback, epoll has no such limit. The maximum number of FDs it supports is the maximum number of files that can be opened, which is generally far greater than 2048 — on a machine with 1 GB of memory it is around 100,000. The exact number can be read with cat /proc/sys/fs/file-max; in general it depends strongly on the amount of system memory.
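On any Linux box you can check this limit directly (the exact value varies with installed memory and distribution defaults):

```shell
# System-wide cap on open file descriptors, which bounds what epoll can monitor
cat /proc/sys/fs/file-max
```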
Summary:
(1) select and poll must themselves repeatedly scan the entire FD set until some device is ready, possibly sleeping and waking several times along the way. epoll also calls epoll_wait repeatedly and may alternate between sleeping and waking, but when a device becomes ready it is the callback that puts the ready FD on the ready list and wakes the process sleeping in epoll_wait. So although both approaches alternate between sleeping and waking, select and poll must traverse the entire FD set while "awake", whereas epoll only has to check whether the ready list is empty. This saves a great deal of CPU time and is the performance gain the callback mechanism brings.
(2) select and poll copy the FD set from user space to kernel space on every call, and enqueue the process on every device's wait queue every time. epoll copies each FD only once and enqueues current only once (at the start of epoll_wait; note that this wait queue is not a device wait queue but one defined internally by epoll). This also saves considerable overhead.
Having looked into epoll further, I personally feel the keys to understanding it are the following points:
1. How it compares with select and friends — mainly in terms of efficiency, mechanism, and the number of descriptors that can be monitored.
2. How to use epoll's three functions.
3. A solid understanding of the epoll_event data structure, especially its epoll_data member: this union can hold many kinds of user data, which, among other things, provides a way to attach callback context to each descriptor.
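Point 3 is worth a small sketch: instead of a bare fd, epoll_data can hold a pointer to your own per-connection struct, which is a common way to wire up callback-style dispatch. The names Conn, on_readable, and echo_ready here are illustrative assumptions, not from any library:

```cpp
#include <sys/epoll.h>
#include <unistd.h>

// Hypothetical per-connection context stored via the epoll_data union.
// epoll_event.data is a union of fd / ptr / u32 / u64; using .ptr lets
// each event carry arbitrary user state, including a callback.
struct Conn {
    int fd;
    int (*on_readable)(struct Conn *c);   // invoked when fd is readable
};

static int echo_ready(struct Conn *c) { return c->fd; }

int union_demo() {
    int fds[2];
    if (pipe(fds) < 0) return -1;

    struct Conn conn = { fds[0], echo_ready };

    int epfd = epoll_create1(0);
    if (epfd < 0) return -1;

    struct epoll_event ev;
    ev.events = EPOLLIN;
    ev.data.ptr = &conn;                  // store context, not just the fd
    epoll_ctl(epfd, EPOLL_CTL_ADD, conn.fd, &ev);

    write(fds[1], "x", 1);                // make the read end ready

    struct epoll_event out;
    int n = epoll_wait(epfd, &out, 1, 1000);
    struct Conn *ready = (struct Conn *)out.data.ptr;  // recover context
    int ok = (n == 1) && ready->on_readable(ready) == conn.fd;

    close(epfd);
    close(fds[0]);
    close(fds[1]);
    return ok ? 0 : -1;
}
```

One caveat: data is a union, so storing .ptr means .fd is no longer meaningful — the fd must then live inside your own struct, as above.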
Finally, a bit of code to illustrate the basic use of epoll:
#include <sys/epoll.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <fcntl.h>
#include <unistd.h>
#include <strings.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <iostream>

#define MAXLINE    1024        // example values; tune for your server
#define LISTENQ    20
#define SERV_PORT  8888
#define LOCAL_ADDR "127.0.0.1"

static void setnonblocking(int fd)
{
    int opts = fcntl(fd, F_GETFL);
    fcntl(fd, F_SETFL, opts | O_NONBLOCK);
}

int main()
{
    int i, listenfd, new_fd, sockfd, epfd, nfds;
    ssize_t n = 0;
    char line[MAXLINE];
    socklen_t clilen = sizeof(struct sockaddr_in);
    struct epoll_event ev, events[20]; // ev registers events; the array receives events to be processed
    struct sockaddr_in clientaddr, serveraddr;

    listenfd = socket(AF_INET, SOCK_STREAM, 0); // create the listening socket
    setnonblocking(listenfd);                   // set the socket to non-blocking mode

    epfd = epoll_create(256);                   // create an epoll file descriptor for handling accept
    ev.data.fd = listenfd;                      // file descriptor associated with the event to be handled
    ev.events = EPOLLIN | EPOLLET;              // event type: readable, edge-triggered
    epoll_ctl(epfd, EPOLL_CTL_ADD, listenfd, &ev); // register the epoll event

    // set the server-side address information
    bzero(&serveraddr, sizeof(serveraddr));
    serveraddr.sin_family = AF_INET;
    inet_aton(LOCAL_ADDR, &serveraddr.sin_addr);
    serveraddr.sin_port = htons(SERV_PORT);
    bind(listenfd, (struct sockaddr *)&serveraddr, sizeof(serveraddr)); // bind the socket
    listen(listenfd, LISTENQ);                  // listen

    for ( ; ; )
    {
        /* epoll_wait: wait for epoll events; the ready socket FDs and their
         * event types are placed into the events array.
         * nfds: the number of events that occurred. */
        nfds = epoll_wait(epfd, events, 20, 500);
        // handle all events that occurred
        for (i = 0; i < nfds; ++i)
        {
            if (events[i].data.fd == listenfd)  // the event happened on listenfd
            {
                /* accept the connection; clientaddr receives the peer's
                 * address information, and new_fd is the new socket
                 * descriptor used for recv/send on this connection */
                new_fd = accept(listenfd, (struct sockaddr *)&clientaddr, &clilen);
                if (new_fd < 0)
                {
                    perror("new_fd < 0");
                    exit(1);
                }
                setnonblocking(new_fd);
                ev.data.fd = new_fd;            // set the file descriptor for the read event
                ev.events = EPOLLIN | EPOLLET;  // watch for read events on the new connection
                epoll_ctl(epfd, EPOLL_CTL_ADD, new_fd, &ev); // register new_fd with epoll
            }
            else if (events[i].events & EPOLLIN)
            {
                if ((sockfd = events[i].data.fd) < 0)
                    continue;
                if ((n = read(sockfd, line, MAXLINE)) < 0)
                {
                    if (errno == ECONNRESET)
                    {
                        close(sockfd);
                        events[i].data.fd = -1;
                    }
                    else
                        std::cout << "readline error" << std::endl;
                }
                else if (n == 0)
                {
                    close(sockfd);
                    events[i].data.fd = -1;
                }
                ev.data.fd = sockfd;            // set the file descriptor for the write event
                ev.events = EPOLLOUT | EPOLLET;
                epoll_ctl(epfd, EPOLL_CTL_MOD, sockfd, &ev); // change the event on sockfd to EPOLLOUT
            }
            else if (events[i].events & EPOLLOUT)
            {
                sockfd = events[i].data.fd;
                write(sockfd, line, n);         // echo back what was read
                ev.data.fd = sockfd;            // set the file descriptor for the read event
                ev.events = EPOLLIN | EPOLLET;
                epoll_ctl(epfd, EPOLL_CTL_MOD, sockfd, &ev); // change the event on sockfd back to EPOLLIN
            }
        }
    }
}
References:
http://blog.51cto.com/7666425/1261446
http://www.cnblogs.com/Anker/p/3265058.html