Socket programming on Linux (13): Introduction to the epoll family of functions and the differences from select and poll


http://blog.csdn.net/simba888888/article/details/9075719


One, Introduction to the epoll family of functions

#include <sys/epoll.h>
int epoll_create(int size);
int epoll_create1(int flags);
int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event);
int epoll_wait(int epfd, struct epoll_event *events, int maxevents, int timeout);


* epoll_create(2) creates an epoll instance and returns a file descriptor referring to that instance. (The more recent epoll_create1(2) extends the functionality of epoll_create(2).)

* Interest in particular file descriptors is then registered via epoll_ctl(2). The set of file descriptors currently registered on an epoll instance is sometimes called an epoll set.

* epoll_wait(2) waits for I/O events, blocking the calling thread if no events are currently available.


1. epoll_create1 creates an epoll instance and returns a handle (a file descriptor) to it. The flags argument can be 0 or EPOLL_CLOEXEC. With 0 the function behaves exactly like epoll_create; EPOLL_CLOEXEC is analogous to the O_CLOEXEC flag of open, and makes the descriptor be closed automatically when the process image is replaced by exec.
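A minimal sketch of creating an instance with this flag; the make_epoll wrapper is just an illustrative name, not part of the original program:

#include <sys/epoll.h>
#include <stdio.h>
#include <stdlib.h>

/* Create an epoll instance; EPOLL_CLOEXEC makes the descriptor close across exec. */
static int make_epoll(void)
{
    int epfd = epoll_create1(EPOLL_CLOEXEC);
    if (epfd == -1)
    {
        perror("epoll_create1");
        exit(EXIT_FAILURE);
    }
    return epfd;
}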

2. epoll_ctl:

(1) epfd: the epoll instance handle;

(2) op: the operation to perform on the file descriptor fd, mainly EPOLL_CTL_ADD, EPOLL_CTL_DEL, etc.;

(3) fd: the target file descriptor to operate on;

(4) event: a pointer to a struct epoll_event:

typedef union epoll_data {
    void     *ptr;
    int       fd;
    uint32_t  u32;
    uint64_t  u64;
} epoll_data_t;

struct epoll_event {
    uint32_t     events;    /* epoll events */
    epoll_data_t data;      /* user data variable */
};

The events field mainly takes flags such as EPOLLIN, EPOLLOUT, and EPOLLET (edge-triggered; level-triggered behaviour is the default). For the data union we usually set its fd member, i.e. the same descriptor passed as the third argument to epoll_ctl.
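A minimal usage sketch (not from the original article) of registering a descriptor for read events, with data.fd set to the descriptor itself as described above; add_read_interest is an illustrative name:

#include <sys/epoll.h>
#include <stdio.h>

/* Register fd with the epoll instance epfd for (level-triggered) read events.
 * Returns 0 on success, -1 on failure. */
static int add_read_interest(int epfd, int fd)
{
    struct epoll_event ev;
    ev.events = EPOLLIN;     /* we care about "readable" events */
    ev.data.fd = fd;         /* epoll_wait will hand this value back to us */
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev) == -1)
    {
        perror("epoll_ctl: EPOLL_CTL_ADD");
        return -1;
    }
    return 0;
}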

3. epoll_wait:

(1) epfd: the epoll instance handle;

(2) events: a pointer to an array of struct epoll_event that receives the ready events;

(3) maxevents: the maximum number of events to return;

(4) timeout: the timeout in milliseconds; -1 means wait indefinitely (a short usage sketch follows).
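A minimal sketch (not from the original article) of a single epoll_wait call and the usual handling of its return value; wait_once and the ready array are illustrative names:

#include <sys/epoll.h>
#include <errno.h>
#include <stdio.h>

/* Wait once on an existing epoll instance and report which fds are ready.
 * A timeout of -1 blocks until at least one event arrives. */
static int wait_once(int epfd)
{
    struct epoll_event ready[16];
    int nready = epoll_wait(epfd, ready, 16, -1);
    if (nready == -1)
    {
        if (errno == EINTR)      /* a signal interrupted the wait: not an error */
            return 0;
        perror("epoll_wait");
        return -1;
    }
    for (int i = 0; i < nready; i++)
        printf("fd %d is ready (events = 0x%x)\n",
               ready[i].data.fd, (unsigned)ready[i].events);
    return nready;
}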


Here we implement the server program in C++:

#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <signal.h>
#include <fcntl.h>
#include <sys/wait.h>
#include <sys/epoll.h>

#include <stdlib.h>
#include <stdio.h>
#include <errno.h>
#include <string.h>

#include <vector>
#include <algorithm>

#include "read_write.h"
#include "sysutil.h"

typedef std::vector<struct epoll_event> EventList;

/* The biggest advantage of epoll over select and poll is that its efficiency
 * does not drop as the number of monitored fds grows. */
int main(void)
{
    int count = 0;
    int listenfd;
    if ((listenfd = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP)) < 0)
        ERR_EXIT("socket");

    struct sockaddr_in servaddr;
    memset(&servaddr, 0, sizeof(servaddr));
    servaddr.sin_family = AF_INET;
    servaddr.sin_port = htons(5188);
    servaddr.sin_addr.s_addr = htonl(INADDR_ANY);

    int on = 1;
    if (setsockopt(listenfd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on)) < 0)
        ERR_EXIT("setsockopt");

    if (bind(listenfd, (struct sockaddr *)&servaddr, sizeof(servaddr)) < 0)
        ERR_EXIT("bind");
    if (listen(listenfd, SOMAXCONN) < 0)
        ERR_EXIT("listen");

    std::vector<int> clients;
    int epollfd;
    epollfd = epoll_create1(EPOLL_CLOEXEC);    /* epoll instance handle */

    struct epoll_event event;
    event.data.fd = listenfd;
    event.events = EPOLLIN | EPOLLET;          /* edge trigger */
    epoll_ctl(epollfd, EPOLL_CTL_ADD, listenfd, &event);

    EventList events(16);
    struct sockaddr_in peeraddr;
    socklen_t peerlen;
    int conn;
    int i;

    int nready;
    while (1)
    {
        nready = epoll_wait(epollfd, &*events.begin(),
                            static_cast<int>(events.size()), -1);
        if (nready == -1)
        {
            if (errno == EINTR)
                continue;

            ERR_EXIT("epoll_wait");
        }
        if (nready == 0)
            continue;

        /* every slot was used: grow the container for the next round */
        if ((size_t)nready == events.size())
            events.resize(events.size() * 2);

        for (i = 0; i < nready; i++)
        {
            if (events[i].data.fd == listenfd)
            {
                peerlen = sizeof(peeraddr);
                conn = accept(listenfd, (struct sockaddr *)&peeraddr, &peerlen);
                if (conn == -1)
                    ERR_EXIT("accept");

                printf("ip=%s port=%d\n", inet_ntoa(peeraddr.sin_addr),
                       ntohs(peeraddr.sin_port));
                printf("Count = %d\n", ++count);
                clients.push_back(conn);

                activate_nonblock(conn);       /* required when using EPOLLET */

                event.data.fd = conn;
                event.events = EPOLLIN | EPOLLET;
                epoll_ctl(epollfd, EPOLL_CTL_ADD, conn, &event);
            }
            else if (events[i].events & EPOLLIN)
            {
                conn = events[i].data.fd;
                if (conn < 0)
                    continue;

                char recvbuf[1024] = {0};
                int ret = readline(conn, recvbuf, 1024);
                if (ret == -1)
                    ERR_EXIT("readline");
                if (ret == 0)                  /* peer closed the connection */
                {
                    printf("client close\n");
                    close(conn);

                    event = events[i];
                    epoll_ctl(epollfd, EPOLL_CTL_DEL, conn, &event);
                    clients.erase(std::remove(clients.begin(), clients.end(), conn),
                                  clients.end());
                    continue;
                }

                fputs(recvbuf, stdout);
                writen(conn, recvbuf, strlen(recvbuf));
            }
        }
    }

    return 0;
}

At the very beginning of the program we define a new type, EventList, a std::vector container of struct epoll_event.

The socket, bind and listen calls that follow are the same as in the earlier articles, so we will not go over them again. We then create an epoll instance with epoll_create1, and look at the following four lines of code:

struct epoll_event event;
event.data.fd = listenfd;
event.events = EPOLLIN | EPOLLET;    /* edge trigger */
epoll_ctl(epollfd, EPOLL_CTL_ADD, listenfd, &event);

From the function descriptions above, these four lines add the listening socket listenfd to the set of monitored descriptors.

About the second argument of epoll_wait: events.begin() is an iterator, but in this implementation it wraps a struct epoll_event*, so &*events.begin() yields a struct epoll_event*. We cannot pass events.begin() itself, because the types do not match and compilation would fail. (With C++11 and later, events.data() returns the same pointer directly.)

EventList events(16); initializes the container with a size of 16. When the number of returned events, nready, reaches 16, we need to enlarge the container with events.resize(); the fact that the container can grow dynamically is one of the reasons we implement this in C++.

When the listening socket has a readable event, the conn returned by accept also has to be added to the set of monitored descriptors with epoll_ctl.

We also need to call activate_nonblock(conn); to make conn non-blocking, because man 7 epoll says:

An application that employs the EPOLLET flag should use nonblocking file descriptors to avoid having a blocking read or write starve a task that is handling multiple file descriptors.
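activate_nonblock comes from the author's sysutil.h, which is not listed in this article; a minimal sketch of what such a helper typically looks like, assuming it simply sets O_NONBLOCK with fcntl:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical reconstruction of activate_nonblock: switch fd into non-blocking mode. */
static void activate_nonblock(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags == -1)
    {
        perror("fcntl(F_GETFL)");
        exit(EXIT_FAILURE);
    }
    if (fcntl(fd, F_SETFL, flags | O_NONBLOCK) == -1)
    {
        perror("fcntl(F_SETFL)");
        exit(EXIT_FAILURE);
    }
}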

When a later iteration of the loop reports a readable event on a connected socket, we read the data. If the read returns 0, the peer has closed the connection, so we must use epoll_ctl to remove conn from the epoll set. Because we store every conn returned by accept in std::vector<int> clients, we also erase it from that vector with clients.erase().
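readline and writen come from the author's read_write.h, which is also not listed here. As a rough idea only, a minimal byte-at-a-time line reader might look like the sketch below; the author's real helper is likely more efficient (for example peeking with recv), and in strict edge-triggered use the caller would also have to keep reading until EAGAIN.

#include <unistd.h>
#include <errno.h>

/* Hypothetical sketch of a line reader: read at most maxline-1 bytes,
 * stopping after the first '\n'. Returns the number of bytes read,
 * 0 if the peer closed the connection before sending anything,
 * and -1 on error. */
static ssize_t readline(int fd, char *buf, size_t maxline)
{
    size_t n = 0;
    while (n < maxline - 1)
    {
        char c;
        ssize_t ret = read(fd, &c, 1);
        if (ret == -1)
        {
            if (errno == EINTR)
                continue;        /* interrupted by a signal: retry */
            return -1;           /* real error (EAGAIN included here) */
        }
        if (ret == 0)            /* peer closed the connection */
            break;
        buf[n++] = c;
        if (c == '\n')
            break;
    }
    buf[n] = '\0';
    return (ssize_t)n;
}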


We can test with the conntest client program written earlier: run the server program first, then run the client. The output is as follows:

simba@ubuntu:~/documents/code/linux_programming/unp/socket$./echoser_epoll

................................

Count = 1015
ip=127.0.0.1 port=60492
Count = 1016
ip=127.0.0.1 port=60493
Count = 1017
ip=127.0.0.1 port=60494
Count = 1018
ip=127.0.0.1 port=60495
Count = 1019
accept: Too many open files


simba@ubuntu:~/documents/code/linux_programming/unp/socket$./conntest

.........................................................

Count = 1015
ip=127.0.0.1 port=60492
Count = 1016
ip=127.0.0.1 port=60493
Count = 1017
ip=127.0.0.1 port=60494
Count = 1018
ip=127.0.0.1 port=60495
Count = 1019
connect: Connection reset by peer


Why does the server-side count stop at 1019? Descriptors 0, 1 and 2 (standard input, output and error) are already in use, and the listening socket and the epoll instance handle take one each, so with the default limit of 1024 descriptors per process we get 1024 - 5 = 1019.
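The 1024 figure is the default per-process limit on open file descriptors. A small stand-alone sketch (not part of the original program) showing how to inspect it and raise the soft limit with getrlimit/setrlimit; an unprivileged process can only raise it up to the hard limit:

#include <sys/resource.h>
#include <stdio.h>

int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) == -1)
    {
        perror("getrlimit");
        return 1;
    }
    printf("soft limit = %llu, hard limit = %llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    /* try to raise the soft limit to 2048, capped by the hard limit */
    rl.rlim_cur = (rl.rlim_max < 2048) ? rl.rlim_max : 2048;
    if (setrlimit(RLIMIT_NOFILE, &rl) == -1)
        perror("setrlimit");
    return 0;
}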

Why is the client's error message different here from the earlier versions? It shows that epoll processes connections faster than poll and select: each incoming connection is accepted as soon as it arrives. When the server has accepted the 1019th connection, the next accept fails because the process has hit its file-descriptor limit, and the server prints its error message. By that time the client has already created its 1020th socket, but during connect it finds that the peer has gone away, so it prints the "Connection reset by peer" error. If the server were slower to accept, the client would still complete 1021 connections (they finish in the kernel's completed-connection queue), then fail while creating its 1022nd socket and print "socket: Too many open files"; the server, constrained by its own descriptor limit, could still only accept 1019 connections from that queue.


Two, Differences between epoll, select, and poll

1. The biggest advantage of epoll over select and poll is that its efficiency does not drop as the number of monitored fds grows. select and poll are implemented in the kernel by polling: the more descriptors have to be polled, the longer each call naturally takes.
2. epoll is implemented with callbacks: when an fd has the expected event, a callback function puts it on epoll's ready queue. epoll therefore only deals with "active" fds, and its cost is independent of the total number of monitored fds.
3. The kernel/user-space copy problem, i.e. how to get the fd information from the kernel to user space: select and poll copy the whole descriptor set on every call, whereas epoll avoids this repeated copying (the interest list lives inside the kernel and only the ready events are handed back; this is often loosely described as a shared-memory approach).
4. epoll not only tells the application that I/O events have arrived, it also hands back the information the application filled in when registering (the epoll_data union), so the application can act on the event directly instead of traversing the whole fd set.


The difference between epoll's LT (level-triggered, the default) and ET (edge-triggered, EPOLLET) modes

1. LT (level-triggered): relies entirely on the kernel's epoll machinery. The application only needs to process the fds returned by epoll_wait, all of which can be treated as ready. In this mode epoll can be thought of simply as a faster poll.

2. ET (edge-triggered): in this mode the kernel only notifies the application which fds have just become ready. Once an fd has been reported, epoll no longer pays attention to its state (it is taken off the ready queue) until the application drives the fd to EAGAIN through non-blocking reads or writes; at that point epoll considers the fd idle again and will report its next state change (see the sketch below).
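This is also why the server above switches every connection to non-blocking mode. A minimal sketch (not from the original article) of what an ET-mode read handler has to do: keep reading until read reports EAGAIN, so no data is left behind once the edge notification has been consumed; drain_fd is an illustrative name:

#include <unistd.h>
#include <errno.h>
#include <stdio.h>

/* Edge-triggered handler sketch: drain the socket completely.
 * Returns 0 when the socket is drained, -1 if the peer closed or on error. */
static int drain_fd(int conn)
{
    char buf[1024];
    while (1)
    {
        ssize_t n = read(conn, buf, sizeof(buf));
        if (n > 0)
        {
            /* process the n bytes in buf here (e.g. echo them back) */
            continue;
        }
        if (n == 0)
            return -1;                        /* peer closed the connection */
        if (errno == EINTR)
            continue;                         /* interrupted: retry */
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return 0;                         /* fully drained: wait for the next edge */
        perror("read");
        return -1;                            /* real error */
    }
}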
