struct timeval {
    long tv_sec;   /* seconds */
    long tv_usec;  /* microseconds */
};
For usage details, see UNP (Unix Network Programming), Third Edition.
Differences between epoll and select/poll:
1. Support for a large number of socket descriptors (FDs) in a single process
select

The most intolerable limitation is that the number of FDs a single process can monitor is capped by FD_SETSIZE, which defaults to 2048. For an IM server that must support tens of thousands of connections, this is clearly too few. You can modify this macro and recompile the kernel, but reports indicate that doing so reduces network efficiency. Alternatively, you can choose a multi-process solution (the traditional Apache model); however, although the cost of creating a process on Linux is relatively small, it is still not negligible, and data synchronization between processes is far less efficient than synchronization between threads, so this is not a perfect solution either.
epoll

epoll has no such hard limit. The number of FDs epoll can handle is bounded only by the maximum number of files that can be opened, which is generally much larger than 2048; on a machine with 1 GB of memory it is around 100,000. You can check the exact value in /proc/sys/fs/file-max; it depends largely on system memory.
2. I/O efficiency does not decline linearly as the number of FDs grows
Another fatal weakness of traditional select/poll is that when you hold a large socket set, network latency means only some of the sockets are "active" at any given time, yet every select/poll call linearly scans the entire set, so efficiency declines linearly. epoll does not have this problem: it operates only on the "active" sockets, because the kernel implementation of epoll registers a callback on each FD, and only an "active" socket invokes its callback; idle sockets do not. In this respect epoll implements a kind of "pseudo" AIO, with the driving force inside the OS kernel. In some benchmarks where essentially all sockets are active — a high-speed LAN environment, for example — epoll is no more efficient than select/poll; on the contrary, if epoll_ctl is called too frequently, efficiency drops slightly. But once idle connections are used to simulate a WAN environment, epoll is far more efficient than select/poll.
3. Using mmap to accelerate message passing between the kernel and user space
This concerns the concrete implementation of epoll. select, poll, and epoll all require the kernel to notify user space of FD events, so avoiding unnecessary memory copies is important; epoll achieves this by having user space mmap the same memory as the kernel. If, like me, you have followed epoll since the 2.5 kernels, you will not have forgotten the manual mmap step.
4. Kernel fine-tuning
This is not an advantage of epoll itself but of the Linux platform as a whole. You may have doubts about Linux, but you cannot deny that it gives you the ability to fine-tune the kernel. For example, the kernel TCP/IP stack uses a memory pool to manage sk_buff structures, and you can adjust the size of this pool (skb_head_pool) dynamically at runtime with
echo XXXX > /proc/sys/net/core/hot_list_length.
Likewise, the second parameter of listen (the length of the queue of connections that have completed the three-way handshake) can be tuned according to your platform's memory. On a special system that handles a huge number of packets where each packet itself is small, you could even try the latest NAPI NIC driver architecture.