Java uses epoll to process NIO

Source: Internet
Author: User

NIO in JDK 6.0 and JDK 5.0 Update 9 supports epoll (on Linux only), which gives a significant performance boost for large numbers of concurrent idle connections — exactly what many network server applications need.

It is enabled with the following JVM option:

-Djava.nio.channels.spi.SelectorProvider=sun.nio.ch.EPollSelectorProvider

For example, if Tomcat runs on Linux with the NIO connector, enabling epoll helps improve performance.

Tomcat can enable this option by adding the following line near the top of catalina.sh:

CATALINA_OPTS='-Djava.nio.channels.spi.SelectorProvider=sun.nio.ch.EPollSelectorProvider'
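To check that the option took effect, you can print the SelectorProvider implementation the JVM actually selected — on a Linux JVM with epoll enabled this is sun.nio.ch.EPollSelectorProvider. A minimal sketch (the class name ProviderCheck is my own):

```java
import java.nio.channels.spi.SelectorProvider;

public class ProviderCheck {
    public static void main(String[] args) {
        // Prints the SelectorProvider implementation the JVM chose;
        // on Linux with epoll enabled this is sun.nio.ch.EPollSelectorProvider.
        System.out.println(SelectorProvider.provider().getClass().getName());
    }
}
```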

epoll is the Linux kernel's improved version of poll for handling large numbers of file descriptors. Using epoll requires only three system calls: epoll_create(2), epoll_ctl(2), and epoll_wait(2). It was introduced in kernel 2.5.44 (epoll(4) is a new API introduced in Linux kernel 2.5.44) and is widely used in 2.6 kernels, for example by lighttpd.
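On a Linux JVM these three calls map almost one-to-one onto the NIO Selector API that epoll backs: Selector.open() corresponds to epoll_create, register() to epoll_ctl, and select() to epoll_wait. A sketch of the correspondence (the class name EpollMapping is my own):

```java
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

public class EpollMapping {
    public static void main(String[] args) throws Exception {
        // Selector.open() is backed by epoll_create on Linux.
        Selector selector = Selector.open();

        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0));   // any free port
        server.configureBlocking(false);

        // register() corresponds to epoll_ctl(EPOLL_CTL_ADD, ...).
        SelectionKey key = server.register(selector, SelectionKey.OP_ACCEPT);

        // select(timeout) corresponds to epoll_wait(); no client connects
        // here, so it returns 0 once the timeout expires.
        int ready = selector.select(100);
        System.out.println("ready=" + ready);

        key.cancel();   // epoll_ctl(EPOLL_CTL_DEL, ...)
        server.close();
        selector.close();
    }
}
```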

1. Advantages of epoll

It supports a single process opening a large number of socket descriptors (FDs)

The most intolerable limitation of select is that the number of FDs a single process can open is fixed, set by FD_SETSIZE, which defaults to 2048. That is clearly too few for an IM server that needs to support tens of thousands of connections. One option is to modify the macro and recompile the kernel, but reports suggest this reduces network efficiency. Another option is a multi-process solution (the traditional Apache approach); although the cost of creating a process on Linux is relatively small, it is still noticeable, and data synchronization between processes is far less efficient than synchronization between threads, so this is not a perfect solution either. epoll has no such restriction: its upper limit is the maximum number of files a process can open, which is generally far greater than 2048 — on a machine with 1 GB of memory it is around 100,000. The exact number can be checked with cat /proc/sys/fs/file-max and is generally closely related to the amount of system memory.

IO efficiency does not decline linearly as the number of FDs grows

Another Achilles heel of traditional select/poll shows up when you hold a large set of sockets but, because of network latency, only a portion of them are "active" at any one time: every select/poll call still scans the entire set linearly, so efficiency declines linearly. epoll does not have this problem, because it only operates on "active" sockets — in the kernel implementation, epoll is built on a callback function attached to each FD, so only "active" sockets invoke the callback while idle sockets do not. In this sense epoll implements a "pseudo" AIO, with the driving force in the OS kernel. In some benchmarks where essentially all sockets are active — such as a high-speed LAN environment — epoll is no more efficient than select/poll; on the contrary, heavy use of epoll_ctl causes a slight drop in efficiency. But once idle connections are used to simulate a WAN environment, epoll is far more efficient than select/poll.

Using mmap to accelerate message delivery between the kernel and user space

This concerns the concrete implementation of epoll. Whether you use select, poll, or epoll, the kernel must notify user space of FD events, so avoiding unnecessary memory copies is very important; epoll does this by having the kernel mmap the same memory into user space. If, like me, you have followed epoll since the 2.5 kernel, you will not have forgotten the manual mmap step.

Kernel fine-tuning

This is not really an advantage of epoll itself but of the Linux platform as a whole. You may have doubts about Linux, but you cannot deny that it gives you the ability to fine-tune the kernel. For example, the kernel's TCP/IP stack uses a memory pool to manage sk_buff structures, and the size of this pool (skb_head_pool) can be adjusted dynamically at runtime via echo XXXX > /proc/sys/net/core/hot_list_length. Similarly, the second argument of the listen function (the length of the queue of connections that have completed the TCP three-way handshake) can be tuned to match your platform's memory size. You can even try the NAPI network driver architecture on a system that handles a huge number of packets, each of which is small.
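On the Java side, the listen backlog mentioned above surfaces as the second argument to ServerSocketChannel.bind(), which is passed down to listen(2). A sketch (the class name BacklogHint is my own; note the kernel may silently cap the value at net.core.somaxconn):

```java
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

public class BacklogHint {
    public static void main(String[] args) throws Exception {
        // The second argument of bind() becomes the backlog of listen(2):
        // the queue length for connections that have completed the TCP
        // three-way handshake but have not yet been accept()ed.
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0), 1024);
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();
        System.out.println("listening on port " + port);
        server.close();
    }
}
```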

2. Using epoll

Happily, the epoll in the 2.6 kernel is much simpler than its 2.5 development version (/dev/epoll) — in most cases, powerful things are simple. The only complication is that epoll has two working modes: LT and ET.

LT (level-triggered) is the default working mode and supports both blocking and non-blocking sockets. In this mode, the kernel tells you whether a file descriptor is ready, and you can then perform I/O on the ready FD. If you do nothing, the kernel will keep notifying you, so this mode is less prone to programming errors. Traditional select/poll follow this model.

ET (edge-triggered) is the high-speed working mode and supports only non-blocking sockets. In this mode, the kernel notifies you through epoll only when a descriptor goes from not ready to ready. It then assumes you know the file descriptor is ready and sends no further readiness notifications for it until you do something that makes it not ready again (for example, reading or writing until the call returns an EWOULDBLOCK error). Note that if you never perform I/O on the FD (which would make it not ready again), the kernel will not send additional notifications — it notifies only once. For TCP, however, whether ET mode actually speeds things up still needs more benchmark confirmation.
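Java's Selector exposes only level-triggered semantics, but the read-until-drained loop that ET mode demands looks like this in NIO — a sketch using a Pipe in place of a socket (the class name DrainUntilEmpty is my own):

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;

public class DrainUntilEmpty {
    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();
        pipe.sink().write(ByteBuffer.wrap("hello epoll".getBytes()));

        // Edge-triggered readiness is reported only once, so the handler
        // must keep reading until the non-blocking read() returns 0.
        Pipe.SourceChannel source = pipe.source();
        source.configureBlocking(false);

        ByteBuffer buf = ByteBuffer.allocate(4);   // deliberately small
        int total = 0;
        int n;
        while ((n = source.read(buf)) > 0) {
            total += n;
            buf.clear();   // reuse the buffer for the next chunk
        }
        // n == 0 here: in C this is where read(2) would return EWOULDBLOCK.
        System.out.println("drained " + total + " bytes");
    }
}
```

Here the loop reads the 11 queued bytes in 4-byte chunks and prints "drained 11 bytes"; stopping after the first read() would leave data stranded, which under real ET mode would never be reported again.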

epoll involves only three system calls: epoll_create, epoll_ctl, and epoll_wait.
Please refer to http://www.xmailserver.org/linux-patches/nio-improve.html for specific usage.
There is a complete example at http://www.kegel.com/rn/.
