Linux server network development models

Source: Internet
Author: User
Tags: epoll, socket

Why is the performance of Nginx so much higher than that of Apache?

This is mainly because Nginx uses the newer epoll (Linux 2.6 kernel) and kqueue (FreeBSD) network I/O models, while Apache uses the traditional select model. I once saw an example of this in a blog post:

Suppose you are in college, waiting for a friend to visit. The friend only knows that you live in Building A, but not which room, so you agree to meet at the entrance of Building A. If you handle this with a blocking I/O model, you have to stand at the gate and wait until the friend arrives; during that time you can do nothing else, and it is not hard to see that this approach is inefficient. Times have changed, and the problem is now handled with a multiplexed I/O model: you tell the building attendant that a friend is coming to see you, and ask her to direct the friend to your room when she arrives. The attendant here plays the role of the I/O multiplexer.

The same analogy explains how the select and epoll models work:

What the select version of the attendant does is this: when a friend of classmate A arrives, the select attendant, being rather dim, takes the friend from room to room asking which resident is classmate A. If every arriving visitor means the attendant must search the whole building, processing efficiency is bound to be low, and as soon as many visitors show up she cannot keep pace.

The epoll version of the attendant is more advanced: she writes down classmate A's information in advance, such as his room number, so when classmate A's friend arrives she simply tells the friend which room classmate A is in, instead of personally walking the whole building to find him. Epoll can locate classmate A without the slightest effort. The difference between the epoll and select models should be clear at a glance.

In the Linux kernel, the fd_set used by select is limited in size: the kernel parameter __FD_SETSIZE defines the number of handles each fd_set can hold. In the kernel headers, /usr/include/linux/posix_types.h:

#undef __FD_SETSIZE

#define __FD_SETSIZE 1024

So if you want to monitor the readable or writable state of 1025 handles at the same time, select cannot do it. Moreover, select is implemented in the kernel by polling: each call traverses every handle in the fd_set, so it is obvious that the more handles select has to check, the longer each call takes.
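To make this concrete, here is a minimal sketch of a select()-based readiness loop (the listen_fd socket and client_fds array are hypothetical names, not from the original post). It shows the two costs just described: the set is capped at FD_SETSIZE, and every call rebuilds and linearly scans the whole set.

#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

void serve_with_select(int listen_fd, int client_fds[], int nclients)
{
    for (;;) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(listen_fd, &readfds);
        int maxfd = listen_fd;

        /* Every descriptor must be re-registered on every iteration. */
        for (int i = 0; i < nclients; i++) {
            if (client_fds[i] >= FD_SETSIZE)   /* the 1024 limit bites here */
                continue;
            FD_SET(client_fds[i], &readfds);
            if (client_fds[i] > maxfd)
                maxfd = client_fds[i];
        }

        if (select(maxfd + 1, &readfds, NULL, NULL, NULL) < 0) {
            perror("select");
            return;
        }

        /* Find the ready descriptors by scanning them all: O(n) per
         * call, even when only one socket is actually active. */
        for (int i = 0; i < nclients; i++) {
            if (FD_ISSET(client_fds[i], &readfds)) {
                char buf[4096];
                ssize_t n = read(client_fds[i], buf, sizeof buf);
                (void)n;   /* handle data or EOF here */
            }
        }
        if (FD_ISSET(listen_fd, &readfds)) {
            /* accept() the new connection here */
        }
    }
}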

Epoll is an I/O multiplexing mechanism that is only available on Linux kernels 2.6 and above. The maximum number of descriptors epoll supports is the system's maximum number of open files, which is generally far greater than 2048; on a machine with 1 GB of memory it is around 100,000 (see cat /proc/sys/fs/file-max). This number depends heavily on system memory.

Another Achilles' heel of traditional select/poll shows up when you hold a large socket set: because of network latency, only part of the set is "active" at any moment, yet every select/poll call scans the entire set linearly, so efficiency declines linearly as the set grows. Epoll does not have this problem, because it only operates on the "active" sockets: in the kernel implementation, epoll attaches a callback function to each FD, and only active sockets invoke their callbacks; sockets in the idle state do not. In this sense epoll implements a "pseudo" AIO, with the driving force in the OS kernel. In some benchmarks where essentially all the sockets are active, such as a high-speed LAN environment, epoll is no more efficient than select/poll; on the contrary, if epoll_ctl is called too often, efficiency drops slightly. But once idle connections are used to simulate a WAN environment, epoll is far more efficient than select/poll.
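Here is a minimal epoll sketch under the same assumptions (a pre-existing listening socket listen_fd; the function name is illustrative). Note the contrast with select: descriptors are registered once with epoll_ctl(), and each epoll_wait() call returns only the ready descriptors, so idle sockets cost nothing per call.

#include <stdio.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_EVENTS 64

void serve_with_epoll(int listen_fd)
{
    int epfd = epoll_create1(0);
    if (epfd < 0) { perror("epoll_create1"); return; }

    /* Register the listening socket once; no per-call re-registration. */
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev) < 0) {
        perror("epoll_ctl");
        close(epfd);
        return;
    }

    struct epoll_event events[MAX_EVENTS];
    for (;;) {
        /* Blocks until at least one registered FD is ready; only the
         * active FDs come back, with no linear scan of the whole set. */
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        if (n < 0) { perror("epoll_wait"); break; }

        for (int i = 0; i < n; i++) {
            if (events[i].data.fd == listen_fd) {
                int conn = accept(listen_fd, NULL, NULL);
                if (conn < 0) continue;
                struct epoll_event cev = { .events = EPOLLIN, .data.fd = conn };
                epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &cev);
            } else {
                char buf[4096];
                ssize_t r = read(events[i].data.fd, buf, sizeof buf);
                if (r <= 0) {   /* EOF or error: unregister and close */
                    epoll_ctl(epfd, EPOLL_CTL_DEL, events[i].data.fd, NULL);
                    close(events[i].data.fd);
                }
            }
        }
    }
    close(epfd);
}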

Epoll has two modes of operation: edge-triggered (ET) and level-triggered (LT).

LT (level-triggered) is the default mode of operation and supports both blocking and non-blocking sockets. In this mode the kernel tells you whether a file descriptor is ready, and you can then perform I/O on the ready FD. If you do nothing with it, the kernel will keep notifying you, so this mode is less prone to programming errors. Traditional select/poll are representative of this model.

ET (edge-triggered) is the high-speed mode of operation and supports only non-blocking sockets. In this mode, the kernel tells you through epoll when a descriptor changes from not ready to ready. It then assumes you know the file descriptor is ready and sends no more ready notifications for that descriptor until you do something that causes it to stop being ready (for example, by sending, receiving, or accepting a request, or by sending or receiving less than a certain amount of data, which causes an EWOULDBLOCK error).
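Here is a minimal sketch of what ET requires in practice (the helper names are illustrative): the FD must be non-blocking, registered with EPOLLET, and drained until read() fails with EAGAIN/EWOULDBLOCK after each notification, because ET will not notify again while the FD merely stays ready.

#include <errno.h>
#include <fcntl.h>
#include <sys/epoll.h>
#include <unistd.h>

/* Put a descriptor into non-blocking mode; mandatory for ET. */
static int make_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    return flags < 0 ? -1 : fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

/* Register fd for edge-triggered readability notifications. */
static int register_et(int epfd, int fd)
{
    struct epoll_event ev = { .events = EPOLLIN | EPOLLET, .data.fd = fd };
    return epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
}

/* On an ET notification, read until the kernel says "no more data". */
static void drain_fd(int fd)
{
    char buf[4096];
    for (;;) {
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0)
            continue;   /* process buf[0..n) here, then keep draining */
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
            break;      /* FD no longer ready; wait for the next edge */
        break;          /* EOF (n == 0) or a real error */
    }
}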
