Apache handles requests with synchronous blocking I/O; Nginx is asynchronous and non-blocking. For a detailed explanation of their differences, see http://blog.csdn.net/xifeijian/article/details/17385831, which covers them very thoroughly.
Nginx owes its high concurrency to its epoll model, which differs from the traditional server architecture; epoll was introduced in Linux kernel 2.6. Below we compare how Apache and Nginx work.
Traditional Apache works with multiple processes or multiple threads. Assume it is running in multi-process mode (prefork): Apache pre-forks several processes, much like a process pool, except that this pool grows as the number of requests grows. Each connection is handled entirely within one process, and specifically, recv(), the disk I/O to locate the file for the URI, and send() are all blocking calls. In effect, every socket read or write Apache performs blocks, and blocking means the process is suspended into a sleep state. So once the number of connections climbs, Apache must spawn more processes to keep responding; once there are many processes, the CPU switches between them frequently, which is costly in both resources and time, and Apache's performance degrades. Put plainly, it simply cannot cope with that many processes. Think about it: if handling a request never blocked the process, efficiency would clearly improve a great deal.
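To make the blocking concrete, here is a minimal sketch of that process-per-connection style. It is not Apache's actual code: the port (8080) and the canned response are placeholders. Every accept(), recv(), and send() puts the handling process to sleep until the kernel is ready.

```c
/* Minimal sketch of a blocking, process-per-connection server
 * (prefork style). Each accepted connection is handled start to
 * finish inside one forked child. */
#include <signal.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    signal(SIGCHLD, SIG_IGN);                 /* auto-reap finished children */

    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);              /* placeholder port */
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 128);

    for (;;) {
        int conn = accept(lfd, NULL, NULL);   /* blocks waiting for a client */
        if (conn < 0) continue;
        if (fork() == 0) {                    /* one process per connection */
            char buf[4096];
            recv(conn, buf, sizeof(buf), 0);  /* blocks: child sleeps here */
            /* disk I/O to find the file for the URI would also block here */
            const char *resp = "HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok";
            send(conn, resp, strlen(resp), 0);/* blocks until kernel buffers it */
            close(conn);
            _exit(0);
        }
        close(conn);                          /* parent drops its copy */
    }
}
```

Each forked child exists only to shepherd one connection through its blocking calls, which is exactly why the process count tracks the connection count.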
Nginx adopts the epoll model and is asynchronous and non-blocking. For Nginx, processing a complete connection request is divided into events, one per stage: accept(), recv(), disk I/O, send(), and so on, each handled by a corresponding module; a complete request may pass through hundreds of modules. The real core is the event collection and distribution module, which manages all the others: only when the core module dispatches to a module does that module consume CPU to process the request. Take an HTTP request. First, the event of interest (the listening socket) is registered with the event collection and distribution module; registration does not block but returns immediately, so the process need not babysit the connection, because the kernel will notify it when a connection arrives (epoll reports it) and the CPU can do other work in the meantime. Once a request comes in, the whole request is assigned a context (actually pre-allocated) and a new event of interest is registered (readability); likewise, when the client's data arrives, the kernel automatically notifies the process to read it. After reading, the data is parsed; after parsing, the resource is located on disk (disk I/O); once the I/O completes, the process is notified and starts sending the data back to the client with send(), which again does not block, so the kernel sends a notification once the send completes. In short, a request is split into many stages, each registered with and handled by its own module, all asynchronously and non-blockingly. "Asynchronous" here means starting an operation without waiting for its result: you are notified automatically when it is done.
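The "register interest and return immediately" step could look roughly like this; watch_readable is a hypothetical helper name, shown only to illustrate the non-blocking registration.

```c
/* Sketch of registering an event of interest with epoll: mark the
 * socket non-blocking and ask the kernel to notify us when it becomes
 * readable. The call returns at once instead of sleeping. */
#include <fcntl.h>
#include <sys/epoll.h>

static void watch_readable(int epfd, int fd)
{
    /* Non-blocking: read()/write() now return EAGAIN instead of sleeping. */
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    struct epoll_event ev;
    ev.events = EPOLLIN;       /* "tell me when there is data to read" */
    ev.data.fd = fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);  /* returns immediately */
}
```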
Features of select and epoll
select: select finds ready handles by traversing all of them, i.e. when some handle has an event, select must scan the whole set to discover which handles were notified, so efficiency is low. With only a few connections, however, there is little performance difference between select and epoll's LT trigger mode.
One more point: select supports only a limited number of handles, 1024 by default; this is the FD_SETSIZE limit on the descriptor set. Exceeding it can easily cause an overflow that is very hard to track down. Of course, you can raise this parameter, but doing so means changing the constant and recompiling.
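A sketch of the select() pattern just described: after select() returns, the caller must probe every tracked descriptor with FD_ISSET to find the ready ones, and an fd_set holds at most FD_SETSIZE (1024 on typical Linux builds) descriptors. poll_with_select is a hypothetical helper name.

```c
/* Illustrates the O(n) scan select forces on the caller, and the
 * FD_SETSIZE cap on the descriptor set. */
#include <stdio.h>
#include <sys/select.h>

void poll_with_select(int fds[], int nfds)
{
    fd_set readset;
    FD_ZERO(&readset);
    int maxfd = 0;
    for (int i = 0; i < nfds; i++) {   /* fds >= FD_SETSIZE would overflow the set */
        FD_SET(fds[i], &readset);
        if (fds[i] > maxfd) maxfd = fds[i];
    }

    if (select(maxfd + 1, &readset, NULL, NULL, NULL) > 0) {
        for (int i = 0; i < nfds; i++) /* must traverse ALL handles */
            if (FD_ISSET(fds[i], &readset))
                printf("fd %d is readable\n", fds[i]);
    }
}
```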
epoll: epoll does not find ready handles by traversal; it works by event response. When an event occurs on a handle, that handle is picked out immediately, with no need to walk the whole handle list, so efficiency is very high; internally the kernel keeps the registered handles in a red-black tree.
epoll also distinguishes ET from LT: LT is level-triggered and ET is edge-triggered. The two differ considerably both in performance and in the code needed to use them correctly.
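For example, under ET the kernel reports a readiness change only once, so the handler has to drain the socket until read() returns EAGAIN; under LT a partial read is fine, because epoll_wait keeps reporting the descriptor while data remains. A minimal ET read loop, assuming a non-blocking fd:

```c
/* Edge-triggered discipline: keep reading until the kernel buffer is
 * empty (EAGAIN), or the next edge may never be reported for the
 * leftover bytes. */
#include <errno.h>
#include <unistd.h>

void drain_et(int fd)
{
    char buf[4096];
    for (;;) {
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n > 0) {
            /* process n bytes ... */
        } else if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            break;   /* buffer drained; wait for the next edge */
        } else {
            break;   /* n == 0 (peer closed) or a real error */
        }
    }
}
```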
The epoll model is mainly responsible for handling large numbers of concurrent user requests and completing the data exchange between server and clients. The specific implementation steps are as follows (a condensed code sketch follows after the list):
(a) Call epoll_create() to create an epoll file descriptor; its size argument is a hint for the maximum number of socket descriptors it will manage.
(b) Create one or more receive threads associated with the epoll instance; the application can create several to handle epoll's read notification events, with the thread count depending on the program's needs.
(c) Create a listening socket descriptor listensock and set it to non-blocking mode; call listen() to watch for new connection requests on the socket; in the epoll_event structure set the event type to EPOLLIN and, to improve efficiency, the working mode to EPOLLET; register the event with epoll_ctl(); and finally start the network monitoring thread.
(d) The network monitoring thread starts its loop, calling epoll_wait() to wait for epoll events to occur.
(e) If an epoll event indicates a new connection request, call accept(), store the new user socket descriptor in the epoll_data union, set the descriptor to non-blocking, and in the epoll_event structure set the event types to read and write and the working mode to EPOLLET.
(f) If an epoll event indicates that data is readable on a socket descriptor, add that descriptor to the readable queue and notify a receive thread to read the data; the received data is placed into a received-data linked list, and after business-logic processing the response packet is placed into a send-data linked list, waiting to be sent by the sending thread.
epoll is simple to use: just four APIs in total: epoll_create, epoll_ctl, epoll_wait, and close.
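Putting steps (a) through (f) together, here is a condensed sketch built from exactly those four calls. It is a toy echo server, not the original author's code: port 8080, the buffer sizes, and the inline echo in place of separate receive/send threads are all simplifications.

```c
/* Condensed single-threaded version of steps (a)-(f): one epoll
 * instance, a non-blocking EPOLLET listening socket, and an event
 * loop that accepts connections and echoes readable data. */
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <netinet/in.h>

static void set_nonblock(int fd)
{
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
}

int main(void)
{
    /* (a) create the epoll instance (size is only a hint since 2.6.8) */
    int epfd = epoll_create(1024);

    /* (c) listening socket: non-blocking, registered as EPOLLIN | EPOLLET */
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);          /* placeholder port */
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 128);
    set_nonblock(lfd);

    struct epoll_event ev = { .events = EPOLLIN | EPOLLET, .data.fd = lfd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, lfd, &ev);

    struct epoll_event events[64];
    for (;;) {
        /* (d) wait for registered events to occur */
        int n = epoll_wait(epfd, events, 64, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == lfd) {
                /* (e) new connection: accept all pending (ET: loop to EAGAIN) */
                int conn;
                while ((conn = accept(lfd, NULL, NULL)) >= 0) {
                    set_nonblock(conn);
                    struct epoll_event cev = { .events = EPOLLIN | EPOLLET,
                                               .data.fd = conn };
                    epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &cev);
                }
            } else {
                /* (f) readable: here we echo inline instead of queueing
                 * to separate receive/send threads */
                char buf[4096];
                ssize_t r;
                while ((r = read(fd, buf, sizeof(buf))) > 0)
                    write(fd, buf, r);
                if (r == 0 || (r < 0 && errno != EAGAIN))
                    close(fd);
            }
        }
    }
}
```

Note the two consequences of EPOLLET here: both the accept loop and the read loop must run until EAGAIN, otherwise events can be missed.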
We can use a simple example to illustrate Apache's workflow: going out to a restaurant for a meal. The restaurant's working model is one waiter serving a customer from start to finish. The flow goes like this: the waiter waits for guests at the door (listen); when a guest arrives, he seats them at a table (accept); he waits for the customer to order (request URI); he takes the order to the kitchen and asks the chef to cook the dishes (disk I/O); he waits for the kitchen to finish (read); then he serves the guest (send). Throughout, the waiter (process) is blocked at many points. When more guests come (HTTP requests), the restaurant can only cope by hiring more waiters (forking processes); but because the restaurant's resources (CPU) are limited, once there are too many waiters the management cost (CPU context switching) soars, and the restaurant hits a bottleneck.
Now let's see how Nginx handles it. A doorbell hangs at the restaurant's door (the listen registered with the epoll model). Once a guest (HTTP request) arrives, a waiter is sent to receive them (accept); the waiter then goes off to do other things (such as receiving more guests). When the guest has finished choosing their meal, they call the waiter over (data ready to read()); the waiter takes the menu to the kitchen (disk I/O) and again goes off to other work; when the kitchen is done it calls the waiter (disk I/O complete), and the waiter serves the guest (send()); each dish goes out to the guest as it is ready, and in between the waiter is free to do other things. The whole process is cut into many stages, each with its corresponding service module. Think about it: when more guests arrive, this restaurant can entertain far more of them.
Whether it is Nginx or Squid, a reverse proxy's network model is event-driven. Event-driven is actually a very old technique: the early select and poll worked this way. Later came more advanced event mechanisms based on kernel notification, such as epoll (one of the backends libevent can use), which improved event-driven performance further. The essence of event-driven is I/O events: the application switches rapidly among many I/O handles to achieve so-called asynchronous I/O. What an event-driven server does best is exactly this I/O-intensive kind of work, such as reverse proxying: it shuttles data between the client and the web server, a purely I/O-bound job involving no complex computation of its own. Building a reverse proxy on an event-driven core is clearly the better fit: a single worker process can run it, with no process or thread management overhead and little CPU and memory consumption.
So that is how Nginx and Squid do it. Of course, Nginx can also run in multi-process plus event-driven mode: a few worker processes each running an event loop, with no need for Apache's hundreds of processes. Nginx also handles static files well, because serving a static file is itself mostly disk I/O, and the same reasoning applies. As for boasting about raw concurrent connection counts, that is meaningless: anyone can write a network program that holds tens of thousands of connections, but if most of those clients are just parked there doing nothing, it has no value.
Now look at servers like Apache or Resin. They are called application servers because they actually run concrete business applications: scientific computing, graphics and imaging, database reads and writes. These are likely CPU-intensive workloads, for which event-driven design is not appropriate. For example, if one computation takes 2 seconds, the process is completely blocked for those 2 seconds and the event loop accomplishes nothing. Think of MySQL: if it switched to an event-driven design, a large join or sort would block every client. This is where multiple processes or threads show their strength: each handles its own task without blocking or interfering with the others. Of course, modern CPUs keep getting faster, so a single computation blocks for ever less time; but as long as there is blocking, event-driven programming has no advantage. So process and thread techniques will not disappear; they complement the event mechanism and will coexist with it for the long term.
In general, event-driven designs suit I/O-intensive services, while multi-process or multi-threaded designs suit CPU-intensive services. Each has its own strengths, and neither is poised to replace the other.