A simple comparison of the Nginx and Traffic Server concurrency models

Source: Internet
Author: User

Reposted from http://www.cnblogs.com/liushaodong/archive/2013/02/26/2933535.html

Nginx Concurrency Model:
The Nginx process model takes the prefork approach. The number of worker subprocesses is specified in the configuration file; it defaults to 1 and should not exceed 1024. The master process creates the listening socket and forks the child processes; each worker process listens for client connections and tries on its own to accept on the shared listening socket. The accept is serialized by a lock, which is configurable and enabled by default; if the operating system supports atomic integer operations, the lock is implemented as an atomic lock in shared memory, otherwise a file lock is used. Without the lock, multiple processes block in accept at the same time, and when a connection arrives they are all woken simultaneously, causing the thundering-herd problem. With the lock, only one worker blocks in accept while the others block on acquiring the lock, which solves the thundering herd. The master process sends commands to the worker subprocesses over a socketpair; an administrator at the terminal can also send commands to the master by delivering signals to it, and the workers communicate with one another over Unix domain sockets.

[Figure: Nginx master/worker process model diagram in the original post]
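As a rough illustration of the prefork scheme described above (a minimal sketch, not nginx's actual implementation: the function and variable names are invented here, and a multiprocessing lock stands in for nginx's shared-memory atomic lock or file lock), a master creates the listening socket first, forks the workers so they inherit it, and the workers serialize accept with a shared lock:

```python
import multiprocessing
import os
import socket

def worker(listener, accept_lock):
    # Each worker tries to accept on the shared listening socket on its own;
    # the lock serializes accept() so only one worker is woken per connection
    # (avoiding the thundering-herd wakeup described above).
    while True:
        with accept_lock:
            conn, _addr = listener.accept()
        conn.sendall(b"handled by pid %d\n" % os.getpid())
        conn.close()

def start_prefork(num_workers=2):
    # "Master": create the listening socket before forking, so every worker
    # inherits the same socket (POSIX fork start method; not portable to Windows).
    ctx = multiprocessing.get_context("fork")
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
    listener.listen(128)
    lock = ctx.Lock()
    workers = [ctx.Process(target=worker, args=(listener, lock), daemon=True)
               for _ in range(num_workers)]
    for p in workers:
        p.start()
    return listener.getsockname()[1], workers
```

Any of the workers may win the lock for a given connection, which is the "each worker accepts on its own initiative" behavior the article relies on for load balancing.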

Traffic Server Concurrency Model: Multithreaded Asynchronous Event Handling
traffic_cop and traffic_manager act as management processes, while traffic_server is the worker process responsible for listen and accept. To improve performance, traffic_server uses asynchronous I/O and multithreading. Instead of creating a thread for each connection, Traffic Server creates a configurable set of worker threads in advance, with a separate asynchronous event handler running on each worker thread. traffic_server creates several groups of threads and dispatches each event, by type, to the event queue of the appropriate thread; the thread completes the state migration by executing the callback function in the continuation associated with the event. Migration from the initial state to the end state represents the handling of the entire event; the threads are never torn down, and simply wait for the next event to arrive. [Figure: Traffic Server threading model diagram in the original post]
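The continuation-driven event loop described above can be sketched roughly as follows (a toy model, not Traffic Server source: all class and function names here are invented, and events are dispatched round-robin rather than by type). Each worker thread loops forever over its own queue, and each event delivery advances a continuation's state machine one step until the end state is reached:

```python
import queue
import threading

class Continuation:
    """Carries the state that the event callbacks migrate step by step."""
    def __init__(self, states):
        self.states = list(states)     # remaining state transitions
        self.log = []                  # states already visited
        self.done = threading.Event()  # set on reaching the end state

    def handle_event(self, reschedule):
        self.log.append(self.states.pop(0))   # one state migration per event
        if self.states:
            reschedule(self)           # more states left: queue the next event
        else:
            self.done.set()            # initial -> end migration complete

class EventThreads:
    """A fixed set of worker threads, each draining its own event queue."""
    def __init__(self, num_threads=2):
        self.queues = [queue.Queue() for _ in range(num_threads)]
        self._next = 0
        for q in self.queues:
            threading.Thread(target=self._loop, args=(q,), daemon=True).start()

    def schedule(self, cont):
        # Dispatch the event to some thread's queue (round robin here;
        # Traffic Server routes by event type to a thread group).
        q = self.queues[self._next % len(self.queues)]
        self._next += 1
        q.put(cont)

    def _loop(self, q):
        while True:                    # threads are never withdrawn;
            cont = q.get()             # they just wait for the next event
            cont.handle_event(self.schedule)
```

A connection's lifecycle then becomes a sequence of small callback executions spread across events, rather than one thread blocking for the connection's whole lifetime.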

Both servers perform network I/O: Nginx uses Linux's epoll model, while Traffic Server uses asynchronous I/O. I will not go into depth on this because I am not very familiar with the I/O internals. Both use a pre-allocation mechanism: Nginx pre-forks its worker processes, and Traffic Server pre-creates its worker threads.
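For readers unfamiliar with the epoll model mentioned above, the core idea is a readiness loop: the kernel reports which sockets can be accepted from or read without blocking, and the server reacts only to those. A minimal sketch (using Python's standard selectors module, whose DefaultSelector is epoll-backed on Linux; the function names here are invented for illustration):

```python
import selectors
import socket

def make_listener(port=0):
    # Non-blocking listening socket; port 0 lets the OS pick a free port.
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(128)
    srv.setblocking(False)
    return srv

def serve_n(srv, n):
    """Echo back data for n connections, driven purely by readiness events."""
    sel = selectors.DefaultSelector()          # epoll on Linux
    sel.register(srv, selectors.EVENT_READ)
    served = 0
    while served < n:
        for key, _mask in sel.select():        # block until some socket is ready
            if key.fileobj is srv:
                conn, _addr = srv.accept()     # listener readable: accept won't block
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ)
            else:
                conn = key.fileobj
                data = conn.recv(4096)         # connection readable: echo and close
                if data:
                    conn.sendall(data)
                sel.unregister(conn)
                conn.close()
                served += 1
    sel.unregister(srv)
    sel.close()
    return served
```

One such loop can multiplex many connections on a single thread, which is why both servers can serve far more connections than they have processes or threads.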

Comparative analysis:

1. Degree of concurrency:

Nginx uses a multi-process model in which several processes listen for new connections, so no dispatching step is needed; instead, a lock is introduced to serialize accept, trading the overhead of dispatching for the overhead of locking. In practice the lock is cheap, and cheaper still if the system supports atomic integers. Because each worker accepts connections on its own initiative, the load tends to balance across the workers, relying on the operating system's fair process scheduling. A worker process can also be configured to use threads for event handling, but the model remains multi-process, since the threads fetch events from within their own process. This design has a limitation: the number of worker processes cannot be too large (Nginx caps it at 1024), because too many workers would contend for resources and slow the system down. Nginx is clearly designed as a lightweight web server, but it performs excellently in that role: it is reported to support up to 50,000 concurrent connections, and it is the first choice for a lightweight reverse proxy.

Traffic Server uses a multithreaded asynchronous event-handling model, combining multithreading with asynchronous event handling to improve concurrency and performance. Multithreaded programs can take full advantage of modern multi-core processors, letting multiple tasks within one process execute in parallel and improving execution efficiency. This design also makes Traffic Server's performance scale with the processor, without the constraint on worker-process count that Nginx accepts. It is not designed as a lightweight web server: it supports clustering, and the Apache Traffic Server v3.0.0 benchmark reports that it can handle more than 200,000 requests per second, making it a heavyweight web server worthy of the name. Personally I think this kind of server is the direction of future development.

2. Response Time

Nginx's worker processes accept client requests and handle them directly, so its response should be fast; my feeling is that it should be faster than Traffic Server.

In Traffic Server, the server process accepts requests and places them on a request queue, from which the processing threads pick them up, so in theory its response time is not as fast as Nginx's.

A test document found online indicates that for static page requests the response times are roughly in the ratio 5:7 (data from the internet; I did not test this myself), which is consistent with the analysis.

3. Stability

Because Nginx's many worker processes work independently, when one or some of them fail abnormally, the system as a whole keeps functioning. After initializing the workers, the master process detects signals in its main loop and handles them; since the master itself remains stable, when it discovers an abnormal worker it restarts that worker, while the other workers continue unaffected. Nginx's fault tolerance by design is therefore very high.

In Traffic Server there is only one worker process, the server process, so its health must be guaranteed for the system to function. Adding the traffic_cop and traffic_manager processes to manage the server process gives it double protection: when the server fails, traffic_manager restarts it promptly, and having traffic_manager continue to accept client requests during the restart is a good design. If the traffic_manager and server processes terminate at the same time, traffic_cop restarts them both, so the stability of this system is also very high.

However, because Traffic Server cannot continue serving requests while the server process restarts, Nginx has the edge over Traffic Server in stability.


4. Peak response

What matters for a web server is how it behaves when a large number of connection requests arrive in a short period of time.

Because Nginx spreads work across multiple worker processes, the workload under a flood of requests is distributed fairly evenly among them; when the peak is too high, delays in scheduling the worker processes will block client requests, meaning the server's processing capacity constrains how many requests it can absorb. In addition, Nginx adopts a staged resource-allocation technique, with caching handled by the cache manager and cache loader processes; this design makes it hard for Nginx to run out of memory under a request spike, and unlikely to terminate abnormally under overload.

When Traffic Server's server process receives a large number of user requests, it sends them to the request queue, and as the threads then process large amounts of data it is easier to run out of memory, causing an exception that makes the server process exit. Although the process is restarted quickly (the official documentation says a restart takes only a few seconds), and the manager process continues to accept requests during the restart, the system cannot process requests during that window, whereas Nginx never stops processing. (This assumes a Traffic Server deployment without clustering.)

The analysis suggests that Nginx responds better than Traffic Server to short bursts of explosive traffic. Overload here is relative to each server's own capacity: if Nginx's processing capacity is 50,000 connections, then 50,000 is its peak, and each server's response is judged against its own peak load. Traffic Server's support for clustering, however, can greatly improve its peak-response capability.

5. Safety

Nginx does not do much specifically for security. A malicious connection attack can exhaust the worker processes' system resources and stop them responding; the master process detects this and attempts restarts, but if the master itself stops responding, the whole system needs a restart. However, because Nginx uses staged resource allocation, it is difficult to exhaust all of the system's resources to the point of total unresponsiveness, so ordinary DoS attacks have little effect on Nginx. Overall, its security is average.

Traffic Server does a lot of security work. The system periodically calls the heartbeat_manager and heartbeat_server functions to check the health of the manager and server processes, and immediately restarts any process found to be abnormal; likewise, if the server process terminates abnormally, it is restarted very quickly. Under attack, Traffic Server therefore reacts more promptly, and the system's security is relatively high.

Summary: In my view, Nginx's multi-process model and Traffic Server's multithreaded asynchronous event-handling model each have their advantages and disadvantages. Nginx is an excellent lightweight web server that works well under modest request volumes, while Traffic Server is a high-performance web server that supports clustering and is better suited to handling very large numbers of connections.
