Why are event-driven servers so hot?


Source: http://geogeo.github.com/blog/2012/12/31/node-dot-js-vs-tornado-vs-php/


The OPPC (one process per connection) model bottleneck

A traditional server such as Apache spawns a sub-process for each request: when a user connects, the server creates a connection and processes it in a sub-process of its own, so every connection gets a separate thread or sub-process. When the user requests data, the child process sits waiting for the database operation to return, and if another user requests data at the same moment, that request blocks as well.

This model performs well under very light workloads, but as the number of requests grows the load on the server becomes too great. Once Apache reaches its maximum number of processes, all of its processes slow down. Each request has its own thread, and if the service code is written in PHP, each process requires a large amount of memory [1].
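
To make the model concrete, here is a toy sketch in Node.js terms (my own illustration; the article contains no code) of one process per connection, with child_process standing in for Apache's fork-per-connection behavior:

```javascript
// oppc-sketch.js -- toy one-process-per-connection (OPPC) server.
// Run with: node oppc-sketch.js
const net = require('net');
const { fork } = require('child_process');

if (process.argv[2] === 'worker') {
  // Child: receives exactly one socket, serves it, then exits.
  process.on('message', (msg, socket) => {
    socket.end(`handled by dedicated process ${process.pid}\n`);
    socket.on('close', () => process.exit(0));
  });
} else {
  // Parent: forks a whole new OS process for every single connection.
  net.createServer((socket) => {
    const worker = fork(__filename, ['worker']);
    worker.send('connection', socket); // hand the socket handle to the child
  }).listen(8000, () => console.log('OPPC-style server on :8000'));
}
```

Every connection pays the cost of a fork plus a resident process, which is exactly the overhead the following sections analyze.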


fork() operation latency

In fact, an OPPC-based server is not as efficient as one might imagine. First, the cost of creating a new process depends largely on the operating system's implementation of fork(), and none of them handles it ideally; comparing fork() latency across operating systems makes this plain.

In principle, a fork() is just a copy of page mappings. With dynamic linking, however, the ELF (Executable and Linking Format) sections of shared libraries and the global offset table produce a great many page mappings to copy. Static linking improves fork() performance significantly, but the latency is still not encouraging.
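
The original latency comparison has not survived, but the cost is easy to observe directly. Here is a minimal Node.js micro-benchmark (my own sketch, not from the article) that times child_process.fork(), which wraps the OS-level process creation discussed here:

```javascript
// fork-latency.js -- rough average latency of spawning and reaping a child.
const { fork } = require('child_process');

if (process.argv[2] === 'child') {
  process.exit(0); // the child does nothing; we measure creation cost only
}

function forkOnce() {
  return new Promise((resolve) => {
    const start = process.hrtime.bigint();
    const child = fork(__filename, ['child']);
    child.on('exit', () => {
      resolve(Number(process.hrtime.bigint() - start) / 1e6); // milliseconds
    });
  });
}

(async () => {
  const runs = 50;
  let totalMs = 0;
  for (let i = 0; i < runs; i++) totalMs += await forkOnce();
  console.log(`average fork+exit latency: ${(totalMs / runs).toFixed(2)} ms`);
})();
```

Note that child_process.fork() also starts a new V8 instance, so the absolute numbers overstate a bare fork(); the point is only that per-connection process creation is measurably expensive.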


Process Scheduling

Linux interrupts a running process every 10 milliseconds (1 millisecond on Alpha; the value is a compile-time constant) to check whether another process should be switched in for execution. The scheduler's job is to decide which process runs next, and the difficulty lies in allocating CPU time fairly: a good scheduling algorithm gives every process a fair share of the CPU and leaves no process starving.

UNIX systems use a multi-level feedback queue scheduling algorithm: multiple ready queues of different priorities, kept ordered by priority with a heap. Linux 2.6 introduced an O(1) scheduling algorithm to minimize scheduling latency, but the scheduling frequency remains 100 Hz, meaning a process is interrupted every 10 milliseconds while the system decides whether to switch to another one. Too many switches leave the CPU busy switching instead of working, and throughput drops.

Creating many processes raises another problem: memory consumption.

Every process created occupies memory. In tests on Linux 2.6, fork() outperforms pthread_create() beyond roughly 400 connections, and after IBM's optimizations to Linux a process can handle 100,000 connections; even so, performing a fork() for every connection is too costly. Multithreading, for its part, requires attention to thread safety, deadlock, and memory leaks.

Reliability

This model also has reliability problems. An improperly configured server is vulnerable to denial-of-service (DoS) attacks: a flood of concurrent requests consumes server resources, and without careful limits or load balancing the server soon runs out of them entirely.

Synchronous blocking I/O

In this model, the application program executes a system call that blocks it: the application cannot continue until the system call completes (the data transfer finishes or an error occurs). A blocked application consumes no CPU, it simply waits for a response, but the process still holds its resources. When a large number of concurrent I/O requests arrive, this blocking becomes the server's bottleneck.
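
The contrast is easy to show in Node.js terms (my own illustration; the article contains no code): the synchronous call holds the whole thread while the kernel works, while the asynchronous variant returns immediately and delivers its result through a callback:

```javascript
const fs = require('fs');

// Synchronous, blocking: the thread can do nothing else until the kernel
// has read the whole file (or an error occurs).
const data = fs.readFileSync('/etc/hosts', 'utf8');
console.log('blocking read finished:', data.length, 'bytes');

// Asynchronous, non-blocking: the call returns at once; the result arrives
// later via a callback, and the thread stays free in the meantime.
fs.readFile('/etc/hosts', 'utf8', (err, contents) => {
  if (err) throw err;
  console.log('non-blocking read finished:', contents.length, 'bytes');
});
console.log('readFile was called, but the thread is not waiting');
```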


The event-driven server model

The analysis and experiments above show that operating systems were not designed to handle server workloads. The traditional thread model grew out of the needs of long-running, computation-intensive applications: operating systems let users run multi-threaded programs so that, for example, background file writes and UI operations can proceed at the same time. They were not designed to service an enormous number of concurrent request connections.

fork() and multithreading are both resource-intensive: creating a thread means allocating a new memory stack, and every context switch adds overhead. The CPU scheduling model is simply not a good fit for a traditional Web server.

The OPPC model therefore faces both the memory consumption of many processes and the latency of many threads, and solving the C10K problem within it is very complicated.

To solve the C10K problem, a new generation of servers has appeared. The following Web servers address it:

    • Nginx: an event-driven Web and reverse proxy server.
    • Cherokee: an open-source Web server used by Twitter.
    • Tornado: a non-blocking Web server framework implemented in Python, used by Facebook's FriendFeed.
    • Node.js: an asynchronous, non-blocking Web server that runs on Google's V8 JavaScript engine.


Clearly, these C10K servers share the same characteristics: they are event-driven and built on asynchronous, non-blocking techniques.

Network workloads involve a great deal of waiting. An Apache server spawns a large number of sub-processes that consume a large amount of memory, yet most of those sub-processes hold their memory while merely waiting for some blocking task to end. The new model exploits this characteristic by abandoning the idea of a sub-process per request: a single separate thread manages all requests and transactions. That thread is called the event loop.

The event loop asynchronously manages all user connections and all traffic to file storage or database servers. When a request arrives or I/O completes, the operating system (via poll or select) wakes the event loop, which handles the event. This decouples the number of concurrent requests being processed from blocked resources. There is still some overhead, such as maintaining the list of always-open TCP connections, but memory no longer grows rapidly with concurrency, because that list occupies only a small amount of it. Node.js and nginx both use this method to build applications holding enormous numbers of connections: one event loop manages everything, and many connections are handled well.
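
A minimal Node.js server of the kind just described (my own sketch) looks like this: one process, one thread, one event loop, and every connection handled as events on that loop rather than by a dedicated process:

```javascript
const http = require('http');

// One process, one thread, one event loop. Each connection below is just an
// entry in the loop's watch list, not a new OS process or thread.
const server = http.createServer((req, res) => {
  // Simulate a slow backend (e.g. a database call) without blocking: the
  // timer is queued on the event loop, and the thread stays free to accept
  // and serve other connections in the meantime.
  setTimeout(() => {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end(`handled by pid ${process.pid}\n`);
  }, 100);
});

server.listen(8000, () => {
  console.log('event-loop server listening on :8000');
});
```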

Node.js is currently the most popular event-driven, asynchronous, non-blocking I/O Web server. Its proponents say it is more efficient with memory, and because it is not a traditional OPPC design there are no deadlocks to worry about. Almost no function in Node.js performs I/O directly, so nothing blocks.

In a stress test with 1,000 concurrent requests, the event-driven Node.js and Tornado come out faster than a traditional OPPC Apache server; Node.js's showing, of course, also owes something to running on Google's V8. On average the two event-driven servers handle twice as many requests per second as Apache while using half the memory. Figure 2 shows the event-driven servers occupying more of the CPU, which means that although the model runs in a single thread, it uses the CPU more efficiently to process more concurrent requests.

Limitations

Event loops do not solve every problem [2], and Node.js in particular has some defects. The most obvious omission in Node.js is multithreading. Event-driven designs look as though they should parallelize across threads, as most event-driven GUI frameworks do: in theory events are independent of one another, so running them in parallel ought not to be difficult.
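
The practical consequence is easy to demonstrate (my own example, not from the article): since every event runs on the one thread, a single CPU-bound handler stalls every other connection until it finishes:

```javascript
const http = require('http');

http.createServer((req, res) => {
  if (req.url === '/busy') {
    // CPU-bound work on the event-loop thread: for the several seconds this
    // loop runs, NO other request on this server can make any progress.
    let sum = 0;
    for (let i = 0; i < 5e9; i++) sum += i;
    res.end(`done: ${sum}\n`);
  } else {
    res.end('fast response\n'); // instant -- unless /busy is running
  }
}).listen(8000);
```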

There are, however, technical reasons why multithreading is hard for Node.js in practice. Node.js runs on Google's V8 JavaScript engine. V8 is a high-performance engine, but it was not designed for multithreading: it began as the JavaScript engine for Google Chrome, and JavaScript in the browser runs on a single thread. Adding multithreading after the fact is therefore very difficult; the underlying architecture was never designed for servers.


Future

With the rise of reverse proxies such as nginx and of load balancing across independently running instances, the author of Node.js has suggested that the best answer to the multithreading gap is to fork sub-processes and let a load balancer spread concurrent work among them. Such a solution can look like papering over the flaw, but the event-driven camp holds that a logical server should achieve optimal performance on a single CPU core while occupying little memory. Apache, by contrast, set out to manage concurrency and threads fully and efficiently at the cost of all available resources. The event-driven model avoids that tangled design and reaches a highly scalable server in the most concise and efficient way.

A single thread also matches the unit of compute on cloud platforms: a single cloud instance is clearly well suited to running a single Node.js server, with a load balancer providing horizontal scaling.
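
In today's Node.js, this fork-and-balance approach is packaged as the cluster module. A minimal sketch (my own, in an API that postdates the article):

```javascript
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isPrimary) { // cluster.isMaster on Node.js versions before 16
  // Fork one single-threaded event-loop worker per CPU core; incoming
  // connections are balanced among them.
  for (let i = 0; i < os.cpus().length; i++) cluster.fork();
  cluster.on('exit', () => cluster.fork()); // replace a crashed worker
} else {
  http.createServer((req, res) => {
    res.end(`served by worker pid ${process.pid}\n`);
  }).listen(8000); // all workers share port 8000
}
```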


The event-driven model emerged to resolve the mismatch between traditional servers and network workloads, to achieve highly scalable servers, and to reduce memory overhead. It changes how connections relate to the server: every connection is managed by the event loop, and each connection raises an event inside the event-loop process instead of spawning a new OS thread with its own supporting memory. There is therefore no deadlock to worry about, and blocking resources are never called directly; asynchronous calls provide non-blocking I/O. On a server of the same configuration, the event-driven model supports more concurrent requests, which is what makes the server scalable.
