Nginx's Rapid Adoption and Concurrent Processing Advantages

Source: Internet
Author: User

Nginx, pronounced "engine X", is a lightweight HTTP server written in Russia. It is a high-performance HTTP and reverse proxy server, and also an IMAP/POP3/SMTP proxy server. Nginx was developed by Igor Sysoev for Rambler.ru, the second-most visited site in Russia, where it had been running in production for more than two and a half years at the time of writing. Sysoev released the project under a BSD-like license.

Nginx is written in an event-driven style, which gives it excellent performance and makes it a very efficient reverse proxy and load balancer. Its performance matches Lighttpd's, but without Lighttpd's memory-leak problem; moreover, Lighttpd's mod_proxy has problems of its own and went unmaintained for a long time.

As an HTTP server, nginx has the following basic features:

    • Serves static files and index files, generates automatic directory indexes, and caches open file descriptors.
    • Accelerated reverse proxying without caching, with simple load balancing and fault tolerance.
    • Accelerated FastCGI support, with simple load balancing and fault tolerance.
    • Modular structure. Filters include gzip compression, byte ranges, chunked responses, and an SSI filter. If multiple SSI inclusions on a single page are handled by FastCGI or proxied servers, they are processed in parallel rather than waiting for one another.
    • Supports SSL and TLS SNI.

Nginx was designed with performance optimization in mind: performance is its most important consideration, and implementation efficiency was a constant concern. It supports kernel event-notification mechanisms such as epoll and stands up under heavy load testing; published reports show it sustaining up to 50,000 concurrent connections.

Nginx is spreading at remarkable speed, rapidly expanding its market share on the strength of its stability, high performance, and many other advantages. As is widely known, nginx's processing model is single-threaded, so how does it gain an advantage in concurrency? Won't its main thread be blocked by network congestion? What follows is a conceptual explanation of these questions.

The root cause of the confusion is that people underestimate how fast a CPU can actually process requests, and they oversimplify common server architectures. Anyone who has worked on a large, mature server knows that removing a single system bottleneck matters more than optimizing a thousand algorithms. This is the barrel effect: the amount of water a bucket can hold is determined by its shortest stave.

The reason typical server software dedicates a thread to each connection, and even lets that thread block, is not that this approach is the best available or that designers know nothing better. It is simply the easiest approach: conceptually and operationally straightforward, with strong fault tolerance. But for servers with extreme performance requirements, such as DNS servers or load balancers, which need both high processing speed and high concurrency, the simple thread pool and connection pool approach breaks down. Consider a request for an index page that references dozens of ancillary resource files. If the client's network is slow, those dozens of connections stay blocked for a long time; multiply that across many users and the server cannot cope, because per-thread overhead is large, and threads that cannot be released quickly become disastrous for the server. On the public Internet this is especially pronounced. Clearly, it is foolish to make the server pay for the client's slow network (a sketch of this thread-per-connection model follows below).
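
To make the cost concrete, here is a minimal thread-per-connection sketch in C. It is purely illustrative, not any real server's code; the port number and buffer size are arbitrary assumptions. Each accepted client receives a dedicated thread, and a slow client parks that thread, stack and all, for the whole transfer:

    /* Minimal thread-per-connection echo server (illustrative sketch).
     * Compile: cc threaded.c -lpthread
     * A slow client pins its thread for the entire transfer. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <pthread.h>
    #include <stdint.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void *handle_client(void *arg) {
        int fd = (int)(intptr_t)arg;
        char buf[4096];
        ssize_t n;
        /* read() blocks for as long as the client is slow; the whole
         * thread (and its stack) sits idle until bytes arrive. */
        while ((n = read(fd, buf, sizeof(buf))) > 0)
            write(fd, buf, (size_t)n);    /* write() can block too */
        close(fd);
        return NULL;
    }

    int main(void) {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);      /* arbitrary demo port */
        bind(srv, (struct sockaddr *)&addr, sizeof(addr));
        listen(srv, 128);
        for (;;) {
            int cli = accept(srv, NULL, NULL);
            if (cli < 0)
                continue;
            pthread_t t;                  /* one OS thread per connection */
            pthread_create(&t, NULL, handle_client, (void *)(intptr_t)cli);
            pthread_detach(t);
        }
    }

With a typical default stack of several megabytes per thread, a few thousand slow clients already cost gigabytes of address space and heavy scheduler churn, which is exactly the disastrous overhead described above.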

So if multithreading has this problem, how does a single thread escape it? The key lies in asynchronous I/O. On Windows there is IOCP (I/O Completion Ports: for large volumes of I/O, the kernel internally uses roughly one worker thread per CPU for event handling, and it notifies you when a given asynchronous read or write has completed). On Linux there is epoll (a pure event-notification interface that tells you when a descriptor can be read or written). With these, it is easy to split every request into blocking and non-blocking pieces. Everything that would block is registered with epoll, which fires the corresponding event when the operation can proceed; the non-blocking parts (whose processing takes very little time) run on the main thread until they reach a point that would block, at which point the blocking part is handed off and the loop listens for its completion event. This is the event-driven model.
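
Below is a minimal single-threaded epoll event loop in C that illustrates this model. It is a sketch under simplifying assumptions (an echo service, no output buffering, an arbitrary port), not nginx's actual event loop:

    /* Single-threaded epoll echo loop (illustrative sketch).
     * One thread multiplexes every connection; it only sleeps inside
     * epoll_wait(), never on an individual slow socket. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <netinet/in.h>
    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void set_nonblocking(int fd) {
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
    }

    int main(void) {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);          /* arbitrary demo port */
        bind(srv, (struct sockaddr *)&addr, sizeof(addr));
        listen(srv, 128);
        set_nonblocking(srv);

        int ep = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = srv };
        epoll_ctl(ep, EPOLL_CTL_ADD, srv, &ev);

        struct epoll_event events[64];
        for (;;) {
            /* The only blocking point: wait for the kernel to report
             * that some registered socket is ready. */
            int n = epoll_wait(ep, events, 64, -1);
            for (int i = 0; i < n; i++) {
                int fd = events[i].data.fd;
                if (fd == srv) {              /* accept all pending clients */
                    int cli;
                    while ((cli = accept(srv, NULL, NULL)) >= 0) {
                        set_nonblocking(cli);
                        struct epoll_event cev = { .events = EPOLLIN,
                                                   .data.fd = cli };
                        epoll_ctl(ep, EPOLL_CTL_ADD, cli, &cev);
                    }
                } else {                      /* client readable: echo back */
                    char buf[4096];
                    ssize_t r = read(fd, buf, sizeof(buf));
                    if (r > 0) {
                        /* May write partially; a real server would buffer
                         * the rest and wait for EPOLLOUT. */
                        write(fd, buf, (size_t)r);
                    } else if (r == 0 || errno != EAGAIN) {
                        epoll_ctl(ep, EPOLL_CTL_DEL, fd, NULL);
                        close(fd);
                    }
                }
            }
        }
    }

IOCP inverts the contract: instead of being told that a socket is ready, you post asynchronous reads and writes up front and the kernel notifies you as each one completes. Either way, no thread is ever left waiting on a single slow connection.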

It is easy to get confused here, because many people assume that the request-handling functions themselves will block the main thread. In reality this is the barrel effect again: whether CPU work is the shortest stave is settled by measurement and experience, and the answer is that its processing time is very short. A loop of one million iterations can finish faster than a single LAN round trip, which makes the point easy to see. If your server handles 10,000 requests per second, and the per-request functional work (protocol parsing, computation, generating output) costs on the order of tens of microseconds, that work consumes only about 0.1-0.3 seconds of each second; the rest of the time is spent waiting on network congestion and on events. Given that, what would be gained by splitting the computation across multiple threads? And that is before even considering the public Internet. I/O blocking, above all network I/O, is the main factor limiting a server. It is the short stave.
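
The million-iteration claim is easy to check for yourself. The following C sketch times such a loop with clock_gettime; the comparison figure in the output is a typical LAN round-trip range assumed for illustration, not a measurement from this article:

    /* Back-of-envelope timing: a 1,000,000-iteration loop usually
     * finishes in well under a millisecond on modern hardware,
     * i.e. faster than one LAN round trip (commonly ~0.1-1 ms). */
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct timespec t0, t1;
        volatile long sink = 0;  /* volatile keeps the loop from being optimized away */

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < 1000000L; i++)
            sink += i;
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double us = (t1.tv_sec - t0.tv_sec) * 1e6 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e3;
        printf("1M-iteration loop: %.1f us (one LAN RTT: ~100-1000 us)\n", us);
        return 0;
    }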

Event-notification mechanisms such as IOCP and epoll are what solve the network I/O problem. When the network is not congested, plain blocking calls such as accept() and read() perform just as well and are simpler; but when network latency is severe, the event mechanisms come out far ahead, because they can service an enormous number of connections without performance collapsing. Where a directly blocking design might sustain 1,000 connections, an epoll-based one can sustain 3,000-5,000 concurrently, so its practical value is much greater.

What remains is the processing that happens after an event fires, which was described above and will not be repeated here. Nginx, Lighttpd, and similar servers are all built on this kind of model; if you are interested, study their source code.
