Using Nginx Load Balancing to Build a High-Performance .NET Web Application (Part 1)

Source: Internet
Author: User
Tags: nginx, server, load balancing
I. The problem we encountered

When we deploy a web application on an IIS server and many users access it concurrently, client responses become slow and the user experience suffers. For each client request it accepts, IIS creates a thread; once the number of threads reaches the thousands, they consume a large amount of memory, and the constant switching between them drives CPU usage up. As a result, IIS performance is hard to improve. So how do we solve this problem?


II. How to solve the problem of high concurrency

To solve this high-concurrency problem, we need load balancing. Architecturally, load balancing can be done in hardware or in software. At the hardware level we can use a dedicated load balancer; in general, hardware load balancers are superior to software in both features and performance, but they are expensive. Common hardware load balancers include brands such as F5 and A10, which are widely used in large companies. Alternatively, we can load balance at the software level; commonly used software load balancers include LVS and Nginx.
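As a minimal sketch of software load balancing with Nginx, the configuration below distributes requests across two hypothetical IIS backends (the addresses and weights are placeholders, not values from this article):

```nginx
http {
    # Pool of backend servers; the addresses here are placeholders.
    upstream iis_backend {
        server 192.168.1.10:80 weight=2;  # stronger machine gets more traffic
        server 192.168.1.11:80;
    }

    server {
        listen 80;

        location / {
            # Forward every request to the upstream pool.
            proxy_pass http://iis_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
```

By default Nginx balances with round-robin; the optional `weight` parameter skews the distribution toward the stronger machine.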


III. What is Nginx?

        Nginx is a high-performance HTTP and reverse proxy server, and also an IMAP/POP3/SMTP proxy server. So what is a reverse proxy server? It accepts the client's request on the server side, distributes the request to a specific backend server for processing, and then returns that server's response to the client. For example, when a user types www.baidu.com into the address bar, the browser builds a request message and sends it to the Nginx server; Nginx forwards the request to one of our IIS servers, the IIS server sends its result back to Nginx, and Nginx sends the final result to the client's browser.

        A proxy server is an intermediate entity on the network: it is both a web server and a web client. This leads to another term: the forward proxy server. A forward proxy fetches content from the origin server on behalf of the client. The client sends a request to the proxy and specifies the IP and port of the origin server; the proxy then requests and obtains the content from the origin server and returns the result to the client. Therefore, the client must be explicitly configured to use a forward proxy.
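In the reverse proxy case described above, the client needs no special configuration: it simply talks to Nginx. A minimal reverse proxy in front of a single backend looks like the sketch below (the domain and backend address are placeholders):

```nginx
server {
    listen 80;
    server_name example.com;  # placeholder domain

    location / {
        # The client talks only to Nginx; Nginx fetches the
        # response from the backend server and relays it.
        proxy_pass http://192.168.1.10:80;  # placeholder IIS address
        proxy_set_header Host $host;
    }
}
```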


IV. Advantages of Nginx:
Cross-platform: runs on Linux and Unix, and a ported version exists for Windows. It is certainly best deployed on Linux, but the Windows port can be used as well.
Configuration is remarkably simple.
Non-blocking, high-concurrency connections: official tests show it can support 50,000 concurrent connections.
Event-driven: uses the epoll communication mechanism; when an event is not ready, it is put into a queue and handled once it becomes ready.
Master/worker structure: one master process manages multiple worker processes (similar to the SOM/SOC structure in ArcGIS). When Nginx starts, it reads our configuration and, typically, starts one worker process per CPU core. The workers are peers: each of them can handle client requests, which introduces a locking problem around accepting connections. This structure also lets us reload Nginx after modifying the configuration file without pausing the system: when the master receives a reload command, it re-reads the configuration file, starts new worker processes, and tells all the old workers to exit once they have finished handling their outstanding requests. A further benefit of this model is that if one worker hits a problem and exits, the system as a whole does not become unusable; the other workers continue to serve normally.
Low memory consumption: memory use stays small even when processing large numbers of concurrent requests; at 30,000 concurrent connections, 10 worker processes consume about 150 MB of memory.
Built-in health checking: when a backend web server goes down (hangs), front-end access is not affected. Nginx judges backend health from the status codes the backend returns (500, 404, and so on).
Bandwidth savings: gzip compression can be enabled.

High stability
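Several of the points above map directly onto configuration directives. The sketch below (backend addresses are placeholders) starts one worker per CPU core, enables gzip, and has Nginx retry the next backend when one returns an error:

```nginx
worker_processes auto;        # one worker per CPU core

http {
    gzip on;                  # save bandwidth by compressing responses
    gzip_types text/plain text/css application/json application/javascript;

    upstream backend {
        # Placeholder addresses; a server that keeps failing is
        # temporarily taken out of rotation (passive health check).
        server 192.168.1.10:80 max_fails=3 fail_timeout=30s;
        server 192.168.1.11:80 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend;
            # If a backend errors or times out, try the next one
            # so the client still gets a response.
            proxy_next_upstream error timeout http_500 http_502 http_503;
        }
    }
}
```

A configuration change can then be applied without downtime using `nginx -s reload`, which triggers the master/worker handover described above.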


V. How Nginx handles a request
      When Nginx starts, it parses the configuration files to get the ports and IP addresses to listen on, and the master process initializes the listening sockets. The master then calls fork to create the worker processes (a process created by fork is called a child process). The workers compete to accept new connections. At this point a client can initiate a connection to Nginx; once the client completes the three-way handshake and the connection is established, one worker wins the accept, obtains the socket of the established connection, wraps the connection in its own structures, performs the read and write processing, and finally Nginx actively closes the connection.
      Internally, Nginx manages connections through connection pools. Each worker process has its own independent connection pool, whose size is worker_connections. The pool does not hold real, established connections; it is just an array of worker_connections ngx_connection_t structures. Nginx keeps all free ngx_connection_t entries in a linked list, free_connections: each time a connection is needed, one entry is taken from the free list, and after use it is put back into the free list. The worker_connections value represents the maximum number of connections each worker process can hold, so one Nginx instance can establish at most worker_connections * worker_processes connections. When Nginx acts as a reverse proxy, each request occupies two connections (one to the client and one to the backend), so the maximum concurrency is worker_connections * worker_processes / 2.
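The capacity formula above can be made concrete in the events block; the numbers here are purely illustrative:

```nginx
worker_processes 4;            # e.g. a 4-core machine

events {
    worker_connections 1024;   # per-worker connection pool size
}

# Maximum connections:  4 * 1024 = 4096
# As a reverse proxy, each request uses 2 connections
# (client side + backend side), so maximum concurrency:
#     4 * 1024 / 2 = 2048
```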
