First, the problems encountered
When we deploy a web application on an IIS server and many users access it concurrently, the client responds very slowly and the user experience is poor. This is because when IIS accepts a client request, it creates a thread for it; once the number of threads reaches the thousands, they consume a large amount of memory. At the same time, constant switching between these threads drives CPU usage up, which degrades IIS performance even further. So how do we solve this problem?
Second, how to solve high concurrency problems
To solve this high-concurrency problem, we need load balancing. Architecturally, load balancing can be done in hardware or in software. At the hardware level, we can use a load balancer appliance; in general, hardware load balancers outperform software solutions in both features and performance, but they are expensive. Common hardware load balancers include the F5 and A10 brands, which are widely used in large companies. At the software level, common load balancers include LVS and Nginx.
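As a sketch of what software load balancing with Nginx looks like, the following configuration fragment distributes requests across two backend IIS servers. The upstream name `iis_servers` and the server addresses are illustrative placeholders, not values from this article:

```nginx
# Hypothetical example: load-balance across two backend IIS servers.
http {
    upstream iis_servers {
        # The default strategy is round-robin; 'weight' skews the share of traffic.
        server 192.168.1.10:80 weight=2;
        server 192.168.1.11:80;
    }

    server {
        listen 80;
        location / {
            # Forward every request to the upstream group above.
            proxy_pass http://iis_servers;
        }
    }
}
```

With this in place, Nginx accepts all client connections itself and spreads the actual work across the backends, so no single IIS server has to hold thousands of threads.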
Third, what is Nginx?
Nginx is a high-performance HTTP and reverse proxy server, as well as an IMAP/POP3/SMTP proxy server. So what is a reverse proxy server? It is a server that accepts the client's request, distributes that request to a specific backend server for processing, and then relays the backend server's response to the client. For example, when a user enters www.baidu.com in the address bar, the browser builds a request and sends it to the Nginx server; Nginx forwards the request to one of our IIS servers, the IIS server sends its result back to Nginx, and Nginx sends the final result to the client's browser. A proxy server is an intermediary on the network: it acts as both a web server and a web client. This leads to another term, the forward proxy server: to obtain content from an origin server, the client sends its request to the proxy server and specifies the origin server's IP and port; the proxy server then requests and obtains the content from the origin server and feeds the result back to the client. So with a forward proxy, it is the client that must be configured to use the proxy.
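A minimal reverse-proxy configuration corresponding to the flow just described might look like the fragment below. The domain name and the backend address 192.168.1.10 are placeholders:

```nginx
server {
    listen 80;
    server_name www.example.com;   # placeholder domain

    location / {
        # Nginx accepts the browser's request and relays it to the IIS backend.
        proxy_pass http://192.168.1.10:80;
        # Pass the original host and client address through to the backend.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Note that the client needs no configuration at all here; as far as the browser is concerned, Nginx *is* the website. That is the key difference from a forward proxy, where the client must be explicitly configured.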
Fourth, the advantages of Nginx:
Cross-platform: Nginx runs on Linux and Unix, and there is also a Windows port. It is certainly best deployed on Linux, but we can use the ported version on Windows.
Exceptionally simple configuration.
Non-blocking, high-concurrency connections: official tests show it can support 50,000 concurrent connections.
Event-driven: the communication mechanism uses epoll; when an event is not ready, it is put into a queue and processed once it is ready.
Low memory consumption: it handles large numbers of concurrent requests with little memory; under 30,000 concurrent connections, 10 processes consume only about 150 MB of memory.
Bandwidth savings: gzip compression is supported.
High stability.
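For the bandwidth-saving point, enabling gzip takes only a few lines in the `http` block. The values and MIME-type list below are an illustrative choice, not from this article:

```nginx
http {
    gzip on;
    gzip_min_length 1k;     # skip compressing very small responses
    gzip_comp_level 5;      # 1 (fastest) .. 9 (smallest output)
    gzip_types text/plain text/css application/json application/javascript;
}
```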
Fifth, how does Nginx handle a request?
When Nginx starts, it parses the configuration file to get the ports and IP addresses to listen on. The master process initializes the listening sockets and then calls the fork function to create new processes; the processes created by fork are called worker (child) processes, and the workers compete to accept new connections. At this point a client can initiate a connection to Nginx. When the client completes the three-way handshake and establishes a connection with Nginx, one worker process succeeds in accepting it and obtains the socket for the established connection. Nginx then wraps the connection, performs the read and write processing on it, and finally actively closes the connection.
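The master/worker model described above is controlled by a handful of directives; a sketch, with illustrative values:

```nginx
worker_processes 4;           # number of worker processes the master forks

events {
    use epoll;                # the event mechanism mentioned earlier (Linux)
    worker_connections 1024;  # max connections per worker (the pool size)
}
```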
Internally, Nginx manages connections through a connection pool, and each worker process has its own independent pool. The size of the pool is worker_connections. The pool does not hold real, pre-established connections; it is simply an array of ngx_connection_t structures of size worker_connections. Nginx keeps all idle ngx_connection_t entries in a linked list called free_connections: each time a connection is needed, an entry is taken from the idle list, and when the connection is finished with, the entry is put back onto the idle list.
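This array-plus-free-list scheme can be illustrated with a short sketch. This is illustrative Python, not nginx's C source; the class and method names are made up, with `Connection` standing in for `ngx_connection_t`:

```python
class Connection:
    """Stand-in for ngx_connection_t; 'fd' is the accepted socket."""
    __slots__ = ("fd", "next")

    def __init__(self):
        self.fd = None    # None while the slot is idle
        self.next = None  # links idle slots into free_connections

class ConnectionPool:
    def __init__(self, worker_connections):
        # The pool is just a fixed array of slots, sized worker_connections.
        self.slots = [Connection() for _ in range(worker_connections)]
        # Thread every slot onto the idle list, as a worker does at startup.
        self.free_connections = None
        for c in reversed(self.slots):
            c.next = self.free_connections
            self.free_connections = c

    def get_connection(self, fd):
        """Pop an idle slot; returns None once worker_connections is exhausted."""
        c = self.free_connections
        if c is None:
            return None           # worker is at capacity
        self.free_connections = c.next
        c.fd = fd
        return c

    def free_connection(self, c):
        """Push a slot back onto the idle list when the connection closes."""
        c.fd = None
        c.next = self.free_connections
        self.free_connections = c

pool = ConnectionPool(worker_connections=2)
a = pool.get_connection(fd=10)
b = pool.get_connection(fd=11)
overflow = pool.get_connection(fd=12)  # pool exhausted -> None
pool.free_connection(a)
c = pool.get_connection(fd=13)         # reuses the slot that 'a' released
```

Because every slot is preallocated, getting and releasing a connection is O(1) pointer manipulation with no allocation on the hot path, which is one reason the per-connection memory cost stays low.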
Using Nginx load balancing to build a high-performance .NET web application, Part One.