Using Nginx to build a reverse proxy server in Linux

Source: Internet
Author: User
Tags: epoll, time limit, wrapper, nginx, reverse proxy

1. Reverse Proxy: the Web Server's "Broker"

1.1 Reverse Proxy First Impressions

A reverse proxy is a proxy server that accepts connection requests from the Internet, forwards them to a server on the internal network, and then returns the server's result to the client on the Internet that requested the connection. To that client, the proxy server itself appears to be the origin server.
As the figure above shows, the reverse proxy server sits in the Web server room: it receives HTTP requests on behalf of the Web servers and forwards them.

1.2 What a Reverse Proxy Does

• Protects Web site security: any request from the Internet must first pass through the proxy server;

• Speeds up Web requests through caching: some static resources of the real Web servers can be cached on the proxy, reducing the load on the real Web servers;

• Implements load balancing: acting as a load-balancing server, it distributes requests evenly and balances the load on each server in the cluster.
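The three roles above show up in even the smallest configuration. As a minimal sketch (the domain name and the internal address here are illustrative, not taken from this article), an Nginx reverse proxy can be as simple as:

```nginx
# Minimal reverse-proxy sketch: accept requests from the Internet on
# port 80 and forward them to a Web server on the internal network.
server {
    listen       80;
    server_name  example.com;                # illustrative domain name

    location / {
        proxy_pass http://192.168.0.10:8080; # illustrative internal Web server
    }
}
```

To the outside client, only the proxy on port 80 is visible; the internal server's address never appears.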
2. First Look at Nginx: Simple but Not Ordinary
2.1 What Is Nginx?
Nginx is a lightweight Web server, reverse proxy, and e-mail proxy server. Its source code is released under a BSD-like license, and it is known for its stability, rich feature set, simple example configuration files, and low system-resource consumption.
Background: Nginx (pronounced "engine X") was developed by the Russian programmer Igor Sysoev. It was originally used by Rambler (Russian: Рамблер), a large Russian portal and search engine. The software is distributed under a BSD-like license and runs on operating systems such as UNIX, GNU/Linux, BSD, Mac OS X, Solaris, and Microsoft Windows.
When it comes to Web servers, Apache and IIS are the two giants, but a faster and more flexible rival, Nginx, is catching up.

2.2 Where Nginx Is Used

Nginx has been running for three years at Rambler Media, Russia's largest portal, and more than 20% of Russia's virtual hosting platforms use Nginx as a reverse proxy server.

In China, Web sites such as Taobao, Sina Blog, Sina Podcast, NetEase News, Six Rooms, Discuz!, Shuimu Community, Douban, Yupoo, and Thunder Online, among others, already use Nginx as a Web server or reverse proxy server.

2.3 Core Features of Nginx

(1) Cross-platform: Nginx can be compiled on most Unix-like OSes, and a Windows port also exists;

(2) Extremely simple configuration: very easy to get started with; the configuration file reads as naturally as program code;
(3) Non-blocking, high-concurrency connections: data copying is done with non-blocking disk I/O in its first phase. Officially, Nginx supports up to 50,000 concurrent connections, and in real production environments it runs at 20,000–30,000 concurrent connections. (This benefits from Nginx's use of the epoll event model.)
PS: for a Web server, first look at the basic flow of a request: establish the connection, receive data, send data. At the bottom of the system, these steps are read and write events.
① If blocking calls are used, then when a read/write event is not ready the call can do nothing but wait; only once the event is ready can the read or write proceed, so the request is delayed.
② Since blocking on an unready event wastes time, non-blocking calls can be used instead. Non-blocking means the call returns immediately, telling you the event is not ready yet, so come back later. After a while you check the event again, and again, until it is ready; in between you can do other work. You are never blocked, but you must keep coming back to poll the event's status, and that polling is not cheap.
(4) Event-driven: the communication mechanism uses the epoll model, supporting a larger number of concurrent connections.
① A non-blocking call discovers readiness only by repeatedly polling the event's state, which is costly, so asynchronous non-blocking event-handling mechanisms exist. Such a mechanism lets you monitor many events at once: the monitoring call itself blocks, but you can set a timeout, and within that timeout it returns as soon as any event is ready. This solves the problems of both the blocking and the plain non-blocking calls above.
② Take the epoll model as an example: when an event is not ready, it is placed in epoll; when an event is ready, it is handled; and if handling returns EAGAIN, it goes back into epoll. Thus an event is dealt with as soon as it becomes ready, and we wait inside epoll only when no event is ready at all. In this way many concurrent requests can be in flight at once. Of course, "concurrent requests" here means unfinished requests: there is only one thread, so only one request is actually processed at any instant; the thread just keeps switching between requests, voluntarily yielding whenever an asynchronous event is not ready. Switching costs essentially nothing here; you can think of it as looping over a set of ready events, which is in fact what happens.
③ This style of event handling has great advantages over multithreading: no threads need to be created, each request consumes very little memory, there is no context switching, and handling an event is very lightweight, so high concurrency causes no unnecessary waste of resources (context switches). With an IIS server, by contrast, every request monopolizes a worker thread; when the concurrency reaches the thousands, thousands of threads are processing requests at once. That is a big challenge for the operating system: threads have a large memory footprint and thread context switches have a high CPU overhead, so performance naturally cannot keep up, and the result is severe degradation under high concurrency.
Summary: asynchronous, non-blocking event handling lets a single Nginx process handle many ready events in a loop, and this is what makes Nginx both highly concurrent and lightweight.
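The epoll pattern described in ② can be sketched in a few lines. This is a toy illustration, not Nginx's source: it uses Python's select.epoll (Linux only), with a pipe standing in for a network connection:

```python
import os
import select

r, w = os.pipe()                 # file-descriptor pair acting as an event source
ep = select.epoll()
ep.register(r, select.EPOLLIN)   # "put the event into epoll" until it is ready

# Nothing has been written yet: a zero-timeout poll finds no ready events.
assert ep.poll(timeout=0) == []

os.write(w, b"hello")            # the event becomes ready...
events = ep.poll(timeout=1)      # ...and epoll reports it immediately
assert events[0][0] == r and events[0][1] & select.EPOLLIN

data = os.read(r, 16)            # handle the ready event
print(data)                      # b'hello'

ep.unregister(r)
ep.close()
os.close(r)
os.close(w)
```

The loop in a real server would keep calling poll(), handling whichever descriptors come back ready and re-registering any that return EAGAIN.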
(5) Master/worker structure: one master process spawns one or more worker processes.
PS: the core idea of the master-worker design pattern is to parallelize what was logically serial: split the logic into many independent modules that execute in parallel. It has two main components. The master divides the work into independent parts and maintains a worker queue, dispatching each part to one of several workers to execute in parallel; the workers carry out the actual computation and return their results to the master.
Q: what does Nginx gain from adopting this process model?
A: with separate processes, the workers do not affect one another. After one process exits, the others keep working and service is not interrupted, while the master process quickly restarts a new worker. Of course, an abnormal worker exit usually means a bug in the program, and such an exit causes all requests on that worker to fail, but it does not affect all requests, so the risk is reduced.
(6) Low memory consumption: even when handling heavy concurrency, memory consumption is very small. Under 30,000 concurrent connections, 10 Nginx processes consume only about 150 MB of memory (15 MB × 10 = 150 MB).
(7) Built-in health checks: if a back-end Web server behind the Nginx proxy goes down, front-end access is not affected.
(8) Bandwidth savings: GZIP compression is supported, and browser local-cache headers can be added.
(9) High stability: used as a reverse proxy, the probability of downtime is negligible.

3. Hands-On: Building a Load-Balanced Web Server Cluster with Nginx + IIS

Here we work mainly in a Windows environment: deploy the same Web site to different servers in IIS, then provide unified access through a single Nginx reverse proxy server, achieving the most simplified reverse-proxy and load-balancing setup. Limited by experimental conditions, however, we simulate the reverse proxy and the IIS cluster on a single computer. The concrete experimental environment is shown in the image below: the Nginx service and the Web sites are all deployed on one computer. Nginx listens on HTTP port 80, while the Web sites are deployed on the same IIS server under different port numbers (8050 and 8060). When a user accesses localhost, Nginx, as the reverse proxy, distributes the requests evenly to the two differently-ported Web applications in IIS. Although the experimental environment is simple and limited, it is enough to achieve and demonstrate a simple load-balancing effect.

3.1 Prepare a web site to deploy to the IIS server cluster

(1) Create a new Web application in VS. To show the effect on a single computer, copy the Web program so there are two, and modify each copy's Default.aspx so that the two home pages display different text. Here Web1 displays "The First Web" and Web2 displays "The Second Web".

(2) Run them in the debugger to check the two sites' output:
① Web1's display:
② Web2's display:

③ Deploy both to IIS, assigning different port numbers: here I chose Web1: 8050 and Web2: 8060.

(3) Summary: in a real environment, building a Web application server cluster means deploying the same Web application to each of the multiple Web servers in the cluster.

3.2 Download Nginx and deploy it to the server as an auto-starting Windows service

(1) Download the Windows version of Nginx from the official site (this experiment uses nginx/Windows-1.4.7; a download address is at the bottom of this article);

(2) Extract it to any directory on disk; for example, I extracted it to D:\servers\nginx-1.4.7;
(3) Start, stop, and reload the service from cmd: start: start nginx, stop: nginx -s stop, reload configuration: nginx -s reload;
(4) Starting the Nginx service from cmd every time does not meet real-world needs, so we register it as a Windows service and set it to start automatically. Here we use a handy little program, Windows Service Wrapper, to register nginx.exe as a Windows service. The steps are as follows:
① Download the latest version of Windows Service Wrapper; my download is named "winsw-1.8-bin.exe" (a download address is at the bottom of this article). Rename it however you like (for example, "nginx-service.exe"; you may also leave it unrenamed);
② Copy the renamed nginx-service.exe into the Nginx installation directory (for me, "D:\servers\nginx-1.4.7");
③ In the same directory, create the Windows Service Wrapper XML configuration file. Its name must match the executable name chosen in step ① (for me, "nginx-service.xml"; if you did not rename, it should be "winsw-1.8-bin.xml"). Its contents are as follows:
<?xml version="1.0" encoding="UTF-8"?>
<service>
  <id>nginx</id>
  <name>Nginx service</name>
  <description>High performance Nginx service</description>
  <executable>D:\servers\nginx-1.4.7\nginx.exe</executable>
  <logpath>D:\servers\nginx-1.4.7</logpath>
  <startargument>-p D:\servers\nginx-1.4.7</startargument>
  <stopargument>-p D:\servers\nginx-1.4.7 -s stop</stopargument>
</service>
④ At the command line, register it as a Windows service by executing: nginx-service.exe install
⑤ You can then see the Nginx service in the Windows services list, where we can set its startup type to Automatic:
(5) Summary: in a Windows environment, Windows services that are provided externally typically have their startup type set to Automatic.

3.3 Modify the Nginx core configuration file nginx.conf

(1) Number of worker processes and maximum connections per process:

• Number of Nginx worker processes: the recommended setting is the total number of CPU cores;
• Maximum number of connections per process: the server's maximum connection count = connections per process × number of processes;
(2) Basic configuration of Nginx:
• The listening port is generally the HTTP port, 80;
• Multiple domain names may be configured, separated by spaces, via the server_name directive;
(3) Load balance list basic configuration:
• location / {}: load-balances all requests; if we wanted to load-balance only files with the .aspx suffix: location ~ .*\.aspx$ {};
• proxy_pass: forwards the request to a custom server list; here we forward requests to the load-balancing server list defined above;
• In the load-balancing server list configuration, weight is the weight of each server and can be set according to the machine's hardware. If one server's hardware is very good and can handle more requests, it can be given a relatively high weight, say weight=2, while a server with weaker hardware gets weight=1. The higher the weight value, the greater the probability of being assigned a request;
(4) Summary: the above is just about all of the most basic Nginx configuration; of course, it is only the most basic. (For the full details, download nginx-1.4.7 at the bottom of this article and inspect its configuration files.)
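Putting items (1)–(3) together, a minimal nginx.conf for this experiment might look like the sketch below. The upstream name web_cluster is an illustrative choice (the article's own identifier was not preserved); ports 8050 and 8060 are the ones assigned to the two IIS sites above:

```nginx
worker_processes  2;           # recommended: equal to the number of CPU cores

events {
    worker_connections  1024;  # max connections per worker process;
}                              # server max = worker_connections * worker_processes

http {
    # Load-balancing server list; "web_cluster" is an illustrative name.
    upstream web_cluster {
        server 127.0.0.1:8050 weight=2;  # better hardware: higher weight
        server 127.0.0.1:8060 weight=1;  # weaker hardware: lower weight
    }

    server {
        listen       80;          # generally the HTTP port
        server_name  localhost;

        location / {              # or, for .aspx only: location ~ .*\.aspx$
            proxy_pass http://web_cluster;
        }
    }
}
```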

3.4 Add nginx cache configuration for static files

To improve response speed and reduce the load on the real servers, we can cache static resources on the reverse proxy server; this is one of the important roles of a reverse proxy.

(1) Cache the static image files
• root /nginx-1.4.7/staticresources/image: for the jpg/png requests matched by this configuration, look up the file under the /nginx-1.4.7/staticresources/image folder and return it;
• expires 7d: the expiry period is 7 days. If the static files are rarely updated, the expiry time can be set longer; if they are updated frequently, set it shorter;
TIPS: the style and script cache configurations below work the same way; only the folder location differs, so they are not repeated here.
(2) Cache the static style (CSS) files
(3) Cache the static script (JS) files
(4) Create the static-resource folders inside the Nginx service folder and copy in the static files to be cached: here I copied in the Web program's image, CSS, and JS files;
(5) Summary: with cache settings configured for static files, requests for those files can be answered directly by the reverse proxy server instead of being forwarded to a specific Web server, which increases response speed and reduces the load on the real Web servers.
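The location blocks described above (shown as images in the original) might look like the following sketch; the css and js folder names are illustrative assumptions, since only the image path appears in the text:

```nginx
# Serve image requests directly from the proxy instead of forwarding them.
location ~ .*\.(jpg|png)$ {
    root     /nginx-1.4.7/staticresources/image;
    expires  7d;   # 7-day expiry; tune to how often the files change
}

# Style and script caches differ only in folder location (names assumed).
location ~ .*\.css$ {
    root     /nginx-1.4.7/staticresources/css;
    expires  7d;
}

location ~ .*\.js$ {
    root     /nginx-1.4.7/staticresources/js;
    expires  7d;
}
```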

3.5 Simple test Nginx reverse proxy to achieve load balancing effect

(1) On the first visit to http://localhost/Default.aspx, the request is handled by one of the two Web sites, which returns its result:

(2) On the second visit to http://localhost/Default.aspx, the request is handled by the other Web site:
(3) A screenshot after visiting http://localhost/Default.aspx several more times:

Learning Summary

In this article, with the help of the remarkable tool Nginx, we built a simple reverse proxy service in a Windows environment and simulated the load-balancing effect of an IIS server cluster. From this demo we can get a basic feel for what a reverse proxy does for us, and for how load balancing works. In most applications today, however, Nginx is deployed on Linux servers, with further optimized configurations for load balancing; what we did here is only the smallest taste of it (we merely modified a configuration file). Still, tall buildings rise from the ground, and this small early experience will help lay a little foundation for deeper study later.

Seeing a gift a friend had sent in my QQ Zone, I suddenly realized that today is actually my birthday (by the Gregorian calendar). Well, happy birthday to me; I hope that in the days ahead I can do more hands-on practice and share more content. And if you think this article is decent, please give it a like, and don't be stingy with your left mouse button.
