Reverse Proxy Server (forwarding)

Source: Internet
Author: User
Tags: website server, nginx server, nginx reverse proxy
Reverse proxy and nginx: an example

1. The concept of reverse proxy

With the reverse proxy method, a proxy server accepts connection requests from the Internet, forwards them to a server on the internal network, and returns the result obtained from that server to the client that made the request. To the outside world, the proxy server appears to be the server itself.

Generally, a (forward) proxy server is used only to proxy connection requests from the internal network to the external Internet: the client must be configured to use the proxy server and sends every HTTP request destined for a Web server to the proxy instead. Connection requests from the external network to the internal network are not supported, because the internal network is invisible from outside. When a proxy server can act on behalf of hosts on the external network and access the internal network, the service it provides is called a reverse proxy. In this case, the proxy server behaves like a Web server, and external clients can treat it as a standard Web server without any special configuration. The difference is that this server stores no real page data itself; all static pages and CGI programs live on the internal Web servers. An attack on the reverse proxy server therefore cannot damage the page content, which improves the security of the Web servers.

Reverse proxying is often described as Web server acceleration: a high-speed Web cache server is placed between a busy Web server and the external network to reduce the load on the real Web server. A reverse proxy provides this acceleration by acting as a proxy cache, not on behalf of browser users but on behalf of one or more specific Web servers, proxying access requests from the external network into the internal network.

The proxy server forces all access from the external network to the proxied server to pass through it. The reverse proxy server therefore receives client requests, obtains the content from the origin server, returns it to the user, and saves a copy locally. When the same content is requested again, the local cache can serve it directly, which reduces the pressure on the backend Web server and improves response time.
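To make the idea concrete, here is a minimal sketch of such a reverse proxy in nginx configuration; the public host name and the internal address 192.168.1.10 are placeholders assumed for this example:

    server {
        listen 80;
        server_name www.example.com;            # public name seen by clients (placeholder)

        location / {
            # Forward every request to the internal Web server (assumed address).
            proxy_pass http://192.168.1.10:80;
            # Pass the original host name and client address on to the backend.
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }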

2. Reverse Proxy Server and Content Server

Here the proxy server stands in for the content server. If your content server holds sensitive information that must remain secure, such as a database of credit card numbers, you can set up a proxy server outside the firewall to act on its behalf. When an external client tries to reach the content server, the request is sent to the proxy server instead. The actual content stays on the content server, safely inside the firewall, while the proxy server sits outside the firewall and looks like the content server to the client.

When a client sends a request to the site, the request goes to the proxy server. The proxy server then passes the request to the content server through a specific channel in the firewall, and the content server returns the result to the proxy server through the same channel. The proxy server sends the retrieved information to the client as if it were the content server itself. If the content server returns an error or redirect message, the proxy server intercepts it and rewrites any URLs listed in the headers before passing the message on to the client. This prevents external clients from ever seeing a redirection URL that points at the internal content server.
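In nginx terms, this rewriting of redirect URLs is what the proxy_redirect directive does; the addresses below are assumptions for illustration only:

    location / {
        # Assumed internal content server behind the firewall.
        proxy_pass http://10.0.0.5:80;
        # Rewrite the Location header of backend redirects so the
        # internal address is never revealed to external clients.
        proxy_redirect http://10.0.0.5:80/ http://www.example.com/;
    }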

In this way, the proxy server provides one more barrier between the secure database and potential malicious attacks. Unlike a setup in which an attacker who breaks in gains access to the entire database, even a successful attack is limited, at best, to the information involved in a single transaction. Unauthorized users cannot reach the real content server at all, because the firewall allows only the proxy server through.

3. Workflow of a Reverse Proxy Server

1) The user sends a request to the Web server via its domain name, and the DNS server resolves that domain name to the IP address of the reverse proxy server;

2) The reverse proxy server accepts the user's request;

3) The reverse proxy server looks for the requested content in its local cache; if it is found, the content is sent directly to the user;

4) If the requested content is not in the local cache, the reverse proxy server requests it from the origin server, sends it to the user, and, if the content is cacheable, saves a copy in its cache (a configuration sketch of this caching behaviour follows below).
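A minimal sketch of this caching workflow using nginx's proxy cache; the cache path, zone name, and backend address are placeholders assumed here:

    # Defined in the http block: a 10 MB key zone, at most 1 GB of cached data.
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=webcache:10m max_size=1g;

    server {
        listen 80;

        location / {
            proxy_cache webcache;              # serve repeat requests from this cache
            proxy_cache_valid 200 302 10m;     # cache successful responses for 10 minutes
            proxy_cache_valid 404 1m;          # cache "not found" responses briefly
            proxy_pass http://192.168.1.10:80; # origin server (assumed address)
        }
    }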

4. Benefits of reverse proxy

1) It solves the problem of the Web site's servers being directly visible from the outside;

2) It conserves scarce public IP addresses. All of an enterprise's Web sites can share a single IP address registered on the Internet; the servers themselves use private addresses and are exposed externally as virtual hosts (see the sketch after this list);

3) It protects the real Web server. The Web server is invisible from the outside; only the reverse proxy server is visible on the Internet, and it holds no real data, so the Web server's resources remain secure;

4) It accelerates Web site access and reduces the burden on the Web server. The reverse proxy can cache Web pages; if the requested content is in the cache, it is served directly by the proxy, reducing the load on the Web server and speeding up access for users.
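As a sketch of point 2, a single nginx front end on one public IP address can dispatch requests to several internal servers by host name; the names and private addresses below are assumptions:

    server {
        listen 80;
        server_name www.site-a.example.com;               # first site (placeholder)
        location / { proxy_pass http://192.168.1.11:80; } # private backend for site A
    }

    server {
        listen 80;
        server_name www.site-b.example.com;               # second site (placeholder)
        location / { proxy_pass http://192.168.1.12:80; } # private backend for site B
    }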

5. An example of implementing Load Balancing using nginx as a reverse proxy

As introduced earlier, nginx, a lightweight, high-performance server, can take on two main tasks:

It can act directly as an HTTP server (in place of Apache; for PHP it needs a FastCGI processor, which we will cover later);

It can also act as a reverse proxy server and implement load balancing (the example below shows how to do this with nginx).

nginx is a popular choice here because of its strength in handling concurrency. Of course, Apache's mod_proxy and mod_cache modules, used together, can also provide reverse proxying and load balancing for multiple application servers, but Apache is not as good at handling many concurrent connections.

1) Environment:

A. We use a local Windows machine and a virtual Linux system installed with VirtualBox. On the local Windows system we install nginx (listening on port 8080) and Apache (listening on port 80); on the virtual Linux system we install another Apache (listening on port 80). This gives us one nginx server as the front-end reverse proxy and two Apache servers as application servers (which can be regarded as a small server cluster ;-));

B. nginx acts as the reverse proxy server. It sits in front of the two Apache servers and serves as the entry point for user access. nginx handles only static pages; dynamic pages (PHP requests) are all handed to the two backend Apache servers for processing. In other words, the static pages and files of our Web site are placed in the nginx directory, while dynamic pages and database access remain on the backend Apache servers.

C. Two ways of load-balancing the server cluster are introduced below.

Assume that the front-end nginx (at 127.0.0.1:8080) serves a static page index.html;

and that there are two backend Apache servers (localhost:80 and 158.37.70.143:80): the document root of the first contains the phpMyAdmin folder and a test.php (whose test code is print "server1";), while the document root of the second contains only a test.php (whose test code is print "server2";).

2) Load Balancing for different requests:

A. For the simplest reverse proxy setup (nginx serves only static content, and all dynamic content is handed over to the backend Apache server), we modify nginx.conf as follows: location ~ \.php$ { proxy_pass http://158.37.70.143:80; }

This way, when a client accesses localhost:8080/index.html, the front-end nginx responds directly;

When the user accesses localhost:8080/test.php (a file that does not exist in the nginx directory at all), the request matches location ~ \.php$ (a regular expression matching any URI ending in .php; see how location blocks are defined and matched at http://wiki.nginx.org/nginxhttpcoremodule), so nginx automatically passes it to the Apache server at 158.37.70.143. That server parses test.php and returns the resulting HTML page to nginx, which delivers it to the user (with the memcached module, or with squid in front, nginx could also cache it); the output printed is server2.

The above is the simplest example of using nginx as the reverse proxy server;
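Placed in a complete server block, the configuration for this simplest setup might look like the sketch below; the root directory is an assumption, the addresses follow the example:

    server {
        listen 8080;                       # front-end nginx port in this example
        root  html;                        # assumed directory holding index.html

        location / {
            index index.html;              # static pages served by nginx itself
        }

        location ~ \.php$ {
            # Dynamic pages are handed to the backend Apache server.
            proxy_pass http://158.37.70.143:80;
        }
    }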

B. Now let's extend the example to use both of the Apache servers described above.

In the server section of nginx.conf we set:

location ^~ /phpMyAdmin/ { proxy_pass http://127.0.0.1:80; }
location ~ \.php$ { proxy_pass http://158.37.70.143:80; }

The first line, location ^~ /phpMyAdmin/, means that no regular expression is used for matching (^~ is a plain prefix match): if the client accesses a URL beginning with http://localhost:8080/phpMyAdmin/ (the phpMyAdmin directory does not exist under the local nginx root), nginx automatically passes the request to the Apache server at 127.0.0.1:80, which renders the pages under the phpMyAdmin directory and sends the result back to nginx for display;

If the client's URL is http://localhost:8080/test.php, the request is passed to the Apache server at 158.37.70.143:80.

We have thus achieved load balancing across different kinds of requests:

If the user accesses the static page index.html, the front-end nginx responds directly;

If the user accesses the test.php page, the Apache server at 158.37.70.143:80 responds;

If the user accesses a page under phpMyAdmin, the Apache server at 127.0.0.1:80 responds.
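Taken together, the server block for this per-request load balancing might look like the following sketch (root directory assumed, addresses as in the example):

    server {
        listen 8080;
        root  html;                          # assumed static file directory

        # Requests under /phpMyAdmin/ go to the first Apache server.
        location ^~ /phpMyAdmin/ {
            proxy_pass http://127.0.0.1:80;
        }

        # Any other .php request goes to the second Apache server.
        location ~ \.php$ {
            proxy_pass http://158.37.70.143:80;
        }
    }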

3) Load balancing access to the same page:

That is, when the same page http://localhost:8080/test.php is accessed, we balance the load between the two servers (in a real deployment the data on the two servers would have to be kept synchronized and consistent; here they print server1 and server2 respectively so we can tell them apart).

A. As before, nginx runs on the Windows host (localhost), listening on port 8080;

the two Apache servers are 127.0.0.1:80 (whose test.php page prints server1) and 158.37.70.143:80 (whose test.php page prints server2).

B. We then reconfigure nginx.conf:

First, in the http module of the nginx configuration file nginx.conf, we define the server cluster (here consisting of the two servers):

    upstream mycluster {
        server 127.0.0.1:80;
        server 158.37.70.143:80;
    }

Then, in the server module, we define the load-balanced location:

    location ~ \.php$ {
        proxy_pass http://mycluster;    # the name here matches the cluster defined above
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

Now, if you access the page http://localhost:8080/test.php, the file does not exist in the nginx directory, but the request is automatically passed to the cluster defined by upstream mycluster and handled by either 127.0.0.1:80 or 158.37.70.143:80. Because no weight is given after each server in the upstream definition above, requests are balanced evenly between the two. If you want one server to answer more requests, you can write, for example:

    upstream mycluster {
        server 127.0.0.1:80 weight=5;
        server 158.37.70.143:80;
    }

which gives a 5/6 probability of hitting the first server and a 1/6 probability of hitting the second. You can also set parameters such as max_fails and fail_timeout on each server.
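As a sketch of those failover parameters (the values here are arbitrary assumptions, not taken from the original example):

    upstream mycluster {
        # Mark a server as unavailable for 30 s after 3 failed attempts.
        server 127.0.0.1:80     weight=5 max_fails=3 fail_timeout=30s;
        server 158.37.70.143:80 max_fails=3 fail_timeout=30s;
    }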

 

In summary, we use nginx's reverse proxy capability and deploy it in front of multiple Apache servers.

nginx only serves static pages and proxies dynamic requests onward; the Apache servers in the background act as application servers that process the dynamic pages passed on by the front end and return the results to nginx.

With this architecture of nginx plus multiple Apache servers, we can achieve two types of load balancing:

1) In nginx you can proxy access to different content to different backend servers. In the example above, requests for the phpMyAdmin directory are proxied to the first server, and requests for test.php are proxied to the second;

2) In nginx you can distribute access to the same page evenly across different backend servers. In the example above, requests for the test.php page are proxied evenly to server1 or server2.

In a real application, server1 and server2 each hold a copy of the same application and data, so data synchronization between the two must be taken into account.

 

From: http://blog.csdn.net/xuqianghit/article/details/6576621
