Nginx reverse proxy for load balancing configuration diagram

Source: Internet
Author: User
Tags: nginx, server, nginx reverse proxy


[Introduction] Load balancing is something that any large deployment needs to consider, and it also helps keep data and services available. This article introduces, with a configuration diagram, how Nginx acts as a reverse proxy to achieve load balancing; you can follow along and try it yourself.

Nginx is introduced here as a reverse proxy to achieve load balancing. Reverse-proxy mode means that the reverse proxy server accepts connection requests from the Internet, forwards them to servers on the internal network, and returns the results obtained from those servers to the client that requested the connection; to the outside world the reverse proxy appears to be the server itself. It allows hosts on the Internet to reach different intranet hosts through different domain names, protects the intranet hosts from attacks by external hosts, and provides load balancing and caching, which greatly reduces the burden on the web servers and improves access speed.

Here we implement a simple load-balancing setup with Nginx as the reverse proxy. As shown in the diagram, I built a small environment with three computers (Windows systems; Windows was used only because it was easy to set up, a Linux system is recommended). The environment is as follows:

1. Nginx server at 192.168.2.3, with Nginx installed as the reverse proxy server (port 80).
2. One computer with nginx+php at 192.168.2.2 (port 80), acting as web server 1.
3. One computer with apache+php at 192.168.2.8 (port 80), acting as web server 2.

(i) Load balancing for different requests. Nginx handles only static pages, while dynamic pages (PHP requests) are all passed to the back-end Apache for processing. In other words, the static pages and files of the site are placed in the Nginx directory, while dynamic pages and database access are left to the Apache server in the background. Here is an example: we access an HTML page and a PHP page separately, with test.html in the Nginx directory and test.php in the Apache directory. Modify the default nginx.conf (around lines 59~61), remove the leading # characters, and restart Nginx:
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

changed to:

    location ~ \.php$ {
        proxy_pass   http://127.0.0.1:8080;
    }
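Putting this first example together, here is a minimal sketch of the front-end server block (the listen, server_name, root and index values are taken from the default nginx.conf quoted later in this article; the back-end address follows the example above):

    # Front-end Nginx on 192.168.2.3 (sketch)
    server {
        listen       80;
        server_name  localhost;

        # Static pages such as test.html are served directly from Nginx's html directory
        location / {
            root   html;
            index  index.html index.htm;
        }

        # Dynamic pages such as test.php are proxied to the back-end Apache on port 8080
        location ~ \.php$ {
            proxy_pass   http://127.0.0.1:8080;
        }
    }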

Accessing the two pages separately shows that different requests are now handled by different servers.

So when we visit 192.168.2.3/index.html, the front-end Nginx responds by itself. When we visit 192.168.2.3/test.php (there is no such file in the Nginx directory), the setting location ~ \.php$ above (a regular-expression match for URIs ending in .php; see the official documentation at http://wiki.nginx.org/NginxHttpCoreModule for how location blocks are defined and matched) makes the Nginx server automatically pass the request to the Apache server on 192.168.2.3. That server parses test.php and returns the resulting page to Nginx, which then displays it.

An example of load balancing different requests across multiple servers: the static page test.html is answered directly by the front-end Nginx; the PHP page test.php is answered by Apache at 192.168.2.3:8080; and pages under the phpMyAdmin directory are answered by the server at 192.168.2.2:80.

Modify the server section of the original default nginx.conf (around lines 59~61):
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

modified to:

    location ^~ /phpmyadmin/ {
        proxy_pass http://192.168.2.2:80;
    }

    location ~ \.php$ {
        proxy_pass http://192.168.2.3:8080;
    }
The first block above, location ^~ /phpmyadmin/, uses the ^~ modifier, which means a plain prefix match is used instead of regular-expression matching: if the URL the client requests begins with http://192.168.2.3/phpmyadmin/ (the local Nginx directory has no phpmyadmin directory at all), Nginx automatically passes the request to the server at 192.168.2.2:80, which parses the pages under its phpMyAdmin directory and sends the result back to Nginx for display. The annotated sketch below shows how the two location blocks interact.
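A quick note on matching order (a minimal sketch; the example URIs are mine, not from the article): because ^~ tells Nginx to stop checking regular-expression locations once this prefix is the best match, a request such as /phpmyadmin/index.php goes to 192.168.2.2 even though it also ends in .php.

    # Matching order (illustrative URIs):
    #   /phpmyadmin/index.php  ->  ^~ /phpmyadmin/ wins, regex locations are skipped  ->  192.168.2.2:80
    #   /test.php              ->  ~ \.php$                                           ->  192.168.2.3:8080
    #   /test.html             ->  neither matches, served by the front-end Nginx itself
    location ^~ /phpmyadmin/ {
        proxy_pass http://192.168.2.2:80;
    }

    location ~ \.php$ {
        proxy_pass http://192.168.2.3:8080;
    }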
(ii) Load-balanced access to the same page. For the single page http://192.168.2.3/test.php we now implement load balancing across three servers (in practice the data on the servers must be kept synchronized). Reconfigure nginx.conf, starting again from the default nginx.conf. 1. First remove part of the configuration under server (roughly lines 36~46); delete the following lines:
    listen       80;
    server_name  localhost;

    #charset koi8-r;
    #access_log  logs/host.access.log  main;

    location / {
        root   html;
        index  index.html index.htm;
    }
2. Add the definition of the server cluster in the http module of the configuration file nginx.conf:
    upstream myCluster {
        server 192.168.2.3:8080;
        server 192.168.2.2:80;
        server 192.168.2.8:80;
    }
This means the server cluster contains 3 servers. 3. Then define the load balancing in the server module:
    location ~ \.php$ {
        proxy_pass http://myCluster;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
In proxy_pass http://myCluster; the name must match the name of the upstream cluster defined above. When the page http://192.168.2.3/test.php is accessed, the Nginx directory does not contain that file at all, but Nginx automatically passes the request to the server cluster defined as myCluster, and one of the three servers above processes it. In the upstream definition above, no weight is assigned to any server, so they share the load equally; if you want one server to take more of the load, you can add a weight:
    upstream myCluster {
        server 192.168.2.3:8080 weight=5;
        server 192.168.2.2:80;
        server 192.168.2.8:80;
    }
This gives the first server a 5/7 chance of receiving a request, and the second and third servers 1/7 each. In addition, parameters such as max_fails and fail_timeout can be defined for each server, as in the sketch below.
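A minimal sketch of an upstream block with these failure parameters (the particular values are only examples, not from the original article): max_fails sets how many failed attempts within fail_timeout mark a server as unavailable, and fail_timeout is also how long the server is then skipped.

    # Example values only: after 3 failures within 30 seconds,
    # the first server is considered unavailable for 30 seconds.
    upstream myCluster {
        server 192.168.2.3:8080 weight=5 max_fails=3 fail_timeout=30s;
        server 192.168.2.2:80;
        server 192.168.2.8:80;
    }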

In summary, we use the reverse-proxy capability of the Nginx server and place it in front of multiple Apache servers. Nginx handles only static page responses and the proxying of dynamic requests, while the back-end Apache servers process the dynamic pages passed from the front end and return them to Nginx. In a real deployment each back-end server keeps the same programs and data, so data synchronization must be considered.
