CentOS 6.4 Deployment of Nginx reverse proxy, load balancer




I: Preface



Nginx is a high-performance Web server that supports reverse proxy, load balancing, page caching, URL rewriting, and read/write separation.






II: Environment Preparation



1. Operating system



CentOS 6.4 x86_64


 
# cat /proc/version 
Linux version 2.6.32-358.el6.x86_64 ([email protected]) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) ) #1 SMP Fri Feb 22 00:31:26 UTC 2013


2. Software version



Nginx version 1.2.8


# /data1/app/services/nginx/sbin/nginx -V
nginx version: nginx/1.2.8 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) 
TLS SNI support enabled
configure arguments: --user=www --group=www --prefix=/data1/app/services/nginx --with-poll_module --with-pcre --with-http_stub_status_module --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module --with-http_image_filter_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module
#


3. Nginx Installation



Note: Here I installed Nginx from an internal company RPM package, so the repository address cannot be disclosed. Alternatively, download the source from the official site (www.nginx.org) and compile and install it directly.



4. Experiment Machine



172.17.0.43 web.proxy.com



172.17.17.17 web01.test.com



172.17.92.132 web02.test.com

5. Time synchronization



Note: The time on all 3 machines must be synchronized. I set it manually here.


 
yum install ntp -y
ntpdate 202.102.2.101


6. Disable the firewall & SELinux



Note: Perform the same operation on all three machines.


# /etc/init.d/iptables stop
# setenforce 0
setenforce: SELinux is disabled





III: Nginx Reverse Proxy



As mentioned, I installed from a prepackaged RPM, so the paths were defined when the package was built. The installation path is


/data1/app/services/nginx


We need to prepare 2 Web servers for the reverse proxy.



1. Install httpd on WEB01


yum install httpd -y


2. Install httpd on WEB02


yum install httpd -y


3. Provide test pages. Here I am testing with PHP: one machine runs a DZ (Discuz) forum and the other a PHP test page. Of course, you can also use static pages, as follows:


 
[root@web01 ~]# echo "<h1>web01.test.com</h1>" > /var/www/html/index.html
[root@web02 ~]# echo "<h1>web02.test.com</h1>" > /var/www/html/index.html


4. Start the httpd service on both Apache test machines


# service httpd start


5. Test.



As noted above, web01 serves the DZ forum and web02 serves the phpinfo page.



Visiting web01 shows the forum page; visiting web02 shows the phpinfo page. (Screenshots omitted.)


With the prerequisite environment working, let me briefly explain reverse proxies and forward proxies.



Forward proxy: this is the "proxy" in the usual sense. It works like a stepping stone. In short, suppose I am a user who cannot reach a certain site directly, but I can reach a proxy server, and that proxy server can reach the site I want. So I first connect to the proxy server and tell it which site I want to visit; the proxy fetches the content on my behalf, possibly caching it, and forwards it to me. From the site's point of view, it only records the proxy server fetching the content; it often never sees the user's request at all, which also hides the user's information.



The conclusion is that a forward proxy is a server located between the client and the origin server.
To get content from the origin server, the client sends a request to the proxy and names the target (the origin server).
The proxy then forwards the request to the origin server and returns the obtained content to the client.
The client must be specially configured to use a forward proxy.



Reverse proxy: for example, a user wants to access www.test.com/redmine, but www.test.com has no Redmine page of its own. It quietly retrieves the page from another server and returns it to the user as if it were its own content; the user never notices. This is an example of a reverse proxy.



The conclusion is that a reverse proxy is just the opposite: to the client it looks like the origin server, and the client needs no special configuration. The client sends a normal request for content in the reverse proxy's namespace (name-space); the reverse proxy then decides where (to which origin server) to forward the request and returns the obtained content to the client as if it were its own.



The difference between the two



From the use to say:



A typical use of a forward proxy is to give LAN clients inside a firewall access to the Internet; a forward proxy can also use its cache to reduce network usage. A typical use of a reverse proxy is to expose servers behind a firewall to Internet users. A reverse proxy can also load-balance across multiple backend servers, or provide caching for a slower backend server. In addition, a reverse proxy enables advanced URL policies and management techniques, so that pages living on different web server systems can coexist in the same URL space.



From a security standpoint:



A forward proxy allows a client to access any web site through it while hiding the client itself, so you must take security measures to ensure that only authorized clients are served. A reverse proxy is transparent to the outside; the visitor does not know he is talking to a proxy.



6. Nginx Proxy Module



Note: The proxy module has many directives; here I only explain the important proxy_pass. To learn more proxy directives, please refer to the official documentation.



This module forwards requests to other servers. HTTP/1.0 cannot use keepalive, so the backend server creates and tears down a connection for every request. Nginx speaks HTTP/1.1 to the browser and HTTP/1.0 to the backend server, so the browser can still benefit from keepalive.
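To make the forwarding idea concrete, here is a minimal reverse-proxy sketch in Python (illustrative only, not how nginx is implemented): the proxy accepts an ordinary client request and forwards it to a fixed backend, the way proxy_pass forwards to http://172.17.17.17:80. The ports 18080/18081 and the handler names are arbitrary choices for this demo.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

BACKEND_ADDR = ("127.0.0.1", 18081)  # stands in for the real web server
PROXY_ADDR = ("127.0.0.1", 18080)    # stands in for nginx

class Backend(BaseHTTPRequestHandler):
    """Plays the role of web01: serves a fixed page."""
    def do_GET(self):
        body = b"<h1>web01.test.com</h1>"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass  # keep the demo quiet

class Proxy(BaseHTTPRequestHandler):
    """Plays the role of nginx: forwards each GET to the backend."""
    def do_GET(self):
        # Forward the client's request path to the backend, like proxy_pass.
        url = "http://%s:%d%s" % (BACKEND_ADDR[0], BACKEND_ADDR[1], self.path)
        with urllib.request.urlopen(url) as resp:
            body = resp.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass

def serve(addr, handler):
    srv = HTTPServer(addr, handler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv

serve(BACKEND_ADDR, Backend)
serve(PROXY_ADDR, Proxy)

# The client talks only to the proxy, yet receives the backend's page.
with urllib.request.urlopen("http://127.0.0.1:18080/") as r:
    print(r.read().decode())
```

The client never contacts port 18081 directly; to it, the proxy *is* the site, which is exactly the "transparent to the outside" property described above.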


7. Configuring the HTTP reverse proxy


cd /data1/app/services/nginx/conf
cp nginx.conf nginx.conf.bak20160708
vim nginx.conf

location / {
    proxy_pass      http://172.17.17.17:80;
}


Directive description: proxy_pass



syntax: proxy_pass URL
default value: none
context: location, or an if block inside location
This directive sets the address of the proxied server and the URI to map to; the address can be given as a hostname or as an IP address plus a port number.



For example: proxy_pass http://localhost:8000/uri/;



8. Reload the configuration file


# service nginx reload


9. Test: when we access the proxy server's IP, the request is forwarded to 172.17.17.17. (Screenshot omitted.)









IV: Nginx Load Balancing



Note: as you can see, because our site was in its early stage, nginx proxied only one backend server. But as the site grew in fame and traffic, a single server could no longer keep up, so we added more servers.



With that many servers, how do we configure the proxy? Here we take two servers as a case to give everyone a demonstration.



1. upstream load-balancing module description



Case:



The following sets up the list of servers used for load balancing:





 
upstream webserver {
    ip_hash;
    server 172.17.17.17:80;
    server 172.17.17.18:80 down;
    server 172.17.17.19:8009 max_fails=3 fail_timeout=30s;
    server 172.17.17.20:8080;
}

server {
    location / {
        proxy_pass http://webserver;
    }
}


Upstream is Nginx's HTTP Upstream module, which uses a simple scheduling algorithm to balance load from client IPs across backend servers. In the settings above, the upstream directive specifies a load-balancer name, webserver. The name can be chosen arbitrarily and is referenced later wherever it is needed.



2. upstream supported load-balancing algorithms






The Nginx load-balancing module currently supports 4 scheduling algorithms, described below; the latter two are third-party algorithms.


    • Polling (default). Requests are assigned one by one to different backend servers in chronological order; if a backend server goes down, it is automatically removed so that user access is unaffected. weight specifies a polling weight: the higher the weight, the higher the probability of being assigned requests. It is mainly used when backend server performance is uneven.

    • ip_hash. Each request is allocated according to the hash of the client IP, so visitors from the same IP always reach the same backend server, which effectively solves the session-sharing problem of dynamic web pages.

    • fair. A smarter load-balancing algorithm than the two above: it allocates requests according to page size and load time, i.e. according to the backend server's response time, with shorter response times given priority. Nginx itself does not support fair; to use this scheduling algorithm you must download the Nginx upstream_fair module.

    • url_hash. Allocates requests according to the hash of the accessed URL, directing each URL to the same backend server, which can further improve the efficiency of backend cache servers. Nginx itself does not support url_hash; to use this scheduling algorithm, you must install the Nginx hash package.
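The first two scheduling ideas above can be sketched in a few lines of Python (a toy model, illustrative only; nginx's real implementations differ, e.g. its ip_hash hashes only the first three octets of an IPv4 address, and here MD5 is just a stand-in hash):

```python
import hashlib
from itertools import cycle

backends = ["172.17.17.17", "172.17.92.132"]

# Polling (round-robin): each request goes to the next server in turn.
rr = cycle(backends)
first_four = [next(rr) for _ in range(4)]
print(first_four)  # alternates between the two backends

# ip_hash: hash the client IP so the same client always lands on the
# same backend, keeping its session on one server.
def pick_by_ip(client_ip, servers):
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

print(pick_by_ip("172.17.92.50", backends))
```

Whatever backend pick_by_ip returns for a given IP, it returns the same one on every call, which is exactly the session-stickiness property described above.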


3. upstream supported status parameters



In the HTTP upstream module, you can specify the IP address and port of the back-end server through the server directives, and you can also set the state of each back-end server in the load-balancing schedule. The commonly used statuses are:


    • Down, which indicates that the current server is temporarily not participating in load balancing.

    • Backup, reserved for the standby machine. When all other non-backup machines fail or are busy, the backup machine is requested, so the pressure on this machine is the lightest.

    • max_fails, the number of failed requests allowed; the default is 1. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.

    • Fail_timeout, the time to suspend service after a max_fails failure. Max_fails can be used with fail_timeout.


Note that when the scheduling algorithm is ip_hash, the backend server states weight and backup cannot be used in load-balancing scheduling.
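The max_fails / fail_timeout bookkeeping described above can be modeled with a small toy class (illustrative only; nginx's internal peer state machine is more involved, and the Peer class and its method names are made up for this sketch):

```python
class Peer:
    """Toy model of one upstream server's failure accounting."""
    def __init__(self, addr, max_fails=1, fail_timeout=10):
        self.addr = addr
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.fails = 0
        self.down_until = 0.0  # timestamp until which the peer is skipped

    def available(self, now):
        return now >= self.down_until

    def report_failure(self, now):
        self.fails += 1
        if self.fails >= self.max_fails:
            # Too many failures: suspend the peer for fail_timeout seconds.
            self.down_until = now + self.fail_timeout
            self.fails = 0

peer = Peer("172.17.17.17", max_fails=2, fail_timeout=30)
peer.report_failure(now=0)   # first failure: still available
peer.report_failure(now=1)   # second failure: suspended until t=31
print(peer.available(now=5), peer.available(now=31))  # False True
```

Matching the description above: the peer is skipped only after max_fails failures, and it rejoins scheduling once fail_timeout seconds have passed.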






4. Configure Nginx load balancing


# pwd
/data1/app/services/nginx/conf
# vim nginx.conf

upstream webserver {
     ip_hash;
     server 172.17.17.17 weight=1 max_fails=2 fail_timeout=2;
     server 172.17.92.132 weight=1 max_fails=2 fail_timeout=2;
   # server 127.0.0.1:8080 backup;
}

server {
    listen 80;
    server_name  localhost;

    #charset koi8-r;

    #access_log  logs/host.access.log  main;

    location / {
        root   html;
        index  index.html index.php index.htm;
        proxy_pass http://webserver;
        proxy_set_header X-Real-IP $remote_addr;
    }
}


5. Reload the configuration file.


/data1/app/services/nginx/sbin/nginx -s reload


6. Testing (screenshots omitted)









But think about it: if, unfortunately, none of the servers can provide service, the user will see an error page, which degrades the user experience. Can we configure something like the sorry_server used with LVS? The answer is yes, but here it is not configured as sorry_server; instead, we configure backup.






7. Configure the backup server.


  #gzip on;

server {
    listen 8080;
    server_name localhost;
    root /data1/www/errorpage;
    index index.html;
}

upstream webserver {
     #ip_hash;
     server 172.17.17.17 weight=1 max_fails=2 fail_timeout=2;
     server 172.17.92.132 weight=1 max_fails=2 fail_timeout=2;
     server 127.0.0.1:8080 backup;
}

server {
    listen 80;
    server_name  localhost;

    #charset koi8-r;

    #access_log  logs/host.access.log  main;

    location / {
        root   html;
        index  index.html index.php index.htm;
        proxy_pass http://webserver;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Create the error page:

mkdir -pv /data1/www/errorpage
echo "Sorry....." > /data1/www/errorpage/index.html


8. Configure ip_hash load balancing


 
server {
    listen 8080;
    server_name localhost;
    root /data1/www/errorpage;
    index index.html;
}

upstream webserver {
     ip_hash;
     server 172.17.17.17 weight=1 max_fails=2 fail_timeout=2;
     server 172.17.92.132 weight=1 max_fails=2 fail_timeout=2;
     #server 127.0.0.1:8080 backup;
}

server {
    listen 80;
    server_name  localhost;

    #charset koi8-r;

    #access_log  logs/host.access.log  main;

    location / {
        root   html;
        index  index.html index.php index.htm;
        proxy_pass http://webserver;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
    • ip_hash: each request is allocated according to the hash of the client IP, so visitors from the same IP always reach the same backend server, which effectively solves the session-sharing problem of dynamic web pages. (Commonly used on e-commerce sites.)

    • Note that when the scheduling algorithm is ip_hash, the backend servers cannot include a backup. (One might ask why. Think about it: if the load balancer assigned you to the backup server, could you access the page? No, so a backup server cannot be configured here.)

    • Test: count the number of access connections on web02.

      # netstat -antp | grep 80 | wc -l
      62
      # netstat -antp | grep 80 | wc -l
      172




