Load Balancing (Nginx) Usage Tutorial

Source: Internet
Author: User
Tags: tomcat, server, nginx, load balancing

Nginx Example

//************ Description *************//

This article walks through Nginx load balancing across multiple Tomcat servers.

//************ Preparation Work *************//

Nginx installation package Download: http://nginx.org/en/download.html

Nginx online manual: http://shouce.jb51.net/nginx/index.html

tomcat1: port 8081 (installed and started locally)

tomcat2: port 8082 (installed and started locally)

tomcat3: port 8080 (installed and started on the LAN)

//************ Nginx Installation and Startup *************//

This article uses the relatively stable nginx-1.11.1 release.

①: Extract the archive to a suitable path (this article extracts it to the E: drive).

②: There are two main ways to start, as follows:

1. Double-click nginx.exe directly (note: it is normal for the console window to flash open and then close)

(Screenshot: starting Nginx by double-clicking nginx.exe)

2. Open a command prompt, change to the Nginx directory, and run start nginx.exe or start nginx (the window flash also occurs here)

(Screenshot: starting Nginx from the command prompt)

Either of the two methods above starts the Nginx service. To verify, check the process list for an nginx process (e.g. nginx.exe *32), or enter localhost in a browser and confirm the page looks like the following:

(Screenshot: the default Nginx welcome page)

If the browser shows the page above, Nginx has started successfully.

Nginx Common Command *************//

start nginx: start the Nginx service

nginx -s stop: stop the Nginx service

nginx -s reload: reload the configuration without interrupting service; typically used after modifying the configuration file

//************ Tomcat Service Preparation *************//

(Screenshot: the three Tomcat servers running)

All three Tomcat servers are started (two locally, one on the LAN).

//************ Nginx Configuration File Modification *************//

nginx.conf (path: E:\nginx-1.11.1\conf\nginx.conf)

########### Load balancing configuration ###########
upstream localhost {
    server 192.168.1.103:8080;
    server 192.168.1.103:8081;
    server 192.168.1.154:8080;
}

location / {
    root   html;
    index  index.html index.htm;
    proxy_pass http://localhost;   # "localhost" must match the upstream group name above
}

Load balancing is set up simply by configuring the upstream block and the proxy_pass directive; the rest of the file is left unchanged.

Then run nginx -s reload from the command prompt, open the proxy_pass address http://localhost/ in a browser, and refresh the page repeatedly to watch the requests switch between servers. The effect looks like this:

(Screenshot: response from the local server on port 8080)

(Screenshot: response from the local server on port 8081)

(Screenshot: response from the LAN server on port 8080)

//************ Four Nginx Load-Balancing Configuration Examples *************//

1. Round Robin

Round robin distributes clients' web requests to the back-end servers in turn, following the order in which they appear in the Nginx configuration file.

http {
    upstream localhost {
        server 192.168.1.103:8080;
        server 192.168.1.103:8081;
        server 192.168.1.154:8080;
    }
    ...
    server {
        listen 80;
        ...
        location / {
            proxy_pass http://localhost;
        }
    }
}

The example above inserts a single upstream group, named localhost, into the http section; that name is referenced again later in the proxy_pass directive.
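As a quick illustration of the scheduling itself, round-robin selection can be simulated in a few lines of Python (a sketch, not nginx's actual implementation; the addresses are the ones from the upstream block above):

```python
from itertools import cycle

# Back-end servers, in the same order as the upstream block above.
servers = ["192.168.1.103:8080", "192.168.1.103:8081", "192.168.1.154:8080"]

def round_robin(backends):
    """Yield back-ends in turn, repeating in configuration-file order."""
    return cycle(backends)

picker = round_robin(servers)
first_six = [next(picker) for _ in range(6)]
# With three back-ends, each server handles every third request, in listed order.
```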

2. Least Connections

Each web request is forwarded to the server that currently has the fewest active connections.
An example configuration follows:

http {
    upstream localhost {
        least_conn;
        server 192.168.1.103:8080;
        server 192.168.1.103:8081;
        server 192.168.1.154:8080;
    }
    ...
    server {
        listen 80;
        ...
        location / {
            proxy_pass http://localhost;
        }
    }
}

This example simply adds the least_conn directive to the upstream block; everything else matches the round-robin configuration.
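Conceptually, least_conn just tracks active connections per back-end and picks the minimum. A simplified Python sketch (the connection counts are hypothetical, chosen only for illustration):

```python
# Hypothetical active-connection counts for the three back-ends.
connections = {
    "192.168.1.103:8080": 4,
    "192.168.1.103:8081": 1,
    "192.168.1.154:8080": 2,
}

def least_conn(conns):
    """Return the back-end with the fewest active connections (simplified least_conn)."""
    return min(conns, key=conns.get)

target = least_conn(connections)   # the 8081 back-end, with only 1 connection
connections[target] += 1           # the new request now occupies a connection
```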

3. IP Address Hash

In both of the preceding schemes, consecutive web requests from the same client may be distributed to different back-end servers, which complicates anything involving sessions; a common workaround is database-backed session persistence. To avoid this, you can use a load-balancing scheme based on an IP-address hash: consecutive requests from the same client are always distributed to the same server.
An example configuration follows:

http {
    upstream localhost {
        ip_hash;
        server 192.168.1.103:8080;
        server 192.168.1.103:8081;
        server 192.168.1.154:8080;
    }
    ...
    server {
        listen 80;
        ...
        location / {
            proxy_pass http://localhost;
        }
    }
}

This example simply adds the ip_hash directive to the upstream block; everything else matches the round-robin configuration.
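The idea behind ip_hash can be sketched as hashing the client address onto a fixed back-end slot (nginx actually hashes only the first three octets of an IPv4 address; md5 here is purely for illustration):

```python
import hashlib

servers = ["192.168.1.103:8080", "192.168.1.103:8081", "192.168.1.154:8080"]

def ip_hash(client_ip, backends):
    """Map a client IP deterministically to one back-end (illustrative hash)."""
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

# The same client always lands on the same back-end,
# so its session state stays on a single server.
first = ip_hash("203.0.113.7", servers)
second = ip_hash("203.0.113.7", servers)
```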

4. Weight-Based Load Balancing

Weight-based (weighted) load balancing lets us configure Nginx to distribute more requests to higher-capacity back-end servers and relatively few to lower-capacity ones.
An example configuration follows:

http {
    upstream localhost {
        server 192.168.1.103:8080 weight=2;
        server 192.168.1.103:8081 weight=5;
        server 192.168.1.154:8080;
    }
    ...
    server {
        listen 80;
        ...
        location / {
            proxy_pass http://localhost;
        }
    }
}

In the example above, weight=2 follows the first server's address and port, and weight=5 follows the second's; the third server keeps the default weight of 1. Out of every 8 requests, 2 are therefore distributed to the first server, 5 to the second, and 1 to the third. Everything else matches the round-robin configuration.
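For weighted selection, nginx uses a smooth weighted round-robin algorithm that interleaves the picks rather than sending bursts to one server. A Python sketch of that idea, using the weights from the example above, reproduces the 2/5/1 split over eight requests:

```python
# Weights from the upstream block above (the third server defaults to weight 1).
weights = {
    "192.168.1.103:8080": 2,
    "192.168.1.103:8081": 5,
    "192.168.1.154:8080": 1,
}

def smooth_wrr(weights, n):
    """Smooth weighted round robin (sketch of nginx's weighted selection)."""
    current = {s: 0 for s in weights}
    total = sum(weights.values())
    picks = []
    for _ in range(n):
        for s in current:                      # raise each server's current weight
            current[s] += weights[s]
        chosen = max(current, key=current.get) # pick the highest current weight
        current[chosen] -= total               # penalize the chosen server
        picks.append(chosen)
    return picks

picks = smooth_wrr(weights, 8)
# Over 8 consecutive requests: 2 go to the weight=2 server,
# 5 to the weight=5 server, and 1 to the default-weight server.
```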

Note also that weight-based load balancing can be combined with the IP-address-hash scheme.



Note: this article has many shortcomings and is for reference only!

This article is from the "Siege Lion Tour" blog; please be sure to keep this source: http://samuele.blog.51cto.com/11764688/1792502
